
Linear dependence and linear independence of a system of vectors. Linearly dependent and linearly independent vectors

Definition. A linear combination of the vectors a1, ..., an with coefficients x1, ..., xn is the vector

x1 a1 + ... + xn an.

Definition. The linear combination x1 a1 + ... + xn an is called trivial if all coefficients x1, ..., xn are equal to zero.

Definition. The linear combination x1 a1 + ... + xn an is called non-trivial if at least one of the coefficients x1, ..., xn is not equal to zero.

Definition. The vectors a1, ..., an are called linearly independent if there is no non-trivial combination of these vectors equal to the zero vector.

That is, the vectors a1, ..., an are linearly independent if x1 a1 + ... + xn an = 0 holds only when x1 = 0, ..., xn = 0.

Definition. The vectors a1, ..., an are called linearly dependent if there is a non-trivial combination of these vectors equal to the zero vector.

Properties of linearly dependent vectors:

  • For 2- and 3-dimensional vectors: two linearly dependent vectors are collinear (and two collinear vectors are linearly dependent).

  • For 3-dimensional vectors: three linearly dependent vectors are coplanar (and three coplanar vectors are linearly dependent).

  • For n-dimensional vectors: n + 1 vectors are always linearly dependent.

Examples of problems on linear dependence and linear independence of vectors:

Example 1. Check whether the vectors a = (3; 4; 5), b = (-3; 0; 5), c = (4; 4; 4), d = (3; 4; 0) are linearly independent.

Solution:

The vectors are linearly dependent, since the number of vectors (four) is greater than their dimension (three).
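The dimension argument can be checked numerically. Below is a minimal sketch using numpy (an assumption; the article itself contains no code): the rank of the matrix built from the four vectors is at most 3, so it is smaller than the number of vectors.

```python
import numpy as np

# Rows are the vectors a, b, c, d from Example 1. In R^3 the rank of this
# matrix is at most 3, so four vectors can never be linearly independent.
vectors = np.array([
    [3, 4, 5],    # a
    [-3, 0, 5],   # b
    [4, 4, 4],    # c
    [3, 4, 0],    # d
])

rank = np.linalg.matrix_rank(vectors)
print(rank)                 # <= 3
print(rank < len(vectors))  # True -> the system is linearly dependent
```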

Example 2. Check whether the vectors a = (1; 1; 1), b = (1; 2; 0), c = (0; -1; 1) are linearly independent.

Solution:

x1 + x2 = 0
x1 + 2x2 - x3 = 0
x1 + x3 = 0

Let's solve this system using the Gauss method:

1 1 0 | 0
1 2 -1 | 0
1 0 1 | 0

subtract the first from the second line; subtract the first from the third line:

1 1 0 | 0
0 1 -1 | 0
0 -1 1 | 0

subtract the second from the first line; add the second line to the third line:

1 0 1 | 0
0 1 -1 | 0
0 0 0 | 0

This shows that the system has infinitely many solutions, that is, there is a non-zero set of numbers x1, x2, x3 such that the linear combination of the vectors a, b, c is equal to the zero vector, for example:

-a + b + c = 0

and this means the vectors a, b, c are linearly dependent.

Answer: the vectors a, b, c are linearly dependent.
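The coefficients found by the elimination are easy to verify directly; a short numpy sketch (numpy is an assumption, not part of the original solution):

```python
import numpy as np

# Columns of M are the vectors a, b, c from Example 2; x holds the
# coefficients x1 = -1, x2 = 1, x3 = 1 read off from the reduced system.
M = np.array([
    [1, 1, 0],
    [1, 2, -1],
    [1, 0, 1],
])
x = np.array([-1, 1, 1])
print(M @ x)  # [0 0 0] -> -a + b + c = 0, so a, b, c are linearly dependent
```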

Example 3. Check whether the vectors a = (1; 1; 1), b = (1; 2; 0), c = (0; -1; 2) are linearly independent.

Solution: Let us find the values of the coefficients at which the linear combination of these vectors will be equal to the zero vector.

x1 a + x2 b + x3 c = 0

This vector equation can be written as a system of linear equations

x1 + x2 = 0
x1 + 2x2 - x3 = 0
x1 + 2x3 = 0

Let's solve this system using the Gauss method

1 1 0 | 0
1 2 -1 | 0
1 0 2 | 0

subtract the first from the second line; subtract the first from the third line:

1 1 0 | 0
0 1 -1 | 0
0 -1 2 | 0

subtract the second from the first line; add the second line to the third line:

1 0 1 | 0
0 1 -1 | 0
0 0 1 | 0

From the last line x3 = 0, and then x2 = x3 = 0 and x1 = -x3 = 0: the system has only the trivial solution, so no non-trivial combination of the vectors a, b, c equals the zero vector.

Answer: the vectors a, b, c are linearly independent.
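For three vectors in three-dimensional space, the same conclusion also follows from the determinant criterion: the vectors are linearly independent exactly when the determinant of their coordinate matrix is non-zero. A minimal numpy sketch (an assumption, not part of the original solution):

```python
import numpy as np

# Columns of M are the vectors a, b, c from Example 3.
M = np.array([
    [1, 1, 0],
    [1, 2, -1],
    [1, 0, 2],
])
print(np.linalg.det(M))  # 1.0 != 0 -> a, b, c are linearly independent
```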

In this article we will cover:

  • what are collinear vectors;
  • what are the conditions for collinearity of vectors;
  • what properties of collinear vectors exist;
  • what is the linear dependence of collinear vectors.
Definition 1

Collinear vectors are vectors that are parallel to one line or lie on one line.


Conditions for collinearity of vectors

Two vectors are collinear if any of the following conditions holds:

  • condition 1. Vectors a and b are collinear if there is a number λ such that a = λb;
  • condition 2. Vectors a and b are collinear if their coordinates are proportional:

a = (a1; a2), b = (b1; b2) ⇒ a ∥ b ⇔ a1/b1 = a2/b2

  • condition 3. Vectors a and b are collinear provided that their cross product equals the zero vector:

a ∥ b ⇔ [a, b] = 0

Note 1

Condition 2 is not applicable if one of the vector coordinates is zero.

Note 2

Condition 3 applies only to vectors specified in space.

Examples of problems to study the collinearity of vectors

Example 1

We examine the vectors a = (1; 3) and b = (2; 1) for collinearity.

How to solve?

In this case, it is necessary to use the 2nd collinearity condition. For the given vectors it looks like this:

1/2 ≠ 3/1

The equality is false. From this we can conclude that the vectors a and b are non-collinear.

Answer: a and b are not collinear (a ∦ b).

Example 2

For what value of m are the vectors a = (1; 2) and b = (-1; m) collinear?

How to solve?

Using the second collinearity condition, the vectors are collinear if their coordinates are proportional:

1/(-1) = 2/m ⇒ m = -2

This shows that m = -2.

Answer: m = -2.
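Both examples reduce to the 2x2 determinant a1·b2 - a2·b1, which vanishes exactly for collinear vectors and, unlike condition 2, requires no division. A small Python sketch (the helper collinear is illustrative, not from the text):

```python
def collinear(a, b, eps=1e-12):
    # 2D vectors are collinear exactly when a1*b2 - a2*b1 = 0.
    return abs(a[0] * b[1] - a[1] * b[0]) < eps

print(collinear((1, 3), (2, 1)))    # False: Example 1, a and b non-collinear
print(collinear((1, 2), (-1, -2)))  # True: Example 2 with m = -2
```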

Criteria for linear dependence and linear independence of vector systems

Theorem

A system of vectors in a vector space is linearly dependent if and only if one of the vectors of the system can be expressed in terms of the remaining vectors of this system.

Proof

Let the system e1, e2, ..., en be linearly dependent. Let us write a linear combination of this system equal to the zero vector:

a1 e1 + a2 e2 + ... + an en = 0,

in which at least one of the coefficients of the combination is not equal to zero.

Let ak ≠ 0, k ∈ {1, 2, ..., n}.

We divide both sides of the equality by this non-zero coefficient:

(ak^(-1) a1) e1 + ... + (ak^(-1) ak-1) ek-1 + ek + (ak^(-1) ak+1) ek+1 + ... + (ak^(-1) an) en = 0

Let us denote:

βm = ak^(-1) am, where m ∈ {1, 2, ..., k-1, k+1, ..., n}

In this case:

β1 e1 + ... + βk-1 ek-1 + ek + βk+1 ek+1 + ... + βn en = 0,

or ek = (-β1) e1 + ... + (-βk-1) ek-1 + (-βk+1) ek+1 + ... + (-βn) en.

It follows that one of the vectors of the system is expressed through all the other vectors of the system, which is what needed to be proven.

Sufficiency

Let one of the vectors be linearly expressed through all the other vectors of the system:

ek = γ1 e1 + ... + γk-1 ek-1 + γk+1 ek+1 + ... + γn en

We move the vector ek to the right side of this equality:

0 = γ1 e1 + ... + γk-1 ek-1 - ek + γk+1 ek+1 + ... + γn en

Since the coefficient of the vector ek is equal to -1 ≠ 0, we get a non-trivial representation of zero by the system of vectors e1, e2, ..., en, and this, in turn, means that the system of vectors is linearly dependent, which is what needed to be proven.

Corollaries:

  • A system of vectors is linearly independent if and only if none of its vectors can be expressed in terms of the other vectors of the system.
  • A system of vectors that contains a zero vector or two equal vectors is linearly dependent.

Properties of linearly dependent vectors

  1. For 2- and 3-dimensional vectors, the following condition is met: two linearly dependent vectors are collinear. Two collinear vectors are linearly dependent.
  2. For 3-dimensional vectors, the following condition is satisfied: three linearly dependent vectors are coplanar. (3 coplanar vectors are linearly dependent).
  3. For n-dimensional vectors, the following condition is satisfied: n + 1 vectors are always linearly dependent.

Examples of solving problems involving linear dependence or linear independence of vectors

Example 3

Let's check the vectors a = (3, 4, 5), b = (-3, 0, 5), c = (4, 4, 4), d = (3, 4, 0) for linear independence.

Solution. The vectors are linearly dependent, because the dimension of the vectors is less than the number of vectors.

Example 4

Let's check the vectors a = (1, 1, 1), b = (1, 2, 0), c = (0, -1, 1) for linear independence.

Solution. We find the values of the coefficients at which the linear combination will equal the zero vector:

x1 a + x2 b + x3 c = 0

We write the vector equation as a system of linear equations:

x1 + x2 = 0
x1 + 2x2 - x3 = 0
x1 + x3 = 0

We solve this system using the Gaussian method:

1 1 0 | 0
1 2 -1 | 0
1 0 1 | 0

From the 2nd line we subtract the 1st, from the 3rd the 1st:

1 1 0 | 0
0 1 -1 | 0
0 -1 1 | 0

From the 1st line we subtract the 2nd, to the 3rd we add the 2nd:

1 0 1 | 0
0 1 -1 | 0
0 0 0 | 0

From the solution it follows that the system has infinitely many solutions. This means that there is a non-zero set of numbers x1, x2, x3 for which the linear combination of a, b, c equals the zero vector. Therefore, the vectors a, b, c are linearly dependent.


Linear dependence and independence of vectors

Definitions of linearly dependent and independent vector systems

Definition 22

Let a system of n vectors a1, a2, ..., an and a set of numbers λ1, λ2, ..., λn be given. Then the vector

λ1 a1 + λ2 a2 + ... + λn an (11)

is called a linear combination of the given system of vectors with the given set of coefficients.

Definition 23

A system of vectors a1, a2, ..., an is called linearly dependent if there is a set of coefficients λ1, λ2, ..., λn, of which at least one is not equal to zero, such that the linear combination of the given system of vectors with this set of coefficients is equal to the zero vector:

λ1 a1 + λ2 a2 + ... + λn an = 0 (12)

Definition 24 (through the representation of one vector of the system as a linear combination of the others)

A system of vectors a1, a2, ..., an is called linearly dependent if at least one of the vectors of this system can be represented as a linear combination of the remaining vectors of this system.

Statement 3

Definitions 23 and 24 are equivalent.

Definition 25 (via the zero linear combination)

A system of vectors a1, a2, ..., an is called linearly independent if a zero linear combination of this system is possible only when all the coefficients λ1, λ2, ..., λn are equal to zero.

Definition 26 (via the impossibility of representing one vector of the system as a linear combination of the others)

A system of vectors a1, a2, ..., an is called linearly independent if none of the vectors of this system can be represented as a linear combination of the other vectors of this system.

Properties of linearly dependent and independent vector systems

Theorem 2 (zero vector in the system of vectors)

If a system of vectors has a zero vector, then the system is linearly dependent.

Proof. Let the system a1, a2, ..., an contain the zero vector, say ak = 0. Then take λk = 1 and all the other coefficients equal to zero.

We get 0·a1 + ... + 1·ak + ... + 0·an = 0 with the non-zero coefficient λk = 1; therefore, by the definition of a linearly dependent system of vectors through a zero linear combination (12), the system is linearly dependent. ∎

Theorem 3 (dependent subsystem in a vector system)

If a system of vectors has a linearly dependent subsystem, then the entire system is linearly dependent.

Proof. Let a1, ..., ak be a linearly dependent subsystem of the system a1, ..., an. Then there is a set of coefficients λ1, ..., λk, among which at least one is not equal to zero, such that λ1 a1 + ... + λk ak = 0.

Supplementing this combination with the remaining vectors of the system taken with zero coefficients, we obtain a non-trivial zero linear combination of the whole system. This means, by Definition 23, the system is linearly dependent. ∎

Theorem 4

Any subsystem of a linearly independent system is linearly independent.

By contradiction. Let the system be linearly independent and have a linearly dependent subsystem. But then, according to Theorem 3, the entire system would also be linearly dependent. Contradiction. Consequently, a subsystem of a linearly independent system cannot be linearly dependent. ∎

Geometric meaning of linear dependence and independence of a system of vectors

Theorem 5

Two vectors a and b are linearly dependent if and only if they are collinear (a ∥ b).

Necessity.

Let a and b be linearly dependent. Then there are numbers α and β, not both equal to zero, such that αa + βb = 0; for definiteness let α ≠ 0. Then a = (-β/α) b, i.e. a ∥ b.

Sufficiency.

Let a ∥ b. Then a = λb for some number λ, so a - λb = 0 is a non-trivial zero linear combination, and the vectors a and b are linearly dependent. ∎

Corollary 5.1

The zero vector is collinear to any vector.

Corollary 5.2

For two vectors to be linearly independent, it is necessary and sufficient that they be non-collinear.

Theorem 6

For a system of three vectors to be linearly dependent, it is necessary and sufficient that these vectors be coplanar.

Necessity.

Let a, b, c be linearly dependent; then one of the vectors, say c, can be represented as a linear combination of the other two:

c = αa + βb, (13)

where α and β are numbers. By the parallelogram rule, c is the diagonal of the parallelogram with sides αa and βb; a parallelogram is a plane figure, so the vectors αa, βb and c are coplanar, and hence a, b and c are also coplanar.

Sufficiency.

Let a, b, c be coplanar, and apply the three vectors to a point O. If some pair of them is collinear, that pair is linearly dependent, and with it (by Theorem 3) the whole system. Otherwise, draw lines through the end of c parallel to a and b; this decomposes c over a and b: c = αa + βb. Then αa + βb - c = 0 is a non-trivial zero linear combination, so a, b, c are linearly dependent. ∎

Corollary 6.1

The zero vector is coplanar to any pair of vectors.

Corollary 6.2

For vectors a, b, c to be linearly independent, it is necessary and sufficient that they be non-coplanar.

Corollary 6.3

Any vector of a plane can be represented as a linear combination of any two non-collinear vectors of the same plane.

Theorem 7

Any four vectors in space are linearly dependent.

Proof. Let a, b, c, d be four vectors in space. If a, b, c are coplanar, they are linearly dependent by Theorem 6, and then so is the whole system a, b, c, d by Theorem 3. Let a, b, c be non-coplanar, and apply all four vectors to a common point O, so that d = OD. Draw through the point D three planes parallel to the pairs of vectors (b, c), (a, c) and (a, b) respectively; together with the planes through the pairs of vectors themselves they bound a parallelepiped whose diagonal is OD and whose edges lie along the lines of a, b and c.

Considering the parallelograms formed by the faces of this parallelepiped and applying the parallelogram rule, we obtain numbers α, β, γ such that d = αa + βb + γc. Then, by Definition 24, the system of vectors a, b, c, d is linearly dependent. ∎

Corollary 7.1

The sum of three non-coplanar vectors in space is a vector that coincides with the diagonal of a parallelepiped built on these three vectors applied to a common origin, and the origin of the sum vector coincides with the common origin of these three vectors.

Corollary 7.2

If we take 3 non-coplanar vectors in space, then any vector of this space can be decomposed into a linear combination of these three vectors.
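Corollary 7.2 translates directly into solving a 3x3 linear system: the columns of the matrix are the three non-coplanar vectors, and the solution gives the coefficients of the decomposition. A minimal numpy sketch with illustrative vectors (my own assumptions, not from the text):

```python
import numpy as np

# Three non-coplanar vectors (illustrative) and an arbitrary vector d.
a, b, c = np.array([1, 0, 0]), np.array([1, 1, 0]), np.array([1, 1, 1])
d = np.array([2, 3, 4])

M = np.column_stack([a, b, c])  # non-coplanar <=> det(M) != 0
coeffs = np.linalg.solve(M, d)  # alpha, beta, gamma of the decomposition
print(coeffs)
print(np.allclose(coeffs[0] * a + coeffs[1] * b + coeffs[2] * c, d))  # True
```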

Definition 1. A linear combination of the vectors a1, ..., an is the sum of the products of these vectors by scalars λ1, ..., λn:

λ1 a1 + λ2 a2 + ... + λn an (2.8)

Definition 2. A system of vectors a1, ..., an is called linearly dependent if their linear combination (2.8) vanishes:

λ1 a1 + λ2 a2 + ... + λn an = 0 (2.9)

and among the numbers λ1, ..., λn there is at least one that is different from zero.

Definition 3. The vectors a1, ..., an are called linearly independent if their linear combination (2.8) vanishes only in the case when all the numbers λ1, ..., λn are equal to zero.

From these definitions the following corollaries can be obtained.

Corollary 1. In a linearly dependent system of vectors, at least one vector can be expressed as a linear combination of the others.

Proof. Let (2.9) be satisfied and, for definiteness, let the coefficient λ1 ≠ 0. We then have: a1 = -(λ2/λ1) a2 - ... - (λn/λ1) an. Note that the converse is also true.

Corollary 2. If a system of vectors contains a zero vector, then this system is (necessarily) linearly dependent; the proof is obvious.

Corollary 3. If among n vectors any k (k < n) vectors are linearly dependent, then all n vectors are linearly dependent (we omit the proof).

2°. Linear combinations of two, three and four vectors. Let us consider the questions of linear dependence and independence of vectors on a straight line, in a plane and in space. Let us present the corresponding theorems.

Theorem 1. For two vectors to be linearly dependent, it is necessary and sufficient that they be collinear.

Necessity. Let the vectors a and b be linearly dependent. This means that their linear combination λ1 a + λ2 b = 0 with (for the sake of definiteness) λ1 ≠ 0. This implies the equality a = -(λ2/λ1) b, and (by the definition of multiplying a vector by a number) the vectors a and b are collinear.

Sufficiency. Let the vectors a and b be collinear, a ∥ b (we assume that they are different from the zero vector; otherwise their linear dependence is obvious). By Theorem (2.7) (see §2.1, item 2°) there is a number λ such that a = λb, or a - λb = 0: the linear combination is equal to zero, and the coefficient of a equals 1, so the vectors a and b are linearly dependent.

The following corollary follows from this theorem.

Corollary. If the vectors a and b are not collinear, then they are linearly independent.

Theorem 2. For three vectors to be linearly dependent, it is necessary and sufficient that they be coplanar.

Necessity. Let the vectors a, b and c be linearly dependent. Let us show that they are coplanar.

From the definition of linear dependence of vectors it follows that there exist numbers λ1, λ2 and λ3, not all equal to zero, such that the linear combination λ1 a + λ2 b + λ3 c = 0, and at the same time (to be specific) λ3 ≠ 0. Then from this equality we can express the vector c: c = -(λ1/λ3) a - (λ2/λ3) b, that is, the vector c is equal to the diagonal of a parallelogram constructed on the vectors on the right side of this equality (Fig. 2.6). This means that the vectors a, b and c lie in the same plane.

Sufficiency. Let the vectors a, b and c be coplanar. Let us show that they are linearly dependent.

Let us exclude the case of collinearity of any pair of vectors (because then this pair is linearly dependent and, by Corollary 3 (see item 1°), all three vectors are linearly dependent). Note that this assumption also excludes the existence of a zero vector among the three.

Let us move the three coplanar vectors into one plane and bring them to a common origin. Through the end of the vector c draw lines parallel to the vectors a and b; we obtain vectors a' and b' collinear to a and b (Fig. 2.7) - their existence is ensured by the fact that a and b are not collinear by assumption. It follows that the vector c = a' + b' = αa + βb. Rewriting this equality in the form αa + βb + (-1)c = 0, we conclude that the vectors a, b and c are linearly dependent.

Two corollaries follow from the proven theorem.

Corollary 1. Let a and b be non-collinear vectors, and let c be an arbitrary vector lying in the plane defined by the vectors a and b. Then there are numbers α and β such that

c = αa + βb. (2.10)

Corollary 2. If the vectors a, b and c are not coplanar, then they are linearly independent.

Theorem 3. Any four vectors are linearly dependent.

We will omit the proof; with some modifications it copies the proof of Theorem 2. Let us give a corollary of this theorem.

Corollary. For any non-coplanar vectors a, b, c and any vector d there are numbers α, β and γ such that

d = αa + βb + γc. (2.11)

Comment. For vectors in (three-dimensional) space, the concepts of linear dependence and independence have, as follows from Theorems 1-3 above, a simple geometric meaning.

Let there be two linearly dependent vectors a and b. In this case, one of them is a linear combination of the other, that is, it simply differs from it by a numerical factor (for example, a = λb). Geometrically, this means that both vectors lie on a common line; they can have the same or opposite directions (Fig. 2.8).

If two vectors are located at an angle to each other (Fig. 2.9), then one of them cannot be obtained by multiplying the other by a number - such vectors are linearly independent. Therefore, the linear independence of two vectors a and b means that these vectors cannot be laid on one straight line.

Let us find out the geometric meaning of linear dependence and independence of three vectors.

Let the vectors a, b and c be linearly dependent, and let (to be specific) the vector c be a linear combination of the vectors a and b; then c is located in the plane containing the vectors a and b. This means that the vectors a, b and c lie in the same plane. The converse is also true: if the vectors a, b and c lie in the same plane, then they are linearly dependent.

Thus, the vectors a, b and c are linearly independent if and only if they do not lie in the same plane.

3°. The concept of basis. One of the most important concepts in linear and vector algebra is the concept of a basis. Let us introduce some definitions.

Definition 1. A pair of vectors is called ordered if it is specified which vector of this pair is considered the first and which the second.

Definition 2. An ordered pair e1, e2 of non-collinear vectors is called a basis on the plane defined by the given vectors.

Theorem 1. Any vector a on the plane can be represented as a linear combination of the basis system of vectors e1, e2:

a = x1 e1 + x2 e2 (2.12)

and this representation is unique.

Proof. Let the vectors e1 and e2 form a basis. Then any vector a can be represented in the form a = x1 e1 + x2 e2.

To prove uniqueness, assume that there is one more decomposition a = x1' e1 + x2' e2. We then have (x1 - x1') e1 + (x2 - x2') e2 = 0, where at least one of the differences is different from zero. The latter means that the vectors e1 and e2 are linearly dependent, that is, collinear; this contradicts the statement that they form a basis.

Hence there is only one decomposition. ∎

Definition 3. A triple of vectors is called ordered if it is specified which vector is considered the first, which is the second, and which is the third.

Definition 4. An ordered triple of non-coplanar vectors is called a basis in space.

The decomposition and uniqueness theorem also holds here.

Theorem 2. Any vector d can be represented as a linear combination of the basis system of vectors e1, e2, e3:

d = x1 e1 + x2 e2 + x3 e3 (2.13)

and this representation is unique (we will omit the proof of the theorem).

In expansions (2.12) and (2.13), the quantities x1, x2 (x3) are called the coordinates of the vector in the given basis (more precisely, affine coordinates).

With a fixed basis e1, e2, e3 one can write d = (x1, x2, x3); specifying the coordinates in the basis means precisely that there is a representation (decomposition) d = x1 e1 + x2 e2 + x3 e3.

4°. Linear operations on vectors in coordinate form. The introduction of a basis allows linear operations on vectors to be replaced by ordinary linear operations on numbers - the coordinates of these vectors.

Let some basis e1, e2, e3 be given. Obviously, specifying the coordinates of a vector in this basis completely determines the vector itself. The following propositions hold:

a) two vectors a = (x1, x2, x3) and b = (y1, y2, y3) are equal if and only if their corresponding coordinates are equal: x1 = y1, x2 = y2, x3 = y3;

b) when a vector a = (x1, x2, x3) is multiplied by a number λ, its coordinates are multiplied by this number:

λa = (λx1, λx2, λx3); (2.15)

c) when adding vectors, their corresponding coordinates are added: a + b = (x1 + y1, x2 + y2, x3 + y3).

We will omit the proofs of these properties; let us prove property b) only as an example. We have

λa = λ(x1 e1 + x2 e2 + x3 e3) = (λx1) e1 + (λx2) e2 + (λx3) e3 = (λx1, λx2, λx3). ∎
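In code, propositions a)-c) amount to coordinate-wise operations on number tuples; a tiny Python sketch (the helper names scale and add are illustrative):

```python
def scale(lam, a):
    # (2.15): multiplying a vector by a number multiplies its coordinates.
    return tuple(lam * ai for ai in a)

def add(a, b):
    # adding vectors adds their corresponding coordinates.
    return tuple(ai + bi for ai, bi in zip(a, b))

a, b = (1, 2, 3), (4, 5, 6)  # illustrative coordinates
print(scale(2, a))  # (2, 4, 6)
print(add(a, b))    # (5, 7, 9)
print(a == b)       # False: equality is coordinate-wise equality
```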

Comment. In space (on the plane) you can choose infinitely many bases.

Let us give an example of a transition from one basis to another and establish the relationship between the coordinates of a vector in different bases.

Example 1. In the basis e1, e2, e3 three vectors a1, a2, a3 are given by their expansions over e1, e2, e3, and in the basis a1, a2, a3 a vector d is given by its decomposition d = x1 a1 + x2 a2 + x3 a3. Find the coordinates of the vector d in the basis e1, e2, e3.

Solution. We substitute the expansions of a1, a2 and a3 over e1, e2, e3 into the decomposition of d and collect the coefficients of e1, e2 and e3; the resulting coefficients are the coordinates of d in the basis e1, e2, e3.

Example 2. Let four vectors a1, a2, a3 and d be given by their coordinates in some basis.

Find out whether the vectors a1, a2, a3 form a basis; if the answer is positive, find the decomposition of the vector d in this basis.

Solution. 1) The vectors form a basis if they are linearly independent. Let us make a linear combination of the vectors, x1 a1 + x2 a2 + x3 a3, and find out for what x1, x2, x3 it goes to zero: x1 a1 + x2 a2 + x3 a3 = 0.

By the definition of equality of vectors in coordinate form, we obtain a system of (linear homogeneous algebraic) equations in x1, x2, x3, whose determinant equals 1 ≠ 0; that is, the system has (only) the trivial solution x1 = x2 = x3 = 0. This means the vectors a1, a2, a3 are linearly independent, and therefore they form a basis.

2) Let us expand the vector d in this basis. We have d = x1 a1 + x2 a2 + x3 a3, or, in coordinate form, a system of linear inhomogeneous algebraic equations. Solving it (for example, using Cramer's rule), we obtain the coefficients x1, x2, x3 and with them the decomposition of the vector d in the basis a1, a2, a3.
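The method of this example - a determinant test, then the solution of a linear system - can be sketched in a few lines of numpy. The coordinates below are illustrative assumptions, not the example's data:

```python
import numpy as np

# Columns of M are the candidate basis vectors a1, a2, a3 (illustrative).
a1, a2, a3 = np.array([1, 0, 1]), np.array([0, 1, 1]), np.array([1, 1, 0])
d = np.array([2, 2, 2])

M = np.column_stack([a1, a2, a3])
print(np.linalg.det(M))       # -2.0 != 0 -> a1, a2, a3 form a basis
print(np.linalg.solve(M, d))  # coordinates of d in this basis: [1. 1. 1.]
```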

5°. Projection of a vector onto an axis. Properties of projections. Let there be some axis l, that is, a straight line with a direction chosen on it, and let some vector a be given. Let us define the concept of the projection of the vector a onto the axis l.

Definition. The projection of the vector a onto the axis l is the product of the modulus of this vector and the cosine of the angle φ between the axis l and the vector a (Fig. 2.10):

pr_l a = |a| cos φ. (2.17)

A corollary of this definition is the statement that equal vectors have equal projections (onto the same axis).

Let us note the properties of projections.

1) the projection of a sum of vectors onto some axis l is equal to the sum of the projections of the summand vectors onto the same axis:

pr_l (a + b) = pr_l a + pr_l b; (2.18)

2) the projection of the product of a scalar by a vector is equal to the product of this scalar by the projection of the vector onto the same axis:

pr_l (λa) = λ pr_l a. (2.19)

Corollary. The projection of a linear combination of vectors onto an axis is equal to the same linear combination of their projections:

pr_l (λ1 a1 + ... + λn an) = λ1 pr_l a1 + ... + λn pr_l an. (2.20)

We will omit the proofs of the properties.

6°. Rectangular Cartesian coordinate system in space. Decomposition of a vector in the unit vectors of the axes. Let three mutually perpendicular unit vectors be chosen as a basis; we introduce for them the special notations i, j, k. Placing their origins at a point O, we direct along them (in accordance with the unit vectors i, j, k) the coordinate axes Ox, Oy and Oz (an axis with a positive direction, an origin and a unit of length selected on it is called a coordinate axis).

Definition. An ordered system of three mutually perpendicular coordinate axes with a common origin and a common unit of length is called a rectangular Cartesian coordinate system in space.

The axis Ox is called the abscissa axis, Oy the ordinate axis, and Oz the applicate axis.

Let us consider the expansion of an arbitrary vector a in the basis i, j, k. From the theorem (see §2.2, item 3°, (2.13)) it follows that a can be uniquely expanded in the basis i, j, k (here, instead of the coordinate notation x1, x2, x3, we use x, y, z):

a = x i + y j + z k. (2.21)

In (2.21), the numbers x, y, z are the (Cartesian rectangular) coordinates of the vector a. The meaning of the Cartesian coordinates is established by the following theorem.

Theorem. The Cartesian rectangular coordinates x, y, z of the vector a are the projections of this vector onto the axes Ox, Oy and Oz respectively.

Proof. Let us place the vector a at the origin of the coordinate system, the point O. Then its end coincides with some point M.

Let us draw through the point M three planes parallel to the coordinate planes Oyz, Oxz and Oxy (Fig. 2.11). We then get:

a = OM = x i + y j + z k. (2.22)

In (2.22), the vectors x i, y j and z k are called the components of the vector a along the axes Ox, Oy and Oz.

Let α, β and γ denote the angles that the vector a forms with the unit vectors i, j, k respectively. Then for the components we obtain the formulas:

x i = (|a| cos α) i, y j = (|a| cos β) j, z k = (|a| cos γ) k. (2.23)

From (2.21), (2.22) and (2.23) we find:

x = pr_Ox a = |a| cos α; y = pr_Oy a = |a| cos β; z = pr_Oz a = |a| cos γ (2.23')

- the coordinates x, y, z of the vector a are the projections of this vector onto the coordinate axes Ox, Oy and Oz respectively.

Comment. The numbers cos α, cos β, cos γ are called the direction cosines of the vector a.

The modulus of the vector a (the diagonal of the rectangular parallelepiped) is calculated by the formula:

|a| = √(x² + y² + z²). (2.24)

From formulas (2.23') and (2.24) it follows that the direction cosines can be calculated using the formulas:

cos α = x/|a|; cos β = y/|a|; cos γ = z/|a|. (2.25)

Squaring both sides of each of the equalities in (2.25) and adding the left and right sides of the resulting equalities term by term, we arrive at the formula:

cos²α + cos²β + cos²γ = 1 (2.26)

- not any three angles form a certain direction in space, but only those whose cosines are related by relation (2.26).
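Formulas (2.24)-(2.26) in a short numpy sketch (the sample vector is an arbitrary illustration):

```python
import numpy as np

a = np.array([2.0, 3.0, 6.0])  # illustrative coordinates x, y, z

mod = np.sqrt(np.sum(a ** 2))  # (2.24): |a| = 7 for this vector
cosines = a / mod              # (2.25): direction cosines
print(mod, cosines)
print(np.sum(cosines ** 2))    # (2.26): 1.0
```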

7°. Radius vector and coordinates of a point. Determining a vector by its beginning and end. Let us introduce a definition.

Definition. The radius vector of a point M (denoted r) is the vector connecting the origin O with this point (Fig. 2.12):

r = OM. (2.27)

Any point in space corresponds to a certain radius vector (and vice versa). Thus, points in space are represented in vector algebra by their radius vectors.

Obviously, the coordinates x, y, z of the point M are the projections of its radius vector r onto the coordinate axes:

x = pr_Ox r; y = pr_Oy r; z = pr_Oz r (2.28')

and thus

r = OM = x i + y j + z k (2.28)

- the radius vector of a point is a vector whose projections onto the coordinate axes are equal to the coordinates of this point. This leads to the two notations M(x, y, z) and r = (x, y, z).

Let us obtain formulas for calculating the projections of a vector from the coordinates of its beginning, the point A(x1, y1, z1), and of its end, the point B(x2, y2, z2).

Let us draw the radius vectors rA = OA, rB = OB and the vector AB (Fig. 2.13). We get that

AB = rB - rA = (x2 - x1) i + (y2 - y1) j + (z2 - z1) k (2.29)

- the projections of a vector onto the coordinate unit vectors are equal to the differences between the corresponding coordinates of the end and the beginning of the vector.

8°. Some problems involving Cartesian coordinates.

1) conditions for collinearity of vectors. From the theorem (see §2.1, item 2°, formula (2.7)) it follows that for the collinearity of the vectors a and b it is necessary and sufficient that the relation a = λb hold. From this vector equality we obtain three equalities in coordinate form: x1 = λx2, y1 = λy2, z1 = λz2, which imply the condition for the collinearity of vectors in coordinate form:

x1/x2 = y1/y2 = z1/z2 (2.30)

- for the vectors a and b to be collinear it is necessary and sufficient that their corresponding coordinates be proportional.

2) distance between points. From representation (2.29) it follows that the distance d between the points A(x1, y1, z1) and B(x2, y2, z2) is determined by the formula

d = |AB| = √((x2 - x1)² + (y2 - y1)² + (z2 - z1)²). (2.31)

3) division of a segment in a given ratio. Let the points A(x1, y1, z1) and B(x2, y2, z2) and the ratio λ = AM : MB be given. We need to find x, y, z - the coordinates of the point M (Fig. 2.14).

From the condition of collinearity of the vectors AM and MB we have AM = λ·MB, whence r - rA = λ(rB - r) and

r = (rA + λ rB) / (1 + λ). (2.32)

From (2.32) we obtain in coordinate form:

x = (x1 + λx2)/(1 + λ); y = (y1 + λy2)/(1 + λ); z = (z1 + λz2)/(1 + λ). (2.32')

From formulas (2.32') we can obtain formulas for calculating the coordinates of the midpoint of the segment AB, assuming λ = 1:

x = (x1 + x2)/2; y = (y1 + y2)/2; z = (z1 + z2)/2. (2.32'')

Comment. We will count the segments AM and MB as positive or negative depending on whether their direction coincides with the direction from the beginning A of the segment to its end B, or does not coincide. Then, using formulas (2.32) - (2.32''), one can also find the coordinates of a point dividing the segment AB externally, that is, in such a way that the dividing point M lies on the continuation of the segment AB, and not inside it. In this case, of course, λ ≠ -1.

4) equation of a spherical surface. Let us write the equation of a spherical surface - the geometric locus of points M(x, y, z) equidistant (at a distance R) from some fixed center, a point C(x0, y0, z0). Obviously, in this case |CM| = R, and taking into account formula (2.31):

(x - x0)² + (y - y0)² + (z - z0)² = R². (2.33)

Equation (2.33) is the equation of the desired spherical surface.
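The formulas of this section are easy to package as small helpers; a Python sketch (the function names distance and divide are illustrative):

```python
import math

def distance(A, B):
    # (2.31): distance between the points A and B.
    return math.sqrt(sum((b - a) ** 2 for a, b in zip(A, B)))

def divide(A, B, lam):
    # (2.32'): the point M dividing the segment AB in the ratio AM : MB = lam
    # (lam != -1); lam = 1 gives the midpoint (2.32'').
    return tuple((a + lam * b) / (1 + lam) for a, b in zip(A, B))

A, B = (1, 2, 3), (5, 6, 7)  # illustrative points
print(distance(A, B))        # sqrt(48)
print(divide(A, B, 1))       # midpoint (3.0, 4.0, 5.0)
```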

An expression of the form λ1·A1 + λ2·A2 + ... + λn·An is called a linear combination of the vectors A1, A2, ..., An with coefficients λ1, λ2, ..., λn.

Determination of linear dependence of a system of vectors

A system of vectors A1, A2, ..., An is called linearly dependent if there is a non-zero set of numbers λ1, λ2, ..., λn for which the linear combination of vectors λ1·A1 + λ2·A2 + ... + λn·An equals the zero vector, that is, the system of equations A1·x1 + A2·x2 + ... + An·xn = Θ has a non-zero solution.
A set of numbers λ1, λ2, ..., λn is non-zero if at least one of the numbers λ1, λ2, ..., λn is different from zero.

Determination of linear independence of a system of vectors

A system of vectors A1, A2, ..., An is called linearly independent if the linear combination of these vectors λ1·A1 + λ2·A2 + ... + λn·An equals the zero vector only for the zero set of numbers λ1, λ2, ..., λn, that is, the system of equations A1·x1 + A2·x2 + ... + An·xn = Θ has a unique, zero solution.

Example 29.1

Check if a system of vectors is linearly dependent

Solution:

1. We compose the system of equations A1·x1 + A2·x2 + A3·x3 = Θ.

2. We solve it using the Gauss method. The Jordan transformations of the system are given in Table 29.1. When calculating, the right-hand sides of the system are not written down, since they are equal to zero and do not change under Jordan transformations.

3. From the last three rows of the table we write down a resolved system equivalent to the original system.

4. We obtain the general solution of the system.

5. Setting the value of the free variable x3 = 1 at our discretion, we obtain the particular non-zero solution X = (-3, 2, 1).

Answer: Thus, for the non-zero set of numbers (-3, 2, 1), the linear combination of the vectors equals the zero vector: -3A1 + 2A2 + 1A3 = Θ. Hence, the system of vectors is linearly dependent.
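The same check can be performed with a computer algebra system: a non-empty nullspace of the coefficient matrix means exactly that the homogeneous system has a non-zero solution. A hedged sympy sketch; the columns are illustrative stand-ins chosen so that the nullspace reproduces the set (-3, 2, 1), since the example's actual vectors are not listed above:

```python
from sympy import Matrix

# Columns play the role of A1, A2, A3 (illustrative coordinates).
M = Matrix([
    [1, 0, 3],
    [0, 1, -2],
    [1, 1, 1],
])

null = M.nullspace()
print(null)        # [Matrix([[-3], [2], [1]])] -> -3*A1 + 2*A2 + 1*A3 = 0
print(bool(null))  # True -> the system of vectors is linearly dependent
```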

Properties of vector systems

Property (1)
If a system of vectors is linearly dependent, then at least one of the vectors can be expressed in terms of the others; conversely, if at least one of the vectors of the system can be expressed in terms of the others, then the system of vectors is linearly dependent.

Property (2)
If any subsystem of vectors is linearly dependent, then the entire system is linearly dependent.

Property (3)
If a system of vectors is linearly independent, then any of its subsystems is linearly independent.

Property (4)
Any system of vectors containing a zero vector is linearly dependent.

Property (5)
A system of m-dimensional vectors is always linearly dependent if the number of vectors n is greater than their dimension (n>m)

Basis of the vector system

The basis of a system of vectors A1, A2, ..., An is a subsystem B1, B2, ..., Br (each of the vectors B1, B2, ..., Br is one of the vectors A1, A2, ..., An) that satisfies the following conditions:
1. B1, B2, ..., Br is a linearly independent system of vectors;
2. any vector Aj of the system A1, A2, ..., An is linearly expressed through the vectors B1, B2, ..., Br.

Here r is the number of vectors included in the basis.

Theorem 29.1 (on the unit basis of a system of vectors)

If a system of m-dimensional vectors contains m different unit vectors E1, E2, ..., Em, then they form a basis of the system.

Algorithm for finding the basis of a system of vectors

In order to find the basis of the system of vectors A1, A2, ..., An it is necessary:

  • to compose the homogeneous system of equations corresponding to the system of vectors: A1·x1 + A2·x2 + ... + An·xn = Θ;
  • to bring this system