
Eigenvalues (numbers) and eigenvectors of a matrix. Examples of solutions

Lecture 9

Linear transformations of coordinates. Eigenvectors and eigenvalues of a matrix and their properties. The characteristic polynomial of a matrix and its properties.

We say that a transformation A is given on the set of vectors R if to each vector x ∈ R there corresponds, by some rule, a vector Ax ∈ R.

Definition 9.1. A transformation A is called linear if for any vectors x and y and for any real number λ the following equalities hold:

A(x + y) = Ax + Ay,   A(λx) = λAx.   (9.1)

Definition 9.2. A linear transformation is called the identity transformation if it transforms every vector x into itself.

The identity transformation is denoted E: Ex = x.

Consider a three-dimensional space with a basis e1, e2, e3 in which a linear transformation A is given. Applying it to the basis vectors, we obtain the vectors Ae1, Ae2, Ae3 belonging to the same three-dimensional space. Therefore, each of them can be expanded in a unique way in terms of the basis vectors:

Ae1 = a11 e1 + a21 e2 + a31 e3,
Ae2 = a12 e1 + a22 e2 + a32 e3,   (9.2)
Ae3 = a13 e1 + a23 e2 + a33 e3.

The matrix

A = | a11 a12 a13 |
    | a21 a22 a23 |
    | a31 a32 a33 |

is called the matrix of the linear transformation A in the basis e1, e2, e3. The columns of this matrix are composed of the coefficients in the basis-transformation formulas (9.2).

Remark. Obviously, the matrix of the identity transformation is the identity matrix E.

For an arbitrary vector x = x1 e1 + x2 e2 + x3 e3, the result of applying the linear transformation A to it is the vector Ax, which can be expanded in the vectors of the same basis: Ax = x'1 e1 + x'2 e2 + x'3 e3, where the coordinates x'i can be found using the formulas:

x'1 = a11 x1 + a12 x2 + a13 x3,
x'2 = a21 x1 + a22 x2 + a23 x3,   (9.3)
x'3 = a31 x1 + a32 x2 + a33 x3.

The coefficients in the formulas of this linear transformation are the elements of the rows of the matrix A.
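As a numerical illustration of formulas (9.3), here is a minimal sketch in Python (the matrix and the vector are arbitrary, chosen only for the example; numpy is assumed available):

```python
import numpy as np

# An illustrative 3x3 transformation matrix in the basis e1, e2, e3
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])

x = np.array([1.0, 2.0, 3.0])   # coordinates of x in the same basis

# Formulas (9.3): each new coordinate x'_i is the dot product
# of the i-th ROW of A with the old coordinate column
x_new = A @ x
print(x_new)   # [4. 9. 4.]
```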

Transformation of the matrix of a linear transformation when passing to a new basis.

Consider a linear transformation A and two bases in three-dimensional space: e1, e2, e3 and e'1, e'2, e'3. Let the matrix C define the transition formulas from the basis (e_k) to the basis (e'_k). If in the first of these bases the chosen linear transformation is given by the matrix A, and in the second by the matrix A', then we can find a relationship between these matrices, namely:

A' = C⁻¹AC.   (9.4)

Indeed, let x be the coordinate column of a vector in the basis (e_k) and x' its coordinate column in the basis (e'_k), so that x = Cx'. The results of applying the same linear transformation in the basis (e_k), i.e. Ax, and in the basis (e'_k), i.e. A'x', are coordinate columns of the same vector, so Ax = CA'x'. Substituting x = Cx', we get ACx' = CA'x' for every x', whence CA' = AC. Multiplying both sides of this equality on the left by C⁻¹, we get A' = C⁻¹AC, which proves the validity of formula (9.4).
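Formula (9.4) is easy to check numerically; a small sketch with arbitrary illustrative matrices:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])   # matrix of the transformation in the old basis

C = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])   # nonsingular transition matrix to the new basis

A_new = np.linalg.inv(C) @ A @ C   # formula (9.4): A' = C^-1 A C

x = np.array([1.0, 2.0, 3.0])      # coordinates in the old basis
x_new = np.linalg.inv(C) @ x       # the same vector in the new basis
lhs = A_new @ x_new                # transform in the new basis
rhs = np.linalg.inv(C) @ (A @ x)   # transform in the old basis, then convert
print(np.allclose(lhs, rhs))       # True: both bases describe the same action
```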

Eigenvalues and eigenvectors of a matrix.

Definition 9.3. A nonzero vector x is called an eigenvector of the matrix A if there is a number λ such that the equality Ax = λx holds, that is, the result of applying to x the linear transformation given by the matrix A is the multiplication of this vector by the number λ. The number λ itself is called an eigenvalue of the matrix A.

Substituting x'j = λxj into formulas (9.3), we obtain a system of equations for determining the coordinates of the eigenvector:

a11 x1 + a12 x2 + a13 x3 = λx1,
a21 x1 + a22 x2 + a23 x3 = λx2,
a31 x1 + a32 x2 + a33 x3 = λx3.

From here

(a11 − λ)x1 + a12 x2 + a13 x3 = 0,
a21 x1 + (a22 − λ)x2 + a23 x3 = 0,   (9.5)
a31 x1 + a32 x2 + (a33 − λ)x3 = 0.

This linear homogeneous system has a nontrivial solution only if its determinant equals 0 (this follows from Cramer's rule). Writing this condition in the form

det(A − λE) = 0,

we get an equation for determining the eigenvalues λ, called the characteristic equation. Briefly, it can be represented as follows:

|A − λE| = 0,   (9.6)

since its left side is the determinant of the matrix A − λE. The polynomial in λ, |A − λE|, is called the characteristic polynomial of the matrix A.

Properties of the characteristic polynomial:

1) The characteristic polynomial of a linear transformation does not depend on the choice of the basis. Proof: by (9.4), A' = C⁻¹AC, hence |A' − λE| = |C⁻¹AC − λC⁻¹EC| = |C⁻¹(A − λE)C| = |C⁻¹|·|A − λE|·|C| = |A − λE|. Thus |A − λE| does not depend on the choice of basis and does not change upon transition to a new basis.

2) If the matrix A of a linear transformation is symmetric (i.e. aij = aji), then all the roots of the characteristic equation (9.6) are real numbers.
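Both notions can be illustrated numerically; a minimal sketch, assuming an arbitrary symmetric 2×2 matrix:

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # symmetric: a_ij = a_ji

coeffs = np.poly(S)          # coefficients of the characteristic polynomial det(lambda*E - S)
print(coeffs)                # [ 1. -4.  3.]  i.e. lambda^2 - 4*lambda + 3
print(np.roots(coeffs))      # [3. 1.] - all roots are real, as property 2 states
```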

Properties of eigenvalues and eigenvectors:

1) If we choose a basis of eigenvectors x1, x2, x3 corresponding to the eigenvalues λ1, λ2, λ3 of the matrix A, then in this basis the linear transformation A has the diagonal matrix

Λ = | λ1  0   0  |
    | 0   λ2  0  |   (9.7)
    | 0   0   λ3 |

The proof of this property follows from the definition of eigenvectors.

2) If the eigenvalues of the transformation A are all different, then the eigenvectors corresponding to them are linearly independent.

3) If the characteristic polynomial of the matrix A has three different roots, then in some basis the matrix A has diagonal form.

Example.

Let us find the eigenvalues and eigenvectors of the matrix

A = | 1 1 3 |
    | 1 5 1 |
    | 3 1 1 |

Compose the characteristic equation: (1 − λ)(5 − λ)(1 − λ) + 6 − 9(5 − λ) − (1 − λ) − (1 − λ) = 0, i.e. λ³ − 7λ² + 36 = 0, whence λ1 = −2, λ2 = 3, λ3 = 6.

Let us find the coordinates of the eigenvectors corresponding to each found value λ. From (9.5) it follows that if x(1) = {x1, x2, x3} is the eigenvector corresponding to λ1 = −2, then

3x1 + x2 + 3x3 = 0,
x1 + 7x2 + x3 = 0,
3x1 + x2 + 3x3 = 0.

This is a consistent but underdetermined system. Its solution can be written as x(1) = {a, 0, −a}, where a is any number. In particular, if we require that |x(1)| = 1, then x(1) = {1/√2, 0, −1/√2}.

Substituting λ2 = 3 into system (9.5), we get a system for determining the coordinates of the second eigenvector x(2) = {y1, y2, y3}:

−2y1 + y2 + 3y3 = 0,
y1 + 2y2 + y3 = 0,
3y1 + y2 − 2y3 = 0,

whence x(2) = {b, −b, b}, where b is any number. In the same way one finds the third eigenvector x(3) = {c, 2c, c}, corresponding to λ3 = 6.
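The whole computation can be cross-checked numerically; a sketch, assuming the matrix written above:

```python
import numpy as np

A = np.array([[1.0, 1.0, 3.0],
              [1.0, 5.0, 1.0],
              [3.0, 1.0, 1.0]])

lam, V = np.linalg.eig(A)   # columns of V are unit eigenvectors
print(np.sort(lam))         # [-2.  3.  6.]

# Check A x = lambda x for each eigenpair:
for i in range(3):
    print(np.allclose(A @ V[:, i], lam[i] * V[:, i]))   # True, True, True
```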

Finding the eigenvalues and eigenvectors of matrices is one of the most complex problems of linear algebra, arising in the modeling and analysis of the functioning of dynamic systems and in statistical modeling. For example, the eigenvectors of the covariance matrix of a random vector determine the directions of the principal axes of the dispersion hyperellipsoid of the values of this vector, and the eigenvalues determine the expansion or contraction of the hyperellipsoid along its principal axes. In mechanics, the eigenvectors and eigenvalues of the inertia tensor characterize the directions of the principal axes and the principal moments of inertia of a rigid body.
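To make the covariance remark concrete, here is a sketch on synthetic data (all names and numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2-D data: a cloud stretched strongly along one coordinate direction
X = rng.normal(size=(1000, 2)) @ np.array([[3.0, 0.0],
                                           [0.0, 0.5]])

S = np.cov(X, rowvar=False)    # sample covariance matrix (symmetric)
lam, V = np.linalg.eigh(S)     # eigh is intended for symmetric matrices

# Columns of V are the directions of the principal axes of the dispersion
# ellipse; lam gives the variance (spread) of the data along each axis
print(lam)    # approximately [0.25, 9.0]
print(V)
```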

One distinguishes the complete (algebraic or, otherwise, matrix) eigenvalue problem, which consists in finding all eigenpairs of a matrix, and partial eigenvalue problems, which consist, as a rule, in finding one or several eigenvalues and, possibly, the corresponding eigenvectors. Most often, in the latter case, one looks for the eigenvalues largest and smallest in modulus; knowledge of such characteristics of a matrix allows, for example, drawing conclusions about the convergence of certain iterative methods, optimizing their parameters, etc.

The eigenvalue problem can be formulated as follows: for which nonzero vectors x and numbers λ does the linear transformation of a vector by the matrix A not change the direction of this vector in space, reducing only to a "stretching" of this vector by a factor of λ? The answer to this question lies in the nontrivial solutions of the equation

(A − λE)x = 0,   (1.2)

where E is the identity matrix. Theoretically, this problem is easily solved: one needs to find the roots of the so-called characteristic equation

det(A − λE) = 0   (1.3)

and, substituting them in turn into (1.2), obtain the eigenvectors from the corresponding degenerate systems.

The practical implementation of this approach involves a number of difficulties, which grow with the dimension of the problem being solved. These difficulties stem from expanding the determinant and computing the roots of the resulting polynomial of degree n, as well as from finding linearly independent solutions of degenerate systems of linear algebraic equations. For this reason, such a direct approach to the algebraic eigenvalue problem is usually used only for very small matrices (n = 2, 3). Already for n > 4, special numerical methods for solving such problems come to the fore; one of them, based on the matrix similarity transformation, will be discussed further. Recall that the matrices A and C⁻¹AC are called similar, where C is an arbitrary nonsingular matrix.
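As an illustration of a partial eigenvalue problem, here is a minimal power-iteration sketch for the eigenvalue largest in modulus (a textbook scheme without convergence safeguards, applied to an illustrative symmetric matrix):

```python
import numpy as np

def power_iteration(A, num_iter=500):
    """Return an approximation of the dominant eigenpair of A
    by repeated multiplication and normalization."""
    x = np.ones(A.shape[0])
    for _ in range(num_iter):
        y = A @ x
        x = y / np.linalg.norm(y)
    lam = x @ A @ x          # Rayleigh quotient (x is already unit-length)
    return lam, x

A = np.array([[1.0, 1.0, 3.0],
              [1.0, 5.0, 1.0],
              [3.0, 1.0, 1.0]])
lam, x = power_iteration(A)
print(lam)   # ~6.0, the eigenvalue largest in modulus
```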



Let us briefly list the main properties of eigenvalues and eigenvectors:

1. If (λ, x) is an eigenpair of the matrix A and α ≠ 0 is some number, then (λ, αx) is also an eigenpair of A. This means that each eigenvalue corresponds to an entire set of eigenvectors that differ only by a scalar factor.

2. Let (λ, x) be an eigenpair of the matrix A + cE, where c is some real number. Then (λ − c, x) is an eigenpair of the matrix A. Thus, adding the diagonal matrix cE to the matrix A does not change its eigenvectors and shifts the spectrum of the original matrix by the number c (to the left when c < 0). The spectrum of a matrix is the set of all its eigenvalues.

3. If (λ, x) is an eigenpair of the invertible matrix A, then (1/λ, x) is an eigenpair of the matrix A⁻¹.

4. The eigenvalues of diagonal and triangular matrices are their diagonal elements, because the characteristic equation (1.3), taking (1.1) into account, for such matrices can be written as

(a11 − λ)(a22 − λ) ··· (ann − λ) = 0.

The last equality shows that diagonal and triangular real matrices have only real eigenvalues (exactly n of them, taking their possible multiplicity into account). Real eigenvalues are also inherent in the class of symmetric matrices, which is very important in applications and includes covariance matrices and inertia tensors.

5. If (λ, x) is an eigenpair of the matrix C⁻¹AC, then (λ, Cx) is an eigenpair of the matrix A. Thus, the similarity transformation keeps the spectrum of any matrix unchanged.

6. Let A be a matrix of simple structure of dimension n, and let the matrices Λ = diag(λ1, …, λn) and X = (x1 | x2 | … | xn) be formed from its eigenvalues and eigenvectors, respectively. Then the equality AX = XΛ holds, i.e. Λ = X⁻¹AX. Since for a diagonal matrix formed from eigenvalues the eigenvectors can be taken to be the unit vectors of the original basis (Λei = λi ei, i = 1, …, n), then, using property 5 and taking C = X (i.e. Λ = X⁻¹AX), property 6 can be formulated differently: if (λi, ei) is an eigenpair of the matrix Λ = X⁻¹AX, then (λi, Xei) = (λi, xi) is an eigenpair of the matrix A.
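Properties 5 and 6 are easy to verify numerically; a sketch with an illustrative matrix of simple structure:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])       # distinct eigenvalues 5 and 2: simple structure

lam, X = np.linalg.eig(A)        # eigenvalues and the eigenvector matrix X
Lam = np.linalg.inv(X) @ A @ X   # property 6: X^-1 A X is diagonal
print(np.round(Lam, 10))         # diag(lam) up to rounding

# Property 5: an arbitrary similarity transformation keeps the spectrum
C = np.array([[1.0, 2.0],
              [0.0, 1.0]])       # any nonsingular matrix
print(np.linalg.eigvals(np.linalg.inv(C) @ A @ C))   # the same eigenvalues
```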

Eigenvalues (numbers) and eigenvectors.
Solution examples

From both equations the same relation between the coordinates follows.

Setting the free coordinate to a convenient value, we obtain the second eigenvector.

Let's recap the important points:

– the resulting system certainly has a general solution (its equations are linearly dependent);

– the "y" coordinate is selected in such a way that it is an integer, and the first, "x" coordinate is integer, positive and as small as possible;

– we check that the particular solution satisfies every equation of the system.

Answer: the eigenvectors found above.

The intermediate "checkpoints" were quite sufficient, so checking the final equalities is, in principle, superfluous.

In various sources, the coordinates of eigenvectors are often written not in columns but in rows, for example: (and, to be honest, I myself used to write them in rows). This option is acceptable, but in light of the topic of linear transformations it is technically more convenient to use column vectors.

Perhaps the solution seemed very long to you, but that's only because I commented on the first example in great detail.

Example 2

Find the eigenvalues and eigenvectors of the matrix

Try it yourself! An approximate sample of the final write-up of the task is at the end of the lesson.

Sometimes you need to perform an additional task, namely:

write the canonical decomposition of the matrix

What is it? If the eigenvectors of the matrix form a basis, then the matrix can be represented as

A = P·D·P⁻¹,

where P is a matrix composed of the coordinates of the eigenvectors and D is a diagonal matrix with the corresponding eigenvalues.

This matrix decomposition is called canonical or diagonal.
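By the way, the decomposition is easy to check numerically; a minimal sketch with an arbitrary illustrative matrix (not the one from the lesson):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

lam, P = np.linalg.eig(A)   # P: eigenvectors in columns, in the same order
D = np.diag(lam)            # diagonal matrix of the eigenvalues

# Canonical (diagonal) decomposition: A = P D P^-1
print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True
```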

Consider the matrix of the first example. Its eigenvectors are linearly independent (non-collinear) and form a basis. Let us compose the matrix P from their coordinates:

On the main diagonal of the matrix D the eigenvalues are located in due order, and the remaining elements are equal to zero.
Once again I emphasize the importance of the order: "two" corresponds to the 1st vector and is therefore located in the 1st column, "three" to the 2nd vector.

Using the usual algorithm for finding the inverse matrix, or the Gauss-Jordan method, we find P⁻¹. No, that's not a typo! Before you is an event as rare as a solar eclipse: the inverse coincided with the original matrix.

It remains to write the canonical decomposition of the matrix:
The system can be solved using elementary transformations, and in the following examples we will resort to this method. But here the "school" method works much faster. From the 3rd equation we express one variable and substitute it into the second equation:

Since the first coordinate is zero, we obtain a system, from each equation of which the same relation between the remaining coordinates follows.

And again, pay attention to the mandatory presence of a linear dependence. If only the trivial solution is obtained, then either the eigenvalue was found incorrectly, or the system was composed/solved with an error.

A suitable value of the free variable gives compact coordinates.

Eigenvector:

And once again, we check that the found solution satisfies every equation of the system. In the following paragraphs and in subsequent tasks, I recommend that this wish be taken as a mandatory rule.

2) For the next eigenvalue, following the same principle, we obtain the following system:

From the 2nd equation of the system we express a variable and substitute it into the third equation:

Since the "z" coordinate is equal to zero, we obtain a system, from each equation of which a linear dependence follows.

Let us set a value for the free coordinate; then:

We check that the solution satisfies every equation of the system.

Thus, the eigenvector: .

3) And, finally, the system corresponding to the third eigenvalue:

The second equation looks the simplest, so we express a variable from it and substitute it into the 1st and 3rd equations:

Everything is fine: a linear dependence was revealed, which we substitute into the expression:

As a result, "x" and "y" were expressed through "z". In practice it is not necessary to achieve exactly such relations; in some cases it is more convenient to express both through a single variable, or one variable through another. Or even in a "chain": for example, "x" through "y" and "y" through "z".

Let us set a value for "z"; then:

We check that the found solution satisfies each equation of the system and write the third eigenvector

Answer: eigenvectors:

Geometrically, these vectors define three different spatial directions ("there and back again"), along which the linear transformation takes nonzero vectors (eigenvectors) into vectors collinear to them.

If by the condition it were required to find the canonical decomposition, then it is possible here, because different eigenvalues correspond to different linearly independent eigenvectors. We compose the matrix P from their coordinates and the diagonal matrix D from the corresponding eigenvalues, and find the inverse matrix P⁻¹.

If, according to the condition, it is necessary to write the matrix of the linear transformation in the basis of eigenvectors, then we give the answer in the form D = P⁻¹AP. There is a difference, and a significant one: this matrix is the matrix "D".

A problem with simpler calculations for an independent solution:

Example 5

Find the eigenvectors of the linear transformation given by the matrix

When finding the eigenvalues, try not to bring the case to a polynomial of the 3rd degree. In addition, your solutions of the systems may differ from my solutions (there is no unambiguity here), and the vectors you find may differ from the sample vectors up to proportionality of their respective coordinates. It is more aesthetically pleasing to present the answer with small integer coordinates, but it is okay if you stop at another option. However, there are reasonable limits to everything: an overly cumbersome version no longer looks good.

An approximate final sample of the assignment at the end of the lesson.

How to solve the problem in case of multiple eigenvalues?

The general algorithm remains the same, but it has its own peculiarities, and it is advisable to keep some parts of the solution in a more rigorous academic style:

Example 6

Find the eigenvalues and eigenvectors

Solution

Of course, let us expand the determinant along the convenient first column:

And, after factoring the quadratic trinomial:

As a result, the eigenvalues are obtained, two of which are multiple (coincide).

Let's find the eigenvectors:

1) We deal with the lone root according to a "simplified" scheme:

From the last two equations the required equality is clearly visible, which, obviously, should be substituted into the 1st equation of the system:

There is no better combination:
Eigenvector:

2-3) Now we deal with the pair of multiple eigenvalues. In this case, either two eigenvectors or one may result. Regardless of the multiplicity of the roots, we substitute the value into the determinant, which brings us the following homogeneous system of linear equations:

Eigenvectors are exactly the vectors of the fundamental system of solutions.

Actually, throughout the lesson we have been doing nothing but finding the vectors of the fundamental system; it is just that, until now, this term was not particularly needed. By the way, those nimble students who slipped past the topic of homogeneous equations will have to work through it now.


The only action was to remove extra lines. The result is a "one by three" matrix with a formal "step" in the middle.
– basic variable, – free variables. There are two free variables, so there are also two vectors of the fundamental system.

Let's express the basic variable in terms of free variables: . The zero factor in front of the “x” allows it to take on absolutely any values ​​(which is also clearly visible from the system of equations).

In the context of this problem, it is more convenient to write the general solution not in a row, but in a column:

The pair corresponds to an eigenvector:
The pair corresponds to an eigenvector:

Note: sophisticated readers can pick these vectors up orally, just by analyzing the system, but some knowledge is needed here: there are three variables and the rank of the system matrix is one, which means the fundamental system of solutions consists of 3 − 1 = 2 vectors. However, the found vectors are perfectly visible even without this knowledge, on a purely intuitive level. In that case the third vector will be written even "more beautifully". However, I warn you that in another example a simple selection may turn out to be impossible, which is why the reservation is intended for experienced people. Besides, why not take some other vector as the third one? After all, its coordinates also satisfy each equation of the system, and the vectors are linearly independent. This option is, in principle, suitable, but "crooked", since the "other" vector is a linear combination of the vectors of the fundamental system.

Answer: eigenvalues: , eigenvectors:
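For multiple eigenvalues, the fundamental system of solutions can be obtained directly as a basis of the null space of A − λE; a sketch with an illustrative matrix that has a double eigenvalue (scipy is assumed available):

```python
import numpy as np
from scipy.linalg import null_space

# Illustrative matrix with the double eigenvalue 1 and the simple eigenvalue 4
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])

lam = 1.0
# Eigenvectors for lam are exactly the fundamental system of solutions
# of (A - lam*E)x = 0, i.e. a basis of the null space:
F = null_space(A - lam * np.eye(3))
print(F.shape[1])                    # 2: two independent eigenvectors for the double root
print(np.allclose(A @ F, lam * F))   # True
```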

A similar example for a do-it-yourself solution:

Example 7

Find the eigenvalues and eigenvectors

An approximate sample of the final write-up at the end of the lesson.

It should be noted that in both the 6th and 7th examples a triple of linearly independent eigenvectors is obtained, and therefore the original matrix can be represented in the canonical decomposition A = P·D·P⁻¹. But such luck does not happen in all cases:

Example 8


Solution: compose and solve the characteristic equation:

We expand the determinant by the first column:

We carry out further simplifications according to the considered method, avoiding a polynomial of the 3rd degree:

These are the eigenvalues.

Let's find the eigenvectors:

1) There are no difficulties with the simple root:

Do not be surprised that other variable designations are also in use besides the usual set; there is no difference here.

From the 3rd equation we express a variable and substitute it into the 1st and 2nd equations:

From both equations it follows that:

Let us set a value for the free variable; then:

2-3) For the multiple eigenvalue, we get the system.

Let us write down the matrix of the system and, using elementary transformations, bring it to row echelon form:

Diagonal matrices have the simplest structure. The question arises whether it is possible to find a basis in which the matrix of a linear operator has diagonal form. Such a basis exists.
Let a linear space Rn and a linear operator A acting in it be given; in this case the operator A takes Rn into itself, that is, A: Rn → Rn.

Definition. A nonzero vector x is called an eigenvector of the operator A if the operator A transforms x into a vector collinear to it, i.e. Ax = λx. The number λ is called the eigenvalue of the operator A corresponding to the eigenvector x.
We note some properties of eigenvalues and eigenvectors.
1. Any nonzero linear combination of eigenvectors of the operator A corresponding to the same eigenvalue λ is an eigenvector with the same eigenvalue.
2. Eigenvectors of the operator A with pairwise distinct eigenvalues λ1, λ2, …, λm are linearly independent.
3. If the eigenvalues coincide, λ1 = λ2 = … = λm = λ, then the eigenvalue λ corresponds to no more than m linearly independent eigenvectors.

So, if there are n linearly independent eigenvectors corresponding to different eigenvalues λ1, λ2, …, λn, then they can be taken as a basis of the space Rn. Let us find the form of the matrix of the linear operator A in the basis of its eigenvectors, for which we act with the operator A on the basis vectors: Aei = λi ei.
Thus, the matrix of the linear operator A in the basis of its eigenvectors has diagonal form, and the eigenvalues of the operator A stand on the diagonal.
Is there another basis in which the matrix has a diagonal form? The answer to this question is given by the following theorem.

Theorem. The matrix of a linear operator A in the basis ei (i = 1..n) has diagonal form if and only if all vectors of the basis are eigenvectors of the operator A.

Rule for finding eigenvalues ​​and eigenvectors

Let the vector x = x1 e1 + x2 e2 + … + xn en, where x1, x2, …, xn are the coordinates of the vector x relative to the basis e1, e2, …, en, be an eigenvector of the linear operator A corresponding to the eigenvalue λ, i.e. Ax = λx. This relation can be written in matrix form

A·X = λ·X,   (*)

where X is the column of coordinates of the vector x.


Equation (*) can be considered as an equation for finding X with X ≠ 0, that is, we are interested in nontrivial solutions, since an eigenvector cannot be zero. It is known that nontrivial solutions of a homogeneous system of linear equations exist if and only if det(A − λE) = 0. Thus, for λ to be an eigenvalue of the operator A it is necessary and sufficient that det(A − λE) = 0.
If equation (*) is written in detail in coordinate form, then we get a system of linear homogeneous equations:

(a11 − λ)x1 + a12 x2 + … + a1n xn = 0,
a21 x1 + (a22 − λ)x2 + … + a2n xn = 0,   (1)
. . .
an1 x1 + an2 x2 + … + (ann − λ)xn = 0,

where aij are the elements of the matrix of the linear operator.

System (1) has a nonzero solution if its determinant D is equal to zero:

D = det(A − λE) = 0.

We have obtained an equation for finding the eigenvalues.
This equation is called the characteristic equation, and its left side is called the characteristic polynomial of the matrix (operator) A. If the characteristic polynomial has no real roots, then the matrix A has no eigenvectors and cannot be reduced to a diagonal form.
Let λ1, λ2, …, λn be the real roots of the characteristic equation; there may be multiple roots among them. Substituting these values in turn into system (1), we find the eigenvectors.
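This rule can be followed literally in a computer algebra system; a sketch with an illustrative 2×2 matrix (sympy is assumed available):

```python
import sympy as sp

A = sp.Matrix([[4, 1],
               [2, 3]])            # illustrative matrix
lam = sp.symbols('lambda')

# Characteristic equation det(A - lam*E) = 0
char_eq = (A - lam * sp.eye(2)).det()
roots = sp.solve(sp.Eq(char_eq, 0), lam)
print(roots)                       # [2, 5]

# Substituting each root into (A - lam*E)x = 0 and taking the null space
# gives the eigenvectors:
for r in roots:
    print(r, (A - r * sp.eye(2)).nullspace())
```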

Example 12. The linear operator A acts in R3 according to a given law, where x1, x2, x3 are the coordinates of the vector in the basis e1, e2, e3. Find the eigenvalues and eigenvectors of this operator.
Solution. We build the matrix of this operator:
We compose a system for determining the coordinates of eigenvectors:

We compose the characteristic equation and solve it:

.
λ1,2 = −1, λ3 = 3.
Substituting λ = −1 into the system, we have:
Since the rank of the matrix of the system is two, there are two dependent variables and one free variable.
Let x1 be the free unknown. We solve this system in any way and find its general solution; the fundamental system of solutions consists of one solution, since n − r = 3 − 2 = 1.
The set of eigenvectors corresponding to the eigenvalue λ = −1 consists of the nonzero multiples of this solution, where x1 is any number other than zero. Let us choose one vector from this set, for example, by setting x1 = 1.
Arguing similarly, we find the eigenvector corresponding to the eigenvalue λ = 3.
In the space R3 a basis consists of three linearly independent vectors, but we have obtained only two linearly independent eigenvectors, from which a basis in R3 cannot be formed. Consequently, the matrix A of the linear operator cannot be reduced to diagonal form.
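The same effect is easy to detect numerically by comparing the algebraic multiplicity of a root with the dimension of the null space of A − λE; a sketch with a hypothetical defective matrix (not the operator of Example 12):

```python
import numpy as np
from scipy.linalg import null_space

# A Jordan block: the eigenvalue 3 has algebraic multiplicity 2,
# but only one linearly independent eigenvector
A = np.array([[3.0, 1.0],
              [0.0, 3.0]])

lam = 3.0
F = null_space(A - lam * np.eye(2))
print(F.shape[1])   # 1 < 2: too few eigenvectors, A is not diagonalizable
```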

Example 13. Given the matrix

A = |  2  0  3 |
    | 10 -3 -6 |
    | -1  0 -2 |

1. Prove that the vector x = (1, 8, −1) is an eigenvector of the matrix A. Find the eigenvalue corresponding to this eigenvector.
2. Find a basis in which the matrix A has diagonal form.
Solution.
1. If Ax = λx, then x is an eigenvector:

A·(1, 8, −1) = (2 − 3, 10 − 24 + 6, −1 + 2) = (−1, −8, 1) = −1·(1, 8, −1).

The vector (1, 8, −1) is an eigenvector. The eigenvalue is λ = −1.
The matrix has diagonal form in a basis consisting of eigenvectors. One of them is already known. Let us find the rest.
We look for the eigenvectors from the system (A − λE)x = 0.

Characteristic equation (expanding the determinant along the second column):
(−3 − λ)[(2 − λ)(−2 − λ) + 3] = 0;   −(3 + λ)(λ² − 1) = 0;
λ1 = −3, λ2 = 1, λ3 = −1.
Find the eigenvector corresponding to the eigenvalue λ = −3. The system (A + 3E)x = 0 reads:

5x1 + 3x3 = 0,
10x1 − 6x3 = 0,
−x1 + x3 = 0.

The rank of the matrix of this system is two and is equal to the number of unknowns it actually involves (x1 and x3), therefore this system has only the zero solution x1 = x3 = 0. Here x2 can be anything other than zero, for example, x2 = 1. Thus, the vector (0, 1, 0) is an eigenvector corresponding to λ = −3. Let us check:

A·(0, 1, 0) = (0, −3, 0) = −3·(0, 1, 0).
If λ = 1, then we get the system (A − E)x = 0:

x1 + 3x3 = 0,
10x1 − 4x2 − 6x3 = 0,
−x1 − 3x3 = 0.

The rank of the matrix is two, so we cross out the last equation.
Let x3 be the free unknown. Then x1 = −3x3 and 4x2 = 10x1 − 6x3 = −30x3 − 6x3, so x2 = −9x3.
Setting x3 = 1, we have (−3, −9, 1), an eigenvector corresponding to the eigenvalue λ = 1. Check:

A·(−3, −9, 1) = (−6 + 3, −30 + 27 − 6, 3 − 2) = (−3, −9, 1) = 1·(−3, −9, 1).

Since the eigenvalues are real and distinct, the vectors corresponding to them are linearly independent, so they can be taken as a basis in R3. Thus, in the basis (1, 8, −1), (0, 1, 0), (−3, −9, 1) the matrix A has the form:

D = | -1  0  0 |
    |  0 -3  0 |
    |  0  0  1 |
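The result can be verified numerically; a sketch assuming the matrix and the eigenvectors found above:

```python
import numpy as np

A = np.array([[ 2.0,  0.0,  3.0],
              [10.0, -3.0, -6.0],
              [-1.0,  0.0, -2.0]])   # the matrix of Example 13

P = np.array([[ 1.0, 0.0, -3.0],
              [ 8.0, 1.0, -9.0],
              [-1.0, 0.0,  1.0]])    # eigenvectors in columns

D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))               # diag(-1, -3, 1)
```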
Not every matrix of a linear operator A: Rn → Rn can be reduced to diagonal form, since some linear operators have fewer than n linearly independent eigenvectors. However, if the matrix is symmetric, then exactly m linearly independent eigenvectors correspond to a root of the characteristic equation of multiplicity m.

Definition. A symmetric matrix is a square matrix in which the elements symmetric with respect to the main diagonal are equal, that is, aij = aji.
Remarks. 1. All eigenvalues ​​of a symmetric matrix are real.
2. The eigenvectors of a symmetric matrix corresponding to pairwise different eigenvalues ​​are orthogonal.
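Both remarks are easy to observe numerically; a minimal sketch using the symmetric matrix from the earlier illustration:

```python
import numpy as np

S = np.array([[1.0, 1.0, 3.0],
              [1.0, 5.0, 1.0],
              [3.0, 1.0, 1.0]])   # symmetric matrix

lam, Q = np.linalg.eigh(S)       # eigh is designed for symmetric matrices
print(lam)                        # all real: [-2.  3.  6.]
print(np.round(Q.T @ Q, 10))      # identity matrix: the eigenvectors are orthonormal
```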
As one of the numerous applications of the apparatus studied, we consider the problem of determining the form of a second-order curve.