
Eigenvalues and eigenvectors of a square matrix. §7

A vector X ≠ 0 is called an eigenvector of a linear operator with matrix A if there exists a number λ such that AX = λX.

In this case, the number λ is called the eigenvalue of the operator (of the matrix A) corresponding to the vector X.

In other words, an eigenvector is a vector that the linear operator takes to a collinear vector, i.e. simply multiplies by some number. Vectors that are not eigenvectors are transformed in a more complicated way.

Let us write down the definition of an eigenvector as a system of equations (A = (aij) is an n×n matrix):

a11x1 + a12x2 + … + a1nxn = λx1
a21x1 + a22x2 + … + a2nxn = λx2
…
an1x1 + an2x2 + … + annxn = λxn

Let us move all the terms to the left side:

(a11 - λ)x1 + a12x2 + … + a1nxn = 0
a21x1 + (a22 - λ)x2 + … + a2nxn = 0
…
an1x1 + an2x2 + … + (ann - λ)xn = 0

The latter system can be written in matrix form as follows:

(A - λE)X = O

The resulting system always has the zero solution X = O. Systems in which all free terms are equal to zero are called homogeneous. If the matrix of such a system is square and its determinant is not equal to zero, then by Cramer's formulas we always get a unique solution, the zero one. It can be proven that the system has non-zero solutions if and only if the determinant of its matrix equals zero, i.e.

|A - E| = = 0

This equation with the unknown λ is called the characteristic equation of the matrix A (of the linear operator); |A - λE| is its characteristic polynomial.

It can be proven that the characteristic polynomial of a linear operator does not depend on the choice of basis.

For example, let us find the eigenvalues and eigenvectors of the linear operator defined by the matrix

A = | 1  4 |
    | 9  1 |

To do this, we compose the characteristic equation:

|A - λE| = | 1 - λ    4   |
           |   9    1 - λ | = (1 - λ)² - 36 = 1 - 2λ + λ² - 36 = λ² - 2λ - 35 = 0;

D = 4 + 140 = 144; the eigenvalues are λ1 = (2 - 12)/2 = -5 and λ2 = (2 + 12)/2 = 7.

To find eigenvectors, we solve two systems of equations

(A + 5E)X = O

(A - 7E)X = O

For the first of them, the augmented matrix takes the form

| 6  4 | 0 |
| 9  6 | 0 |

which reduces to the single equation x1 + (2/3)x2 = 0, whence x2 = c, x1 = -(2/3)c, i.e. X(1) = (-(2/3)c; c).

For the second of them, the augmented matrix takes the form

| -6   4 | 0 |
|  9  -6 | 0 |

which reduces to x1 - (2/3)x2 = 0, whence x2 = c1, x1 = (2/3)c1, i.e. X(2) = ((2/3)c1; c1).

Thus, the eigenvectors of this linear operator are all vectors of the form (-(2/3)c; c) with the eigenvalue -5 and all vectors of the form ((2/3)c1; c1) with the eigenvalue 7 (c, c1 ≠ 0).
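For readers who want to cross-check by machine: a minimal sketch in Python with NumPy, assuming the matrix A reconstructed in this example (np.linalg.eig returns unit-length eigenvectors, which are proportional to the ones found by hand).

import numpy as np

# Assumed matrix of the worked example: characteristic polynomial (1 - lambda)^2 - 36.
A = np.array([[1.0, 4.0],
              [9.0, 1.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                     # 7 and -5, in some order

# Verify the defining relation AX = lambda*X for each eigenpair (columns of `eigenvectors`).
for lam, x in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ x, lam * x)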

It can be proven that the matrix of the operator A in a basis consisting of its eigenvectors is diagonal and has the form

A* = | λ1  0  …  0  |
     | 0   λ2 …  0  |
     | …   …  …  …  |
     | 0   0  …  λn |

where λi are the eigenvalues of this matrix.

The converse is also true: if matrix A in some basis is diagonal, then all vectors of this basis will be eigenvectors of this matrix.

It can also be proven that if a linear operator has n pairwise distinct eigenvalues, then the corresponding eigenvectors are linearly independent, and the matrix of this operator in the corresponding basis has a diagonal form.

Let us illustrate this with the previous example. Take arbitrary non-zero values c and c1 such that the vectors X(1) and X(2) are linearly independent, i.e. form a basis. For example, let c = c1 = 3; then X(1) = (-2; 3), X(2) = (2; 3).

Let us verify the linear independence of these vectors by computing the determinant of the matrix C whose columns are X(1) and X(2):

|C| = | -2  2 |
      |  3  3 | = -6 - 6 = -12 ≠ 0.

In this new basis the matrix A takes the form A* = diag(-5, 7).

To verify this, we use the formula A* = C⁻¹AC. First we find C⁻¹:

C⁻¹ = (1/(-12)) |  3  -2 | = | -1/4  1/6 |
                | -3  -2 |   |  1/4  1/6 |

and indeed A* = C⁻¹AC = diag(-5, 7).
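The change of basis can also be checked numerically; a sketch under the same assumed matrix A and basis vectors X(1) = (-2; 3), X(2) = (2; 3):

import numpy as np

A = np.array([[1.0, 4.0],
              [9.0, 1.0]])
C = np.array([[-2.0, 2.0],             # columns are X(1) and X(2)
              [ 3.0, 3.0]])

A_star = np.linalg.inv(C) @ A @ C      # the formula A* = C^{-1} A C
print(np.round(A_star, 10))            # the diagonal matrix diag(-5, 7)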

Quadratic forms

A quadratic form f(x1, x2, …, xn) of n variables is a sum each term of which is either the square of one of the variables or the product of two different variables, taken with a certain coefficient:

f(x1, x2, …, xn) = Σi Σj aij xi xj    (aij = aji).

The matrix A = (aij) composed of these coefficients is called the matrix of the quadratic form. It is always a symmetric matrix (i.e. a matrix symmetric about the main diagonal, aij = aji).

In matrix notation the quadratic form is f(X) = XᵀAX, where X = (x1, x2, …, xn)ᵀ is the column of variables.

Indeed, multiplying out XᵀAX term by term reproduces the double sum above.

For example, let us write the quadratic form f(x1, x2) = 2x1² + 4x1x2 - 3x2² in matrix form.

To do this, we find the matrix of the quadratic form: its diagonal elements are equal to the coefficients of the squared variables, and the remaining elements are equal to the halves of the corresponding coefficients of the quadratic form. Therefore

A = | 2   2 |
    | 2  -3 |

Let the column of variables X be obtained from the column Y by a non-degenerate linear transformation, i.e. X = CY, where C is a non-singular matrix of order n. Then the quadratic form f(X) = XᵀAX = (CY)ᵀA(CY) = (YᵀCᵀ)A(CY) = Yᵀ(CᵀAC)Y.

Thus, under a non-degenerate linear transformation C the matrix of the quadratic form becomes A* = CᵀAC.
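As a small illustration of the rule A* = CᵀAC, a sketch: the matrix A below is the one built above for f(x1, x2) = 2x1² + 4x1x2 - 3x2², while C is an arbitrary non-singular matrix invented here for the demonstration (not taken from the source).

import numpy as np

A = np.array([[2.0,  2.0],
              [2.0, -3.0]])            # matrix of f(x1, x2) = 2x1^2 + 4x1x2 - 3x2^2
C = np.array([[1.0, 1.0],
              [0.0, 1.0]])             # det C = 1 != 0, so X = CY is non-degenerate

A_star = C.T @ A @ C                   # matrix of the form in the new variables Y
assert np.allclose(A_star, A_star.T)   # the transformed matrix stays symmetric
print(A_star)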

For example, let us find the quadratic form f(y1, y2) obtained from the quadratic form f(x1, x2) = 2x1² + 4x1x2 - 3x2² by a non-degenerate linear transformation X = CY.

A quadratic form is called canonical (is said to have canonical form) if all its coefficients aij = 0 for i ≠ j, i.e.
f(x1, x2, …, xn) = a11x1² + a22x2² + … + annxn².

Its matrix is diagonal.

Theorem (we omit the proof here). Any quadratic form can be reduced to canonical form by a non-degenerate linear transformation.

For example, let us reduce the quadratic form
f(x1, x2, x3) = 2x1² + 4x1x2 - 3x2² - x2x3
to canonical form.

To do this, we first complete the square in the variable x1:

f(x1, x2, x3) = 2(x1² + 2x1x2 + x2²) - 2x2² - 3x2² - x2x3 = 2(x1 + x2)² - 5x2² - x2x3.

Now we complete the square in the variable x2:

f(x1, x2, x3) = 2(x1 + x2)² - 5(x2² + 2·x2·(1/10)x3 + (1/100)x3²) + (5/100)x3² =
= 2(x1 + x2)² - 5(x2 + (1/10)x3)² + (1/20)x3².

Then the non-degenerate linear transformation y1 = x1 + x2, y2 = x2 + (1/10)x3 and y3 = x3 brings this quadratic form to the canonical form f(y1, y2, y3) = 2y1² - 5y2² + (1/20)y3².
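The completed-square identity is easy to verify symbolically; a minimal sketch with SymPy (expanding the difference must give zero):

import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = 2*x1**2 + 4*x1*x2 - 3*x2**2 - x2*x3

canonical = (2*(x1 + x2)**2
             - 5*(x2 + sp.Rational(1, 10)*x3)**2
             + sp.Rational(1, 20)*x3**2)

print(sp.expand(f - canonical))        # 0, so the reduction is correct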

Note that the canonical form of a quadratic form is not determined uniquely (the same quadratic form can be reduced to canonical form in different ways). However, canonical forms obtained by different methods have a number of common properties. In particular, the number of terms with positive (negative) coefficients of a quadratic form does not depend on the method of reducing the form to canonical form (for example, in the example considered there will always be one negative and two positive coefficients). This property is called the law of inertia of quadratic forms.

Let us verify this by reducing the same quadratic form to canonical form in a different way. We start the transformation with the variable x2:

f(x1, x2, x3) = 2x1² + 4x1x2 - 3x2² - x2x3 = -3x2² - x2x3 + 4x1x2 + 2x1² =
= -3(x2² + 2·x2·((1/6)x3 - (2/3)x1) + ((1/6)x3 - (2/3)x1)²) + 3((1/6)x3 - (2/3)x1)² + 2x1² =
= -3(x2 + (1/6)x3 - (2/3)x1)² + 3((1/6)x3 - (2/3)x1)² + 2x1² = -3y1² + 3y2² + 2y3²,

where y1 = -(2/3)x1 + x2 + (1/6)x3, y2 = (1/6)x3 - (2/3)x1 and y3 = x1. Here there is one negative coefficient -3 at y1 and two positive coefficients 3 and 2 at y2 and y3 (while the other method gave the negative coefficient -5 at y2 and two positive ones: 2 at y1 and 1/20 at y3).
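The alternative reduction can be verified the same way, and the sign count (one negative, two positive coefficients) illustrates the law of inertia; a sketch:

import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = 2*x1**2 + 4*x1*x2 - 3*x2**2 - x2*x3

# Substitution from the second reduction above.
y1 = -sp.Rational(2, 3)*x1 + x2 + sp.Rational(1, 6)*x3
y2 =  sp.Rational(1, 6)*x3 - sp.Rational(2, 3)*x1
y3 = x1

print(sp.expand(f - (-3*y1**2 + 3*y2**2 + 2*y3**2)))   # 0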

It should also be noted that the rank of the matrix of a quadratic form, called the rank of the quadratic form, is equal to the number of non-zero coefficients of the canonical form and does not change under non-degenerate linear transformations.

The quadratic form f(X) is called positive (negative) definite if, for all values of the variables that are not simultaneously zero, it is positive, i.e. f(X) > 0 (respectively negative, i.e. f(X) < 0).

For example, the quadratic form f1(X) = x1² + x2² is positive definite, because it is a sum of squares, and the quadratic form f2(X) = -x1² + 2x1x2 - x2² is negative definite, because it can be represented as f2(X) = -(x1 - x2)².

In most practical situations it is somewhat more difficult to establish the sign-definiteness of a quadratic form, so one uses one of the following theorems (we state them without proof).

Theorem. A quadratic form is positive (negative) definite if and only if all the eigenvalues of its matrix are positive (negative).

Theorem (Sylvester's criterion). A quadratic form is positive definite if and only if all the leading minors of the matrix of this form are positive.

The leading (corner) minor of order k of an n-th order matrix A is the determinant composed of the first k rows and columns of the matrix A (k = 1, 2, …, n).

Note that for negative definite quadratic forms the signs of the leading minors alternate, and the first-order minor must be negative.
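Sylvester's criterion is straightforward to mechanize; a sketch of a helper (the function names leading_minors, is_positive_definite and is_negative_definite are invented here for illustration):

import numpy as np

def leading_minors(A):
    # Determinants of the upper-left k-by-k submatrices, k = 1..n.
    n = A.shape[0]
    return [np.linalg.det(A[:k, :k]) for k in range(1, n + 1)]

def is_positive_definite(A):
    # Sylvester: all leading minors are positive.
    return all(m > 0 for m in leading_minors(A))

def is_negative_definite(A):
    # Signs alternate, starting with a negative first-order minor.
    return all((m < 0) if k % 2 == 0 else (m > 0)
               for k, m in enumerate(leading_minors(A)))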

For example, let us examine the quadratic form f(x1, x2) = 2x1² + 4x1x2 + 3x2² for sign-definiteness.

Method 1. We construct the matrix of the quadratic form

A = | 2  2 |
    | 2  3 |

The characteristic equation takes the form (2 - λ)(3 - λ) - 4 = 6 - 2λ - 3λ + λ² - 4 = λ² - 5λ + 2 = 0; D = 25 - 8 = 17; λ1,2 = (5 ± √17)/2. Since √17 < 5, both eigenvalues are positive. Therefore, the quadratic form is positive definite.

Method 2. The first-order leading minor of the matrix A is Δ1 = a11 = 2 > 0. The second-order leading minor is Δ2 = 6 - 4 = 2 > 0. Therefore, by Sylvester's criterion, the quadratic form is positive definite.

Let us examine another quadratic form for sign-definiteness: f(x1, x2) = -2x1² + 4x1x2 - 3x2².

Method 1. We construct the matrix of the quadratic form

A = | -2   2 |
    |  2  -3 |

The characteristic equation takes the form (-2 - λ)(-3 - λ) - 4 = 6 + 2λ + 3λ + λ² - 4 = λ² + 5λ + 2 = 0; D = 25 - 8 = 17; λ1,2 = (-5 ± √17)/2. Since √17 < 5, both eigenvalues are negative. Therefore, the quadratic form is negative definite.

Method 2. The first-order leading minor of the matrix A is Δ1 = a11 = -2 < 0. The second-order leading minor is Δ2 = 6 - 4 = 2 > 0. Consequently, by Sylvester's criterion, the quadratic form is negative definite (the signs of the leading minors alternate, starting with a minus).

And as another example, let us examine the quadratic form f(x1, x2) = 2x1² + 4x1x2 - 3x2² for sign-definiteness.

Method 1. We construct the matrix of the quadratic form

A = | 2   2 |
    | 2  -3 |

The characteristic equation takes the form (2 - λ)(-3 - λ) - 4 = -6 - 2λ + 3λ + λ² - 4 = λ² + λ - 10 = 0; D = 1 + 40 = 41; λ1,2 = (-1 ± √41)/2.

One of these numbers is negative and the other is positive: the signs of the eigenvalues differ. Consequently, the quadratic form can be neither negative nor positive definite, i.e. this quadratic form is not sign-definite (it can take values of either sign).

Method 2. The first-order leading minor of the matrix A is Δ1 = a11 = 2 > 0. The second-order leading minor is Δ2 = -6 - 4 = -10 < 0. Consequently, by Sylvester's criterion, the quadratic form is not sign-definite (the signs of the leading minors differ, and the first of them is positive).
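All three examples can be checked in one pass, both by eigenvalues and by leading minors (a self-contained sketch; the expected output matches the conclusions above):

import numpy as np

forms = {
    '2x1^2 + 4x1x2 + 3x2^2':  np.array([[ 2.0, 2.0], [2.0,  3.0]]),
    '-2x1^2 + 4x1x2 - 3x2^2': np.array([[-2.0, 2.0], [2.0, -3.0]]),
    '2x1^2 + 4x1x2 - 3x2^2':  np.array([[ 2.0, 2.0], [2.0, -3.0]]),
}

for name, A in forms.items():
    eig = np.linalg.eigvalsh(A)                         # real, since A is symmetric
    minors = [np.linalg.det(A[:k, :k]) for k in (1, 2)]
    print(name, '| eigenvalues:', np.round(eig, 3), '| minors:', np.round(minors, 3))
# positive definite / negative definite / not sign-definite, respectively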

In the image we see a shear transformation applied to the Mona Lisa (La Gioconda). The blue vector changes direction, but the red one does not; therefore the red vector is an eigenvector of this transformation, while the blue one is not. Since the red vector is neither stretched nor compressed, its eigenvalue equals one. All vectors collinear with the red one are also eigenvectors. An eigenvector (English: eigenvector) of a square matrix (with eigenvalue (English: eigenvalue) λ) is a non-zero vector X for which the relation

AX = λX,    (*)

where λ is a certain scalar, that is, a real or complex number.

That is, the eigenvectors of the matrix A are the non-zero vectors that, under the linear transformation specified by the matrix A, do not change direction but may change length by the factor λ.

An n × n matrix has at most n distinct eigenvalues and eigenvectors corresponding to them.

Relation (*) also makes sense for a linear operator on a vector space V. If this space is finite-dimensional, the operator can be written as a matrix with respect to a specific basis of V.

Since eigenvectors and eigenvalues are defined without using coordinates, they do not depend on the choice of basis. Therefore, similar matrices have the same eigenvalues.
The key role in understanding the eigenvalues of matrices is played by the Cayley–Hamilton theorem. It follows from it that the eigenvalues of the matrix A, and only they, are the roots of the characteristic polynomial of the matrix A:

p(λ) = det(A - λE)

p(λ) is a polynomial of degree n; therefore, by the fundamental theorem of algebra, there exist exactly n complex eigenvalues, counted with their multiplicities.

So the matrix A has at most n distinct eigenvalues (but an infinite set of eigenvectors for each of them).

Let us write the characteristic polynomial in terms of its roots:

p(λ) = (-1)ⁿ(λ - λ1)^k1 (λ - λ2)^k2 … (λ - λm)^km

The multiplicity ki of a root λi of the characteristic polynomial of a matrix is called the algebraic multiplicity of the eigenvalue λi.

The set of all eigenvalues of a matrix or of a linear operator on a finite-dimensional vector space is called the spectrum of the matrix or operator. (This terminology is modified for infinite-dimensional vector spaces: in the general case the spectrum of an operator may contain scalars λ that are not eigenvalues.)

Due to the connection between the characteristic polynomial of a matrix and its eigenvalues, the latter are also called the characteristic numbers of the matrix.
For each eigenvalue λi we obtain its own system of equations:

(A - λiE)X = O

which has mi linearly independent solutions.

The set of all solutions of this system forms a linear subspace of dimension mi and is called the eigenspace (English: eigenspace) of the matrix corresponding to the eigenvalue λi.

The dimension mi of the eigenspace is called the geometric multiplicity of the corresponding eigenvalue λi.

All eigenspaces are invariant subspaces of A.

If there are at least two linearly independent eigenvectors with the same eigenvalue λ, such an eigenvalue is called degenerate. This terminology is used mainly when the geometric and algebraic multiplicities of the eigenvalues coincide, for example, for Hermitian matrices.

When a matrix has n linearly independent eigenvectors, these relations can be combined into the equality AS = SΛ (equivalently, A = SΛS⁻¹), where S is a square n × n matrix whose i-th column is an eigenvector, and Λ is the diagonal matrix with the corresponding eigenvalues.

The eigenvalue problem is the problem of finding the eigenvectors and eigenvalues of a matrix.

By the definition (via the characteristic equation) one can find the eigenvalues only for matrices of dimension less than five: the characteristic equation has the same degree as the order of the matrix, and for higher degrees finding the roots of the equation becomes very problematic, so various numerical methods are used.

Different tasks require finding different numbers of eigenvalues. Therefore several eigenvalue problems are distinguished (the complete problem asks for all eigenvalues, the partial problem for one or a few), each of which uses its own methods.

It would seem that the partial eigenvalue problem is a special case of the complete one and can be solved by the same methods. However, the methods applied to partial problems are much more efficient, so they can be applied to matrices of high dimension (for example, in nuclear physics there arise problems of finding eigenvalues of matrices of dimension 10³–10⁶).
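For the partial problem, the classic tool is power iteration, which finds the eigenvalue of largest modulus without ever forming the characteristic polynomial; a sketch (the function name power_iteration and the tolerances are chosen here for illustration):

import numpy as np

def power_iteration(A, num_iters=1000, tol=1e-12):
    x = np.random.default_rng(0).standard_normal(A.shape[0])
    lam = 0.0
    for _ in range(num_iters):
        y = A @ x
        x = y / np.linalg.norm(y)      # re-normalize the iterate
        lam_new = x @ A @ x            # Rayleigh-quotient estimate of the eigenvalue
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new, x

A = np.array([[1.0, 4.0],
              [9.0, 1.0]])
lam, x = power_iteration(A)
print(lam)                             # approximately 7, the dominant eigenvalue of this matrix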
Jacobi method

One of the oldest and most common approaches to solving the complete eigenvalue problem is the Jacobi method, first published in 1846.

The method is applied to a symmetric matrix A.

It is a simple iterative algorithm in which the eigenvector matrix is computed by a series of multiplications.
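A compact sketch of the cyclic Jacobi method (the function name and tolerances are chosen here for illustration; each plane rotation J is designed to annihilate one off-diagonal entry):

import numpy as np

def jacobi_eigen(A, tol=1e-10, max_sweeps=100):
    A = A.astype(float).copy()
    n = A.shape[0]
    V = np.eye(n)                          # accumulates the eigenvectors (columns)
    for _ in range(max_sweeps):
        off = np.sqrt(np.sum(A * A) - np.sum(np.diag(A) ** 2))
        if off < tol:                      # off-diagonal mass is negligible: done
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-15:
                    continue
                # Rotation angle that zeroes A[p, q] after J.T @ A @ J.
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J            # similarity transform keeps the spectrum
                V = V @ J
    return np.diag(A), V                   # eigenvalues and the eigenvector matrix

Calling, for example, jacobi_eigen(np.array([[2.0, 2.0], [2.0, 3.0]])) reproduces the eigenvalues (5 ± √17)/2 from the quadratic-form example above.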

Eigenvalues (numbers) and eigenvectors.
Examples of solutions



From both equations it follows that .

Let us set , then: .

As a result: – the second eigenvector.

Let us repeat the important points of the solution:

– the resulting system certainly has a general solution (the equations are linearly dependent);

– we select the 'y' so that it is an integer and the first 'x' coordinate is an integer, positive and as small as possible;

– we check that the particular solution satisfies each equation of the system.

Answer: .

There were quite enough intermediate 'checkpoints', so checking the equality is, in principle, unnecessary.

In various sources, the coordinates of eigenvectors are often written not in columns but in rows, for example: (and, to be honest, I myself am used to writing them down in rows). This option is acceptable, but in light of the topic of linear transformations it is technically more convenient to use column vectors.

Perhaps the solution seemed very long to you, but that is only because I commented on the first example in great detail.

Example 2

Find the eigenvalues and eigenvectors of the matrix

Solve it yourself! An approximate sample of the final write-up at the end of the lesson.

Sometimes you need to complete an additional task, namely:

write the canonical decomposition of the matrix

What is it?

If the eigenvectors of the matrix form a basis, then it can be represented as

A = CDC⁻¹,

where C is a matrix composed of the coordinates of the eigenvectors (as columns) and D is a diagonal matrix with the corresponding eigenvalues.

This decomposition of the matrix is called canonical or diagonal.

Let us look at the matrix of the first example. Its eigenvectors are linearly independent (non-collinear) and form a basis. Let us compose the matrix C of their coordinates:

On the main diagonal of the matrix D, in the appropriate order, the eigenvalues are located, and the remaining elements are zero:
– I once again emphasize the importance of the order: 'two' corresponds to the 1st vector and is therefore placed in the 1st column, 'three' to the 2nd vector.

Using the usual algorithm for finding the inverse matrix, or the Gauss–Jordan method, we find C⁻¹. No, that's not a typo! – before you is an event rare as a solar eclipse: the inverse coincided with the original matrix.

It remains to write down the canonical decomposition of the matrix:
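A sketch of such a decomposition in NumPy (the 2×2 matrix here is an assumed stand-in, since the lesson's matrix did not survive in this copy):

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])              # assumed example with distinct eigenvalues 5 and 2

eigvals, C = np.linalg.eig(A)           # columns of C are eigenvectors
D = np.diag(eigvals)                    # diagonal matrix of eigenvalues

assert np.allclose(A, C @ D @ np.linalg.inv(C))   # the canonical decomposition A = C D C^{-1}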

The system can be solved using elementary transformations, and in the following examples we will resort to this method. But here the 'school' method works much faster. From the 3rd equation we express – and substitute into the second equation:

Since the first coordinate is zero, we obtain a system, from each equation of which it follows that .

And again, pay attention to the mandatory presence of a linear relationship. If only the trivial solution comes out, then either the eigenvalue was found incorrectly, or the system was composed/solved with an error.

Compact coordinates are given by the value

Eigenvector:

And once again, we check that the solution found satisfies every equation of the system. In the subsequent paragraphs and subsequent tasks, I recommend taking this wish as a mandatory rule.

2) For the eigenvalue , using the same principle, we obtain the following system:

From the 2nd equation of the system we express – and substitute into the third equation:

Since the 'zeta' coordinate is zero, we obtain a system from each equation of which a linear dependence follows.

Let

We check that the solution satisfies every equation of the system.

Thus, the eigenvector is: .

3) And finally, to the eigenvalue there corresponds the system:

The second equation looks the simplest, so we express from it and substitute into the 1st and 3rd equations:

Everything is fine – a linear relationship has emerged, which we substitute into the expression :

As a result, 'x' and 'y' are expressed through 'z': . In practice it is not necessary to arrive at exactly these relationships; in some cases it is more convenient to express both through , or through . Or even in a 'chain': for example, 'x' through 'y' and 'y' through 'z'.

Let us set , then:

We check that the solution found satisfies each equation of the system and write down the third eigenvector

Answer: eigenvectors:

Geometrically, these vectors define three different spatial directions ('there and back again'), along which the linear transformation takes non-zero vectors (eigenvectors) to collinear vectors.

If the condition required finding the canonical decomposition, then it is possible here, because different eigenvalues correspond to different linearly independent eigenvectors. We compose the matrix C from their coordinates and the diagonal matrix D from the corresponding eigenvalues, and find the inverse matrix C⁻¹.

If, by the condition, you need to write down the matrix of the linear transformation in the basis of eigenvectors, then we give the answer in the form D. There is a difference, and the difference is significant! Because this matrix is precisely the matrix 'D'.

A problem with simpler calculations for you to solve on your own:

Example 5

Find the eigenvectors of the linear transformation given by the matrix

When finding the eigenvalues, try not to let things come to a 3rd-degree polynomial. Besides, your system solutions may differ from my solutions – there is no uniqueness here; and the vectors you find may differ from the sample vectors up to proportionality of their respective coordinates. For example, and . It is more aesthetically pleasing to present the answer in the form , but it is fine if you stop at the second option. However, everything has reasonable limits – the version no longer looks so good.

An approximate sample of the final write-up at the end of the lesson.

How does one solve the problem in the case of multiple (repeated) eigenvalues?

The general algorithm remains the same, but it has its own peculiarities, and it is advisable to keep some parts of the solution in a stricter academic style:

Example 6

Find the eigenvalues and eigenvectors

Solution

Of course, let us expand the determinant along the convenient first column:

And, after factoring the quadratic trinomial:

As a result, the eigenvalues are obtained, two of which coincide (a repeated root).

Let us find the eigenvectors:

1) Let us deal with the lone soldier according to the 'simplified' scheme:

From the last two equations the equality is clearly visible; obviously, it should be substituted into the 1st equation of the system:

You won't find a better combination:
Eigenvector:

2-3) Now let us deal with the pair of repeated eigenvalues. In this case we may end up with either two eigenvectors or one. Regardless of the multiplicity of the root, we substitute the value into the determinant, which gives us the following homogeneous system of linear equations:

The eigenvectors are exactly the vectors of the fundamental system of solutions.

Actually, throughout the entire lesson we have done nothing but find the vectors of the fundamental system; it is just that, until now, the term itself was not really needed. By the way, those clever students who slipped past the topic of homogeneous equations in camouflage suits will be forced to study it now.


The only action was to remove the extra rows. The result is a one-by-three matrix with a formal 'step' in the middle.
– the basic variable, – the free variables. There are two free variables, therefore there are also two vectors in the fundamental system.

Let us express the basic variable in terms of the free variables: . The zero coefficient in front of the 'x' allows it to take absolutely any values (which is clearly visible from the system of equations).

In the context of this problem, it is more convenient to write the general solution not in a row but in a column:

The pair corresponds to an eigenvector:
The pair corresponds to an eigenvector:

Note: sophisticated readers can pick these vectors orally, simply by analyzing the system , but some knowledge is needed here: there are three variables, the rank of the system matrix is one, which means the fundamental system of solutions consists of 3 - 1 = 2 vectors. However, the vectors found are clearly visible even without this knowledge, purely on an intuitive level. In this case the third vector will be written even more 'beautifully': . However, I caution that in another example a simple selection may not work out, which is why the remark is intended for experienced readers. Besides, why not take, say, as the third vector? After all, its coordinates also satisfy each equation of the system, and the vectors are linearly independent. This option is acceptable in principle, but 'crooked', since the 'other' vector is a linear combination of the vectors of the fundamental system.
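Finding the fundamental system of solutions can also be delegated to SymPy's nullspace; a sketch with an assumed matrix (not the lesson's, which did not survive in this copy) whose eigenvalue 1 has algebraic multiplicity 2:

import sympy as sp

A = sp.Matrix([[2, 1, 1],
               [1, 2, 1],
               [1, 1, 2]])              # assumed example: eigenvalues 4, 1, 1

lam = 1
M = A - lam * sp.eye(3)                 # the system (A - lam*E)X = O
basis = M.nullspace()                   # fundamental system of solutions
for v in basis:
    print(v.T)                          # two linearly independent eigenvectors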

Answer: eigenvalues: , eigenvectors:

A similar example for you to solve on your own:

Example 7

Find the eigenvalues and eigenvectors

An approximate sample of the final write-up at the end of the lesson.

It should be noted that in both the 6th and the 7th examples we obtain a triple of linearly independent eigenvectors, and therefore the original matrix can be represented in the canonical decomposition. But such luck does not happen in every case:

Example 8

Find the eigenvalues and eigenvectors of the matrix
Solution: let us compose and solve the characteristic equation:

Let us expand the determinant along the first column:

We carry out further simplifications according to the method considered above, avoiding the third-degree polynomial:

– eigenvalues.

Let's find the eigenvectors:

1) There are no difficulties with the root :

Don't be surprised: in addition to the usual set of variables, subscripted variables are also in use here – there is no difference.

From the 3rd equation we express – and substitute into the 1st and 2nd equations:

From both equations it follows:

Let us set , then:

2-3) For the multiple (repeated) eigenvalue we get the system .

Let us write down the matrix of the system and, using elementary transformations, bring it to stepwise form: