
Eigenvectors and eigenvalues

Diagonal matrices have the simplest structure, so the question arises whether it is possible to find a basis in which the matrix of a linear operator has diagonal form. Such a basis exists.
Let a linear space Rⁿ be given and a linear operator A acting in it; the operator A maps Rⁿ into itself, that is, A: Rⁿ → Rⁿ.

Definition. A non-zero vector x is called an eigenvector of the operator A if the operator A maps it into a collinear vector, that is, Ax = λx. The number λ is called the eigenvalue of the operator A corresponding to the eigenvector x.
Let us note some properties of eigenvalues and eigenvectors.
1. Any linear combination of eigenvectors of the operator A corresponding to the same eigenvalue λ is an eigenvector with the same eigenvalue.
2. Eigenvectors of the operator A with pairwise different eigenvalues λ1, λ2, …, λm are linearly independent.
3. If the eigenvalues coincide, λ1 = λ2 = … = λm = λ, then λ corresponds to no more than m linearly independent eigenvectors.

So, if there are n eigenvectors corresponding to n different eigenvalues λ1, λ2, …, λn, then they are linearly independent and can therefore be taken as a basis of the space Rⁿ. Let us find the form of the matrix of the linear operator A in the basis of its eigenvectors, for which we apply the operator A to the basis vectors: each basis vector is simply multiplied by its eigenvalue.
Thus, the matrix of the linear operator A in the basis of its eigenvectors has diagonal form, and the eigenvalues of the operator A stand along the diagonal.
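As a quick numerical illustration of this fact (a sketch, not part of the original text; the matrix A below is an assumed example with a full set of eigenvectors), NumPy can build the eigenvector basis and verify that the operator's matrix becomes diagonal in it:

```python
import numpy as np

# Assumed example matrix with two distinct eigenvalues (2 and 5).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Columns of S are eigenvectors; here they form a basis of R^2.
eigvals, S = np.linalg.eig(A)

# Changing to the eigenvector basis: S^{-1} A S is diagonal,
# with the eigenvalues of A on the diagonal.
D = np.linalg.inv(S) @ A @ S

print(np.round(D, 10))        # a diagonal matrix
print(np.round(eigvals, 10))  # the same numbers that stand on the diagonal
```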
Is there another basis in which the matrix has a diagonal form? The answer to this question is given by the following theorem.

Theorem. The matrix of the linear operator A in a given basis (i = 1..n) has diagonal form if and only if all the basis vectors are eigenvectors of the operator A.

Rule for finding eigenvalues and eigenvectors

Let a vector x be given, where x1, x2, …, xn are the coordinates of the vector relative to the basis, and let x be an eigenvector of the linear operator A corresponding to the eigenvalue λ, that is, Ax = λx. This relation can be written in matrix form:

. (*)


Equation (*) can be considered as an equation for finding x, and we are interested in non-trivial solutions, since an eigenvector cannot be zero. It is known that non-trivial solutions of a homogeneous system of linear equations exist if and only if det(A − λE) = 0. Thus, for λ to be an eigenvalue of the operator A it is necessary and sufficient that det(A − λE) = 0.
If equation (*) is written out in detail in coordinate form, we obtain a system of linear homogeneous equations:

(1)
where A is the matrix of the linear operator.

System (1) has a non-zero solution only if its determinant D is equal to zero:


We have obtained an equation for finding the eigenvalues.
This equation is called the characteristic equation, and its left-hand side is called the characteristic polynomial of the matrix (operator) A. If the characteristic polynomial has no real roots, then the matrix A has no eigenvectors and cannot be reduced to diagonal form.
Let λ1, λ2, …, λn be the real roots of the characteristic equation; among them there may be multiple roots. Substituting these values in turn into system (1), we find the eigenvectors.
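The rule above can be sketched in NumPy (the 2×2 matrix A is an assumed example; `np.poly` and `np.roots` stand in for forming and solving det(A − λE) = 0, and the null space of A − λE is read off the SVD):

```python
import numpy as np

# Assumed example matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.poly returns the coefficients of the characteristic polynomial of A.
coeffs = np.poly(A)           # here: [1, -4, 3], i.e. λ² - 4λ + 3
lambdas = np.roots(coeffs)    # the roots 3 and 1

# For each eigenvalue, an eigenvector spans the null space of A - λE;
# the last right-singular vector of the SVD gives it numerically.
for lam in lambdas:
    _, _, Vt = np.linalg.svd(A - lam * np.eye(2))
    v = Vt[-1]
    assert np.allclose(A @ v, lam * v)   # Av = λv holds
```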

Example 12. The linear operator A acts in R³ according to the law shown, where x1, x2, x3 are the coordinates of a vector in the basis , , . Find the eigenvalues and eigenvectors of this operator.
Solution. We build the matrix of this operator:
.
We create a system for determining the coordinates of eigenvectors:

We compose a characteristic equation and solve it:

.
λ1,2 = −1, λ3 = 3.
Substituting λ = -1 into the system, we have:
or
Since the rank of the system matrix is two, there are two dependent variables and one free variable.
Let x1 be the free unknown; we solve the system in any way and find its general solution. The fundamental system of solutions consists of one solution, since n − r = 3 − 2 = 1.
The set of eigenvectors corresponding to the eigenvalue λ = −1 has the form shown, where x1 is any non-zero number. Let's choose one vector from this set, for example by putting x1 = 1.
Reasoning similarly, we find the eigenvector corresponding to the eigenvalue λ = 3: .
In the space R³ a basis consists of three linearly independent vectors, but we obtained only two linearly independent eigenvectors, from which a basis of R³ cannot be composed. Consequently, the matrix A of the linear operator cannot be reduced to diagonal form.
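The matrix of Example 12 is not reproduced in the text, so here is a hypothetical 3×3 matrix with the same defect - a double eigenvalue possessing only one independent eigenvector - to show how the failure can be detected numerically:

```python
import numpy as np

# Assumed defective matrix: eigenvalue -1 has algebraic multiplicity 2
# but geometric multiplicity 1, so no eigenbasis of R^3 exists.
A = np.array([[-1.0, 1.0, 0.0],
              [ 0.0,-1.0, 0.0],
              [ 0.0, 0.0, 3.0]])

lam = -1.0
# Geometric multiplicity = n - rank(A - λE); here it is 1, not 2.
rank = np.linalg.matrix_rank(A - lam * np.eye(3))
print(3 - rank)   # 1 independent eigenvector for the double root λ = -1
```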

Example 13. A matrix is given.
1. Prove that the vector (1, 8, −1) is an eigenvector of matrix A. Find the eigenvalue corresponding to this eigenvector.
2. Find a basis in which matrix A has a diagonal form.
Solution.
1. If Ax = λx for some number λ, then x is an eigenvector:

.
The vector (1, 8, −1) is an eigenvector, and the eigenvalue is λ = −1.
The matrix has diagonal form in a basis consisting of eigenvectors. One of them is already known; let's find the rest.
We look for eigenvectors from the system:

Characteristic equation: ;
(3 + λ)[−2(2 − λ)(2 + λ) + 3] = 0; (3 + λ)(λ² − 1) = 0
λ1 = −3, λ2 = 1, λ3 = −1.
Let's find the eigenvector corresponding to the eigenvalue λ = -3:

The rank of the matrix of this system is two and equals the number of unknowns it actually involves, so the system has only the zero solution x1 = x3 = 0, while x2 can be anything other than zero, for example x2 = 1. Thus, the vector (0, 1, 0) is an eigenvector corresponding to λ = −3. Let's check:
.
If λ = 1, then we obtain the system
The rank of its matrix is two, so we cross out the last equation.
Let x3 be the free unknown. Then x1 = −3x3 and 4x2 = 10x1 − 6x3 = −30x3 − 6x3, so x2 = −9x3.
Putting x3 = 1, we have (−3, −9, 1), an eigenvector corresponding to the eigenvalue λ = 1. Check:

.
Since the eigenvalues are real and distinct, the corresponding eigenvectors are linearly independent, so they can be taken as a basis in R³. Thus, in the basis of eigenvectors the matrix A has the form:
.
Not every matrix of a linear operator A: Rⁿ → Rⁿ can be reduced to diagonal form, since some linear operators have fewer than n linearly independent eigenvectors. However, if the matrix is symmetric, then a root of the characteristic equation of multiplicity m corresponds to exactly m linearly independent eigenvectors.

Definition. A symmetric matrix is a square matrix in which the elements symmetric about the main diagonal are equal, that is, a_ij = a_ji.
Notes. 1. All eigenvalues of a symmetric matrix are real.
2. The eigenvectors of a symmetric matrix corresponding to pairwise different eigenvalues are orthogonal.
As one of the many applications of the studied apparatus, we consider the problem of determining the type of a second-order curve.

An eigenvector of a square matrix is a vector whose product with the matrix is a collinear vector. In simple words, when the matrix is multiplied by an eigenvector, the latter remains the same vector, only multiplied by a certain number.

Definition

An eigenvector is a non-zero vector V which, when multiplied by a square matrix M, turns into itself multiplied by some number λ. In algebraic notation it looks like:

M × V = λ × V,

where λ is the eigenvalue of the matrix M.

Let's consider a numerical example. For ease of notation, the numbers in a matrix will be separated by semicolons. Let us have the matrix:

  • M = 0; 4;
  • 6; 10.

Let's multiply it by a column vector:

  • V = -2;
  • 1.

When we multiply a matrix by a column vector, we again get a column vector. In strict mathematical language, the formula for multiplying a 2 × 2 matrix by a column vector looks like this:

  • M × V = M11 × V11 + M12 × V21;
  • M21 × V11 + M22 × V21.

M11 denotes the element of the matrix M located in the first row and first column, and M22 the element located in the second row and second column. For our matrix, these elements are M11 = 0, M12 = 4, M21 = 6, M22 = 10. For the column vector, the values are V11 = −2, V21 = 1. According to this formula, the product of the square matrix and the vector is:

  • M × V = 0 × (-2) + (4) × (1) = 4;
  • 6 × (-2) + 10 × (1) = -2.

For convenience, let's write the column vector as a row. So, we multiplied the square matrix by the vector (−2; 1), resulting in the vector (4; −2). Obviously, this is the same vector multiplied by λ = −2. Lambda in this case denotes an eigenvalue of the matrix.
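This multiplication is easy to verify with a few lines of NumPy, using exactly the numbers of the example:

```python
import numpy as np

# The numbers from the worked example above: M·V should equal λ·V.
M = np.array([[0, 4],
              [6, 10]])
V = np.array([-2, 1])

print(M @ V)                            # [ 4 -2]
assert np.array_equal(M @ V, -2 * V)    # so λ = -2
```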

When multiplied by the matrix, an eigenvector stays collinear with itself, that is, it does not change its direction in space. The concept of collinearity in vector algebra is similar to the term parallelism in geometry: in geometric interpretation, collinear vectors are parallel directed segments of different lengths. Since the time of Euclid we know that one line has an infinite number of lines parallel to it, so it is logical to assume that each matrix has an infinite number of eigenvectors.

From the previous example it is clear that (−8; 4), (16; −8), and (32; −16) are also eigenvectors: all of them are collinear vectors corresponding to the eigenvalue λ = −2. Multiplying the original matrix by any of these vectors again yields a vector that differs from the original by a factor of −2. That is why, when solving problems of finding eigenvectors, one looks only for linearly independent vectors. Most often, an n × n matrix has n eigenvectors. Our calculator is designed for the analysis of second-order square matrices, so the result will almost always be two eigenvectors, except in the cases when they coincide.

In the example above, we knew the eigenvector of the original matrix in advance and clearly determined the lambda number. However, in practice, everything happens the other way around: the eigenvalues ​​are found first and only then the eigenvectors.

Solution algorithm

Let's look at the original matrix M again and try to find both of its eigenvectors. So the matrix looks like:

  • M = 0; 4;
  • 6; 10.

First we need to determine the eigenvalue λ, which requires calculating the determinant of the following matrix:

  • (0 − λ); 4;
  • 6; (10 − λ).

This matrix is obtained by subtracting the unknown λ from the elements on the main diagonal. The determinant is computed using the standard formula:

  • det A = M11 × M22 − M12 × M21
  • det A = (0 − λ) × (10 − λ) − 4 × 6 = (0 − λ) × (10 − λ) − 24

Since our vector must be non-zero, the homogeneous system must have a non-trivial solution, so we equate the determinant det A to zero.

(0 − λ) × (10 − λ) − 24 = 0

Let's open the brackets and get the characteristic equation of the matrix:

λ² − 10λ − 24 = 0

This is a standard quadratic equation, which needs to be solved through the discriminant.

D = b² − 4ac = (−10)² − 4 × 1 × (−24) = 100 + 96 = 196

The square root of the discriminant is sqrt(D) = 14, therefore λ1 = −2, λ2 = 12. Now, for each lambda value, we need to find the eigenvector. Let us write out the coefficients of the system for λ = −2.
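The discriminant computation above, carried out in plain Python:

```python
import math

# The characteristic equation λ² - 10λ - 24 = 0 from above,
# solved through the discriminant exactly as in the text.
a, b, c = 1, -10, -24
D = b * b - 4 * a * c            # 100 + 96 = 196
sqrtD = math.sqrt(D)             # 14.0
lam1 = (-b - sqrtD) / (2 * a)    # -2.0
lam2 = (-b + sqrtD) / (2 * a)    # 12.0
print(D, lam1, lam2)             # 196 -2.0 12.0
```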

  • M − λ × E = 2; 4;
  • 6; 12.

In this formula, E is the identity matrix. From the resulting matrix we compose a system of linear equations:

2x + 4y = 0, 6x + 12y = 0,

where x and y are the components of the eigenvector.

Both equations are proportional and reduce to 2x + 4y = 0, that is, x = −2y. Now we can determine the first eigenvector of the matrix by taking any value of the free unknown (remember the infinite set of collinear eigenvectors). Let y = 1, then x = −2. Therefore, the first eigenvector is V1 = (−2; 1). Return to the beginning of the article: it was this vector that we multiplied the matrix by to demonstrate the concept of an eigenvector.

Now let's find the eigenvector for λ = 12.

  • M - λ × E = -12; 4
  • 6; -2.

Let's compose the same kind of system of linear equations:

  • −12x + 4y = 0;
  • 6x − 2y = 0;
  • whence y = 3x.

Now take x = 1, so y = 3. Thus, the second eigenvector is V2 = (1; 3). When the original matrix is multiplied by this vector, the result will always be the same vector multiplied by 12. This concludes the solution algorithm: now you know how to determine the eigenvector of a matrix by hand.
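Both eigenpairs found by the algorithm can be checked mechanically:

```python
import numpy as np

# Checking both eigenpairs found above for M = [[0, 4], [6, 10]].
M = np.array([[0, 4],
              [6, 10]])

V1, lam1 = np.array([-2, 1]), -2
V2, lam2 = np.array([1, 3]), 12

assert np.array_equal(M @ V1, lam1 * V1)   # [4, -2] = -2·(-2, 1)
assert np.array_equal(M @ V2, lam2 * V2)   # [12, 36] = 12·(1, 3)
print("both eigenpairs check out")
```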

In addition to the eigenvectors, the calculator also reports:

  • the determinant;
  • the trace, that is, the sum of the elements on the main diagonal;
  • the rank, that is, the maximum number of linearly independent rows/columns.

The program operates according to the above algorithm, shortening the solution process as much as possible. It is important to point out that in the program lambda is denoted by the letter “c”. Let's look at a numerical example.

Example of how the program works

Let's try to determine the eigenvectors for the following matrix:

  • M = 5; 13;
  • 4; 14.

Let's enter these values into the cells of the calculator and get the answer in the following form:

  • Matrix rank: 2;
  • Matrix determinant: 18;
  • Matrix trace: 19;
  • Characteristic equation: c² − 19.00c + 18.00;
  • First lambda value: 18;
  • Second lambda value: 1;
  • System of equations for vector 1: −13x1 + 13y1 = 0, 4x1 − 4y1 = 0;
  • System of equations for vector 2: 4x2 + 13y2 = 0 (both equations coincide);
  • Eigenvector 1: (1; 1);
  • Eigenvector 2: (-3.25; 1).

Thus, we obtained two linearly independent eigenvectors.
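The calculator's report for this matrix can be reproduced with NumPy (note that NumPy returns normalized eigenvectors, so they match the answer above only up to scaling):

```python
import numpy as np

# Reproducing the calculator's report for M = [[5, 13], [4, 14]].
M = np.array([[5.0, 13.0],
              [4.0, 14.0]])

print(np.linalg.matrix_rank(M))        # 2
print(round(np.linalg.det(M)))         # 18
print(np.trace(M))                     # 19.0

lambdas, vecs = np.linalg.eig(M)
print(np.round(sorted(lambdas), 10))   # eigenvalues 1 and 18

# Each returned column is an eigenvector; rescaled, they match
# (1; 1) and (-3.25; 1) from the calculator's answer.
for lam, v in zip(lambdas, vecs.T):
    assert np.allclose(M @ v, lam * v)
```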

Conclusion

Linear algebra and analytic geometry are standard subjects for freshmen of any technical specialty. The sheer number of vectors and matrices can be terrifying, and it is easy to make mistakes in such cumbersome calculations. Our program will allow students to check their calculations or solve the problem of finding an eigenvector automatically. There are other linear algebra calculators in our catalog; use them in your studies or work.


Eigenvalues (numbers) and eigenvectors.
Examples of solutions

From both equations it follows that .

Let's put it then: .

As a result, the second eigenvector is obtained: .

Let us repeat the important points of the solution:

– the resulting system certainly has a general solution (its equations are linearly dependent);

– we select the “y” so that it is an integer and the first coordinate “x” is integer, positive and as small as possible;

– we check that the particular solution satisfies each equation of the system.

Answer .

There were quite enough intermediate “checkpoints” along the way, so checking the final equality is, in principle, unnecessary.

In various sources, the coordinates of eigenvectors are often written not in columns but in rows, for example: (and, to be honest, I myself am used to writing them in rows). This option is acceptable, but in light of the topic of linear transformations it is technically more convenient to use column vectors.

Perhaps the solution seemed very long to you, but this is only because I commented on the first example in great detail.

Example 2

Find the eigenvalues and eigenvectors of the matrix

Practice on your own! An approximate sample of the finished task is at the end of the lesson.

Sometimes you need to complete an additional task, namely:

write the canonical decomposition of the matrix

What is it?

If the eigenvectors of a matrix form a basis, then it can be represented as:

where one factor is a matrix composed of the coordinates of the eigenvectors, and the other is a diagonal matrix with the corresponding eigenvalues.

This matrix decomposition is called canonical or diagonal.

Let's look at the matrix of the first example. Its eigenvectors are linearly independent (non-collinear) and form a basis. Let's compose a matrix of their coordinates:

On the main diagonal of the other matrix, the eigenvalues are placed in the corresponding order, and the remaining elements are equal to zero:
– once again I emphasize the importance of order: “two” corresponds to the 1st vector and is therefore located in the 1st column, “three” to the 2nd vector.

Using the usual algorithm for finding the inverse matrix, or the Gauss-Jordan method, we find the inverse. No, that's not a typo! Before you is a rare event, like a solar eclipse: the inverse coincides with the original matrix.

It remains to write down the canonical decomposition of the matrix:
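Since the matrix of the first example is not shown here, the following sketch uses an assumed 2×2 matrix with the eigenvalues 2 and 3 mentioned above, just to illustrate what the canonical decomposition asserts:

```python
import numpy as np

# Assumed example matrix with eigenvalues 2 and 3 (not the matrix
# of the first example, which is not reproduced in the text).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

S = np.array([[1.0, 1.0],    # columns = eigenvectors (1, 0) and (1, 1)
              [0.0, 1.0]])
D = np.diag([2.0, 3.0])      # eigenvalues in the matching column order

# Canonical (diagonal) decomposition: A = S · D · S⁻¹.
assert np.allclose(S @ D @ np.linalg.inv(S), A)
print("canonical decomposition verified")
```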

The system can be solved using elementary transformations, and in the following examples we will resort to this method. But here the “school” method works much faster. From the 3rd equation we express a variable and substitute it into the second equation:

Since the first coordinate is zero, we obtain a system, each equation of which yields the same linear dependence.

And again, pay attention to the mandatory presence of a linear dependence. If only the trivial solution comes out, then either the eigenvalue was found incorrectly or the system was composed/solved with an error.

Compact coordinates are obtained with the value

Eigenvector:

And once again we check that the found solution satisfies every equation of the system. In subsequent paragraphs and tasks, I recommend taking this wish as a mandatory rule.

2) For the second eigenvalue, using the same principle, we obtain the following system:

From the 2nd equation of the system we express a variable and substitute it into the third equation:

Since the “z” coordinate is equal to zero, we obtain a system, each equation of which yields the same linear dependence.

Let

Checking that the solution satisfies every equation of the system.

Thus, the eigenvector is: .

3) And finally, the system corresponding to the remaining eigenvalue:

The second equation looks the simplest, so we express a variable from it and substitute into the 1st and 3rd equations:

Everything is fine - a linear dependence has emerged, which we substitute into the expression:

As a result, “x” and “y” have been expressed through “z”. In practice it is not necessary to arrive at precisely these relations; in some cases it is more convenient to express both unknowns through one of the others, or in a chain - for example, “x” through “y” and “y” through “z”.

Let's put it then:

We check that the found solution satisfies each equation of the system and write down the third eigenvector.

Answer: eigenvectors:

Geometrically, these vectors define three different spatial directions, along which the linear transformation maps non-zero vectors (eigenvectors) into vectors collinear with them.

If the condition required finding the canonical decomposition, it is possible here, because different eigenvalues correspond to different linearly independent eigenvectors. We compose a matrix from their coordinates and a diagonal matrix from the corresponding eigenvalues, and find the inverse matrix.

If, by the condition, you need to write the matrix of the linear transformation in the basis of eigenvectors, then we give the answer in the form of the diagonal matrix itself. There is a difference, and the difference is significant!

A problem with simpler calculations to solve on your own:

Example 5

Find the eigenvectors of the linear transformation given by the matrix

When finding the eigenvalues, try not to go all the way to a 3rd-degree polynomial. Your systems may also differ from mine - there is no uniqueness here - and the vectors you find may differ from the sample vectors up to proportionality of their respective coordinates. It is more aesthetically pleasing to present the answer with small integer coordinates, but it's okay if you stop at another proportional option. However, everything has reasonable limits; some versions no longer look good.

An approximate final sample of the assignment at the end of the lesson.

How to solve the problem in the case of multiple eigenvalues?

The general algorithm remains the same, but it has its own peculiarities, and it is advisable to keep some parts of the solution in a stricter academic style:

Example 6

Find eigenvalues ​​and eigenvectors

Solution

Solution: naturally, we expand the determinant along the convenient first column:

And, after factoring the quadratic trinomial:

As a result, the eigenvalues are obtained, two of which are multiple.

Let's find the eigenvectors:

1) Let's deal with the single root according to the “simplified” scheme:

From the last two equations the required equality is clearly visible; obviously, it should be substituted into the 1st equation of the system:

A better combination cannot be found:
Eigenvector:

2-3) Now we deal with the pair of multiple roots. In this case we may end up with either two eigenvectors or one. Regardless of the multiplicity of the roots, we substitute the value into the determinant det(A − λE), which gives us the next homogeneous system of linear equations:

The eigenvectors are exactly the vectors of the
fundamental system of solutions

Actually, throughout the entire lesson we have done nothing but find vectors of the fundamental system; it is just that until now the term wasn't really needed. By the way, those clever students who slipped past the topic of homogeneous equations in camouflage will be forced to study it now.


The only action was to remove the extra rows. The result is a one-by-three matrix with a formal “step” in the middle.
– the basic variable, – the free variables. There are two free variables, therefore there are also two vectors in the fundamental system.

Let's express the basic variable in terms of the free variables. The zero coefficient in front of “x” allows it to take absolutely any value (which is clearly visible from the system of equations).

In the context of this problem, it is more convenient to write the general solution not in a row but in a column:

The pair corresponds to an eigenvector:
The pair corresponds to an eigenvector:

Note: sophisticated readers can select these vectors orally, simply by analyzing the system, but some knowledge is needed here: there are three variables and the rank of the system matrix is one, which means the fundamental system of solutions consists of 3 − 1 = 2 vectors. However, the found vectors are clearly visible even without this knowledge, on a purely intuitive level. In this case, the third vector can be written even more “beautifully”. I caution, though, that in another example a simple selection may not work out, which is why the remark is intended for experienced readers. Besides, why not take, say, some other combination as the third vector? After all, its coordinates also satisfy each equation of the system, and the vectors are linearly independent. This option is acceptable in principle, but “crooked”, since the “other” vector is a linear combination of the vectors of the fundamental system.
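A sketch of the multiple-eigenvalue case (the matrix below is an assumed symmetric example with the double eigenvalue 1, not the matrix of Example 6): the number of vectors in the fundamental system equals n minus the rank of A − λE, and the vectors themselves span the null space:

```python
import numpy as np

# Assumed example: eigenvalues of A are 1, 1, 4.
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])
lam = 1.0

B = A - lam * np.eye(3)
r = np.linalg.matrix_rank(B)   # rank 1
print(3 - r)                   # 3 - 1 = 2 vectors in the fundamental system

# Null-space basis via SVD: rows of Vt for (near-)zero singular values.
_, s, Vt = np.linalg.svd(B)
basis = Vt[s < 1e-10]
for v in basis:
    assert np.allclose(A @ v, lam * v)   # each one is an eigenvector
```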

Answer: eigenvalues: , eigenvectors:

A similar example to solve on your own:

Example 7

Find eigenvalues ​​and eigenvectors

An approximate sample of the final design at the end of the lesson.

It should be noted that in both the 6th and 7th examples we obtain a triple of linearly independent eigenvectors, and therefore the original matrix is representable in the canonical decomposition. But such luck does not happen in all cases:

Example 8


Solution: Let’s create and solve the characteristic equation:

Let's expand the determinant in the first column:

We carry out further simplifications according to the considered method, avoiding the third-degree polynomial:

eigenvalues.

Let's find the eigenvectors:

1) There are no difficulties with the root:

Don't be surprised that, in addition to the usual set of variables, other variable names are in use here; it makes no difference.

From the 3rd equation we express it and substitute it into the 1st and 2nd equations:

From both equations it follows:

Let then:

2-3) For the multiple eigenvalue we obtain the system .

Let's write down the matrix of the system and, using elementary transformations, bring it to row-echelon form:

Definition 9.3. A vector X is called an eigenvector of the matrix A if there is a number λ such that the equality AX = λX holds, that is, the result of applying to X the linear transformation specified by the matrix A is multiplication of this vector by the number λ. The number λ itself is called an eigenvalue of the matrix A.

Substituting x′j = λxj into formulas (9.3), we obtain a system of equations for determining the coordinates of the eigenvector:

. (9.5)

This linear homogeneous system will have a nontrivial solution only if its main determinant is 0 (Cramer's rule). By writing this condition in the form:

we obtain an equation for determining the eigenvalues λ, called the characteristic equation. Briefly it can be represented as follows:

| A - λE | = 0, (9.6)

since its left side contains the determinant of the matrix A − λE. The polynomial in λ, |A − λE|, is called the characteristic polynomial of the matrix A.

Properties of the characteristic polynomial:

1) The characteristic polynomial of a linear transformation does not depend on the choice of basis. Proof (see (9.4)): hence the characteristic polynomial does not depend on the choice of basis, that is, |A − λE| does not change when passing to a new basis.

2) If the matrix A of the linear transformation is symmetric (i.e., a_ij = a_ji), then all roots of the characteristic equation (9.6) are real numbers.

Properties of eigenvalues ​​and eigenvectors:

1) If we choose a basis of the eigenvectors x1, x2, x3 corresponding to the eigenvalues λ1, λ2, λ3 of the matrix A, then in this basis the linear transformation A has a diagonal matrix:

(9.7) The proof of this property follows from the definition of eigenvectors.

2) If the eigenvalues of the transformation A are distinct, then the corresponding eigenvectors are linearly independent.

3) If the characteristic polynomial of the matrix A has three distinct roots, then in some basis the matrix A has diagonal form.

Let's find the eigenvalues and eigenvectors of the matrix. Let's compose the characteristic equation: (1 − λ)(5 − λ)(1 − λ) + 6 − 9(5 − λ) − (1 − λ) − (1 − λ) = 0, λ³ − 7λ² + 36 = 0, λ1 = −2, λ2 = 3, λ3 = 6.

Let's find the coordinates of the eigenvectors corresponding to each found value of λ. From (9.5) it follows that if X(1) = {x1, x2, x3} is the eigenvector corresponding to λ1 = −2, then

– a consistent but indeterminate system. Its solution can be written as X(1) = {a, 0, −a}, where a is any number. In particular, if we require that |x(1)| = 1, then X(1) =

Substituting λ2 = 3 into system (9.5), we obtain a system for determining the coordinates of the second eigenvector x(2) = {y1, y2, y3}:

whence X(2) = {b, −b, b} or, under the condition |x(2)| = 1, x(2) =

For λ3 = 6 we find the eigenvector x(3) = {z1, z2, z3}:

whence x(3) = {c, 2c, c}, or in normalized form

x(3) = . It can be noticed that X(1)·X(2) = ab − ab = 0, x(1)·x(3) = ac − ac = 0, x(2)·x(3) = bc − 2bc + bc = 0. Thus, the eigenvectors of this matrix are pairwise orthogonal.
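The pairwise orthogonality can also be observed numerically; the symmetric matrix below is an assumed example, since the lecture's matrix is not fully reproduced in the text:

```python
import numpy as np

# Assumed symmetric example matrix.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh is NumPy's routine for symmetric matrices; the columns of Q
# are orthonormal eigenvectors.
lambdas, Q = np.linalg.eigh(A)

# Pairwise orthogonality (and unit length): Qᵀ·Q = E.
assert np.allclose(Q.T @ Q, np.eye(3))
print("orthonormal eigenvector basis confirmed")
```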

Lecture 10.

Quadratic forms and their connection with symmetric matrices. Properties of eigenvectors and eigenvalues ​​of a symmetric matrix. Reducing a quadratic form to canonical form.

Definition 10.1. A quadratic form in the real variables x1, x2, …, xn is a polynomial of the second degree in these variables that contains no free term and no first-degree terms.

Examples of quadratic forms:

(n = 2),

(n = 3). (10.1)

Let us recall the definition of a symmetric matrix given in the last lecture:

Definition 10.2. A square matrix is called symmetric if its elements symmetric about the main diagonal are equal, that is, a_ij = a_ji.

Properties of eigenvalues ​​and eigenvectors of a symmetric matrix:

1) All eigenvalues ​​of a symmetric matrix are real.

Proof (for n = 2).

Let the matrix A have the form: . Let's compose the characteristic equation:

(10.2) Let’s find the discriminant:

Therefore, the equation has only real roots.
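The same claim can be checked numerically for random symmetric 2×2 matrices (assuming the standard form A = [[a, b], [b, c]] used in such proofs, whose characteristic equation has discriminant (a − c)² + 4b² ≥ 0):

```python
import numpy as np

# For A = [[a, b], [b, c]], the discriminant of λ² - (a+c)λ + (ac - b²)
# equals (a - c)² + 4b², which is never negative, so both roots are real.
rng = np.random.default_rng(0)
for _ in range(1000):
    a, b, c = rng.normal(size=3)
    D = (a - c) ** 2 + 4 * b ** 2
    assert D >= 0
    lambdas = np.linalg.eigvals([[a, b], [b, c]])
    assert np.allclose(lambdas.imag, 0)   # real eigenvalues every time
print("all discriminants nonnegative, all eigenvalues real")
```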

2) The eigenvectors of a symmetric matrix are orthogonal.

Proof (for n= 2).

The coordinates of the eigenvectors must satisfy the corresponding equations.