
Find matrix eigenvectors. System of homogeneous linear equations

An eigenvector of a square matrix is a vector that, when multiplied by the matrix, yields a collinear vector. In simple words, when a matrix is multiplied by one of its eigenvectors, the vector stays on the same line, merely scaled by some number.

Definition

An eigenvector is a non-zero vector V which, when multiplied by a square matrix M, turns into itself scaled by some number λ. In algebraic notation, this looks like:

M × V = λ × V,

where λ is an eigenvalue of the matrix M.

Consider a numerical example. For convenience, the numbers within each matrix row will be separated by semicolons. Let's say we have the matrix:

  • M = 0; 4;
  • 6; 10.

Let's multiply it by a column vector:

  • V = -2;
  • 1.

When a matrix is multiplied by a column vector, the result is also a column vector. In strict mathematical language, the formula for multiplying a 2 × 2 matrix by a column vector looks like this:

  • M × V = M11 × V11 + M12 × V21;
  • M21 × V11 + M22 × V21.

Here M11 denotes the element of the matrix M standing in the first row and first column, and M22 is the element located in the second row and second column. For our matrix, these elements are M11 = 0, M12 = 4, M21 = 6, M22 = 10. For the column vector, the values are V11 = -2, V21 = 1. According to this formula, we get the following result of the product of the square matrix and the vector:

  • M × V = 0 × (-2) + (4) × (1) = 4;
  • 6 × (-2) + 10 × (1) = -2.

For convenience, we write the column vector as a row. So, we have multiplied the square matrix by the vector (-2; 1) and obtained the vector (4; -2). Obviously, this is the same vector multiplied by λ = -2. The lambda here denotes an eigenvalue of the matrix.
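This check is easy to reproduce in a few lines of Python (a minimal sketch using plain lists, no libraries):

```python
# Check that V = (-2, 1) is an eigenvector of M with eigenvalue -2.
M = [[0, 4],
     [6, 10]]
V = [-2, 1]

# 2x2 matrix times column vector, written out term by term.
MV = [M[0][0] * V[0] + M[0][1] * V[1],
      M[1][0] * V[0] + M[1][1] * V[1]]

lam = -2
print(MV)                    # [4, -2]
print([lam * v for v in V])  # [4, -2] -- the same vector, so M*V = lam*V
```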

An eigenvector of a matrix maps to a collinear vector, that is, to an object that does not change its direction in space when multiplied by the matrix. The concept of collinearity in vector algebra is similar to the term parallelism in geometry. In the geometric interpretation, collinear vectors are parallel directed segments of different lengths. Since the time of Euclid, we know that one line has an infinite number of lines parallel to it, so it is logical to assume that each matrix has an infinite number of eigenvectors.

From the previous example it can be seen that (-8; 4), (16; -8), and (32; -16) are all eigenvectors as well. All of these are collinear vectors corresponding to the eigenvalue λ = -2. Multiplying the original matrix by any of them still yields a vector that differs from the original by a factor of -2. That is why, when solving problems of finding an eigenvector, one is required to find only linearly independent vectors. Most often, an n × n matrix has n linearly independent eigenvectors. Our calculator is designed for the analysis of second-order square matrices, so the result will almost always be two eigenvectors, except when they coincide.

In the example above, we knew the eigenvector of the original matrix in advance and determined the lambda visually. In practice, however, everything happens the other way around: one first finds the eigenvalues, and only then the eigenvectors.

Solution algorithm

Let's look at the original matrix M again and try to find both of its eigenvectors. So the matrix looks like:

  • M = 0; 4;
  • 6; 10.

To begin with, we need to determine the eigenvalue λ, for which we need to calculate the determinant of the following matrix:

  • (0 − λ); 4;
  • 6; (10 − λ).

This matrix is obtained by subtracting the unknown λ from the elements on the main diagonal. The determinant is computed by the standard formula:

  • detA = M11 × M22 − M12 × M21;
  • detA = (0 − λ) × (10 − λ) − 24.

Since our vector must not be zero, the system must have a non-trivial solution, so we set the determinant detA equal to zero.

(0 − λ) × (10 − λ) − 24 = 0

Let's expand the brackets and obtain the characteristic equation of the matrix:

λ² − 10λ − 24 = 0

This is a standard quadratic equation, which is solved via the discriminant.

D = b² − 4ac = (−10)² − 4 × 1 × (−24) = 100 + 96 = 196

The square root of the discriminant is sqrt(D) = 14, so λ1 = -2 and λ2 = 12. Now, for each lambda value, we need to find an eigenvector. Let us write out the coefficients of the system for λ = -2.

  • M − λ × E = 2; 4;
  • 6; 12.

In this formula, E is the identity matrix. The obtained matrix gives the homogeneous system of linear equations (M − λ × E) × V = 0:

2x + 4y = 0;
6x + 12y = 0,

where x and y are elements of the eigenvector.

The two equations are proportional, and each reduces to x = -2y. Now we can determine the first eigenvector of the matrix by taking any values of the unknowns (remember the infinity of collinear eigenvectors). Let's take y = 1, then x = -2. Therefore, the first eigenvector is V1 = (-2; 1). Return to the beginning of the article: it was this very vector that we multiplied the matrix by to demonstrate the concept of an eigenvector.

Now let's find the eigenvector for λ = 12.

  • M - λ × E = -12; 4
  • 6; -2.

Let us compose the same kind of homogeneous system of linear equations:

  • -12x + 4y = 0;
  • 6x − 2y = 0.

Each equation reduces to y = 3x.

Now let's take x = 1, hence y = 3. Thus, the second eigenvector is V2 = (1; 3). Multiplying the original matrix by this vector will always give the same vector multiplied by 12. This completes the solution algorithm. Now you know how to find an eigenvector of a matrix by hand.
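The whole algorithm for a 2×2 matrix fits into a short Python sketch (an illustrative implementation of my own, assuming the eigenvalues are real):

```python
import math

def eigen_2x2(M):
    # Characteristic equation: lam^2 - tr(M)*lam + det(M) = 0
    (a, b), (c, d) = M
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    root = math.sqrt(disc)              # assumes real eigenvalues
    lams = [(tr - root) / 2, (tr + root) / 2]
    vecs = []
    for lam in lams:
        if b != 0:                      # row 1 of (M - lam*E): (a-lam)x + b*y = 0
            vecs.append((b, lam - a))
        elif c != 0:                    # row 2: c*x + (d-lam)*y = 0
            vecs.append((lam - d, c))
        else:                           # diagonal matrix: standard basis vectors
            vecs.append((1, 0) if lam == a else (0, 1))
    return lams, vecs

lams, vecs = eigen_2x2([[0, 4], [6, 10]])
print(lams)   # [-2.0, 12.0]
print(vecs)   # [(4, -2.0), (4, 12.0)] -- collinear with (-2; 1) and (1; 3)
```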

Besides the eigenvectors, the calculator also computes the basic characteristics of the matrix:

  • the determinant;
  • the trace, that is, the sum of the elements on the main diagonal;
  • the rank, that is, the maximum number of linearly independent rows/columns.

The program follows the algorithm above, minimizing the solution process. It is important to point out that in the program the lambda is denoted by the letter "c". Let's look at a numerical example.

Program example

Let's try to define eigenvectors for the following matrix:

  • M = 5; 13;
  • 4; 14.

Let's enter these values into the cells of the calculator and get the answer in the following form:

  • Matrix rank: 2;
  • Matrix determinant: 18;
  • Matrix trace: 19;
  • Eigenvector calculation: c² − 19.00c + 18.00 (characteristic equation);
  • Eigenvector calculation: 18 (first lambda value);
  • Eigenvector calculation: 1 (second lambda value);
  • System of equations of vector 1: -13x1 + 13y1 = 4x1 − 4y1;
  • Vector 2 equation system: 4x1 + 13y1 = 4x1 + 13y1;
  • Eigenvector 1: (1; 1);
  • Eigenvector 2: (-3.25; 1).

Thus, we have obtained two linearly independent eigenvectors.
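The calculator's output is easy to verify with a short Python sketch (plain arithmetic, nothing assumed beyond the numbers above):

```python
M = [[5, 13],
     [4, 14]]
trace = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(trace, det)                  # 19 18

# Both lambda values satisfy c^2 - 19c + 18 = 0:
print(18**2 - trace * 18 + det)    # 0
print(1**2 - trace * 1 + det)      # 0

# Eigenvector 2 = (-3.25; 1) solves (5 - 1)x + 13y = 0:
print((5 - 1) * (-3.25) + 13 * 1)  # 0.0
```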

Conclusion

Linear algebra and analytic geometry are standard subjects for any technical freshman. The sheer number of vectors and matrices is terrifying, and it is easy to make a mistake in such cumbersome calculations. Our program lets students check their calculations or solve the eigenvector problem automatically. There are other linear algebra calculators in our catalog; use them in your study or work.

How to paste mathematical formulas to the website?

If you ever need to add one or two mathematical formulas to a web page, the easiest way is the one described in the article: mathematical formulas are easily inserted into a site as pictures that Wolfram Alpha generates automatically. Besides being simple, this universal method helps improve the site's visibility in search engines. It has been working for a long time (and, I think, will work forever), but it is already outdated.

If you constantly use mathematical formulas on your site, then I recommend MathJax, a special JavaScript library that renders mathematical notation in web browsers using MathML, LaTeX, or ASCIIMathML markup.

There are two ways to get started with MathJax: (1) using a simple snippet, you can quickly connect a MathJax script to your site, which will be automatically downloaded from a remote server at the right moment (list of servers); (2) download the MathJax script from a remote server to your own server and connect it to all pages of your site. The second method is more complicated and time-consuming, but it speeds up the loading of your site's pages, and if the parent MathJax server becomes temporarily unavailable for some reason, this will not affect your own site in any way. Despite these advantages, I chose the first method, as it is simpler, faster, and does not require technical skills. Follow my example, and within 5 minutes you will be able to use all the features of MathJax on your website.

You can connect the MathJax library script from a remote server using two code options taken from the main MathJax website or from the documentation page:

One of these code options needs to be copied and pasted into the code of your web page, preferably between the <head> tags or right after the <body> tag. With the first option, MathJax loads faster and slows the page less, but the snippet will need to be updated periodically. With the second option, the latest version of MathJax is tracked and loaded automatically, but the pages will load more slowly.

The easiest way to connect MathJax is in Blogger or WordPress: in the site control panel, add a widget designed for inserting third-party JavaScript code, copy into it the first or second version of the loading code presented above, and place the widget closer to the beginning of the template (by the way, this is not at all necessary, since the MathJax script loads asynchronously). That's all. Now learn the MathML, LaTeX, and ASCIIMathML markup syntax, and you are ready to embed mathematical formulas into your web pages.

Any fractal is built according to a certain rule that is applied consistently an unlimited number of times. Each such step is called an iteration.

The iterative algorithm for constructing a Menger sponge is quite simple: the original cube with side 1 is divided by planes parallel to its faces into 27 equal cubes. The central cube and the 6 cubes adjacent to it by faces are removed, leaving a set of 20 smaller cubes. Doing the same with each of these cubes, we get a set of 400 even smaller cubes. Continuing this process indefinitely, we obtain the Menger sponge.
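The counting in this construction can be sketched in Python (the function name is my own; the volume formula follows from keeping 20 of the 27 sub-cubes at each iteration):

```python
def menger_counts(n):
    """Number of cubes and remaining volume after n iterations,
    starting from a unit cube."""
    return 20 ** n, (20 / 27) ** n

cubes, volume = menger_counts(2)
print(cubes)    # 400, as in the text
print(volume)   # roughly 0.549 -- the volume tends to 0 as n grows
```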


". The first part outlines the provisions that are minimally necessary for understanding chemometrics, and the second part contains the facts that you need to know for a deeper understanding of the methods multivariate analysis. The presentation is illustrated by examples made in an Excel workbook. Matrix.xls that accompanies this document.

Links to the examples are placed in the text as Excel objects. The examples are abstract in nature and are not tied to analytical chemistry tasks in any way. Real examples of the use of matrix algebra in chemometrics are discussed in other texts devoted to various chemometric applications.

Most measurements carried out in analytical chemistry are not direct but indirect. This means that in the experiment, instead of the value of the desired analyte C (concentration), another value x (signal) is obtained, related to but not equal to C, i.e. x(C) ≠ C. As a rule, the form of the dependence x(C) is not known, but fortunately in analytical chemistry most measurements are proportional. This means that if the concentration C increases by a factor of a, the signal x increases by the same factor: x(aC) = a x(C). In addition, signals are additive, so the signal from a sample containing two substances with concentrations C 1 and C 2 is equal to the sum of the signals from each component: x(C1 + C2) = x(C1) + x(C2). Proportionality and additivity together give linearity. Many examples could be given to illustrate the principle of linearity, but it suffices to mention the two clearest ones: chromatography and spectroscopy. The second feature of experiments in analytical chemistry is that they are multichannel. Modern analytical equipment measures signals for many channels simultaneously. For example, the intensity of light transmission is measured for several wavelengths at once, i.e. a spectrum. Therefore, in the experiment we deal with a set of signals x 1 , x 2 ,..., x n characterizing the set of concentrations C 1 , C 2 , ..., C m of the substances present in the system under study.

Fig. 1 Spectra

So, the analytical experiment is characterized by linearity and multidimensionality. Therefore, it is convenient to treat experimental data as vectors and matrices and manipulate them using the apparatus of matrix algebra. The fruitfulness of this approach is illustrated by the example in Fig. 1, which shows three spectra taken at 200 wavelengths from 4000 to 4796 cm−1. The first (x 1) and second (x 2) spectra were obtained for standard samples in which the concentrations of the two substances A and B are known: in the first sample [A] = 0.5, [B] = 0.1, and in the second sample [A] = 0.2, [B] = 0.6. What can be said about a new, unknown sample whose spectrum is denoted x 3?

Consider the three experimental spectra x 1 , x 2 , and x 3 as three vectors of dimension 200. Using linear algebra, one can easily show that x 3 = 0.1 x 1 + 0.3 x 2 , so the third sample evidently contains only substances A and B, in concentrations [A] = 0.5×0.1 + 0.2×0.3 = 0.11 and [B] = 0.1×0.1 + 0.6×0.3 = 0.19.
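The arithmetic behind this conclusion can be illustrated in Python. The actual 200-channel spectra are not reproduced here, so the sketch below uses made-up 3-channel "spectra" purely to show how linearity makes the two computations agree:

```python
# Hypothetical 3-channel "spectra" of pure substances A and B
# (illustrative numbers; the real spectra have 200 channels).
sA = [1.0, 3.0, 2.0]
sB = [4.0, 1.0, 5.0]

def mix(cA, cB):
    # Linearity: signal of a mixture = cA*spectrum(A) + cB*spectrum(B)
    return [cA * a + cB * b for a, b in zip(sA, sB)]

x1 = mix(0.5, 0.1)   # first standard sample
x2 = mix(0.2, 0.6)   # second standard sample
x3 = [0.1 * a + 0.3 * b for a, b in zip(x1, x2)]   # x3 = 0.1*x1 + 0.3*x2

# The unknown sample then contains A and B in concentrations:
cA = 0.5 * 0.1 + 0.2 * 0.3   # 0.11
cB = 0.1 * 0.1 + 0.6 * 0.3   # 0.19
ok = all(abs(p - q) < 1e-12 for p, q in zip(x3, mix(cA, cB)))
print(ok)   # True -- linearity makes the two routes agree
```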

1. Basic information

1.1 Matrices

A matrix is a rectangular table of numbers, for example

Fig. 2 Matrix

Matrices are denoted by capital bold letters (A), and their elements by the corresponding lowercase letters with indices, i.e. a ij . The first index numbers the rows and the second the columns. In chemometrics, it is customary to denote the maximum value of an index by the same letter as the index itself, but capitalized. Therefore, the matrix A can also be written as (a ij , i = 1,..., I; j = 1,..., J). For the example matrix, I = 4, J = 3, and a 23 = −7.5.

The pair of numbers I and J is called the dimension of the matrix and is denoted I × J. An example of a matrix in chemometrics is the set of spectra obtained for I samples at J wavelengths.

1.2. The simplest operations with matrices

Matrices can be multiplied by numbers. In this case, each element is multiplied by the number. For example:

Fig. 3 Multiplying a matrix by a number

Two matrices of the same dimension can be added and subtracted element-wise. For example,

Fig. 4 Matrix addition

As a result of multiplication by a number and addition, a matrix of the same dimension is obtained.

A zero matrix is a matrix consisting of zeros. It is denoted O. It is obvious that A + O = A, A − A = O, and 0A = O.

A matrix can be transposed. In this operation the matrix is flipped, i.e. rows and columns are swapped. Transposition is indicated by a prime, A", or by the index A t . Thus, if A = (a ij , i = 1,..., I; j = 1,..., J), then A t = (a ji , j = 1,..., J; i = 1,..., I). For example

Fig. 5 Matrix transposition

It's obvious that ( A t) t = A, (A+B) t = A t + B t .

1.3. Matrix multiplication

Matrices can be multiplied, but only if they have matching dimensions. Why this is so will be clear from the definition. The product of a matrix A of dimension I × K and a matrix B of dimension K × J is the matrix C of dimension I × J whose elements are the numbers

Thus, for the product AB it is necessary that the number of columns of the left matrix A equal the number of rows of the right matrix B. An example of a matrix product:

Fig.6 Product of matrices

The matrix multiplication rule can be formulated as follows: to find the element of the matrix C standing at the intersection of the i-th row and j-th column (c ij), one must multiply element-wise the i-th row of the first matrix A by the j-th column of the second matrix B and add up all the results. In the example shown, the element in the third row and second column is obtained as the sum of the element-wise products of the third row of A and the second column of B.

Fig.7 Element of the product of matrices

The product of matrices depends on the order, i.e. AB ≠ BA, if only for dimensional reasons. It is said to be non-commutative. However, matrix multiplication is associative: ABC = (AB)C = A(BC). Moreover, it is also distributive: A(B + C) = AB + AC. It is obvious that AO = O.
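The "row by column" rule can be sketched in Python (an illustrative pure-list implementation, not a library function):

```python
def matmul(A, B):
    """C[i][j] = sum over k of A[i][k] * B[k][j];
    requires cols(A) == rows(B)."""
    assert len(A[0]) == len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(matmul(A, B))   # [[2, 1], [4, 3]]
print(matmul(B, A))   # [[3, 4], [1, 2]] -- AB != BA in general
```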

1.4. Square matrices

If the number of columns of a matrix equals the number of its rows (I = J = N), the matrix is called square. In this section we consider only such matrices. Among them, matrices with special properties can be singled out.

The identity matrix (denoted I and sometimes E) is a matrix in which all elements are zero except the diagonal ones, which are equal to 1, i.e.

Obviously AI = IA = A.

A matrix is called diagonal if all its elements except the diagonal ones (a ii) are zero. For example

Fig. 8 Diagonal matrix

A matrix A is called upper triangular if all its elements below the diagonal are zero, i.e. a ij = 0 for i > j. For example

Fig. 9 Upper triangular matrix

The lower triangular matrix is ​​defined similarly.

A matrix A is called symmetric if A t = A. In other words, a ij = a ji . For example

Fig. 10 Symmetric matrix

A matrix A is called orthogonal if

A t A = AA t = I.

A matrix is called normal if A t A = AA t .

1.5. Trace and determinant

The trace of a square matrix A (denoted Tr(A) or Sp(A)) is the sum of its diagonal elements,

For example,

Fig. 11 Matrix trace

It's obvious that

Sp(α A) = α Sp( A) and

Sp( A+B) = Sp( A)+ Sp( B).

It can be shown that

Sp( A) = Sp( A t), Sp( I) = N,

and also that

Sp( AB) = Sp( BA).
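A quick Python sketch of the trace and the property Sp(AB) = Sp(BA) (illustrative, plain lists):

```python
def trace(A):
    # Sum of the diagonal elements: Sp(A)
    return sum(A[i][i] for i in range(len(A)))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 5], [6, 7]]
print(trace(A))                                    # 5
print(trace(matmul(A, B)), trace(matmul(B, A)))    # 55 55 -- Sp(AB) = Sp(BA)
```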

Another important characteristic of a square matrix is its determinant (denoted det(A)). The definition of the determinant in the general case is rather complicated, so we start with the simplest option: a matrix A of dimension (2×2). Then

For a (3×3) matrix, the determinant will be equal to

In the case of an (N × N) matrix, the determinant is calculated as a sum of 1 × 2 × 3 × ... × N = N! terms, each of which is equal to

The indices k 1 , k 2 ,..., k N run over all possible ordered permutations of the numbers in the set (1, 2, ..., N). The calculation of a matrix determinant is a complex procedure, which in practice is carried out using special programs. For example,

Fig. 12 Matrix determinant

We note only the obvious properties:

det( I) = 1, det( A) = det( A t),

det( AB) = det( A)det( B).
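For 2×2 and 3×3 matrices, the determinant formulas are short enough to write out directly (an illustrative Python sketch):

```python
def det2(M):
    # det of a 2x2 matrix: a11*a22 - a12*a21
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def det3(M):
    # det of a 3x3 matrix, expanded along the first row
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[1, 2], [3, 4]]
B = [[2, 0], [1, 3]]
AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
print(det2(A), det2(B), det2(AB))               # -2 6 -12: det(AB) = det(A)det(B)
print(det3([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # 1, i.e. det(I) = 1
```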

1.6. Vectors

If a matrix has only one column (J = 1), the object is called a vector, or more precisely, a column vector. For example

Matrices consisting of one row can also be considered, for example

This object is also a vector, but a row vector. When analyzing data, it is important to understand which vectors we are dealing with: columns or rows. Thus, the spectrum taken for one sample can be regarded as a row vector. Then the set of spectral intensities at some wavelength for all samples should be treated as a column vector.

The dimension of a vector is the number of its elements.

It is clear that any column vector can be transformed into a row vector by transposition, i.e.

In cases where the form of a vector is not specified and one simply says "vector", a column vector is meant. We will also adhere to this rule. A vector is denoted by a lowercase upright bold letter. A zero vector is a vector all of whose elements are zero. It is denoted 0 .

1.7. The simplest operations with vectors

Vectors can be added and multiplied by numbers in the same way as matrices. For example,

Fig. 13 Operations with vectors

Two vectors x and y are called collinear if there is a number α such that y = α x .

1.8. Products of vectors

Two vectors of the same dimension N can be multiplied. Let there be two vectors x = (x 1 , x 2 ,..., x N) t and y = (y 1 , y 2 ,..., y N) t . Guided by the "row by column" multiplication rule, we can form two products from them: x t y and xy t . The first product

is called the scalar or inner product. Its result is a number. The notation (x, y) = x t y is also used. For example,

Fig. 14 Inner (scalar) product

The second product

is called the outer product. Its result is a matrix of dimension (N × N). For example,

Fig. 15 Outer product

Vectors whose scalar product is zero are called orthogonal.
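Both products can be sketched in a few lines of Python (illustrative helper names of my own):

```python
def inner(x, y):
    # Scalar (inner) product: the result is a number
    return sum(a * b for a, b in zip(x, y))

def outer(x, y):
    # Outer product: the result is an N x N matrix
    return [[a * b for b in y] for a in x]

x = [1, 2, 3]
y = [4, 0, -1]
print(inner(x, y))           # 4 + 0 - 3 = 1
print(outer(x, y))           # [[4, 0, -1], [8, 0, -2], [12, 0, -3]]
print(inner([1, 0], [0, 5])) # 0 -- these two vectors are orthogonal
```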

1.9. Vector norm

The scalar product of a vector with itself is called the scalar square. This value

defines the squared length of the vector x. To denote the length (also called the norm of the vector), the following notation is used

For example,

Fig. 16 Vector norm

A vector of unit length (||x|| = 1) is called normalized. A nonzero vector (x ≠ 0) can be normalized by dividing it by its length, i.e. x = ||x|| (x/||x||) = ||x|| e. Here e = x/||x|| is a normalized vector.

Vectors are called orthonormal if they are all normalized and pairwise orthogonal.

1.10. Angle between vectors

The scalar product also defines the angle φ between two vectors x and y:

cos φ = (x, y) / (||x|| ||y||).

If the vectors are orthogonal, then cos φ = 0 and φ = π/2; if they are collinear (codirectional), then cos φ = 1 and φ = 0.
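The norm and the angle formula can be sketched in Python (illustrative, using the standard math module):

```python
import math

def norm(x):
    # Length of a vector: square root of the scalar square
    return math.sqrt(sum(a * a for a in x))

def angle(x, y):
    # Angle via cos(phi) = (x, y) / (||x|| * ||y||)
    dot = sum(a * b for a, b in zip(x, y))
    return math.acos(dot / (norm(x) * norm(y)))

print(norm([3, 4]))                      # 5.0
print(angle([1, 0], [0, 1]))             # 1.5707963267948966 = pi/2, orthogonal
print(round(angle([1, 1], [2, 2]), 6))   # 0.0 -- collinear, codirectional
```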

1.11. Vector representation of a matrix

Each matrix A of size I × J can be represented as a set of vectors

Here each vector a j is the j-th column, and each row vector b i is the i-th row of the matrix A.

1.12. Linearly dependent vectors

Vectors of the same dimension (N) can be added and multiplied by a number, just like matrices. The result is a vector of the same dimension. Let there be several vectors of the same dimension x 1 , x 2 ,..., x K and the same number of numbers α 1 , α 2 ,..., α K . The vector

y= α 1 x 1 + α 2 x 2 +...+α K x K

is called a linear combination of the vectors x k .

If there exist non-zero numbers α k ≠ 0, k = 1,..., K, such that y = 0 , then the set of vectors x k is called linearly dependent. Otherwise, the vectors are called linearly independent. For example, the vectors x 1 = (2, 2) t and x 2 = (−1, −1) t are linearly dependent, since x 1 + 2x 2 = 0 .

1.13. Matrix rank

Consider a set of K vectors x 1 , x 2 ,..., x K of dimension N. The rank of this system of vectors is the maximum number of linearly independent vectors. For example, in the set

there are only two linearly independent vectors, for example x 1 and x 2 , so its rank is 2.

Obviously, if there are more vectors in the set than their dimension ( K>N), then they are necessarily linearly dependent.

The rank of a matrix (denoted rank(A)) is the rank of the system of vectors of which it consists. Although any matrix can be represented in two ways (by column vectors or row vectors), this does not affect the rank, since

1.14. Inverse matrix

A square matrix A is called non-degenerate if it has a unique inverse matrix A −1 , determined by the conditions

AA −1 = A −1 A = I.

The inverse matrix does not exist for all matrices. A necessary and sufficient condition for nondegeneracy is

det( A) ≠ 0 or rank( A) = N.

Matrix inversion is a complex procedure for which there are special programs. For example,

Fig. 17 Matrix inversion

We give the formulas for the simplest case, a 2 × 2 matrix:

If matrices A and B are non-degenerate, then

(AB) −1 = B −1 A −1 .
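The 2×2 inversion formula can be sketched in Python (illustrative; the assertion mirrors the nondegeneracy condition det(A) ≠ 0):

```python
def inv2(M):
    # Inverse of a 2x2 matrix: swap the diagonal, negate the
    # off-diagonal, divide everything by the determinant.
    (a, b), (c, d) = M
    det = a * d - b * c
    assert det != 0, "degenerate matrix has no inverse"
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

A = [[4, 7], [2, 6]]
Ainv = inv2(A)
print(Ainv)   # [[0.6, -0.7], [-0.2, 0.4]]

# A * Ainv should be the identity matrix (up to rounding):
prod = [[sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
I = [[1, 0], [0, 1]]
ok = all(abs(prod[i][j] - I[i][j]) < 1e-12 for i in range(2) for j in range(2))
print(ok)     # True
```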

1.15. Pseudo-inverse matrix

If the matrix A is degenerate and the inverse matrix does not exist, then in some cases one can use the pseudo-inverse matrix, which is defined as a matrix A + such that

AA + A = A.

The pseudo-inverse matrix is not unique, and its form depends on the construction method. For example, for a rectangular matrix the Moore-Penrose method can be used.

If the number of columns is less than the number of rows, then

A + =(A t A) −1 A t

For example,

Fig. 17a Matrix pseudo-inversion

If the number of columns is greater than the number of rows, then

A + =A t( AA t) −1
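The Moore-Penrose formula for the "tall" case (more rows than columns) can be sketched in Python (illustrative helper names of my own):

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def pinv_tall(A):
    # Moore-Penrose for more rows than columns: A+ = (A^t A)^-1 A^t
    At = transpose(A)
    return matmul(inv2(matmul(At, A)), At)

A = [[1, 0], [0, 1], [1, 1]]            # a 3x2 matrix with full column rank
Ap = pinv_tall(A)
check = matmul(matmul(A, Ap), A)        # A A+ A should reproduce A
ok = all(abs(check[i][j] - A[i][j]) < 1e-12
         for i in range(3) for j in range(2))
print(ok)   # True
```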

1.16. Multiplication of a vector by a matrix

A vector x can be multiplied by a matrix A of suitable dimension. A column vector is multiplied on the right, Ax, and a row vector on the left, x t A. If the dimension of the vector is J and the dimension of the matrix is I × J, the result is a vector of dimension I. For example,

Fig. 18 Vector-matrix multiplication

If the matrix A is square (I × I), then the vector y = Ax has the same dimension as x. It is obvious that

A(α 1 x 1 + α 2 x 2) = α 1 Ax 1 + α 2 Ax 2 .

Therefore, matrices can be regarded as linear transformations of vectors. In particular, Ix = x and Ox = 0 .

2. Additional information

2.1. Systems of linear equations

Let A be a matrix of size I × J, and b a vector of dimension I. Consider the equation

Ax = b

with respect to the vector x of dimension J. Essentially, this is a system of I linear equations with J unknowns x 1 ,..., x J . A solution exists if and only if

rank( A) = rank( B) = R,

where B is the augmented matrix of dimension I × (J+1), consisting of the matrix A padded with the column b, B = (A b). Otherwise, the equations are inconsistent.

If R = I = J, then the solution is unique:

x = A −1 b.

If R < J, then there are many different solutions, which can be expressed as a linear combination of J − R vectors. A system of homogeneous equations Ax = 0 with a square matrix A (N × N) has a non-trivial solution (x ≠ 0) if and only if det(A) = 0. If R = rank(A) < N, then there are N − R linearly independent solutions.

2.2. Bilinear and quadratic forms

If A is a square matrix and x and y are vectors of the corresponding dimension, then the scalar product of the form x t Ay is called the bilinear form defined by the matrix A. For x = y, the expression x t Ax is called a quadratic form.

2.3. Positive definite matrices

A square matrix A is called positive definite if, for any nonzero vector x ≠ 0 ,

x t Ax > 0.

Similarly, one defines negative definite (x t Ax < 0), non-negative definite (x t Ax ≥ 0), and non-positive definite (x t Ax ≤ 0) matrices.

2.4. Cholesky decomposition

If a symmetric matrix A is positive definite, then there is a unique upper triangular matrix U with positive diagonal elements for which

A = U t U.

For example,

Fig. 19 Cholesky decomposition
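The decomposition A = U t U can be computed with the standard Cholesky recurrence (an illustrative Python sketch for a symmetric positive definite matrix):

```python
import math

def cholesky(A):
    """Upper-triangular U with positive diagonal such that A = U^t U,
    for a symmetric positive definite matrix A."""
    n = len(A)
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            # Subtract the contributions of the rows already computed.
            s = A[i][j] - sum(U[k][i] * U[k][j] for k in range(i))
            if i == j:
                U[i][j] = math.sqrt(s)       # diagonal entry
            else:
                U[i][j] = s / U[i][i]        # off-diagonal entry
    return U

A = [[4, 2], [2, 5]]
U = cholesky(A)
print(U)   # [[2.0, 1.0], [0.0, 2.0]] -- indeed U^t U = [[4, 2], [2, 5]]
```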

2.5. Polar decomposition

Let A be a non-degenerate square matrix of dimension N × N. Then there is a unique polar representation

A = SR,

where S is a non-negative symmetric matrix and R is an orthogonal matrix. The matrices S and R can be defined explicitly:

S 2 = AA t or S = (AA t) ½ and R = S −1 A = (AA t) −½ A.

For example,

Fig. 20 Polar decomposition

If the matrix A is degenerate, then the decomposition is not unique: S is still unique, but there may be many R. The polar decomposition represents the matrix A as a combination of a compression/stretching S and a rotation R.

2.6. Eigenvectors and eigenvalues

Let A be a square matrix. A vector v is called an eigenvector of the matrix A if

Av = λ v,

where the number λ is called an eigenvalue of the matrix A. Thus, the transformation that the matrix A performs on the vector v reduces to a simple stretching or compression by the factor λ. An eigenvector is determined up to multiplication by a constant α ≠ 0: if v is an eigenvector, then αv is also an eigenvector.

2.7. Eigenvalues

A matrix A of dimension (N × N) has at most N eigenvalues. They satisfy the characteristic equation

det( A − λ I) = 0,

which is an algebraic equation of order N. In particular, for a 2×2 matrix the characteristic equation has the form

For example,

Fig. 21 Eigenvalues

The set of eigenvalues λ 1 ,..., λ N of a matrix A is called the spectrum of A.

The spectrum has various properties. In particular

det( A) = λ 1×...×λ N, Sp( A) = λ 1 +...+λ N.

The eigenvalues of an arbitrary matrix can be complex numbers, but if the matrix is symmetric (A t = A), then its eigenvalues are real.

2.8. Eigenvectors

A matrix A of dimension (N × N) has at most N eigenvectors, each corresponding to its own eigenvalue. To determine the eigenvector v n , one needs to solve the system of homogeneous equations

(A − λ n I)v n = 0 .

It has a non-trivial solution because det( A-λ n I) = 0.

For example,

Fig. 22 Eigenvectors

The eigenvectors of a symmetric matrix are orthogonal.

Eigenvalues ​​(numbers) and eigenvectors.
Solution examples



From both equations it follows that .

Let's put then: .

As a result: is the second eigenvector.

Let us repeat the important points of the solution:

– the resulting system certainly has a general solution (the equations are linearly dependent);

- "Y" is selected in such a way that it is integer and the first "x" coordinate is integer, positive and as small as possible.

– we check that the particular solution satisfies each equation of the system.

Answer .

The intermediate "checkpoints" were quite sufficient, so the final check of the equalities is, in principle, superfluous.

In various sources, the coordinates of eigenvectors are often written not in columns but in rows, for example: (and, to be honest, I myself used to write them in rows). This option is acceptable, but in light of the topic of linear transformations it is technically more convenient to use column vectors.

Perhaps the solution seemed very long to you, but that is only because I commented on the first example in great detail.

Example 2

Find the eigenvectors of the matrix

Practice on your own! An approximate sample of the final write-up of the task is at the end of the lesson.

Sometimes an additional task is required, namely:

write the canonical decomposition of the matrix

What is it?

If the eigenvectors of the matrix form a basis, then it can be represented as

A = S·D·S⁻¹,

where S is a matrix composed of the coordinates of the eigenvectors and D is a diagonal matrix with the corresponding eigenvalues.

This matrix decomposition is called canonical, or diagonal.
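The canonical (diagonal) decomposition can be sketched numerically; assuming the usual notation A = S·D·S⁻¹ with eigenvector columns in S, and an arbitrary diagonalizable 2×2 matrix in place of the lesson's example:

```python
import numpy as np

# Arbitrary diagonalizable matrix (eigenvalues 5 and 2)
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, S = np.linalg.eig(A)   # columns of S are eigenvectors
D = np.diag(eigvals)            # eigenvalues on the main diagonal, in order

A_rebuilt = S @ D @ np.linalg.inv(S)
print(np.allclose(A, A_rebuilt))  # True: A = S D S^-1
```

The column order in S matters: the n-th diagonal entry of D must be the eigenvalue of the eigenvector in the n-th column, exactly as the text emphasizes below.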

Consider the matrix of the first example. Its eigenvectors are linearly independent (non-collinear) and form a basis. Let's compose a matrix from their coordinates:

On the main diagonal of the matrix, the eigenvalues are placed in the corresponding order, and the remaining elements are equal to zero:
- once again I emphasize the importance of the order: "two" corresponds to the 1st vector and is therefore located in the 1st column, "three" to the 2nd vector.

Using the usual algorithm for finding the inverse matrix, or the Gauss-Jordan method, we find . No, that's not a typo! Before you is an event as rare as a solar eclipse: the inverse coincides with the original matrix.

It remains to write down the canonical decomposition of the matrix:

The system can be solved by elementary transformations, and in the following examples we will resort to this method. But here the "school" method works much faster. From the 3rd equation we express and substitute into the second equation:

Since the first coordinate is zero, we obtain a system , from each equation of which it follows that .

And again, pay attention to the mandatory presence of a linear dependence. If only the trivial solution is obtained, then either the eigenvalue was found incorrectly, or the system was set up / solved with an error.

The value gives compact coordinates:

Eigenvector:

And once again, we check that the found solution satisfies every equation of the system. In the following paragraphs and in subsequent tasks, I recommend taking this wish as a mandatory rule.

2) For the eigenvalue, following the same principle, we obtain the following system:

From the 2nd equation of the system we express and substitute into the third equation:

Since the "z" coordinate is zero, we obtain a system , from each equation of which a linear dependence follows.

Let ; then

We check that the solution satisfies every equation of the system.

Thus, the eigenvector: .

3) And, finally, the system corresponding to the eigenvalue:

The second equation looks the simplest, so we express from it and substitute into the 1st and 3rd equations:

Everything is fine: a linear dependence is revealed, which we substitute into the expression:

As a result, "x" and "y" are expressed through "z": . In practice it is not necessary to obtain exactly these relations; in some cases it is more convenient to express both through , or through . Or even in a "chain": for example, "x" through "y", and "y" through "z".

Let's put ; then:

We check that the found solution satisfies each equation of the system and write the third eigenvector

Answer: eigenvectors:

Geometrically, these vectors define three different spatial directions ("there and back again") along which the linear transformation maps nonzero vectors (eigenvectors) into vectors collinear to them.

If the condition had required finding the canonical decomposition of , it would be possible here, because distinct eigenvalues correspond to distinct, linearly independent eigenvectors. We compose a matrix from their coordinates, a diagonal matrix from the corresponding eigenvalues, and find the inverse matrix .

If, according to the condition, it is required to write the matrix of the linear transformation in the basis of eigenvectors, then we give the answer in the form . There is a difference, and a significant one! For this matrix is the matrix "D".
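The distinction the paragraph draws, that in the basis of eigenvectors the transformation matrix is the diagonal matrix itself, reads as S⁻¹·A·S = D; a numeric sketch under that assumed notation, with an arbitrary diagonalizable matrix:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])      # arbitrary diagonalizable example

eigvals, S = np.linalg.eig(A)
D_in_eigenbasis = np.linalg.inv(S) @ A @ S   # change of basis to eigenvectors

print(np.allclose(D_in_eigenbasis, np.diag(eigvals)))  # True
```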

A problem with simpler calculations for independent solution:

Example 5

Find the eigenvectors of the linear transformation given by the matrix

When finding the eigenvalues, try not to reduce the matter to a polynomial of the 3rd degree. In addition, your system solutions may differ from mine (there is no uniqueness here), and the vectors you find may differ from the sample vectors up to proportionality of their respective coordinates. For example, and . It is more aesthetically pleasing to present the answer in the form , but it is fine if you stop at the second option. However, everything has reasonable limits: the version does not look very good.

An approximate final sample of the task write-up is at the end of the lesson.

How do we solve the problem in the case of multiple eigenvalues?

The general algorithm remains the same, but it has its own peculiarities, and it is advisable to keep some parts of the solution in a stricter academic style:

Example 6

Find the eigenvalues and eigenvectors

Solution

Of course, we expand the determinant along the first column:

And after factoring the quadratic trinomial:

As a result, the eigenvalues are obtained, two of which are equal (a multiple root).
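A multiple eigenvalue can still carry a full set of eigenvectors; a sketch with numpy, using an illustrative symmetric matrix (not the one from this example) whose spectrum is 5, 2, 2:

```python
import numpy as np

# Symmetric matrix with eigenvalues 5, 2, 2 (lambda = 2 is a double root)
A = np.array([[3.0, 1.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 1.0, 3.0]])

eigvals = np.sort(np.linalg.eigvals(A).real)
print(np.allclose(eigvals, [2.0, 2.0, 5.0]))  # True

# Geometric multiplicity of lambda = 2: dim null(A - 2I) = 3 - rank(A - 2I)
rank = np.linalg.matrix_rank(A - 2.0 * np.eye(3))
print(3 - rank)  # 2 -> two independent eigenvectors for the double root
```

In other examples the geometric multiplicity can be smaller than the algebraic one, which is exactly the "two or one eigenvector" alternative discussed below.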

Let's find the eigenvectors:

1) We deal with the lone soldier according to a "simplified" scheme:

From the last two equations the equality is clearly visible, and it obviously should be substituted into the 1st equation of the system:

There is no better combination:
Eigenvector:

2-3) Now we relieve the pair of sentries. In this case there may be either two eigenvectors or one. Regardless of the multiplicity of the roots, we substitute the value into the determinant , which brings us to the following homogeneous system of linear equations:

Eigenvectors are exactly the vectors of the fundamental system of solutions.

Actually, throughout the lesson we have done nothing but find the vectors of the fundamental system; until now the term simply was not particularly needed. By the way, those nimble students who slipped past the topic of homogeneous equations in camouflage will have to tackle it now.


The only action was to remove the extra rows. The result is a "one by three" matrix with a formal "step" in the middle.
is the basic variable, are the free variables. There are two free variables, therefore there are also two vectors of the fundamental system.

Let's express the basic variable in terms of the free variables: . The zero factor in front of "x" allows it to take absolutely any value (which is also clearly visible from the system of equations).

In the context of this problem, it is more convenient to write the general solution not in a row but in a column:

The pair corresponds to an eigenvector:
The pair corresponds to an eigenvector:

Note: sophisticated readers can pick out these vectors orally, just by analyzing the system , but some knowledge is needed here: there are three variables and the rank of the system matrix is one, which means the fundamental system of solutions consists of 3 − 1 = 2 vectors. However, the found vectors are perfectly visible even without this knowledge, on a purely intuitive level. In that case the third vector will be written even "more beautifully": . A word of caution, though: in another example a simple selection may not be possible, which is why the reservation is intended for experienced people. Besides, why not take as the third vector, say, ? After all, its coordinates also satisfy each equation of the system, and the vectors are linearly independent. This option is acceptable in principle, but "crooked", since the "other" vector is a linear combination of the vectors of the fundamental system.
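The counting rule in the note (number of fundamental-system vectors = number of variables minus the rank of the system matrix) can be sketched numerically; the rank-1 homogeneous system below is hypothetical, standing in for the one in the example:

```python
import numpy as np

# Hypothetical rank-1 homogeneous system B x = 0 (rows are proportional)
B = np.array([[ 1.0,  2.0, -1.0],
              [ 2.0,  4.0, -2.0],
              [-1.0, -2.0,  1.0]])

r = np.linalg.matrix_rank(B)
print(B.shape[1] - r)   # 2: the fundamental system has 3 - 1 = 2 vectors

# A null-space basis: rows of V^t whose singular values are (near) zero
_, s, Vt = np.linalg.svd(B)
null_basis = Vt[r:]
print(np.allclose(B @ null_basis.T, 0.0))  # True: each basis vector solves B x = 0
```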

Answer: eigenvalues: , eigenvectors:

A similar example for a do-it-yourself solution:

Example 7

Find the eigenvalues and eigenvectors

An approximate sample of the final write-up at the end of the lesson.

It should be noted that in both the 6th and 7th examples a triple of linearly independent eigenvectors is obtained, and therefore the original matrix can be represented in the canonical decomposition. But such luck does not happen in every case:

Example 8


Solution: we compose and solve the characteristic equation:

We expand the determinant along the first column:

We carry out further simplifications using the method considered earlier, avoiding a polynomial of the 3rd degree:

are the eigenvalues.

Let's find the eigenvectors:

1) There are no difficulties with the root:

Do not be surprised: in addition to the set , the variables are also in use; there is no difference here.

From the 3rd equation we express and substitute into the 1st and 2nd equations:

From both equations it follows that:

Let ; then:

2-3) For the multiple value, we obtain the system .

Let us write down the matrix of the system and, using elementary transformations, bring it to row-echelon form: