
Inverse matrix and its properties

Finding the inverse matrix is a problem most often solved by one of two methods:

  • the method of algebraic complements (the adjugate matrix), which requires finding determinants and transposing matrices;
  • the method of Gaussian elimination of unknowns, which requires performing elementary transformations of matrices (adding rows, multiplying a row by a number, and so on).

For those who are especially curious, there are other methods as well, for example, the method of linear transformations. In this lesson we will analyze the methods mentioned above and the algorithms for finding the inverse matrix with them.

The inverse matrix, which is required to be found for a given square matrix $A$, is the matrix $A^{-1}$ whose product with $A$ on either side is the identity matrix:

\[A\cdot A^{-1}=A^{-1}\cdot A=E. \qquad (1)\]

An identity matrix is a diagonal matrix in which all diagonal entries are equal to one.
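For instance, the identity matrix of order three (any order works the same way) is

\[E=\left[ \begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{matrix} \right].\]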

Theorem. For every non-singular (non-degenerate) square matrix one can find an inverse matrix, and moreover only one. For a singular (degenerate) square matrix the inverse matrix does not exist.

A square matrix is called non-singular (or non-degenerate) if its determinant is not equal to zero, and singular (or degenerate) if its determinant is zero.

An inverse matrix can be found only for a square matrix. Naturally, the inverse matrix is also square and of the same order as the given matrix. A matrix for which an inverse matrix exists is called an invertible matrix.

For the inverse matrix there is an apt analogy with the reciprocal of a number. For every number $a$ that is not equal to zero there exists a number $b$ such that the product of $a$ and $b$ equals one: $ab=1$. The number $b$ is called the reciprocal of $a$. For example, for the number 7 the reciprocal is 1/7, since $7\cdot 1/7=1$.

Finding the inverse matrix by the method of algebraic complements (the adjugate matrix)

For a non-singular square matrix $A$ the inverse is the matrix

\[A^{-1}=\frac{1}{\det A}\,\tilde{A}, \qquad (2)\]

where $\det A$ is the determinant of the matrix $A$, and $\tilde{A}$ is the matrix adjugate to $A$.

The adjugate of a square matrix $A$ is the matrix of the same order whose elements are the algebraic complements of the corresponding elements of the determinant of the matrix transposed with respect to $A$. Thus, if

\[A=\left[ \begin{matrix} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ a_{n1} & a_{n2} & \ldots & a_{nn} \\ \end{matrix} \right],\]

then

\[A^{T}=\left[ \begin{matrix} a_{11} & a_{21} & \ldots & a_{n1} \\ a_{12} & a_{22} & \ldots & a_{n2} \\ \ldots & \ldots & \ldots & \ldots \\ a_{1n} & a_{2n} & \ldots & a_{nn} \\ \end{matrix} \right]\]

and

\[\tilde{A}=\left[ \begin{matrix} A_{11} & A_{21} & \ldots & A_{n1} \\ A_{12} & A_{22} & \ldots & A_{n2} \\ \ldots & \ldots & \ldots & \ldots \\ A_{1n} & A_{2n} & \ldots & A_{nn} \\ \end{matrix} \right],\]

where $A_{ij}$ is the algebraic complement of the element $a_{ij}$.

Algorithm for finding the inverse matrix by the method of algebraic complements

1. Find the determinant of the given matrix $A$. If the determinant is equal to zero, the search for the inverse matrix stops, since the matrix is degenerate and no inverse exists for it.

2. Find the matrix transposed with respect to $A$.

3. Calculate the elements of the adjugate matrix as the algebraic complements of the matrix found in step 2.

4. Apply formula (2): multiply the reciprocal of the determinant of the matrix $A$ by the adjugate matrix found in step 3.

5. Check the result obtained in step 4 by multiplying the given matrix $A$ by the inverse matrix. If the product of these matrices equals the identity matrix, the inverse matrix was found correctly. Otherwise start the solution process again.
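To make steps 1 through 5 concrete, here is a minimal sketch in Python (assuming NumPy; the function name is ours, and the test matrix is the $\left[ 2\times 2 \right]$ example solved later in this lesson):

```python
import numpy as np

def inverse_via_adjugate(a):
    """Steps 1-5 above: determinant check, transpose, algebraic
    complements of the transposed matrix, then formula (2)."""
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    det = np.linalg.det(a)                     # step 1
    if np.isclose(det, 0.0):
        raise ValueError("degenerate matrix: no inverse exists")
    at = a.T                                   # step 2
    adj = np.empty_like(a)
    for i in range(n):                         # step 3: algebraic complements
        for j in range(n):
            minor = np.delete(np.delete(at, i, axis=0), j, axis=1)
            adj[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj / det                           # step 4: formula (2)

A = np.array([[3.0, 1.0], [5.0, 2.0]])
A_inv = inverse_via_adjugate(A)
print(A_inv)                                   # [[ 2. -1.] [-5.  3.]]
print(A @ A_inv)                               # step 5: should be the identity matrix
```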

Example 1. For the matrix

find the inverse matrix.

Solution. To find the inverse matrix, we need to find the determinant of the matrix $A$. We find it by the rule of triangles:

Therefore, the matrix $A$ is non-singular (non-degenerate) and an inverse exists for it.

Let's find the matrix adjugate to the given matrix $A$.

Let's find the matrix transposed with respect to the matrix A:

We calculate the elements of the adjugate matrix as the algebraic complements of the matrix transposed with respect to $A$:

Therefore, the matrix adjugate to $A$ has the form

Comment. The order of computing the elements and transposing the matrix may differ. One can first compute the algebraic complements of the matrix $A$ and then transpose the matrix of algebraic complements. The result will be the same elements of the adjugate matrix.

Applying formula (2), we find the matrix inverse to the matrix $A$:

Finding the Inverse Matrix by Gaussian Elimination of Unknowns

The first step in finding the inverse matrix by Gaussian elimination is to append to the matrix $A$ the identity matrix of the same order, separating them with a vertical bar. We get a dual (augmented) matrix $\left( A|E \right)$. Multiplying both parts of this matrix by $A^{-1}$, we get

\[A^{-1}\cdot \left( A|E \right)=\left( E|A^{-1} \right),\]

so reducing the left part to the identity matrix turns the right part into the inverse matrix.

Algorithm for finding the inverse matrix by the Gaussian elimination of unknowns

1. Append to the matrix $A$ an identity matrix of the same order.

2. Transform the resulting dual matrix so that the identity matrix appears in its left part; then the inverse matrix automatically appears in the right part in place of the identity matrix. The matrix $A$ in the left part is reduced to the identity matrix by elementary transformations.

3. If in the process of transforming the matrix $A$ into the identity matrix some row or some column consists only of zeros, then the determinant of the matrix equals zero; consequently the matrix $A$ is degenerate and has no inverse. In this case the search for the inverse matrix stops.
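Here is a minimal sketch of this algorithm in Python, assuming floating-point arithmetic (the helper name is ours; a row swap, which is a permitted elementary transformation, is used to secure a nonzero pivot; the test matrix is the $\left[ 3\times 3 \right]$ one from the task further below):

```python
import numpy as np

def inverse_via_gauss(a, tol=1e-12):
    """Reduce the dual matrix (A | E) to (E | A^-1) with row operations."""
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    m = np.hstack([a, np.eye(n)])              # step 1: the dual matrix (A | E)
    for col in range(n):
        pivot = col + np.argmax(np.abs(m[col:, col]))
        if abs(m[pivot, col]) < tol:           # step 3: only zeros -> degenerate
            raise ValueError("degenerate matrix: no inverse exists")
        m[[col, pivot]] = m[[pivot, col]]      # swap rows
        m[col] /= m[col, col]                  # scale the pivot row to a leading 1
        for row in range(n):
            if row != col:
                m[row] -= m[row, col] * m[col] # zero out the rest of the column
    return m[:, n:]                            # step 2: the right part is A^-1

A = np.array([[1.0, 5.0, 1.0], [3.0, 2.0, 1.0], [6.0, -2.0, 1.0]])
print(inverse_via_gauss(A))  # [[4, -7, 3], [3, -5, 2], [-18, 32, -13]]
```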

Example 2. For the matrix

find the inverse matrix.

Solution. We compose the dual matrix and transform it so that the identity matrix is obtained on the left side. Let's start the transformations.

Multiply the first row of the dual matrix by (-3) and add it to the second row, then multiply the first row by (-4) and add it to the third row; we get

.

To avoid fractional numbers in the subsequent transformations where possible, we first create a one in the second row of the left side of the dual matrix. To do this, multiply the second row by 2 and subtract the third row from it; we get

.

Let's add the first row to the second, and then multiply the second row by (-9) and add it to the third row. Then we get

.

Divide the third row by 8, then

.

Multiply the third row by 2 and add it to the second row. It turns out:

.

Swapping the second and third rows, we finally get:

.

We see that the identity matrix is obtained on the left side; therefore, the inverse matrix is obtained on the right side. Thus:

.

You can check the correctness of the calculations by multiplying the original matrix by the found inverse matrix:

The result should be the identity matrix.

Example 3. For the matrix

find the inverse matrix.

Solution. We compose the dual matrix

and we will transform it.

We multiply the first row by 3 and the second row by 2, and subtract the first from the second; then we multiply the first row by 5 and the third row by 2, and subtract the first from the third; we get

.

We multiply the first row by 2 and add it to the second, then subtract the second from the third row; we get

.

We see that in the third row of the left part all elements are equal to zero. Therefore the matrix is degenerate and has no inverse matrix. We stop the search for the inverse matrix.

This topic is one of the most hated among students. Worse, probably, only determinants.

The trick is that the very concept of the inverse element (and I'm not talking only about matrices now) refers us to the operation of multiplication. Even in the school curriculum multiplication is considered a complicated operation, and matrix multiplication is generally a separate topic, to which I have devoted a whole section and a video tutorial.

Today we will not go into the details of matrix calculations. Just remember: how matrices are denoted, how they are multiplied and what follows from this.

Review: Matrix Multiplication

First of all, let's agree on notation. A matrix $A$ of size $\left[ m\times n \right]$ is simply a table of numbers with exactly $m$ rows and $n$ columns:

\[A=\underbrace{\left[ \begin{matrix} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ a_{m1} & a_{m2} & \ldots & a_{mn} \\ \end{matrix} \right]}_{n}\]

In order not to accidentally mix up rows and columns (believe me, on an exam you can confuse a one with a two, never mind some row), just take a look at the picture:

Determination of indexes for matrix cells

What's happening? If we place the standard coordinate system $OXY$ in the upper left corner and direct the axes so that they cover the entire matrix, then each cell of this matrix can be uniquely associated with coordinates $\left(x;y \right)$: the row number and the column number.

Why is the coordinate system placed exactly in the upper left corner? Yes, because it is from there that we begin to read any texts. It's very easy to remember.

Why does the $x$ axis point down and not to the right? Again, it's simple: take the standard coordinate system (the $x$ axis goes to the right, the $y$ axis goes up) and rotate it so that it encloses the matrix. This is a 90-degree clockwise rotation, and we see its result in the picture.

In general, we figured out how to determine the indices of the matrix elements. Now let's deal with multiplication.

Definition. The matrices $A=\left[ m\times n \right]$ and $B=\left[ n\times k \right]$, when the number of columns in the first matches the number of rows in the second, are called consistent.

The order matters here. To put it unambiguously: the matrices $A$ and $B$ form an ordered pair $\left(A;B \right)$; if they are consistent in this order, it is not at all necessary that the pair $\left(B;A \right)$ is also consistent.

Only consistent matrices can be multiplied.

Definition. The product of consistent matrices $A=\left[ m\times n \right]$ and $B=\left[ n\times k \right]$ is the new matrix $C=\left[ m\times k \right]$ whose elements $c_{ij}$ are calculated by the formula:

\[c_{ij}=\sum\limits_{s=1}^{n}{a_{is}\cdot b_{sj}}\]

In other words: to get the element $c_{ij}$ of the matrix $C=A\cdot B$, take the $i$-th row of the first matrix and the $j$-th column of the second matrix, multiply them element by element, and add up the results.
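As a sketch of this definition in code (the matrices here are arbitrary illustrative ones, not from the lesson):

```python
import numpy as np

def matmul(a, b):
    """The definition verbatim: c_ij is the sum of a_is * b_sj over s."""
    m, n = a.shape
    n2, k = b.shape
    assert n == n2, "matrices are not consistent"
    c = np.zeros((m, k))
    for i in range(m):
        for j in range(k):
            c[i, j] = sum(a[i, s] * b[s, j] for s in range(n))
    return c

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
print(matmul(A, B))                      # [[19. 22.] [43. 50.]]
print(np.allclose(matmul(A, B), A @ B))  # True: agrees with NumPy's product
```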

Yes, that's a harsh definition. Several facts immediately follow from it:

  1. Matrix multiplication is, generally speaking, non-commutative: $A\cdot B\ne B\cdot A$;
  2. However, multiplication is associative: $\left(A\cdot B \right)\cdot C=A\cdot \left(B\cdot C \right)$;
  3. And even distributive: $\left(A+B \right)\cdot C=A\cdot C+B\cdot C$;
  4. And distributive again: $A\cdot \left(B+C \right)=A\cdot B+A\cdot C$.

The distributivity of multiplication had to be stated separately for the sum as the left factor and as the right factor precisely because of the non-commutativity of the multiplication operation.

If, nevertheless, it turns out that $A\cdot B=B\cdot A$, such matrices are called permutable.
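A quick numeric check of non-commutativity and of a commuting pair (the matrices are chosen here purely for illustration):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
print(A @ B)  # [[2 1] [4 3]]
print(B @ A)  # [[3 4] [1 2]]  -> A*B != B*A: multiplication is non-commutative
E = np.eye(2, dtype=int)
print(np.array_equal(A @ E, E @ A))  # True: A and E are permutable
```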

Among all matrices there are special ones: those that, when multiplied by any matrix $A$, give $A$ again:

Definition. A matrix $E$ is called the identity matrix if $A\cdot E=A$ and $E\cdot A=A$. In the case of a square matrix $A$ we can write:

\[A\cdot E=E\cdot A=A\]

The identity matrix is a frequent guest in solving matrix equations. And in general, a frequent guest in the world of matrices. :)

And it is because of this $E$ that everything written next was invented.

What is an inverse matrix

Since matrix multiplication is a very time-consuming operation (you have to multiply a bunch of rows and columns), the concept of an inverse matrix is also not the most trivial. And it needs some explanation.

Key Definition

Well, it's time to know the truth.

Definition. The matrix $B$ is called the inverse of the matrix $A$ if

\[A\cdot B=B\cdot A=E\]

The inverse matrix is denoted by $A^{-1}$ (not to be confused with a power!), so the definition can be rewritten like this:

\[A\cdot A^{-1}=A^{-1}\cdot A=E\]

It would seem that everything is extremely simple and clear. But when analyzing such a definition, several questions immediately arise:

  1. Does an inverse matrix always exist? And if not always, then how to determine: when it exists and when it does not?
  2. And who said that such a matrix is unique? What if for some original matrix $A$ there is a whole crowd of inverses?
  3. What do all these "reverses" look like? And how do you actually count them?

As for the calculation algorithms - we will talk about this a little later. But we will answer the rest of the questions right now. Let us arrange them in the form of separate assertions-lemmas.

Basic properties

Let's start with how the matrix $A$ should look in order for the inverse $A^{-1}$ to exist. Now we will make sure that both of these matrices must be square and of the same size: $\left[ n\times n \right]$.

Lemma 1. Given a matrix $A$ and its inverse $A^{-1}$. Then both of these matrices are square and of the same order $n$.

Proof. Everything is simple. Let the matrix $A=\left[ m\times n \right]$, $A^{-1}=\left[ a\times b \right]$. Since the product $A\cdot A^{-1}=E$ exists by definition, the matrices $A$ and $A^{-1}$ are consistent in that order:

\[\begin{align} & \left[ m\times n \right]\cdot \left[ a\times b \right]=\left[ m\times b \right] \\ & n=a \end{align}\]

This is a direct consequence of the matrix multiplication algorithm: the coefficients $n$ and $a$ are "transit" and must be equal.

At the same time, the reverse product is also defined: $A^{-1}\cdot A=E$, so the matrices $A^{-1}$ and $A$ are also consistent in this order:

\[\begin{align} & \left[ a\times b \right]\cdot \left[ m\times n \right]=\left[ a\times n \right] \\ & b=m \end{align}\]

Thus, without loss of generality, we can assume that $A=\left[ m\times n \right]$, $A^{-1}=\left[ n\times m \right]$. However, by definition $A\cdot A^{-1}=A^{-1}\cdot A$, so the dimensions of the matrices coincide exactly:

\[\begin{align} & \left[ m\times n \right]=\left[ n\times m \right] \\ & m=n \end{align}\]

So it turns out that all three matrices $A$, $A^{-1}$ and $E$ are square of size $\left[ n\times n \right]$. The lemma is proven.

Well, that's already good. We see that only square matrices are invertible. Now let's make sure that the inverse matrix is always unique.

Lemma 2. Given a matrix $A$ and its inverse $A^{-1}$. Then this inverse matrix is unique.

Proof. We argue by contradiction: let the matrix $A$ have at least two inverses, $B$ and $C$. Then, according to the definition, the following equalities hold:

\[\begin{align} & A\cdot B=B\cdot A=E; \\ & A\cdot C=C\cdot A=E. \\ \end{align}\]

From Lemma 1 we conclude that all four matrices $A$, $B$, $C$ and $E$ are square of the same order $\left[ n\times n \right]$. Therefore the product $B\cdot A\cdot C$ is defined:

Since matrix multiplication is associative (but not commutative!), we can write:

\[\begin{align} & B\cdot A\cdot C=\left(B\cdot A \right)\cdot C=E\cdot C=C; \\ & B\cdot A\cdot C=B\cdot \left(A\cdot C \right)=B\cdot E=B; \\ & B\cdot A\cdot C=C=B\Rightarrow B=C. \\ \end{align}\]

We get the only possible variant: the two instances of the inverse matrix are equal. The lemma is proven.

The above reasoning repeats almost verbatim the proof of the uniqueness of the inverse element for all real numbers $b\ne 0$. The only significant addition is taking into account the dimension of the matrices.

However, we still do not know anything about whether every square matrix is invertible. Here the determinant comes to our aid: it is a key characteristic of all square matrices.

Lemma 3. Given a matrix $A$. If the matrix $A^{-1}$ inverse to it exists, then the determinant of the original matrix is nonzero:

\[\left| A \right|\ne 0\]

Proof. We already know that $A$ and $A^{-1}$ are square matrices of size $\left[ n\times n \right]$. Therefore, for each of them we can calculate the determinant: $\left| A \right|$ and $\left| A^{-1} \right|$. However, the determinant of a product equals the product of the determinants:

\[\left| A\cdot B \right|=\left| A \right|\cdot \left| B \right|\Rightarrow \left| A\cdot A^{-1} \right|=\left| A \right|\cdot \left| A^{-1} \right|\]

But by definition $A\cdot A^{-1}=E$, and the determinant of $E$ is always equal to 1, so

\[\begin{align} & A\cdot A^{-1}=E; \\ & \left| A\cdot A^{-1} \right|=\left| E \right|; \\ & \left| A \right|\cdot \left| A^{-1} \right|=1. \\ \end{align}\]

The product of two numbers is equal to one only if each of these numbers is different from zero:

\[\left| A \right|\ne 0;\quad \left| A^{-1} \right|\ne 0.\]

So it turns out that $\left| A \right|\ne 0$. The lemma is proven.
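Lemma 3 is easy to probe numerically; a tiny check with the $\left[ 2\times 2 \right]$ matrix used in the examples below (its determinant is 1):

```python
import numpy as np

A = np.array([[3.0, 1.0], [5.0, 2.0]])
A_inv = np.linalg.inv(A)
# |A| * |A^-1| must equal |E| = 1
print(np.linalg.det(A) * np.linalg.det(A_inv))  # ~1.0
```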

In fact, this requirement is quite logical. Now we will analyze the algorithm for finding the inverse matrix - and it will become completely clear why, in principle, no inverse matrix can exist with a zero determinant.

But first, let's formulate an "auxiliary" definition:

Definition. A degenerate matrix is a square matrix of size $\left[ n\times n \right]$ whose determinant is zero.

Thus, we can assert that every invertible matrix is non-degenerate.

How to find the inverse matrix

Now we will consider a universal algorithm for finding inverse matrices. In general, there are two generally accepted algorithms, and we will also consider the second one today.

The one that will be considered now is very efficient for matrices of size $\left[ 2\times 2 \right]$ and - in part - of size $\left[ 3\times 3 \right]$. But starting from the size $\left[ 4\times 4 \right]$ it is better not to use it. Why - now you will understand everything.

Algebraic complements

Get ready. Now there will be pain. No, don't worry: a beautiful nurse in a skirt and lace stockings is not coming to give you an injection in the buttock. Everything is much more prosaic: algebraic complements and Her Majesty the adjugate matrix are coming for you.

Let's start with the main thing. Let there be a square matrix $A=\left[ n\times n \right]$ whose elements are named $a_{ij}$. Then for each such element one can define an algebraic complement:

Definition. The algebraic complement $A_{ij}$ to the element $a_{ij}$ in the $i$-th row and $j$-th column of the matrix $A=\left[ n\times n \right]$ is a construction of the form

\[A_{ij}=\left(-1 \right)^{i+j}\cdot M_{ij}^{*}\]

where $M_{ij}^{*}$ is the determinant of the matrix obtained from the original $A$ by deleting that same $i$-th row and $j$-th column.

Once again: the algebraic complement to the matrix element with coordinates $\left(i;j \right)$ is denoted $A_{ij}$ and is calculated according to the scheme:

  1. First, we delete the $i$-th row and the $j$-th column from the original matrix. We get a new square matrix, and we denote its determinant by $M_{ij}^{*}$.
  2. Then we multiply this determinant by $\left(-1 \right)^{i+j}$; at first this expression may seem mind-blowing, but in fact we are just finding the sign in front of $M_{ij}^{*}$.
  3. We count and get a specific number, i.e. the algebraic complement is just a number, not some new matrix.
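For instance, for the $3\times 3$ matrix from the second task below, the complement $A_{12}$ is computed by exactly this scheme (delete row 1 and column 2, then attach the sign $\left(-1 \right)^{1+2}$):

\[A=\left[ \begin{array}{*{35}{r}} 1 & -1 & 2 \\ 0 & 2 & -1 \\ 1 & 0 & 1 \\ \end{array} \right]\Rightarrow M_{12}^{*}=\left| \begin{matrix} 0 & -1 \\ 1 & 1 \\ \end{matrix} \right|=0\cdot 1-\left(-1 \right)\cdot 1=1,\quad A_{12}=\left(-1 \right)^{1+2}\cdot 1=-1\]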

The determinant $M_{ij}^{*}$ itself is called the complementary minor to the element $a_{ij}$. And in this sense, the above definition of the algebraic complement is a special case of a more complex definition, the one we considered in the lesson about the determinant.

Important note. Actually, in "adult" mathematics, algebraic complements are defined as follows:

  1. We take $k$ rows and $k$ columns in a square matrix. At their intersection we get a matrix of size $\left[ k\times k \right]$; its determinant is called a minor of order $k$ and is denoted $M_{k}$.
  2. Then we cross out these "selected" $k$ rows and $k$ columns. Again we get a square matrix; its determinant is called the complementary minor and is denoted $M_{k}^{*}$.
  3. Multiply $M_{k}^{*}$ by $\left(-1 \right)^{t}$, where $t$ is (attention now!) the sum of the numbers of all the selected rows and columns. This will be the algebraic complement.

Take a look at the third step: there is actually a sum of $2k$ terms! Another thing is that for $k=1$ we get only 2 terms; these are the familiar $i+j$, the "coordinates" of the element $a_{ij}$ for which we are looking for the algebraic complement.

So today we use a slightly simplified definition. But as we will see later, it will be more than enough. Much more important is the following:

Definition. The adjugate matrix $S$ of a square matrix $A=\left[ n\times n \right]$ is the new matrix of size $\left[ n\times n \right]$ obtained from $A$ by replacing the elements $a_{ij}$ with the algebraic complements $A_{ij}$:

\[A=\left[ \begin{matrix} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ a_{n1} & a_{n2} & \ldots & a_{nn} \\ \end{matrix} \right]\Rightarrow S=\left[ \begin{matrix} A_{11} & A_{12} & \ldots & A_{1n} \\ A_{21} & A_{22} & \ldots & A_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ A_{n1} & A_{n2} & \ldots & A_{nn} \\ \end{matrix} \right]\]

The first thought that arises at the moment of realizing this definition is “this is how much you have to count in total!” Relax: you have to count, but not so much. :)

Well, all this is very nice, but why is it needed? Here's why.

Main theorem

Let's go back a little. Remember, Lemma 3 stated that an invertible matrix $A$ is always non-singular (that is, its determinant is non-zero: $\left| A \right|\ne 0$).

So, the converse is also true: if the matrix $A$ is non-degenerate, then it is always invertible. And there is even a scheme for finding $A^{-1}$. Check it out:

Inverse matrix theorem. Let a square matrix $A=\left[ n\times n \right]$ be given, and let its determinant be nonzero: $\left| A \right|\ne 0$. Then the inverse matrix $A^{-1}$ exists and is calculated by the formula:

\[A^{-1}=\frac{1}{\left| A \right|}\cdot S^{T}\]

And now the same thing, but in legible handwriting. To find the inverse matrix, you need to:

  1. Calculate the determinant $\left| A \right|$ and make sure it is nonzero.
  2. Compile the adjugate matrix $S$, i.e. count 100500 algebraic complements $A_{ij}$ and put them in place of the elements $a_{ij}$.
  3. Transpose this matrix $S$ and then multiply it by the number $q=1/\left| A \right|$.

And that's it! The inverse matrix $A^{-1}$ is found. Let's look at examples.

Task. Find the inverse matrix:

\[\left[ \begin{matrix} 3 & 1 \\ 5 & 2 \\ \end{matrix} \right]\]

Solution. Let's check invertibility and calculate the determinant:

\[\left| A \right|=\left| \begin{matrix} 3 & 1 \\ 5 & 2 \\ \end{matrix} \right|=3\cdot 2-1\cdot 5=6-5=1\]

The determinant is different from zero, so the matrix is invertible. Let's compose the adjugate matrix:

Let's calculate the algebraic complements:

\[\begin{align} & A_{11}=\left(-1 \right)^{1+1}\cdot \left| 2 \right|=2; \\ & A_{12}=\left(-1 \right)^{1+2}\cdot \left| 5 \right|=-5; \\ & A_{21}=\left(-1 \right)^{2+1}\cdot \left| 1 \right|=-1; \\ & A_{22}=\left(-1 \right)^{2+2}\cdot \left| 3 \right|=3. \\ \end{align}\]

Pay attention: the determinants |2|, |5|, |1| and |3| are determinants of matrices of size $\left[ 1\times 1 \right]$, not absolute values. I.e., if those determinants had been negative numbers, the "minus" would not be removed.

In total, with the adjugate matrix assembled, we obtain:

\[A^{-1}=\frac{1}{\left| A \right|}\cdot S^{T}=\frac{1}{1}\cdot \left[ \begin{array}{*{35}{r}} 2 & -5 \\ -1 & 3 \\ \end{array} \right]^{T}=\left[ \begin{array}{*{35}{r}} 2 & -1 \\ -5 & 3 \\ \end{array} \right]\]

That's it. Problem solved.

Answer. $\left[ \begin{array}{*{35}{r}} 2 & -1 \\ -5 & 3 \\ \end{array} \right]$

Task. Find the inverse matrix:

\[\left[ \begin{array}{*{35}{r}} 1 & -1 & 2 \\ 0 & 2 & -1 \\ 1 & 0 & 1 \\ \end{array} \right]\]

Solution. Again, we compute the determinant:

\[\begin{align} & \left| \begin{array}{*{35}{r}} 1 & -1 & 2 \\ 0 & 2 & -1 \\ 1 & 0 & 1 \\ \end{array} \right|=\left(1\cdot 2\cdot 1+\left(-1 \right)\cdot \left(-1 \right)\cdot 1+2\cdot 0\cdot 0 \right)-\left(2\cdot 2\cdot 1+\left(-1 \right)\cdot 0\cdot 1+1\cdot \left(-1 \right)\cdot 0 \right)= \\ & =\left(2+1+0 \right)-\left(4+0+0 \right)=-1\ne 0. \\ \end{align}\]

The determinant is different from zero, so the matrix is invertible. But now comes the most tedious part: we have to count as many as 9 (nine, damn it!) algebraic complements. And each of them contains a $\left[ 2\times 2 \right]$ determinant. Off we go:

\[\begin{matrix} A_{11}=\left(-1 \right)^{1+1}\cdot \left| \begin{matrix} 2 & -1 \\ 0 & 1 \\ \end{matrix} \right|=2; \\ A_{12}=\left(-1 \right)^{1+2}\cdot \left| \begin{matrix} 0 & -1 \\ 1 & 1 \\ \end{matrix} \right|=-1; \\ A_{13}=\left(-1 \right)^{1+3}\cdot \left| \begin{matrix} 0 & 2 \\ 1 & 0 \\ \end{matrix} \right|=-2; \\ \ldots \\ A_{33}=\left(-1 \right)^{3+3}\cdot \left| \begin{matrix} 1 & -1 \\ 0 & 2 \\ \end{matrix} \right|=2. \\ \end{matrix}\]

In short, the adjugate matrix will look like this:

Therefore, the inverse matrix will be:

\[A^{-1}=\frac{1}{-1}\cdot \left[ \begin{matrix} 2 & -1 & -2 \\ 1 & -1 & -1 \\ -3 & 1 & 2 \\ \end{matrix} \right]^{T}=\left[ \begin{array}{*{35}{r}} -2 & -1 & 3 \\ 1 & 1 & -1 \\ 2 & 1 & -2 \\ \end{array} \right]\]

Well, that's all. Here is the answer.

Answer. $\left[ \begin{array}{*{35}{r}} -2 & -1 & 3 \\ 1 & 1 & -1 \\ 2 & 1 & -2 \\ \end{array} \right]$

As you can see, at the end of each example, we performed a check. In this regard, an important note:

Don't be lazy to check. Multiply the original matrix by the found inverse - you should get $E$.

It is much easier and faster to perform this check than to look for an error in further calculations, when, for example, you solve a matrix equation.

Alternative way

As I said, the inverse matrix theorem works fine for sizes $\left[ 2\times 2 \right]$ and $\left[ 3\times 3 \right]$ (in the latter case it is already not so "fine"), but for matrices of large size real sadness begins.

But don't worry: there is an alternative algorithm that can be used to calmly find the inverse even for the $\left[ 10\times 10 \right]$ matrix. But, as is often the case, to consider this algorithm, we need a little theoretical background.

Elementary transformations

Among the various transformations of a matrix there are several special ones; they are called elementary. There are exactly three such transformations:

  1. Multiplication. You can take the $i$-th row (column) and multiply it by any number $k\ne 0$;
  2. Addition. Add to the $i$-th row (column) any other $j$-th row (column) multiplied by any number $k\ne 0$ (of course, $k=0$ is also possible, but what's the point? Nothing would change).
  3. Permutation. Take the $i$-th and $j$-th rows (columns) and swap them.

Why these transformations are called elementary (for large matrices they do not look so elementary) and why there are only three of them - these questions are beyond the scope of today's lesson. Therefore, we will not go into details.

Another thing is important: we have to perform all these manipulations on the augmented matrix. Yes, you heard right. Now there will be one more definition, the last one in today's lesson.

Augmented Matrix

Surely at school you solved systems of equations using the addition method. Well, you know: subtract one row from another, multiply some row by a number, and so on.

So: now everything will be the same, but already “in an adult way”. Ready?

Definition. Let a matrix $A=\left[ n\times n \right]$ and an identity matrix $E$ of the same size $n$ be given. Then the augmented matrix $\left[ A\left| E \right. \right]$ is a new matrix of size $\left[ n\times 2n \right]$ that looks like this:

\[\left[ A\left| E \right. \right]=\left[ \begin{array}{rrrr|rrrr} a_{11} & a_{12} & \ldots & a_{1n} & 1 & 0 & \ldots & 0 \\ a_{21} & a_{22} & \ldots & a_{2n} & 0 & 1 & \ldots & 0 \\ \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\ a_{n1} & a_{n2} & \ldots & a_{nn} & 0 & 0 & \ldots & 1 \\ \end{array} \right]\]

In short, we take the matrix $A$, append to it on the right the identity matrix $E$ of the required size, and separate them with a vertical bar for beauty; here is your augmented matrix. :)

What's the catch? And here's what:

Theorem. Let the matrix $A$ be invertible. Consider the augmented matrix $\left[ A\left| E \right. \right]$. If elementary row transformations bring it to the form $\left[ E\left| B \right. \right]$, i.e. by multiplying, subtracting and rearranging rows we obtain the matrix $E$ on the left in place of $A$, then the matrix $B$ obtained on the right is the inverse of $A$:

\[\left[ A\left| E \right. \right]\to \left[ E\left| B \right. \right]\Rightarrow B=A^{-1}\]

It's that simple! In short, the algorithm for finding the inverse matrix looks like this:

  1. Write the augmented matrix $\left[ A\left| E \right. \right]$;
  2. Perform elementary row transformations until $E$ appears on the left in place of $A$;
  3. Of course, something will also appear on the right, a certain matrix $B$. This will be the inverse;
  4. PROFIT! :)

Of course, much easier said than done. So let's look at a couple of examples: for the sizes $\left[ 3\times 3 \right]$ and $\left[ 4\times 4 \right]$.

Task. Find the inverse matrix:

\[\left[ \begin{array}{*{35}{r}} 1 & 5 & 1 \\ 3 & 2 & 1 \\ 6 & -2 & 1 \\ \end{array} \right]\]

Solution. We compose the augmented matrix:

\[\left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 3 & 2 & 1 & 0 & 1 & 0 \\ 6 & -2 & 1 & 0 & 0 & 1 \\ \end{array} \right]\]

Since the last column of the original matrix is filled with ones, subtract the first row from the rest:

\[\begin{align} & \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 3 & 2 & 1 & 0 & 1 & 0 \\ 6 & -2 & 1 & 0 & 0 & 1 \\ \end{array} \right]\begin{matrix} \downarrow \\ -1 \\ -1 \\ \end{matrix}\to \\ & \to \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 5 & -7 & 0 & -1 & 0 & 1 \\ \end{array} \right] \\ \end{align}\]

There are no more ones, apart from the first row. But we do not touch it, otherwise the freshly created zeros in the third column would be spoiled.

But we can subtract the second row twice from the last one; we get a one in the lower left corner:

\[\begin{align} & \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 5 & -7 & 0 & -1 & 0 & 1 \\ \end{array} \right]\begin{matrix} \ \\ \downarrow \\ -2 \\ \end{matrix}\to \\ & \to \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 1 & -1 & 0 & 1 & -2 & 1 \\ \end{array} \right] \\ \end{align}\]

Now we can subtract the last row from the first and twice from the second; this "zeroes out" the first column:

\[\begin{align} & \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 1 & -1 & 0 & 1 & -2 & 1 \\ \end{array} \right]\begin{matrix} -1 \\ -2 \\ \uparrow \\ \end{matrix}\to \\ & \to \left[ \begin{array}{rrr|rrr} 0 & 6 & 1 & 0 & 2 & -1 \\ 0 & -1 & 0 & -3 & 5 & -2 \\ 1 & -1 & 0 & 1 & -2 & 1 \\ \end{array} \right] \\ \end{align}\]

Multiply the second row by −1, then subtract it 6 times from the first row and add it once to the last:

\[\begin{align} & \left[ \begin{array}{rrr|rrr} 0 & 6 & 1 & 0 & 2 & -1 \\ 0 & -1 & 0 & -3 & 5 & -2 \\ 1 & -1 & 0 & 1 & -2 & 1 \\ \end{array} \right]\begin{matrix} \ \\ \left| \cdot \left(-1 \right) \right. \\ \ \\ \end{matrix}\to \\ & \to \left[ \begin{array}{rrr|rrr} 0 & 6 & 1 & 0 & 2 & -1 \\ 0 & 1 & 0 & 3 & -5 & 2 \\ 1 & -1 & 0 & 1 & -2 & 1 \\ \end{array} \right]\begin{matrix} -6 \\ \updownarrow \\ +1 \\ \end{matrix}\to \\ & \to \left[ \begin{array}{rrr|rrr} 0 & 0 & 1 & -18 & 32 & -13 \\ 0 & 1 & 0 & 3 & -5 & 2 \\ 1 & 0 & 0 & 4 & -7 & 3 \\ \end{array} \right] \\ \end{align}\]

It remains only to swap rows 1 and 3:

\[\left[ \begin{array}{rrr|rrr} 1 & 0 & 0 & 4 & -7 & 3 \\ 0 & 1 & 0 & 3 & -5 & 2 \\ 0 & 0 & 1 & -18 & 32 & -13 \\ \end{array} \right]\]

Done! On the right is the required inverse matrix.

Answer. $\left[ \begin{array}{*{35}{r}} 4 & -7 & 3 \\ 3 & -5 & 2 \\ -18 & 32 & -13 \\ \end{array} \right]$

Task. Find the inverse matrix:

\[\left[ \begin{matrix} 1 & 4 & 2 & 3 \\ 1 & -2 & 1 & -2 \\ 1 & -1 & 1 & 1 \\ 0 & -10 & -2 & -5 \\ \end{matrix} \right]\]

Solution. Again we compose the augmented matrix:

\[\left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 1 & -2 & 1 & -2 & 0 & 1 & 0 & 0 \\ 1 & -1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\ \end{array} \right]\]

Let's grieve a little over how much we'll have to count now... and start counting. To begin with, we "zero out" the first column by subtracting row 1 from rows 2 and 3:

\[\begin{align} & \left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 1 & -2 & 1 & -2 & 0 & 1 & 0 & 0 \\ 1 & -1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\ \end{array} \right]\begin{matrix} \downarrow \\ -1 \\ -1 \\ \ \\ \end{matrix}\to \\ & \to \left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 0 & -6 & -1 & -5 & -1 & 1 & 0 & 0 \\ 0 & -5 & -1 & -2 & -1 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\ \end{array} \right] \\ \end{align}\]

We observe too many "minuses" in rows 2-4. Multiply all three rows by −1, and then "burn out" the third column by subtracting suitable multiples of row 3 from the rest:

\[\begin{align} & \left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 0 & -6 & -1 & -5 & -1 & 1 & 0 & 0 \\ 0 & -5 & -1 & -2 & -1 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\ \end{array} \right]\begin{matrix} \ \\ \left| \cdot \left(-1 \right) \right. \\ \left| \cdot \left(-1 \right) \right. \\ \left| \cdot \left(-1 \right) \right. \\ \end{matrix}\to \\ & \to \left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 0 & 6 & 1 & 5 & 1 & -1 & 0 & 0 \\ 0 & 5 & 1 & 2 & 1 & 0 & -1 & 0 \\ 0 & 10 & 2 & 5 & 0 & 0 & 0 & -1 \\ \end{array} \right]\begin{matrix} -2 \\ -1 \\ \updownarrow \\ -2 \\ \end{matrix}\to \\ & \to \left[ \begin{array}{rrrr|rrrr} 1 & -6 & 0 & -1 & -1 & 0 & 2 & 0 \\ 0 & 1 & 0 & 3 & 0 & -1 & 1 & 0 \\ 0 & 5 & 1 & 2 & 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\ \end{array} \right] \\ \end{align}\]

Now it's time to "fry" the last column of the original matrix: combine the other rows with suitable multiples of row 4:

\[\begin{align} & \left[ \begin{array}{rrrr|rrrr} 1 & -6 & 0 & -1 & -1 & 0 & 2 & 0 \\ 0 & 1 & 0 & 3 & 0 & -1 & 1 & 0 \\ 0 & 5 & 1 & 2 & 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\ \end{array} \right]\begin{matrix} +1 \\ -3 \\ -2 \\ \uparrow \\ \end{matrix}\to \\ & \to \left[ \begin{array}{rrrr|rrrr} 1 & -6 & 0 & 0 & -3 & 0 & 4 & -1 \\ 0 & 1 & 0 & 0 & 6 & -1 & -5 & 3 \\ 0 & 5 & 1 & 0 & 5 & 0 & -5 & 2 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\ \end{array} \right] \\ \end{align}\]

Final stretch: "burn out" the second column by combining rows 1 and 3 with suitable multiples of row 2:

\[\begin{align} & \left[ \begin{array}{rrrr|rrrr} 1 & -6 & 0 & 0 & -3 & 0 & 4 & -1 \\ 0 & 1 & 0 & 0 & 6 & -1 & -5 & 3 \\ 0 & 5 & 1 & 0 & 5 & 0 & -5 & 2 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\ \end{array} \right]\begin{matrix} 6 \\ \updownarrow \\ -5 \\ \ \\ \end{matrix}\to \\ & \to \left[ \begin{array}{rrrr|rrrr} 1 & 0 & 0 & 0 & 33 & -6 & -26 & 17 \\ 0 & 1 & 0 & 0 & 6 & -1 & -5 & 3 \\ 0 & 0 & 1 & 0 & -25 & 5 & 20 & -13 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\ \end{array} \right] \\ \end{align}\]

And again, the identity matrix on the left, so the inverse on the right. :)

Answer. $\left[ \begin{matrix} 33 & -6 & -26 & 17 \\ 6 & -1 & -5 & 3 \\ -25 & 5 & 20 & -13 \\ -2 & 0 & 2 & -1 \\ \end{matrix} \right]$

Definition 1: A matrix is called degenerate if its determinant is zero.

Definition 2: A matrix is called non-singular if its determinant is not equal to zero.

Matrix "A" is called inverse matrix, if the condition A*A-1 = A-1 *A = E (identity matrix) is satisfied.

A square matrix is invertible only if it is non-singular.

Scheme for calculating the inverse matrix:

1) Calculate the determinant of the matrix "A"; if det A = 0, the inverse matrix does not exist.

2) Find all algebraic complements of the matrix "A".

3) Compose the matrix of algebraic complements $(A_{ij})$

4) Transpose the matrix of algebraic complements: $(A_{ij})^{T}$

5) Multiply the transposed matrix by the reciprocal of the determinant of this matrix.

6) Run a check:

At first glance it may seem difficult, but in fact everything is very simple. All solutions are based on simple arithmetic operations; the main thing when solving is not to get confused with the "−" and "+" signs and not to lose them.
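The check in step 6 is also easy to automate; a small sketch in Python (the matrix here is a made-up illustration, not the one from the task below):

```python
import numpy as np

A = np.array([[2.0, 1.0], [7.0, 4.0]])    # hypothetical example matrix
A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(2)))  # True: the product is the identity matrix
```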

Now let's solve a practical task together and calculate an inverse matrix.

Task: find the inverse of the matrix "A" shown in the picture below:

We solve everything exactly as indicated in the plan for calculating the inverse matrix.

1. The first thing to do is to find the determinant of the matrix "A":

Explanation:

We simplified the determinant using its basic properties. First, we added to the 2nd and 3rd rows the elements of the first row multiplied by a suitable number.

Secondly, we swapped the 2nd and 3rd columns of the determinant and, according to its properties, changed the sign in front of it.

Thirdly, we took the common factor (−1) out of the second row, thereby changing the sign again, and it became positive. We also simplified row 3 in the same way as at the very beginning of the example.

We have a triangular determinant, in which the elements below the diagonal are equal to zero, and by property 7 it equals the product of the diagonal elements. As a result we got det A = 26, hence the inverse matrix exists.

2. Find all the algebraic complements of the matrix "A":

A11 = 1*(3+1) = 4

A12 = -1*(9+2) = -11

A13 = 1*1 = 1

A21 = -1*(-6) = 6

A22 = 1*(3-0) = 3

A23 = -1*(1+4) = -5

A31 = 1*2 = 2

A32 = -1*(-1) = -1

A33 = 1*(1+6) = 7

3. The next step is to compose a matrix from the resulting complements:

5. We multiply this matrix by the reciprocal of the determinant, that is, by 1/26:

6. Well, now we just need to check:

During the verification we obtained the identity matrix; therefore the solution was carried out correctly.

Method 2 for calculating the inverse matrix.

1. Elementary transformations of matrices

2. The inverse matrix via elementary transformations.

Elementary matrix transformations include:

1. Multiplying a row by a non-zero number.

2. Adding to any row another row multiplied by a number.

3. Swapping the rows of the matrix.

By applying a chain of elementary transformations, we obtain another matrix.

$A^{-1}=?$

1. $(A|E)\sim (E|A^{-1})$

2. $A^{-1}\cdot A=E$

Let's consider this with a practical example with real numbers.

Exercise: Find the inverse matrix.

Solution:

Let's check:

A little clarification on the solution:

We first swapped rows 1 and 2 of the matrix, then we multiplied the first row by (-1).

After that, the first row was multiplied by (-2) and added to the second row of the matrix. Then we multiplied the 2nd row by 1/4.

The final stage of the transformations was multiplying the second row by 2 and adding it to the first. As a result we have the identity matrix on the left; therefore, the inverse matrix is the matrix on the right.

After checking, we were convinced of the correctness of the solution.

As you can see, calculating the inverse matrix is very simple.

In concluding this lecture, I would also like to devote some time to the properties of such a matrix.

Methods for finding the inverse matrix. Consider a square matrix

Denote Δ = det A.

The square matrix A is called non-degenerate, or non-singular, if its determinant is nonzero, and degenerate, or singular, if Δ = 0.

A square matrix B is called the inverse of a square matrix A of the same order if their product A B = B A = E, where E is the identity matrix of the same order as the matrices A and B.

Theorem. In order for the matrix A to have an inverse matrix, it is necessary and sufficient that its determinant be nonzero.

The inverse of the matrix A is denoted $A^{-1}$, so that $B=A^{-1}$, and it is calculated by the formula

\[A^{-1}=\frac{1}{\Delta }\left[ \begin{matrix} A_{11} & A_{21} & \ldots & A_{n1} \\ A_{12} & A_{22} & \ldots & A_{n2} \\ \ldots & \ldots & \ldots & \ldots \\ A_{1n} & A_{2n} & \ldots & A_{nn} \\ \end{matrix} \right], \qquad (1)\]

where $A_{ij}$ are the algebraic complements of the elements $a_{ij}$ of the matrix A.

Calculating $A^{-1}$ by formula (1) for matrices of high order is very laborious, so in practice it is convenient to find $A^{-1}$ using the method of elementary transformations (ET). Any non-singular matrix A can be reduced, by ETs of columns only (or of rows only), to the identity matrix E. If the ETs performed on the matrix A are applied in the same order to the identity matrix E, the result is the inverse matrix. It is convenient to perform the ETs on A and E simultaneously, writing both matrices side by side across a line. We note once again: when searching for the canonical form of a matrix one may use transformations of both rows and columns, but if you need to find the inverse matrix, you should use only rows or only columns throughout the process.

Example 2.10. For the matrix find $A^{-1}$.

Solution. We first find the determinant of the matrix A,
so the inverse matrix exists and we can find it by formula (1), where $A_{ij}$ $(i,j=1,2,3)$ are the algebraic complements of the elements $a_{ij}$ of the original matrix.

Whence:

Example 2.11. Using the method of elementary transformations, find $A^{-1}$ for the matrix A = .

Solution. We append the identity matrix of the same order to the original matrix on the right: . With the help of elementary column transformations we reduce the left "half" to the identity matrix, simultaneously performing exactly the same transformations on the right matrix.
To do this, swap the first and second columns:
~ . Add the first column to the third, and the first column multiplied by -2 to the second: . From the first column subtract the doubled second, and from the third subtract the second multiplied by 6: . Add the third column to the first and to the second: . Multiply the last column by -1: . The square matrix obtained to the right of the vertical bar is the matrix inverse to the given matrix A. So,
.
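Since Example 2.11 reduces the left "half" by column transformations, here is a minimal sketch of that column-only scheme in Python (assuming floating-point arithmetic; the helper name is ours, and it relies on the fact that column operations on $A$ are row operations on $A^{T}$):

```python
import numpy as np

def inverse_via_column_ops(a, tol=1e-12):
    """Column-only elimination: work on A^T so that column operations
    on A become ordinary row operations."""
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    m = np.hstack([a.T, np.eye(n)])  # row-reducing (A^T | E) yields (A^T)^-1 on the right
    for col in range(n):
        pivot = col + np.argmax(np.abs(m[col:, col]))
        if abs(m[pivot, col]) < tol:
            raise ValueError("singular matrix: no inverse exists")
        m[[col, pivot]] = m[[pivot, col]]
        m[col] /= m[col, col]
        for row in range(n):
            if row != col:
                m[row] -= m[row, col] * m[col]
    return m[:, n:].T                # ((A^T)^-1)^T = A^-1
```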

For any non-degenerate matrix A there exists a unique matrix $A^{-1}$ such that

\[A\cdot A^{-1}=A^{-1}\cdot A=E,\]

where E is the identity matrix of the same order as A. The matrix $A^{-1}$ is called the inverse of the matrix A.

If someone forgot: in the identity matrix, apart from the diagonal filled with ones, all other positions are filled with zeros. An example of an identity matrix:

\[E=\left[ \begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{matrix} \right]\]

Finding the inverse matrix by the adjoint matrix method

The inverse matrix is defined by the formula:

\[A^{-1}=\frac{1}{\det A}\left( A^{*} \right)^{T},\]

where $A^{*}=\left( A_{ij} \right)$ is the matrix of the algebraic complements $A_{ij}$ of the elements $a_{ij}$.

I.e., to calculate the inverse matrix you need to calculate the determinant of the matrix. Then find the algebraic complements of all its elements and compose a new matrix from them. Next you need to transpose this matrix and divide each element of the new matrix by the determinant of the original matrix.

Let's look at a few examples.

Find $A^{-1}$ for the matrix

Solution. We find $A^{-1}$ by the adjoint matrix method. We have det A = 2. Let's find the algebraic complements of the elements of the matrix A. In this case, the algebraic complements of the matrix elements are the corresponding elements of the matrix itself, taken with a sign in accordance with the formula

We have $A_{11}=3$, $A_{12}=-4$, $A_{21}=-1$, $A_{22}=2$. We form the adjoint matrix

We transpose the matrix $A^{*}$:

We find the inverse matrix by the formula:

We get:
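For the record: the complements above correspond to the matrix $A=\left[ \begin{matrix} 2 & 1 \\ 4 & 3 \\ \end{matrix} \right]$ (the only matrix consistent with $A_{11}=3$, $A_{12}=-4$, $A_{21}=-1$, $A_{22}=2$ and det A = 2), so under that assumption the formula gives

\[A^{-1}=\frac{1}{2}\left[ \begin{matrix} 3 & -1 \\ -4 & 2 \\ \end{matrix} \right]=\left[ \begin{matrix} 3/2 & -1/2 \\ -2 & 1 \\ \end{matrix} \right]\]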

Use the adjoint matrix method to find $A^{-1}$ if

Solution. First of all, we calculate the determinant of the given matrix to make sure that the inverse matrix exists. We have

Here we added to the elements of the second row the elements of the third row multiplied by (−1), and then expanded the determinant along the second row. Since the determinant of this matrix is nonzero, the matrix inverse to it exists. To construct the adjoint matrix, we find the algebraic complements of the elements of this matrix. We have

According to the formula

we transpose the matrix $A^{*}$:

Then according to the formula

Finding the inverse matrix by the method of elementary transformations

In addition to the method of finding the inverse matrix that follows from the formula (the adjoint matrix method), there is a method of finding the inverse matrix called the method of elementary transformations.

Elementary matrix transformations

The following transformations are called elementary matrix transformations:

1) permutation of rows (columns);

2) multiplying a row (column) by a non-zero number;

3) adding to the elements of a row (column) the corresponding elements of another row (column), previously multiplied by a certain number.

To find the matrix $A^{-1}$, we construct the rectangular matrix B = (A|E) of order (n; 2n), appending to the matrix A on the right the identity matrix E across a dividing line:

Consider an example.

Using the method of elementary transformations, find $A^{-1}$ if

Solution. We form the matrix B:

Denote the rows of the matrix B by $\alpha_{1}$, $\alpha_{2}$, $\alpha_{3}$. Let's perform the following transformations on the rows of the matrix B.