
Define a matrix. Types of matrices

Matrices. Types of matrices. Operations on matrices and their properties.

Determinant of an nth-order matrix.

A matrix of order m×n is a rectangular table of numbers containing m rows and n columns.

Matrix equality:

Two matrices are called equal if the numbers of rows and columns of one are respectively equal to the numbers of rows and columns of the other, and the corresponding elements of these matrices are equal.

Note: Elements with the same indexes are matched.

Types of matrices:

Square matrix: a matrix is said to be square if the number of rows equals the number of columns.

Rectangular: a matrix is said to be rectangular if the number of rows is not equal to the number of columns.

Row matrix: a matrix of order 1×n (m = 1) has the form (a11, a12, …, a1n) and is called a row matrix.

Column matrix: a matrix of order m×1 (n = 1) is called a column matrix.

Diagonal: the diagonal of a square matrix going from the upper left corner to the lower right corner, that is, consisting of the elements a11, a22, …, is called the main diagonal. (Definition: a square matrix all of whose elements are zero except those on the main diagonal is called a diagonal matrix.)

Identity: a diagonal matrix is called the identity matrix if all elements on the main diagonal equal 1.

Upper triangular: A = ||aij|| is called an upper triangular matrix if aij = 0 for i > j.

Lower triangular: aij = 0 for i < j.

Zero: a matrix all of whose elements are 0.

Operations on matrices.

1. Transposition.

2. Multiplication of a matrix by a number.

3. Matrix addition.

4. Matrix multiplication.

Basic properties of operations on matrices.

1.A+B=B+A (commutativity)

2.A+(B+C)=(A+B)+C (associativity)

3.a(A+B)=aA+aB (distributivity)

4.(a+b)A=aA+bA (distributive)

5. (ab)A = a(bA) = b(aA) (associativity)

6. AB ≠ BA in general (no commutativity)

7. A(BC) = (AB)C (associativity), holds whenever the matrix products are defined.

8.A(B+C)=AB+AC (distributive)

(B+C)A=BA+CA (distributive)

9. a(AB) = (aA)B = A(aB)
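Properties 1 and 6 can be checked directly on concrete matrices. A minimal Python sketch (the matrices and helper names are our own, not from the notes):

```python
# Check two of the listed properties on concrete 2x2 matrices:
# addition commutes (property 1), multiplication does not (property 6).

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def mat_mul(A, B):
    # element (i, j) is the sum of pairwise products of row i of A and column j of B
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 6]]

print(mat_add(A, B) == mat_add(B, A))   # True: A + B = B + A
print(mat_mul(A, B) == mat_mul(B, A))   # False: AB != BA in general
```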

Determinant of a square matrix - definition and its properties. Decomposition of the determinant in rows and columns. Methods for calculating determinants.

If the matrix A has order n > 1, then the determinant of this matrix is a number.

The algebraic complement Aij of the element aij of the matrix A is the minor Mij multiplied by the number (−1)^(i+j).

THEOREM 1: The determinant of the matrix A equals the sum of the products of all elements of an arbitrary row (column) and their algebraic complements.

Basic properties of determinants.

1. The determinant of a matrix will not change when it is transposed.

2. When permuting two rows (columns), the determinant changes sign, but its absolute value does not change.

3. The determinant of a matrix that has two identical rows (columns) is 0.

4. When multiplying a row (column) of a matrix by a number, its determinant is multiplied by this number.

5. If one of the rows (columns) of the matrix consists of 0, then the determinant of this matrix is ​​0.

6. If all elements of the i-th row (column) of a matrix are presented as a sum of two terms, then its determinant can be represented as a sum of determinants of two matrices.

7. The determinant will not change if to the elements of one row (column) we add the corresponding elements of another row (column) multiplied beforehand by the same number.

8. The sum of the products of arbitrary elements of any column (row) of the determinant and the algebraic complements of the corresponding elements of another column (row) is 0.


Methods for calculating the determinant:

1. By definition or Theorem 1.

2. Reduction to a triangular form.
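Theorem 1 can be turned directly into a recursive computation. A Python sketch (the function names are our own), expanding along the first row:

```python
# Determinant by Theorem 1: expand along the first row,
# det A = sum over j of a1j * A1j, where A1j = (-1)**(1+j) * M1j.

def minor(A, i, j):
    # the matrix obtained from A by deleting row i and column j
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    # cofactor expansion along row 0 (0-based, so the sign is (-1)**j)
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

print(det([[1, 2], [3, 4]]))                    # -2
print(det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))   # 24: for a diagonal matrix,
                                                # the product of diagonal elements
```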

Definition and properties of the inverse matrix. Calculation of the inverse matrix. Matrix equations.

Definition: a square matrix of order n is called the inverse of a matrix A of the same order, and is denoted A^(-1), if AA^(-1) = A^(-1)A = E.

For the matrix A to have an inverse it is necessary and sufficient that the determinant of the matrix A be different from 0.

Inverse Matrix Properties:

1. Uniqueness: for a given matrix A, its inverse is unique.

2. Determinant of the inverse: det A^(-1) = 1 / det A.

3. The operations of transposition and inversion commute: (A^T)^(-1) = (A^(-1))^T.

Matrix equations:

Let A and B be two square matrices of the same order. If det A ≠ 0, the matrix equations AX = B and XA = B have the solutions X = A^(-1)B and X = BA^(-1), respectively.

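A sketch of solving the matrix equation AX = B as X = A^(-1)B; the inverse is computed by Gauss-Jordan elimination over exact fractions (the function names and example matrices are our own):

```python
# Solve AX = B as X = A^(-1)B; possible only when det A != 0.
from fractions import Fraction

def inverse(A):
    n = len(A)
    # augment A with the identity matrix and reduce the left half to E
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col] != 0)  # raises if singular
        M[col], M[pivot] = M[pivot], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[2, 1], [1, 1]]        # det A = 1 != 0, so A^(-1) exists
B = [[3], [2]]
X = mat_mul(inverse(A), B)  # solution of AX = B
print(X)                    # the column (1, 1)
```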

The concept of linear dependence and independence of matrix columns. Linear dependency properties and linear independence column systems.

Columns А1,А2…An are called linearly dependent if there is a non-trivial linear combination of them equal to the 0th column.

Columns А1, А2, …, An are called linearly independent if only the trivial linear combination of them equals the zero column.

A linear combination is called trivial if all coefficients Cl are equal to 0, and nontrivial otherwise.


2. In order for the columns to be linearly dependent, it is necessary and sufficient that some column be a linear combination of other columns.

Let one of the columns be a linear combination of the other columns.

3. If some of the columns are linearly dependent, then the whole system of columns is linearly dependent.

4. If a system of columns is linearly independent, then any of its subsystems is also linearly independent.

(Everything that is said about columns is also true for rows).

Matrix minors. Basis minor. Matrix rank. The method of bordering minors for calculating the rank of a matrix.

A minor of order k of the matrix A is the determinant whose elements lie at the intersection of k rows and k columns of the matrix A.

If all minors of order k of the matrix A = 0, then any minor of order k + 1 is also equal to 0.

Basic minor.

The rank of a matrix A is the order of its basis minor.

The method of bordering minors: choose a nonzero element of the matrix A (if no such element exists, then the rank of A = 0).

We border the previous 1st-order minor with a 2nd-order minor (if this minor is not equal to 0, then rank ≥ 2). If this minor = 0, we border the chosen 1st-order minor with other 2nd-order minors (if all 2nd-order minors = 0, then the rank of the matrix = 1).

Matrix rank. Methods for finding the rank of a matrix.

The rank of a matrix A is the order of its basis minor.

Calculation methods:

1) The method of bordering minors: choose a nonzero element of the matrix A (if there is no such element, then rank = 0); border the previous 1st-order minor with 2nd-order minors, and so on, until some minor Mr of order r is nonzero while all bordering minors of order r + 1 equal 0; then Rg A = r.

2) Reducing the matrix to echelon form: this method is based on elementary transformations; under elementary transformations the rank of the matrix does not change.

The following transformations are called elementary transformations:

Permutation of two rows (columns).

Multiplication of all elements of some column (row) by a number not =0.

Addition to all elements of a certain column (row) of elements of another column (row), previously multiplied by the same number.
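Method 2 can be sketched in Python: apply transformations 1 and 3 to bring the matrix to echelon form and count the nonzero rows (the function name and example matrix are our own):

```python
# Rank via echelon form: elementary row transformations preserve the rank.
from fractions import Fraction

def rank(A):
    M = [[Fraction(x) for x in row] for row in A]
    rows, cols = len(M), len(M[0])
    r = 0                       # index of the next pivot row
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue            # no nonzero element in this column below row r
        M[r], M[pivot] = M[pivot], M[r]   # transformation 1: swap two rows
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            # transformation 3: add a multiple of the pivot row
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r                    # number of nonzero rows = rank

print(rank([[1, 2, 3], [2, 4, 6], [1, 0, 1]]))  # 2 (second row is 2 x the first)
```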

Basis minor theorem. Necessary and sufficient condition for the determinant to be equal to zero.

The basis minor of the matrix A is a nonzero minor of the largest order.

Basis minor theorem:

Basis rows (columns) are linearly independent; any row (column) of the matrix A is a linear combination of the basis rows (columns).

Note: the rows and columns at whose intersection the basis minor stands are called basis rows and columns, respectively.

a11 a12 … a1r a1j

a21 a22 … a2r a2j

a31 a32 … a3r a3j

ar1 ar2 … arr arj

ak1 ak2 … akr akj

Necessary and sufficient conditions for the determinant to be equal to zero:

For an nth-order determinant to equal 0, it is necessary and sufficient that its rows (columns) be linearly dependent.

Systems of linear equations, their classification and forms of notation. Cramer's rule.

Consider a system of 3 linear equations with three unknowns:

The determinant D composed of the coefficients of the unknowns is called the determinant of the system.

We compose three more determinants as follows: in the determinant D we successively replace the 1st, 2nd and 3rd columns by the column of free terms.

Cramer's rule: if D ≠ 0, the system has a unique solution x1 = D1/D, x2 = D2/D, x3 = D3/D.

Proof. Consider a system of 3 equations with three unknowns. Multiply the 1st equation of the system by the algebraic complement A11 of the element a11, the 2nd by A21, and the 3rd by A31, and add:

(a11A11 + a21A21 + a31A31)x1 + (a12A11 + a22A21 + a32A31)x2 + (a13A11 + a23A21 + a33A31)x3 = b1A11 + b2A21 + b3A31.

Consider each of the brackets and the right-hand side of this equation. By the theorem on the expansion of the determinant in the elements of the 1st column, a11A11 + a21A21 + a31A31 = D.

Similarly, it can be shown that the second and third brackets equal 0 (by property 8).

Finally, it is easy to see that the right-hand side equals D1.

Thus we obtain the equality D·x1 = D1.

Hence x1 = D1/D.

The equalities x2 = D2/D and x3 = D3/D are derived similarly, whence the assertion of the theorem follows.
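Cramer's rule translates directly into code. A Python sketch for a 3×3 system (the example system and function names are our own):

```python
# Cramer's rule for 3 equations in 3 unknowns: x_i = D_i / D, where D_i is the
# determinant D with the ith column replaced by the column of free terms.

def det3(m):
    # 3rd-order determinant expanded along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def cramer(A, b):
    D = det3(A)
    if D == 0:
        raise ValueError("D = 0: Cramer's rule does not apply")
    xs = []
    for i in range(3):
        Ai = [row[:i] + [bi] + row[i+1:] for row, bi in zip(A, b)]
        xs.append(det3(Ai) / D)
    return xs

# hypothetical example: x + y + z = 6, 2y + 5z = -4, 2x + 5y - z = 27
A = [[1, 1, 1], [0, 2, 5], [2, 5, -1]]
b = [6, -4, 27]
print(cramer(A, b))  # [5.0, 3.0, -2.0]
```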

Systems of linear equations. Compatibility condition for linear equations. The Kronecker-Capelli theorem.

A solution of a system of linear algebraic equations is a collection of n numbers C1, C2, C3, …, Cn which, when substituted into the original system in place of x1, x2, x3, …, xn, turns all equations of the system into identities.

A system of linear algebraic equations is called consistent if it has at least one solution.

A joint system is called definite if it has a unique solution, and indefinite if it has infinitely many solutions.

Conditions for the compatibility of systems of linear algebraic equations.

a11 a12 … a1n     x1     b1

a21 a22 … a2n  ×  x2  =  b2

… … … …           …      …

am1 am2 … amn     xn     bm

THEOREM: For a system of m linear equations with n unknowns to be consistent, it is necessary and sufficient that the rank of the augmented matrix equal the rank of the matrix A.

Note: This theorem only gives criteria for the existence of a solution, but does not indicate a way to find a solution.
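The Kronecker-Capelli criterion is easy to apply mechanically: compare the rank of A with the rank of (A|B). A Python sketch (function names and example systems are our own):

```python
# Kronecker-Capelli: AX = B is consistent iff rank A = rank of the augmented matrix.
from fractions import Fraction

def rank(A):
    # rank via echelon form (elementary row transformations preserve rank)
    M = [[Fraction(x) for x in row] for row in A]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def is_consistent(A, b):
    augmented = [row + [bi] for row, bi in zip(A, b)]
    return rank(A) == rank(augmented)

print(is_consistent([[1, 1], [2, 2]], [3, 6]))  # True  (second equation = 2 x first)
print(is_consistent([[1, 1], [2, 2]], [3, 7]))  # False (contradictory equations)
```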

Question 10.

Systems of linear equations. The basis minor method, a general method for finding all solutions of systems of linear equations.


Basis minor method:

Let the system be consistent and Rg A = Rg A′ = r. Let the basis minor be located in the upper left corner of the matrix A.

Transfer the terms with the free unknowns x(r+1), …, xn to the right-hand sides:

d1 = b1 − a1,r+1·x(r+1) − … − a1n·xn

d2 = b2 − a2,r+1·x(r+1) − … − a2n·xn

… = …

dr = br − ar,r+1·x(r+1) − … − arn·xn

Remark: if the rank of the coefficient matrix equals the number of unknowns (r = n), then dj = bj and the system has a unique solution.

Homogeneous systems of linear equations.

A system of linear algebraic equations is called homogeneous if all its free terms are equal to zero.

AX=0 is a homogeneous system.

AX = B is an inhomogeneous system.

Homogeneous systems are always consistent.

The trivial solution x1 = x2 = … = xn = 0 always exists.

Theorem 1.

A homogeneous system has nonzero solutions if and only if the rank of the system matrix is less than the number of unknowns.

Theorem 2.

A homogeneous system of n linear equations with n unknowns has a nonzero solution if and only if the determinant of the matrix A equals zero (det A = 0).

Properties of solutions of homogeneous systems.

Any linear combination of a solution to a homogeneous system is itself a solution to this system.

If C1 and C2 are solutions and α1, α2 are some numbers, then A(α1C1 + α2C2) = A(α1C1) + A(α2C2) = α1(AC1) + α2(AC2) = 0, since AC1 = 0 and AC2 = 0.

For an inhomogeneous system this property does not hold.

Fundamental system of solutions.

Theorem 3.

If the rank of the matrix of a homogeneous system of equations with n unknowns equals r, then this system has n − r linearly independent solutions.

Let the basis minor be in the upper left corner. If r < n, the unknowns x(r+1), x(r+2), …, xn are called free variables, and the system of equations AX = B is written as ArXr = Br.

C1 = (C11, C12, …, C1r, 1, 0, …, 0)

C2 = (C21, C22, …, C2r, 0, 1, …, 0)

……………………

Cn−r = (C(n−r)1, C(n−r)2, …, C(n−r)r, 0, 0, …, 1) (linearly independent)

A system of n-r linearly independent solutions of a homogeneous system of linear equations with n-unknowns of rank r is called a fundamental system of solutions.

Theorem 4.

Any solution of the system of linear equations is a linear combination of the solutions of the fundamental system.

С = α1C1 + α2C2 + .. + αn-r Cn-r
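The construction above (set each free variable to 1 in turn, the rest to 0) can be sketched in Python; the function name and example system are our own:

```python
# Fundamental system of solutions of AX = 0: reduce A to reduced echelon form,
# take x_{r+1}..x_n as free variables, and build n - r independent solutions.
from fractions import Fraction

def fundamental_system(A):
    M = [[Fraction(x) for x in row] for row in A]
    rows, n = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(n) if c not in pivots]
    basis = []
    for f in free:                      # one solution per free variable
        x = [Fraction(0)] * n
        x[f] = Fraction(1)
        for row, pc in zip(M, pivots):  # basic variables from the echelon rows
            x[pc] = -row[f]
        basis.append(x)
    return basis                        # n - r solutions

# x1 + x2 + x3 = 0 taken twice: rank 1, so 3 - 1 = 2 fundamental solutions
print(fundamental_system([[1, 1, 1], [2, 2, 2]]))
```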

If r

Question 12.

General solution of an inhomogeneous system.

X_gen (general solution of the inhomogeneous system) = X_hom (general solution of the homogeneous system) + X_part (a particular solution of the inhomogeneous system).

AX = B (inhomogeneous system); AX = 0 (the associated homogeneous system).

A(X_hom + X_part) = AX_hom + AX_part = AX_part = B, since AX_hom = 0.

X_gen = α1C1 + α2C2 + … + α(n−r)C(n−r) + X_part.

Gauss method.

This is a method of successive elimination of unknowns (variables): with the help of elementary transformations the original system of equations is reduced to an equivalent system of echelon form, from which all the variables are found successively, starting with the last.

Let a11 ≠ 0 (if this is not the case, it is achieved by rearranging the equations).

1) we eliminate the variable x1 from the second, third, …, nth equations by multiplying the first equation by suitable numbers and adding the results to the 2nd, 3rd, …, nth equations; we then obtain:

We get a system equivalent to the original one.

2) exclude the variable x2

3) we exclude the variable x3, etc.

Continuing the successive elimination of the variables x4, x5, …, x(r−1), at the (r − 1)th step we obtain:

The zeros in the last m − r equations mean that their left-hand sides have the form 0·x1 + 0·x2 + … + 0·xn.

If at least one of the numbers b(r+1), b(r+2), …, bm is not equal to zero, then the corresponding equality is contradictory and system (1) is inconsistent. Thus, for any consistent system the numbers b(r+1), …, bm equal zero.

The last m − r equations in the system (1; r−1) are identities and can be discarded.

Two cases are possible:

a) the number of equations of system (1; r−1) equals the number of unknowns, i.e. r = n (in this case the system has triangular form);

b) r < n (in this case the system has echelon form and free variables appear).

The transition from system (1) to an equivalent system (1; r-1) is called the direct move of the Gauss method.

Finding the variables from the system (1; r−1) is called the backward pass of the Gauss method.

Gaussian transformations are conveniently carried out by performing them not on the equations themselves but on the augmented matrix of their coefficients.
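The forward and backward passes can be sketched in Python over exact fractions (the example system is hypothetical, the function name our own):

```python
# Gauss method for a square system with a unique solution:
# forward pass -> triangular form, backward pass -> the unknowns.
from fractions import Fraction

def gauss(A, b):
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    # forward pass: eliminate x1 from equations 2..n, then x2, and so on
    for k in range(n):
        piv = next(i for i in range(k, n) if M[i][k] != 0)  # rearrange equations
        M[k], M[piv] = M[piv], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            M[i] = [a - f * c for a, c in zip(M[i], M[k])]
    # backward pass: find the unknowns starting from the last one
    x = [Fraction(0)] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][j] * x[j] for j in range(k + 1, n))) / M[k][k]
    return x

A = [[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]
b = [8, -11, -3]
print(gauss(A, b))  # the solution x = 2, y = 3, z = -1
```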

Question 13.

Similar matrices.

We will consider only square matrices of order n.

A matrix A is said to be similar to a matrix B (A ~ B) if there exists a nonsingular matrix S such that A = S^(-1)BS.

Properties of similar matrices.

1) Matrix A is similar to itself. (A~A)

Take S = E: then E^(-1)AE = EAE = A.

2) If A~B, then B~A

If A = S^(-1)BS, then SAS^(-1) = (SS^(-1))B(SS^(-1)) = B.

3) If A~B and at the same time B~C, then A~C

Given that A = S1^(-1)BS1 and B = S2^(-1)CS2, we get A = S1^(-1)(S2^(-1)CS2)S1 = (S2S1)^(-1)C(S2S1) = S3^(-1)CS3, where S3 = S2S1.

4) The determinants of similar matrices are equal.

Given that A~B, it is necessary to prove that detA=detB.

A = S^(-1)BS, so det A = det(S^(-1)BS) = det S^(-1) · det B · det S = (1/det S) · det B · det S = det B.

5) The ranks of similar matrices are the same.

Eigenvectors and eigenvalues of a matrix.

The number λ is called an eigenvalue of the matrix A if there is a nonzero vector X (a column matrix) such that AX = λX; the vector X is called an eigenvector of the matrix A, and the set of all eigenvalues is called the spectrum of the matrix A.

Properties of eigenvectors.

1) When multiplying an eigenvector by a number, we get an eigenvector with the same eigenvalue.

AX = λX; X ≠ 0.

A(αX) = α(AX) = α(λX) = λ(αX), so αX is an eigenvector with the same eigenvalue λ.

2) Eigenvectors with pairwise different eigenvalues ​​are linearly independent λ1, λ2,.. λk.

Induction on the number of vectors: a system of one eigenvector is linearly independent, since an eigenvector is nonzero. Inductive step: suppose

C1X1 + C2X2 + … + CnXn + C(n+1)X(n+1) = 0. (1)

Multiplying (1) by A gives

C1λ1X1 + C2λ2X2 + … + CnλnXn + C(n+1)λ(n+1)X(n+1) = 0.

Multiplying (1) by λ(n+1) and subtracting, we get

C1(λ1 − λ(n+1))X1 + C2(λ2 − λ(n+1))X2 + … + Cn(λn − λ(n+1))Xn = 0.

By the inductive hypothesis C1 = C2 = … = Cn = 0 (the factors λi − λ(n+1) are nonzero because the eigenvalues are pairwise different); then (1) reduces to C(n+1)X(n+1) = 0, and since X(n+1) ≠ 0, also C(n+1) = 0.

Characteristic equation.

The matrix A − λE is called the characteristic matrix of the matrix A.

For a nonzero vector X to be an eigenvector of the matrix A corresponding to the eigenvalue λ, it is necessary and sufficient that X be a solution of the homogeneous system of linear algebraic equations (A − λE)X = 0.

The system has a nontrivial solution when det(A − λE) = 0; this is the characteristic equation.

Statement!

The characteristic equations of similar matrices coincide.

det(S^(-1)AS − λE) = det(S^(-1)AS − λS^(-1)ES) = det(S^(-1)(A − λE)S) = det S^(-1) · det(A − λE) · det S = det(A − λE)

Characteristic polynomial.

det(A − λE) is a polynomial in the parameter λ.

det(A − λE) = (−1)^n·λ^n + (−1)^(n−1)·(a11 + a22 + … + ann)·λ^(n−1) + … + det A

This polynomial is called the characteristic polynomial of the matrix A.
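For n = 2 the characteristic polynomial reduces to λ² − (a11 + a22)λ + det A, consistent with the trace appearing in the λ^(n−1) coefficient. A Python sketch (the function name is our own) finds the real eigenvalues as its roots:

```python
# Eigenvalues of a 2x2 matrix as roots of the characteristic equation
# λ² - (a11 + a22)·λ + det A = 0.
import math

def eigenvalues_2x2(A):
    (a, b), (c, d) = A
    trace, det = a + d, a * d - b * c
    disc = trace * trace - 4 * det   # discriminant of the characteristic equation
    if disc < 0:
        raise ValueError("complex eigenvalues")
    s = math.sqrt(disc)
    return sorted([(trace - s) / 2, (trace + s) / 2])

print(eigenvalues_2x2([[2, 1], [1, 2]]))  # [1.0, 3.0]
```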

Consequence:

1) If A ~ B, then the sums of their diagonal elements (traces) coincide:

a11 + a22 + … + ann = b11 + b22 + … + bnn

2) The sets of eigenvalues of similar matrices coincide.

If the characteristic equations of two matrices coincide, the matrices are not necessarily similar.


det(Ag − λE) = (λ11 − λ)(λ22 − λ)…(λnn − λ) = 0, where Ag is the diagonal form of A.

For a matrix A of order n to be reducible to diagonal form, it is necessary and sufficient that there exist n linearly independent eigenvectors of the matrix A.

Consequence.

If all eigenvalues ​​of the matrix A are different, then it is diagonalizable.

Algorithm for finding eigenvectors and eigenvalues.

1) compose the characteristic equation

2) find the roots of the equations

3) compose a system of equations to determine the eigenvector.

(A − λiE)X = 0 for each eigenvalue λi

4) find a fundamental system of solutions x1, x2, …, x(n−r), where r = Rg(A − λiE) is the rank of the characteristic matrix.

5) the eigenvectors corresponding to the eigenvalue λi are written as:

X = C1X1 + C2X2 + … + C(n−r)X(n−r), where C1² + C2² + … + C(n−r)² ≠ 0.

6) we check whether the matrix can be reduced to a diagonal form.

7) find Ag = S^(-1)AS, where S is the matrix whose columns are the eigenvectors of A.

Question 15.

Basis of a line, plane, space.


The modulus of a vector is its length, that is, the distance between its endpoints A and B. The modulus of a vector equals zero exactly when the vector is the zero vector (|ō| = 0).

4. Ort of a vector.

The ort of a given vector is the vector that has the same direction as the given vector and modulus equal to one.

Equal vectors have equal orts.

5. Angle between two vectors.

It is the smaller of the angles formed by two rays emanating from one point and codirected with the given vectors.

Addition of vectors. Multiplying a vector by a number.

1) Addition of two vectors

|a + b| ≤ |a| + |b| (triangle inequality)

2) Multiplication of a vector by a scalar.

The product of a vector and a scalar is a new vector whose:

a) modulus equals the product of the modulus of the vector being multiplied and the absolute value of the scalar;

b) direction coincides with that of the vector being multiplied if the scalar is positive, and is opposite to it if the scalar is negative.

λa: |λa| = |λ|·|a|

Properties line operations over vectors.

1. The law of commutativity.

2. The law of associativity.

3. Addition with zero.

a + ō = a

4. Addition with the opposite.

5. (αβ)a = α(βa) = β(αa)

6, 7. Laws of distributivity: (α + β)a = αa + βa; α(a + b) = αa + αb.

Expression of a vector in terms of its modulus and ort: a = |a|·ē, where ē is the ort of a.

A maximal system of linearly independent vectors is called a basis.

A basis on a line is any non-zero vector.

A basis in the plane is any two non-collinear vectors.

A basis in space is a system of any three non-coplanar vectors.

The coefficients of the expansion of a vector in some basis are called the components, or coordinates, of the vector in the given basis.

If over the vectors a1, a2, …, an we perform addition and multiplication by scalars, then as a result of any number of such operations we obtain linear combinations λ1a1 + λ2a2 + … + λnan.

Vectors a1, a2, …, an are called linearly dependent if there exists a nontrivial linear combination of them equal to ō.

Vectors a1, a2, …, an are called linearly independent if no nontrivial linear combination of them equals ō.

Properties of linearly dependent and independent vectors:

1) the system of vectors containing the zero vector is linearly dependent.

2) for vectors a1, a2, …, ak to be linearly dependent it is necessary and sufficient that some vector be a linear combination of the other vectors.

3) if some of the vectors from the system a1 (vector), a2 (vector) ... ak (vector) are linearly dependent, then all vectors are linearly dependent.

4) if the vectors a1, a2, …, ak are linearly independent, then any subsystem of them is also linearly independent.

Linear operations in coordinates.

If a = (a1, a2, a3) and b = (b1, b2, b3) in some basis, then a + b = (a1 + b1, a2 + b2, a3 + b3) and λa = (λa1, λa2, λa3).

The scalar (dot) product of two vectors is the number equal to the product of the moduli of the vectors and the cosine of the angle between them: (a; b) = |a|·|b|·cos φ.

1. (a; b) = (b; a) (commutativity). 2. (a; a) = |a|² ≥ 0.

3. (a; b) = 0 if and only if the vectors are orthogonal or one of the vectors equals ō.

4. Distributivity (αa+βb;c)=α(a;c)+β(b;c)

5. Expression of the dot product of a and b through their coordinates: if, in an orthonormal basis, a = (a1, a2, a3) and b = (b1, b2, b3), then, using the condition (e_h; e_l) = 1 for h = l and 0 for h ≠ l (h, l = 1, 2, 3), we get

(a; b) = a1b1 + a2b2 + a3b3.
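In an orthonormal basis the dot product reduces to the coordinate formula, and the angle follows from (a; b) = |a||b|cos φ. A quick Python sketch (vectors are our own examples):

```python
# Coordinate formula for the dot product and the angle it yields.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))  # a1*b1 + a2*b2 + a3*b3

def angle(a, b):
    # from (a; b) = |a||b|cos(phi): phi = arccos of the normalized dot product
    return math.acos(dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b))))

a, b = (1, 0, 0), (1, 1, 0)
print(dot(a, b))                          # 1
print(round(math.degrees(angle(a, b))))   # 45
```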

The vector product of the vectors a and b is the vector c = [a; b] that satisfies the following conditions:

1. |c| = |a|·|b|·sin φ;

2. c is orthogonal to both a and b;

3. the triple (a, b, c) is right-handed.

Vector product properties:

4. Vector products of the coordinate vectors of an orthonormal basis: [i; j] = k, [j; k] = i, [k; i] = j.

The orts of an orthonormal basis are often denoted by the three symbols i, j, k.

If (i, j, k) is an orthonormal basis, then the vector product is computed through the coordinates as the determinant whose first row is i, j, k, second row a1, a2, a3 and third row b1, b2, b3.

A straight line in the plane. Mutual arrangement of two lines. The distance from a point to a line. The angle between two lines. Conditions of parallelism and perpendicularity of two lines.

1. Special cases of the location of two lines in the plane.

1) y = b is the equation of a line parallel to the axis OX;

2) x = a is the equation of a line parallel to the axis OY.

2. Mutual arrangement of 2 straight lines.

Theorem 1. Let, relative to an affine coordinate system, two lines be given by the equations A1x + B1y + C1 = 0 and A2x + B2y + C2 = 0.

A) A necessary and sufficient condition for them to intersect is A1B2 − A2B1 ≠ 0.

B) A necessary and sufficient condition for the lines to be parallel is A1B2 − A2B1 = 0 while the free terms are not proportional to the leading coefficients.

C) A necessary and sufficient condition for the lines to coincide is proportionality of all three coefficients: A1 : A2 = B1 : B2 = C1 : C2.

3. Distance from a point to a line.

Theorem. The distance from a point M0(x0, y0) to a line Ax + By + C = 0 relative to a Cartesian coordinate system is d = |Ax0 + By0 + C| / √(A² + B²).

4. Angle between two straight lines. Perpendicular condition.

Let two lines be given relative to the Cartesian coordinate system by the general equations A1x + B1y + C1 = 0 and A2x + B2y + C2 = 0. Then cos φ = (A1A2 + B1B2) / (√(A1² + B1²)·√(A2² + B2²)).

If A1A2 + B1B2 = 0, then the lines are perpendicular.

Question 24.

A plane in space. The coplanarity condition for a vector and a plane. The distance from a point to a plane. Conditions of parallelism and perpendicularity of two planes.

1. The coplanarity condition for a vector and a plane.

Let the plane be given by the general equation Ax + By + Cz + D = 0; the vector n = (A, B, C) is normal to the plane. A vector a = (a1, a2, a3) is coplanar with (parallel to) the plane if and only if it is orthogonal to the normal: (n; a) = Aa1 + Ba2 + Ca3 = 0.

2. The distance from a point to a plane.

The distance from the point M0(x0, y0, z0) to the plane Ax + By + Cz + D = 0 is d = |Ax0 + By0 + Cz0 + D| / √(A² + B² + C²).

3. Angle between 2 planes. Perpendicular condition.

The angle between two planes is the angle between their normals n1 = (A1, B1, C1) and n2 = (A2, B2, C2): cos φ = (n1; n2) / (|n1|·|n2|).

If A1A2 + B1B2 + C1C2 = 0, then the planes are perpendicular.

Question 25.

A straight line in space. Various kinds of equations of a line in space.

1. General equations: a line in space can be given as the intersection of two planes.

2. Vector equation of a straight line in space: r = r0 + t·s, where s is a direction vector of the line.

3. Parametric equations: x = x0 + lt, y = y0 + mt, z = z0 + nt.

4. Canonical equations of the line: (x − x0)/l = (y − y0)/m = (z − z0)/n.

Question 28.

The ellipse. Derivation of the canonical equation of the ellipse. Shape. Properties.

An ellipse is the locus of points for which the sum of the distances to two fixed points, called foci, is a given number 2a greater than the distance 2c between the foci.

Canonical equation of the ellipse: x²/a² + y²/b² = 1, where b² = a² − c².

The focal radii are r1 = a + ex, r2 = a − ex (Fig. 2), where e = c/a is the eccentricity.

Equation of the tangent to the ellipse at the point (x0, y0): xx0/a² + yy0/b² = 1.

A hyperbola is the locus of points for which the absolute value of the difference of the distances to two fixed points (the foci) is a given number 2a smaller than the distance 2c between the foci.

Canonical equation of a hyperbola: x²/a² − y²/b² = 1, where b² = c² − a².

Shape and properties.

y = ±(b/a)·√(x² − a²)

The coordinate axes are the axes of symmetry of the hyperbola.

Segment 2a - the real axis of the hyperbola

Eccentricity e=2c/2a=c/a

If b = a we get an equilateral hyperbola.

A line is called an asymptote if, as the point M1 recedes without bound along the curve, the distance from the point to the line tends to zero.

lim d=0 for x-> ∞

d = a²b / (c·(x1 + √(x1² − a²)))

Tangent to the hyperbola:

xx0/a2 - yy0/b2 = 1

A parabola is the locus of points equidistant from a given point, called the focus, and a given line, called the directrix.

Canonical equation of the parabola: y² = 2px.

Properties:

the axis of symmetry of the parabola passes through its focus and is perpendicular to the directrix

rotating the parabola about its axis produces a paraboloid of revolution;

all parabolas are similar

Question 30. Investigation of the general equation of a second-order curve.

The type of the curve is determined by the leading terms A1, B1, C1:

A1x12+2Bx1y1+C1y12+2D1x1+2E1y1+F1=0

1. AC=0 ->curve of parabolic type

A=C=0 => 2Dx+2Ey+F=0

A≠0 C=0 => Ax2+2Dx+2Ey+F=0

If E = 0, then Ax² + 2Dx + F = 0, whose roots x1, x2 give:

if x1 = x2, the two lines merge into one;

if x1 ≠ x2 (real roots), the lines are parallel to Oy;

if the roots are imaginary, the equation has no geometric image.

C≠0 A=0 =>C1y12+2D1x1+2E1y1+F1=0

Conclusion: a curve of parabolic type is either a parabola, or two parallel lines (possibly merged into one), or imaginary.

2.AC>0 -> elliptic type curve

Completing the squares, we transform the original equation to canonical form; the following cases are obtained:

(x-x0)2/a2+(y-y0)2/b2=1 - ellipse

(x-x0)2/a2+(y-y0)2/b2=-1 - imaginary ellipse

(x − x0)²/a² + (y − y0)²/b² = 0, the point with coordinates (x0, y0)

Conclusion: a curve of elliptic type is either an ellipse, or imaginary, or a point.

3. AC < 0 → a curve of hyperbolic type

(x − x0)²/a² − (y − y0)²/b² = 1, a hyperbola with real axis parallel to Ox

(x − x0)²/a² − (y − y0)²/b² = −1, a hyperbola with real axis parallel to Oy

(x − x0)²/a² − (y − y0)²/b² = 0, the equation of a pair of intersecting lines

Conclusion: a curve of hyperbolic type is either a hyperbola or two straight lines
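The three-way classification above depends only on the sign of AC. A tiny Python helper (names and example coefficients are our own):

```python
# Classify a second-order curve by the sign of A*C
# (A and C are the coefficients of x^2 and y^2 after the rotation removing B).

def curve_type(A, C):
    if A * C == 0:
        return "parabolic"   # parabola, pair of parallel lines, or imaginary
    if A * C > 0:
        return "elliptic"    # ellipse, imaginary ellipse, or a point
    return "hyperbolic"      # hyperbola or a pair of intersecting lines

print(curve_type(1, 0))   # parabolic
print(curve_type(1, 4))   # elliptic
print(curve_type(1, -1))  # hyperbolic
```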

A matrix of dimension m×n is a table of numbers containing m rows and n columns. The numbers aij are called the elements of this matrix, where i is the row number and j is the column number at whose intersection the element stands. A matrix containing m rows and n columns is written A = (aij), i = 1, …, m; j = 1, …, n.

Types of matrices:

1) if m = n, the matrix is square, and the number n is called the order of the matrix;

2) a square matrix in which all off-diagonal elements are equal to zero is diagonal;

3) a diagonal matrix in which all diagonal elements equal one is the identity matrix, denoted E;

4) if m ≠ n, the matrix is rectangular;

5) if m = 1, it is a row matrix (row vector);

6) if n = 1, it is a column matrix (column vector);

7) if aij = 0 for all i, j, it is a zero matrix.

Note that the main numerical characteristic of a square matrix is its determinant. The determinant corresponding to an nth-order matrix also has order n.

The determinant of a 1st-order matrix A = (a11) is the number a11.

The determinant of a 2nd-order matrix is the number det A = a11a22 − a12a21. (1.1)

The determinant of a 3rd-order matrix is the number det A = a11a22a33 + a12a23a31 + a13a21a32 − a13a22a31 − a11a23a32 − a12a21a33. (1.2)

Let us give the definitions necessary for the further exposition.

The minor Mij of the element aij of an nth-order matrix A is the determinant of the (n−1)th-order matrix obtained from A by deleting the ith row and the jth column.

The algebraic complement Aij of the element aij of an nth-order matrix A is the minor of this element taken with the sign (−1)^(i+j): Aij = (−1)^(i+j)·Mij.

Let us formulate the main properties of determinants that are inherent in determinants of all orders and simplify their calculation.

1. When transposing a matrix, its determinant does not change.

2. When two rows (columns) of a matrix are interchanged, its determinant changes sign.

3. A determinant having two proportional (equal) rows (columns) is equal to zero.

4. The common factor of the elements of any row (column) of the determinant can be taken out of the sign of the determinant.

5. If the elements of any row (column) of the determinant are the sum of two terms, then the determinant can be decomposed into the sum of two corresponding determinants.

6. The determinant will not change if the elements of any of its row (column) are added to the corresponding elements of its other row (column), previously multiplied by any number.

7. The determinant of a matrix is ​​equal to the sum of the products of the elements of any of its rows (columns) and the algebraic complements of these elements.

Let us explain this property using the example of a 3rd-order determinant. In this case property 7 means that det A = a11A11 + a12A12 + a13A13, the expansion of the determinant along the elements of the 1st row. Note that for the expansion one usually chooses the row (column) containing zero elements, since the corresponding terms of the expansion vanish.

Property 7 is Laplace's theorem on the decomposition of the determinant.

8. The sum of the products of the elements of any row (column) of the determinant and the algebraic complements of the corresponding elements of its other row (column) is equal to zero.

The latter property is often referred to as the pseudodecomposition of the determinant.

Questions for self-examination.

1. What is called a matrix?

2. What matrix is ​​called square? What is meant by its order?

3. What matrix is ​​called diagonal, identity?

4. What matrix is ​​called a row matrix and a column matrix?

5. What is the main numerical characteristic of a square matrix?

6. What number is called the determinant of the 1st, 2nd and 3rd order?

7. What is called a minor and algebraic complement of a matrix element?

8. What are the main properties of determinants?

9. What property can be used to calculate the determinant of any order?

Matrix Operations (scheme 2)

A number of operations are defined on the set of matrices, the main ones being the following:

1) transposition – replacement of matrix rows with columns, and columns with rows;

2) multiplication of a matrix by a number, performed element by element: (λA)_ij = λ·a_ij, where i = 1, …, m; j = 1, …, n;

3) matrix addition, defined only for matrices of the same dimension;

4) multiplication of two matrices, defined only for consistent matrices.

The sum (difference) of two matrices is the matrix each element of which equals the sum (difference) of the corresponding elements of the matrix summands.
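This definition is a one-liner in code. A sketch (matrices as lists of lists; the helper name `add` is ours):

```python
# Element-wise sum of two matrices of the same dimension.
def add(a, b):
    assert len(a) == len(b) and len(a[0]) == len(b[0]), "same dimensions required"
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(add(A, B))  # [[6, 8], [10, 12]]
```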

Two matrices are called consistent if the number of columns of the first equals the number of rows of the second. The product of two consistent matrices A (of dimension m×n) and B (of dimension n×p) is the matrix C = A·B whose elements are

c_ij = a_i1·b_1j + a_i2·b_2j + … + a_in·b_nj, (1.4)

where i = 1, …, m; j = 1, …, p. It follows that the element of the i-th row and the j-th column of the matrix C equals the sum of the pairwise products of the elements of the i-th row of the matrix A and the elements of the j-th column of the matrix B.

The product of matrices is not commutative: in general A·B ≠ B·A. An exception is, for example, the product of a square matrix with the identity matrix: A·E = E·A = A.
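Formula (1.4) and the non-commutativity remark can be checked with a short sketch (lists of lists; `matmul` is our illustrative name):

```python
def matmul(a, b):
    """Product of consistent matrices: columns of a must equal rows of b (1.4)."""
    assert len(a[0]) == len(b), "matrices are not consistent"
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(matmul(A, B))  # [[2, 1], [4, 3]]
print(matmul(B, A))  # [[3, 4], [1, 2]] -- AB != BA in general
E = [[1, 0], [0, 1]]
print(matmul(A, E) == matmul(E, A) == A)  # True: the identity matrix commutes
```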

Example 1.1. Multiply matrices A and B if:

.

Solution. Since the matrices are consistent (the number of columns of matrix A equals the number of rows of matrix B), we use formula (1.4):

Questions for self-examination.

1. What actions are performed on matrices?

2. What is called the sum (difference) of two matrices?

3. What is called the product of two matrices?

Cramer's method for solving square systems of linear algebraic equations(scheme 3)

Let us give a number of necessary definitions.

A system of linear equations is called inhomogeneous if at least one of its free terms is nonzero, and homogeneous if all its free terms are equal to zero.

A solution of a system of equations is an ordered set of numbers that, when substituted for the variables of the system, turns each of its equations into an identity.

A system of equations is called consistent if it has at least one solution, and inconsistent if it has no solutions.

A consistent system of equations is called determinate if it has a unique solution, and indeterminate if it has more than one solution.

Consider an inhomogeneous square system of linear algebraic equations of the following general form:

a11·x1 + a12·x2 + … + a1n·xn = b1,
a21·x1 + a22·x2 + … + a2n·xn = b2,
. . . . . . . . . . . . . . . . . . . . . . .
an1·x1 + an2·x2 + … + ann·xn = bn. (1.5)

The main matrix of the system of linear algebraic equations is the matrix A composed of the coefficients of the unknowns.

The determinant of the main matrix of the system is called the main determinant and is denoted Δ.

The auxiliary determinant Δi is obtained from the main determinant by replacing the i-th column with the column of free terms.

Theorem 1.1 (Cramer's theorem). If the main determinant of a square system of linear algebraic equations is nonzero, then the system has a unique solution, calculated by the formulas: xi = Δi / Δ, i = 1, …, n. (1.6)

If the main determinant Δ = 0, then the system either has an infinite set of solutions (when all auxiliary determinants are zero) or has no solution at all (when at least one of the auxiliary determinants differs from zero).

In the light of the above definitions, Cramer's theorem can be formulated differently: if the main determinant of a system of linear algebraic equations is nonzero, then the system is consistent and determinate, with xi = Δi / Δ; if the main determinant is zero, then the system is either consistent and indeterminate (when all Δi = 0) or inconsistent (when at least one Δi differs from zero).
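Cramer's formulas (1.6) can be sketched directly on top of the cofactor-expansion determinant; this is an illustrative implementation, not an efficient one:

```python
def det(a):
    """Determinant by cofactor expansion along the first row."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] *
               det([r[:j] + r[j+1:] for r in a[1:]]) for j in range(len(a)))

def cramer(a, b):
    """Solve A x = b by Cramer's formulas x_i = delta_i / delta; needs delta != 0."""
    d = det(a)
    assert d != 0, "main determinant is zero: Cramer's method does not apply"
    n = len(a)
    # delta_i: replace the i-th column of A with the column of free terms b
    return [det([row[:i] + [b[k]] + row[i+1:] for k, row in enumerate(a)]) / d
            for i in range(n)]

# x + y = 3, x - y = 1  ->  x = 2, y = 1
print(cramer([[1, 1], [1, -1]], [3, 1]))  # [2.0, 1.0]
```

Substituting the solution back into both equations confirms it, which is the check recommended in the text.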

After that, the resulting solution should be checked.

Example 1.2. Solve the system by Cramer's method

Solution. Since the main determinant of the system

is different from zero, then the system has a unique solution. Calculate auxiliary determinants

We use Cramer's formulas (1.6): , ,

Questions for self-examination.

1. What is called the solution of a system of equations?

2. What system of equations is called compatible, incompatible?

3. What system of equations is called definite, indefinite?

4. What matrix of the system of equations is called the main one?

5. How to calculate auxiliary determinants of a system of linear algebraic equations?

6. What is the essence of Cramer's method for solving systems of linear algebraic equations?

7. What can be a system of linear algebraic equations if its main determinant is equal to zero?

Solution of square systems of linear algebraic equations by the inverse matrix method(scheme 4)

A matrix with a nonzero determinant is called nondegenerate (nonsingular); a matrix with a determinant equal to zero is called degenerate (singular).

A matrix A−1 is called the inverse of a given square matrix A if multiplying the matrix by its inverse, both on the right and on the left, yields the identity matrix: A·A−1 = A−1·A = E. (1.7)

Note that in this case the product of the matrices A and A−1 is commutative.

Theorem 1.2. A necessary and sufficient condition for the existence of an inverse of a given square matrix is that the determinant of the given matrix be nonzero.

If the main matrix of the system turned out to be degenerate during verification, then there is no inverse for it, and the method under consideration cannot be applied.

If the main matrix is nonsingular, that is, its determinant is different from zero, then its inverse can be found using the following algorithm.

1. Calculate the algebraic complements (cofactors) of all elements of the matrix A.

2. Write the found cofactors into a matrix, transposed.

3. Compose the inverse matrix according to the formula: A−1 = (1/det A) · (Aij)ᵀ. (1.8)

4. Check the correctness of the found matrix A-1 according to formula (1.7). Note that this check can be included in the final check of the system solution itself.
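The four steps above can be sketched in code. This is a minimal illustration of formula (1.8) built on cofactor expansion; the names `det`, `cof`, and `inverse` are ours:

```python
def det(a):
    """Determinant by cofactor expansion along the first row."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] *
               det([r[:j] + r[j+1:] for r in a[1:]]) for j in range(len(a)))

def inverse(a):
    """Inverse by formula (1.8): (1/det A) times the transposed cofactor matrix."""
    d = det(a)
    assert d != 0, "a degenerate matrix has no inverse (Theorem 1.2)"
    n = len(a)
    def cof(i, j):  # step 1: algebraic complement of element a[i][j]
        m = [r[:j] + r[j+1:] for k, r in enumerate(a) if k != i]
        return (-1) ** (i + j) * det(m)
    # steps 2-3: write cofactors transposed and divide by det A
    return [[cof(j, i) / d for j in range(n)] for i in range(n)]

A = [[2, 1], [1, 1]]
print(inverse(A))  # [[1.0, -1.0], [-1.0, 2.0]]
```

Step 4 of the algorithm corresponds to checking that `matmul(A, inverse(A))` gives the identity matrix.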

System (1.5) of linear algebraic equations can be represented as the matrix equation A·X = B, where A is the main matrix of the system, X is the column of unknowns, and B is the column of free terms. Multiplying this equation on the left by the inverse matrix A−1, we get: A−1·A·X = A−1·B.

Since, by the definition of the inverse matrix, A−1·A = E, the equation takes the form E·X = A−1·B, or X = A−1·B. (1.9)

Thus, to solve a square system of linear algebraic equations, one multiplies the column of free terms on the left by the inverse of the main matrix of the system. The obtained solution should then be checked.

Example 1.3. Solve the system using the inverse matrix method

Solution. Calculate the main determinant of the system

. Therefore, the matrix is ​​nonsingular and its inverse matrix exists.

Find the algebraic complements of all elements of the main matrix:

We write the found cofactors, transposed, into the matrix

. We use formulas (1.8) and (1.9) to find a solution to the system

Questions for self-examination.

1. What matrix is ​​called degenerate, nondegenerate?

2. What matrix is ​​called inverse for a given one? What is the condition for its existence?

3. What is the algorithm for finding the inverse matrix for a given one?

4. What matrix equation is the system of linear algebraic equations equivalent to?

5. How to solve a system of linear algebraic equations using the inverse matrix for the main matrix of the system?

Study of inhomogeneous systems of linear algebraic equations(scheme 5)

The study of any system of linear algebraic equations begins with the transformation of its extended (augmented) matrix by the Gaussian method. Let the dimension of the main matrix of the system be m×n.

A matrix is called the extended matrix of the system if, along with the coefficients of the unknowns, it contains the column of free terms. Its dimension is therefore m×(n+1).

The Gauss method is based on elementary transformations , which include:

– permutation of matrix rows;

– multiplication of the rows of the matrix by a number different from zero;

– element-wise addition of matrix rows;

– deletion of a zero row;

– matrix transposition (in this case, transformations are performed by columns).

Elementary transformations bring the original system to a system equivalent to it. Systems are called equivalent if they have the same set of solutions.

The rank of a matrix is the highest order of its nonzero minors. Elementary transformations do not change the rank of a matrix.
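Because elementary transformations preserve rank, the rank can be computed by Gaussian elimination: reduce the matrix and count the nonzero rows. A sketch using exact `Fraction` arithmetic to avoid rounding issues (the function name is ours):

```python
from fractions import Fraction

def rank(a):
    """Rank via Gaussian elimination (elementary row operations keep rank)."""
    m = [[Fraction(x) for x in row] for row in a]
    rows, ncols = len(m), len(m[0])
    rank, col = 0, 0
    while rank < rows and col < ncols:
        # find a pivot row and swap it up (permutation of rows)
        piv = next((i for i in range(rank, rows) if m[i][col] != 0), None)
        if piv is None:
            col += 1
            continue
        m[rank], m[piv] = m[piv], m[rank]
        for i in range(rank + 1, rows):
            f = m[i][col] / m[rank][col]
            # add a multiple of the pivot row, element-wise
            m[i] = [x - f * y for x, y in zip(m[i], m[rank])]
        rank += 1
        col += 1
    return rank

print(rank([[1, 2], [2, 4]]))        # proportional rows -> rank 1
print(rank([[1, 0, 2], [0, 1, 3]]))  # rank 2
```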

The following theorem answers the question of whether a nonhomogeneous system of linear equations has solutions.

Theorem 1.3 (Kronecker-Capelli theorem). An inhomogeneous system of linear algebraic equations is consistent if and only if the rank of the extended matrix of the system is equal to the rank of its main matrix, i.e.

Let us denote by r the number of rows remaining in the matrix after the Gaussian method (respectively, r equations remain in the system). These rows of the matrix are called basic.

If r = n, then the system has a unique solution (it is consistent and determinate); its matrix is reduced to triangular form by elementary transformations. Such a system can be solved by Cramer's method, by the inverse matrix method, or by the universal Gauss method.

If r < n (the number of variables in the system is greater than the number of equations), the matrix is reduced to stepped (echelon) form by elementary transformations. Such a system has many solutions and is consistent and indeterminate. In this case, to find the solutions of the system, a number of operations must be performed.

1. Leave r unknowns (the basic variables) on the left-hand sides of the equations of the system, and move the remaining n − r unknowns (the free variables) to the right-hand sides. After dividing the variables into basic and free, the system takes the form:

. (1.10)

2. From the coefficients of the basic variables, compose a minor (the basis minor), which must be different from zero.

3. If the basis minor of system (1.10) is equal to zero, replace one of the basic variables with a free one and check that the resulting basis minor is nonzero.

4. Applying formulas (1.6) of Cramer's method, treating the right-hand sides of the equations as free terms, find the expression of the basic variables in terms of the free ones in general form. The resulting ordered set of system variables is its general solution.

5. Assigning arbitrary values to the free variables in (1.10), calculate the corresponding values of the basic variables. The resulting ordered set of values of all variables is called a particular solution of the system corresponding to the given values of the free variables. The system has an infinite number of particular solutions.

6. Obtain the basic solution of the system: the particular solution obtained at zero values of the free variables.

Note that the number of basis sets of variables of system (1.10) equals the number of combinations of n elements taken r at a time. Since each basis set of variables has its own basic solution, the system has that many basic solutions.
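The count of candidate basis sets is the binomial coefficient C(n, r), available in the standard library; the numbers below are illustrative:

```python
from math import comb

# An indeterminate system with n variables and rank r has r basic variables;
# the number of candidate basis sets of variables is C(n, r).
n, r = 4, 2        # e.g. 4 unknowns and 2 basic rows after the Gauss method
print(comb(n, r))  # 6 possible basis sets of variables
```

This matches Example 1.4 below, where a system with 4 variables and rank 2 has 6 candidate basis sets.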

A homogeneous system of equations is always consistent, since it has at least the zero (trivial) solution. For a homogeneous system of linear equations with n variables to have nonzero solutions, it is necessary and sufficient that its main determinant be equal to zero. This means that the rank of its main matrix is less than the number of unknowns. In this case, the study of a homogeneous system of equations for general and particular solutions is carried out similarly to the study of an inhomogeneous system. Solutions of a homogeneous system of equations have an important property: if two different solutions of a homogeneous system of linear equations are known, then their linear combination is also a solution of this system. It is easy to verify the validity of the following theorem.

Theorem 1.4. The general solution of an inhomogeneous system of equations is the sum of the general solution of the corresponding homogeneous system and some particular solution of the inhomogeneous system of equations

Example 1.4.

Explore the given system and find one particular solution:

Solution. Let us write out the extended matrix of the system and apply elementary transformations to it:

Since the ranks of the main and extended matrices are equal, by Theorem 1.3 (Kronecker-Capelli) the given system of linear algebraic equations is consistent. The number of variables exceeds the number of basic rows, i.e. r < n, which means that the system is indeterminate. The number of basis sets of system variables is equal to

Therefore, 6 sets of variables can be basic. Let us consider one of them. Then the system obtained as a result of the Gauss method can be rewritten in the form

. Main determinant . Using Cramer's method, we are looking for the general solution of the system. Auxiliary determinants

By formulas (1.6) we have

. This expression of the basic variables in terms of the free ones is the general solution of the system:

For specific values of the free variables, the general solution yields a particular solution of the system. For zero values of the free variables, we obtain the basic solution of the system

Questions for self-examination.

1. What system of equations is called homogeneous, non-homogeneous?

2. What matrix is ​​called extended?

3. List the basic elementary transformations of matrices. What method for solving systems of linear equations is based on these transformations?

4. What is called the rank of a matrix? In what way can it be calculated?

5. What does the Kronecker-Capelli theorem say?

6. What form can the system of linear algebraic equations be reduced to as a result of its solution by the Gauss method? What does this mean?

7. What rows of the matrix are called basic?

8. Which variables of the system are called basic, which are free?

9. What solution of an inhomogeneous system is called private?

10. What solution is called basic? How many basic solutions does an inhomogeneous system of linear equations have?

11. What solution of an inhomogeneous system of linear algebraic equations is called general? Formulate a theorem on the general solution of an inhomogeneous system of equations.

12. What are the main properties of solutions to a homogeneous system of linear algebraic equations?

Note that the elements of a matrix need not be numbers. Imagine that you are describing the books on your bookshelf. Suppose the shelf is in order and all the books stand in strictly defined places. The table describing your library (by shelves and by the sequence of books on a shelf) will also be a matrix, though not a numeric one. Another example: instead of numbers, take different functions united by some dependence; the resulting table will also be called a matrix. In other words, a matrix is any rectangular table composed of homogeneous elements. Here and below we will speak of matrices composed of numbers.

Instead of parentheses, matrices are written using square brackets or straight double vertical lines.


(2.1*)

Definition 2. If in expression (1) m = n, then one speaks of a square matrix, and if m ≠ n, of a rectangular one.

Depending on the values ​​of m and n, there are some special types of matrices:

The most important characteristic of a square matrix is its determinant, which is composed of the matrix elements and is denoted det A.

Obviously, det E = 1.

Definition 3. If det A ≠ 0, then the matrix A is called nondegenerate, or nonsingular.

Definition 4. If det A = 0, then the matrix A is called degenerate, or singular.

Definition 5. Two matrices A and B are called equal (written A = B) if they have the same dimensions and their corresponding elements are equal, i.e. aij = bij.

For example, two matrices are equal when they have the same size and each element of one equals the corresponding element of the other. Two matrices cannot be called equal if, although the determinants of both are equal and the dimensions coincide, not all elements in the same positions coincide. Matrices of different sizes are different: a 2×3 matrix and a 3×2 matrix are not equal even if both contain the same six elements 1, 2, 3, 4, 5, 6, because those elements occupy different places. Matrices that satisfy Definition 5, however, are equal.

Definition 6. If we fix k columns of a matrix A and the same number k of its rows, then the elements at the intersection of the chosen columns and rows form a square matrix of order k, whose determinant is called a minor of the k-th order of the matrix A.

Example. Write out three minors of the second order of the matrix

For points in space written as column vectors v, the product Rv of a rotation matrix R and v gives another vector that defines the position of the point after the rotation. If v is a row vector, the same transformation can be obtained using vRᵀ, where Rᵀ is the transpose of R.


Main Diagonal

Elements aii (i = 1, …, n) form the main diagonal of a square matrix. These elements lie on an imaginary straight line running from the upper left corner to the lower right corner of the matrix. For example, the main diagonal of the 4×4 matrix in the figure contains the elements a11 = 9, a22 = 11, a33 = 4, a44 = 10.

The diagonal of a square matrix passing through the lower left and upper right corners is called the secondary (anti-)diagonal.
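Both diagonals are easy to extract by index; a sketch using the 4×4 example above (the off-diagonal entries here are filler values of our own):

```python
A = [[9, 1, 2, 3],
     [4, 11, 5, 6],
     [7, 8, 4, 0],
     [2, 3, 1, 10]]

n = len(A)
main = [A[i][i] for i in range(n)]               # upper left to lower right
secondary = [A[i][n - 1 - i] for i in range(n)]  # lower left to upper right
print(main)       # [9, 11, 4, 10]
print(secondary)  # [3, 5, 8, 2]
```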

Special types

Name | Example with n = 3
Diagonal matrix | [[a11, 0, 0], [0, a22, 0], [0, 0, a33]]
Lower triangular matrix | [[a11, 0, 0], [a21, a22, 0], [a31, a32, a33]]
Upper triangular matrix | [[a11, a12, a13], [0, a22, a23], [0, 0, a33]]

Diagonal and triangular matrices

If all elements outside the main diagonal are zero, A is called diagonal. If all elements above (respectively, below) the main diagonal are zero, A is called a lower (respectively, upper) triangular matrix.

Definiteness

A symmetric matrix A is called positive definite (respectively, negative definite) if the quadratic form

Q(x) = x T Ax

takes only positive (respectively, only negative) values for all nonzero x. If the quadratic form takes only non-negative (respectively, only non-positive) values, the symmetric matrix is said to be positive semi-definite (respectively, negative semi-definite). A matrix is indefinite if it is neither positive nor negative semidefinite.

A symmetric matrix is ​​positive definite if and only if all its eigenvalues ​​are positive. The table on the right shows two possible cases for 2×2 matrices.

If we use two different vectors, we get a bilinear form associated with A:

B A (x, y) = x T Ay.
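The quadratic form Q(x) = xᵀAx is a double sum over the matrix entries; a sketch with an illustrative diagonal matrix whose eigenvalues (2 and 3) are positive, so every value of Q on a nonzero vector is positive:

```python
def quadratic_form(a, x):
    """Q(x) = x^T A x for a square matrix a and vector x."""
    n = len(x)
    return sum(x[i] * a[i][j] * x[j] for i in range(n) for j in range(n))

A = [[2, 0], [0, 3]]             # symmetric, eigenvalues 2 and 3 are positive
for x in ([1, 0], [0, 1], [1, -1]):
    print(quadratic_form(A, x))  # 2, 3, 5 -- positive, so A is positive definite
```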

Orthogonal matrix

An orthogonal matrix is a square matrix with real elements whose columns and rows are orthogonal unit vectors (that is, orthonormal). One can also define an orthogonal matrix as a matrix whose inverse equals its transpose:

Aᵀ = A⁻¹,

whence follows

AᵀA = AAᵀ = E,

An orthogonal matrix A is always invertible (A⁻¹ = Aᵀ), unitary (A⁻¹ = A*), and normal (A*A = AA*). The determinant of any orthogonal matrix is either +1 or −1. As a linear map, any orthogonal matrix with determinant +1 is a pure rotation, while any orthogonal matrix with determinant −1 is either a pure reflection or a composition of a reflection and a rotation.
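The defining relation AᵀA = E can be verified numerically for a plane rotation matrix; the angle below is arbitrary:

```python
from math import cos, sin, isclose

t = 0.7                      # an arbitrary rotation angle
R = [[cos(t), -sin(t)],
     [sin(t),  cos(t)]]      # plane rotation, det R = +1

# Columns of R are orthonormal, so R^T R must equal the identity matrix E
RtR = [[sum(R[k][i] * R[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
is_orthogonal = all(isclose(RtR[i][j], float(i == j), abs_tol=1e-12)
                    for i in range(2) for j in range(2))
print(is_orthogonal)  # True
```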

Operations

Determinant

The determinant det(A), or |A|, of a square matrix A is a number that captures certain properties of the matrix. A matrix is invertible if and only if its determinant is nonzero.

Let there be a square matrix A of the n-th order.

The matrix A⁻¹ is called the inverse matrix with respect to the matrix A if A·A⁻¹ = A⁻¹·A = E, where E is the identity matrix of the n-th order.

The identity matrix is a square matrix in which all elements on the main diagonal (running from the upper left corner to the lower right corner) are ones, and the rest are zeros, for example:

An inverse matrix may exist only for square matrices, i.e., for matrices that have the same number of rows and columns.

Inverse Matrix Existence Condition Theorem

For a matrix to have an inverse matrix, it is necessary and sufficient that it be nondegenerate.

The matrix A = (A1, A2, …, An) is called nondegenerate if its column vectors are linearly independent. The number of linearly independent column vectors of a matrix is called the rank of the matrix. Therefore, we can say that for an inverse matrix to exist, it is necessary and sufficient that the rank of the matrix equal its dimension, i.e. r = n.

Algorithm for finding the inverse matrix

  1. Write the matrix A in the table for solving systems of equations by the Gauss method, and on the right (in place of the right-hand sides of the equations) append the identity matrix E to it.
  2. Using Jordan transformations, reduce the matrix A to a matrix consisting of unit columns; the matrix E must be transformed simultaneously.
  3. If necessary, rearrange the rows (equations) of the last table so that the identity matrix E is obtained in place of the matrix A of the original table.
  4. Write down the inverse matrix A⁻¹, which stands in place of the matrix E of the original table.
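Steps 1-4 above can be sketched as code. This is an illustrative Gauss-Jordan inversion on the augmented table [A | E], using exact `Fraction` arithmetic; the function name is ours and a nonsingular A is assumed:

```python
from fractions import Fraction

def inverse_gauss_jordan(a):
    """Append E to A and row-reduce [A | E] to [E | A^-1]."""
    n = len(a)
    # step 1: augmented table [A | E]
    m = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(a)]
    for col in range(n):
        piv = next(i for i in range(col, n) if m[i][col] != 0)  # assumes det A != 0
        m[col], m[piv] = m[piv], m[col]            # step 3: row permutation
        m[col] = [x / m[col][col] for x in m[col]]
        for i in range(n):                         # step 2: make a unit column
            if i != col:
                m[i] = [x - m[i][col] * y for x, y in zip(m[i], m[col])]
    return [row[n:] for row in m]                  # step 4: right half is A^-1

A = [[2, 1], [1, 1]]
print(inverse_gauss_jordan(A) == [[1, -1], [-1, 2]])  # True
```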
Example 1

For matrix A, find the inverse matrix A -1

Solution: We write down the matrix A and append the identity matrix E on the right. Using Jordan transformations, we reduce the matrix A to the identity matrix E. The calculations are shown in Table 31.1.

Let's check the correctness of the calculations by multiplying the original matrix A and the inverse matrix A -1.

As a result of matrix multiplication, the identity matrix is ​​obtained. Therefore, the calculations are correct.

Answer:

Solution of matrix equations

Matrix equations can look like:

AX = B, XA = B, AXB = C,

where A, B, C are given matrices, X is the desired matrix.

Matrix equations are solved by multiplying the equation by inverse matrices.

For example, to find the matrix X from the equation AX = B, you need to multiply this equation by A⁻¹ on the left: A⁻¹·A·X = A⁻¹·B, whence X = A⁻¹·B.

Therefore, to find the solution of the equation AX = B, you need to find the inverse matrix A⁻¹ and multiply it by the matrix B on the right-hand side of the equation.

Other equations are solved similarly.
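A short sketch of solving AX = B by left-multiplication with the inverse, using the closed-form 2×2 inverse; the matrices here are hypothetical example data:

```python
def inv2(a):
    """Inverse of a 2x2 matrix from the adjugate formula."""
    d = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    assert d != 0, "singular matrix"
    return [[ a[1][1] / d, -a[0][1] / d],
            [-a[1][0] / d,  a[0][0] / d]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

A = [[2, 1], [1, 1]]
B = [[1], [0]]
X = matmul(inv2(A), B)  # X = A^-1 B
print(X)                # [[1.0], [-1.0]]
print(matmul(A, X))     # check: [[1.0], [0.0]] reproduces B
```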

Example 2

Solve the equation AX = B if

Solution: Since the inverse matrix A⁻¹ was found in Example 1, the solution is X = A⁻¹·B.

Matrix method in economic analysis

Along with other techniques, matrix methods also find application. These methods are based on linear and vector-matrix algebra and are used for the analysis of complex and multidimensional economic phenomena. Most often, they are used when it is necessary to compare the performance of organizations and their structural divisions.

In the process of applying matrix methods of analysis, several stages can be distinguished.

At the first stage, a system of economic indicators is formed, and on its basis a matrix of initial data is compiled: a table whose rows carry the numbers of the systems under comparison (i = 1, 2, …, n) and whose columns carry the numbers of the indicators (j = 1, 2, …, m).

At the second stage, for each column, the largest of the available indicator values is identified and taken as a unit.

After that, all values in that column are divided by the largest value, forming a matrix of standardized coefficients.

At the third stage, all components of the matrix are squared. If they are of different significance, each matrix indicator is assigned a certain weighting coefficient k, whose value is determined by expert judgment.

At the last, fourth stage, the found rating values Rj are arranged in increasing or decreasing order.
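The four stages can be sketched end to end. All data and weights below are made-up illustrative values, not figures from any real analysis:

```python
# Matrix rating method: standardize by column maxima, square, weight, sum.
data = [  # rows: organizations i, columns: indicators j (hypothetical data)
    [10.0, 200.0, 5.0],
    [ 8.0, 250.0, 4.0],
    [ 9.0, 150.0, 6.0],
]
weights = [1.0, 1.0, 1.0]  # expert weighting coefficients k_j (assumed equal)

# stage 2: the largest value in each column is taken as a unit
col_max = [max(row[j] for row in data) for j in range(3)]
standardized = [[row[j] / col_max[j] for j in range(3)] for row in data]
# stage 3: square the standardized coefficients and apply the weights
ratings = [sum(weights[j] * standardized[i][j] ** 2 for j in range(3))
           for i in range(3)]
# stage 4: order organizations by decreasing rating R_j
ranking = sorted(range(3), key=lambda i: -ratings[i])
print(ranking)  # [0, 2, 1]
```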

The above matrix methods should be used, for example, in a comparative analysis of various investment projects, as well as in assessing other economic performance indicators of organizations.