
Finding the inverse matrix: three algorithms and examples

Let A be a square matrix of order n.

The matrix A⁻¹ is called the inverse of the matrix A if A·A⁻¹ = E, where E is the identity matrix of order n.

An identity matrix is a square matrix in which all the elements on the main diagonal (running from the upper left corner to the lower right corner) are ones and all the remaining elements are zeros, for example:

An inverse matrix can exist only for square matrices, i.e. for matrices in which the number of rows equals the number of columns.

Theorem for the existence condition of an inverse matrix

For a matrix to have an inverse, it is necessary and sufficient that the matrix be non-singular.

The matrix A = (A₁, A₂, …, Aₙ) is called non-singular (non-degenerate) if its column vectors are linearly independent. The number of linearly independent column vectors of a matrix is called the rank of the matrix. Therefore, an inverse matrix exists if and only if the rank of the matrix equals its order, i.e. r = n.
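To make the criterion concrete, here is a small Python sketch with made-up 2×2 matrices: the determinant is nonzero exactly when the columns are linearly independent.

```python
# Invertibility check via the determinant (equivalently, rank = n).
def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[2, 1], [4, 3]]   # columns independent -> det = 2, inverse exists
B = [[1, 2], [2, 4]]   # second column = 2 * first -> det = 0, singular

print(det2(A))  # 2
print(det2(B))  # 0
```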

Algorithm for finding the inverse matrix

  1. Write matrix A into the table used for solving systems of equations by the Gaussian method, and append the identity matrix E to it on the right (in place of the right-hand sides of the equations).
  2. Using Jordan transformations, reduce matrix A to a matrix consisting of unit columns; the same transformations must simultaneously be applied to the matrix E.
  3. If necessary, rearrange the rows (equations) of the last table so that the identity matrix E appears under the matrix A of the original table.
  4. Read off the inverse matrix A⁻¹, which then stands in the last table under the matrix E of the original table.
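The steps above can be sketched as a small Gauss-Jordan inverter in Python. This is a minimal illustration over exact fractions; the sample matrix and the pivot search are implementation choices, not taken from the text.

```python
from fractions import Fraction

def inverse_gauss_jordan(a):
    """Invert a square matrix by Gauss-Jordan elimination on [A | E]."""
    n = len(a)
    # Build the augmented matrix [A | E] with exact rational arithmetic.
    aug = [[Fraction(a[i][j]) for j in range(n)] +
           [Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
    for col in range(n):
        # Find a row with a nonzero entry in this column to use as the pivot.
        pivot = next((r for r in range(col, n) if aug[r][col] != 0), None)
        if pivot is None:
            raise ValueError("matrix is singular, no inverse exists")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Scale the pivot row so the pivot entry becomes 1.
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    # The right half now holds the inverse.
    return [row[n:] for row in aug]

A = [[2, 1], [5, 3]]                       # det = 1
print(inverse_gauss_jordan(A) == [[3, -1], [-5, 2]])  # True
```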
Example 1

For matrix A, find the inverse matrix A⁻¹.

Solution: We write down matrix A and append the identity matrix E on the right. Using Jordan transformations, we reduce matrix A to the identity matrix E. The calculations are given in Table 31.1.

Let's check the correctness of the calculations by multiplying the original matrix A by the inverse matrix A⁻¹.

As a result of matrix multiplication, the identity matrix was obtained. Therefore, the calculations were performed correctly.

Answer:

Solving matrix equations

Matrix equations can look like:

AX = B, XA = B, AXB = C,

where A, B, C are the specified matrices, X is the desired matrix.

Matrix equations are solved by multiplying both sides of the equation by the appropriate inverse matrix.

For example, to find the matrix X from the equation AX = B, multiply both sides of the equation by A⁻¹ on the left: A⁻¹AX = A⁻¹B, whence X = A⁻¹B.

Therefore, to find the solution of this equation, you need to find the inverse matrix A⁻¹ and multiply it by the matrix on the right-hand side of the equation.

Other equations are solved similarly.
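For instance, with X = A⁻¹B (all matrices below are hypothetical, chosen so the inverse is exact):

```python
# Solving AX = B by multiplying on the left by the inverse: X = A^(-1) B.
# (For XA = B one would multiply on the right instead: X = B A^(-1).)
def matmul(p, q):
    """Product of two matrices given as nested lists."""
    return [[sum(p[i][k] * q[k][j] for k in range(len(q)))
             for j in range(len(q[0]))] for i in range(len(p))]

A     = [[2, 1], [5, 3]]     # det(A) = 1
A_inv = [[3, -1], [-5, 2]]   # inverse of A, found beforehand
B     = [[1], [2]]

X = matmul(A_inv, B)
print(X)                   # [[1], [-1]]
print(matmul(A, X) == B)   # True: X indeed solves AX = B
```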

Example 2

Solve the equation AX = B if

Solution: Since the inverse matrix A⁻¹ is known (see Example 1), X = A⁻¹B:

Matrix method in economic analysis

Along with other techniques, matrix methods are also used in economic analysis. These methods are based on linear and vector-matrix algebra and serve to analyze complex, multidimensional economic phenomena. Most often they are applied when a comparative assessment of the performance of organizations and their structural divisions is required.

In the process of applying matrix analysis methods, several stages can be distinguished.

At the first stage, a system of economic indicators is formed and a matrix of initial data is compiled from it: a table in which the numbers of the systems under comparison are listed in its rows (i = 1, 2, …, n) and the numbers of the indicators in its columns (j = 1, 2, …, m).

At the second stage, the largest value of each indicator (each column) is identified and taken as one.

After that, all values in the column are divided by this largest value, yielding a matrix of standardized coefficients.

At the third stage, all components of the matrix are squared. If the indicators differ in significance, each matrix element is assigned a weight coefficient k, determined by expert judgment.

At the fourth and final stage, the resulting rating values Rj are ranked in increasing or decreasing order.

The matrix methods outlined should be used, for example, in a comparative analysis of various investment projects, as well as in assessing other economic indicators of organizations.
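The four stages might look as follows in Python. All organization data, the equal weight coefficients, and the summing of weighted squares into a rating are made-up illustrations of the scheme described above.

```python
# Hypothetical data: rows = organizations, columns = indicators.
data = [[4.0, 10.0],
        [2.0, 20.0],
        [3.0, 15.0]]
weights = [1.0, 1.0]          # expert weight coefficients k (assumed equal here)

n_rows, n_cols = len(data), len(data[0])

# Stage 2: divide each column by its maximum -> standardized coefficients.
col_max = [max(row[j] for row in data) for j in range(n_cols)]
std = [[data[i][j] / col_max[j] for j in range(n_cols)] for i in range(n_rows)]

# Stage 3: square the standardized coefficients, applying the weights.
# Stage 4: sum each row into a rating R and rank in decreasing order.
ratings = [sum(weights[j] * std[i][j] ** 2 for j in range(n_cols))
           for i in range(n_rows)]
ranking = sorted(range(n_rows), key=lambda i: ratings[i], reverse=True)

print(ratings)   # [1.25, 1.25, 1.125]
print(ranking)   # [0, 1, 2]
```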

For any non-singular matrix A there is a unique matrix A⁻¹ such that

A·A⁻¹ = A⁻¹·A = E,

where E is the identity matrix of the same order as A. The matrix A⁻¹ is called the inverse of matrix A.

In case someone forgot: in the identity matrix, apart from the diagonal filled with ones, all other positions are filled with zeros. An example of an identity matrix:

Finding the inverse matrix using the adjoint matrix method

The inverse matrix is defined by the formula:

where A_ij is the algebraic complement (cofactor) of the element a_ij.

That is, to calculate the inverse matrix you need to compute the determinant of the matrix, find the algebraic complements of all its elements and compose a new matrix from them, then transpose this matrix, and finally divide each element of the transposed matrix by the determinant of the original matrix.
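For a 2×2 matrix this recipe reduces to a short closed form; a sketch (the sample matrix is chosen to match the cofactors in the example that follows, where det A = 2):

```python
# Adjugate method for a 2x2 matrix: A^(-1) = (1/det A) * adj(A),
# where adj(A) swaps the diagonal entries and negates the off-diagonal ones
# (the transpose of the cofactor matrix, done in one step).
def inverse_2x2(a):
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    if det == 0:
        raise ValueError("singular matrix has no inverse")
    return [[ a[1][1] / det, -a[0][1] / det],
            [-a[1][0] / det,  a[0][0] / det]]

A = [[2, 1], [4, 3]]       # det A = 2
print(inverse_2x2(A))      # [[1.5, -0.5], [-2.0, 1.0]]
```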

Let's look at a few examples.

Find A⁻¹ for the matrix

Solution. Let's find A⁻¹ using the adjoint matrix method. We have det A = 2. Let us find the algebraic complements of the elements of matrix A. In this case, the algebraic complements of the matrix elements are the corresponding elements of the matrix itself, taken with the sign prescribed by the formula

We have A₁₁ = 3, A₁₂ = −4, A₂₁ = −1, A₂₂ = 2. We form the adjoint matrix

We transpose the matrix A*:

We find the inverse matrix using the formula:

We get:

Using the adjoint matrix method, find A⁻¹ if

Solution. First of all, we calculate the determinant of this matrix to verify that the inverse matrix exists. We have

Here we added to the elements of the second row the elements of the third row multiplied by (−1), and then expanded the determinant along the second row. Since the determinant of this matrix is nonzero, its inverse exists. To construct the adjoint matrix, we find the algebraic complements of the elements of this matrix. We have

According to the formula

we transpose the matrix A*:

Then according to the formula

Finding the inverse matrix using the method of elementary transformations

In addition to the method of finding the inverse matrix, which follows from the formula (the adjoint matrix method), there is a method for finding the inverse matrix, called the method of elementary transformations.

Elementary matrix transformations

The following transformations are called elementary matrix transformations:

1) rearrangement of rows (columns);

2) multiplying a row (column) by a number other than zero;

3) adding to the elements of a row (column) the corresponding elements of another row (column), previously multiplied by a certain number.

To find the matrix A⁻¹, we construct a rectangular matrix B = (A|E) of order n×2n, appending the identity matrix E to matrix A on the right, across a dividing line:

Let's look at an example.

Using the method of elementary transformations, find A -1 if

Solution. We form matrix B:

Let us denote the rows of matrix B by α₁, α₂, α₃. Let us perform the following transformations on the rows of matrix B.

The inverse of a given matrix is the matrix by which the original matrix must be multiplied to obtain the identity matrix. A necessary and sufficient condition for an inverse matrix to exist is that the determinant of the original matrix be nonzero (which in turn implies that the matrix must be square). If the determinant of a matrix equals zero, the matrix is called singular and has no inverse. In higher mathematics inverse matrices are important and are used to solve a number of problems; for example, the matrix method for solving systems of equations is built on finding the inverse matrix. Two methods of computation are common: the Gauss-Jordan method, which involves a large number of elementary transformations inside the matrix, and the method of algebraic complements, which involves computing the determinant and the algebraic complements of all the elements.


Let's consider the problem of defining the inverse operation of matrix multiplication.

Let A be a square matrix of order n. A matrix A^{-1} that satisfies, together with the given matrix A, the equalities

A^{-1}\cdot A=A\cdot A^{-1}=E,


is called the inverse. The matrix A is called invertible if an inverse exists for it, and non-invertible otherwise.

From the definition it follows that if the inverse matrix A^{-1} exists, it is square of the same order as A. However, not every square matrix has an inverse. If the determinant of a matrix A equals zero (\det(A)=0), then no inverse exists for it. Indeed, applying the theorem on the determinant of a product of matrices to the identity matrix E=A^{-1}A, we obtain a contradiction

\det(E)=\det(A^{-1}\cdot A)=\det(A^{-1})\det(A)=\det(A^{-1})\cdot0=0,


since the determinant of the identity matrix equals 1. It turns out that a nonzero determinant is the only condition a square matrix must satisfy for an inverse to exist. Recall that a square matrix whose determinant equals zero is called singular (degenerate); otherwise it is called non-singular (non-degenerate).

Theorem 4.1 on the existence and uniqueness of the inverse matrix. A square matrix A=\begin{pmatrix}a_{11}&\cdots&a_{1n}\\ \vdots&\ddots&\vdots\\ a_{n1}&\cdots&a_{nn}\end{pmatrix}, whose determinant is nonzero, has an inverse matrix, and only one:

A^{-1}=\frac{1}{\det(A)}\cdot\! \begin{pmatrix}A_{11}&A_{21}&\cdots&A_{n1}\\ A_{12}&A_{22}&\cdots&A_{n2}\\ \vdots&\vdots&\ddots&\vdots\\ A_{1n}&A_{2n}&\cdots&A_{nn}\end{pmatrix}= \frac{1}{\det(A)}\cdot A^{+},

where A^{+} is the transpose of the matrix composed of the algebraic complements of the elements of the matrix A.

The matrix A^{+} is called the adjoint matrix with respect to the matrix A.

In fact, the matrix \frac{1}{\det(A)}\,A^{+} exists under the condition \det(A)\ne0. It is necessary to show that it is inverse to A, i.e. that it satisfies two conditions:

\begin{aligned}\mathsf{1)}&~A\cdot\!\left(\frac{1}{\det(A)}\cdot A^{+}\right)=E;\\ \mathsf{2)}&~\!\left(\frac{1}{\det(A)}\cdot A^{+}\right)\!\cdot A=E.\end{aligned}

Let's prove the first equality. According to paragraph 4 of remarks 2.3, it follows from the properties of the determinant that AA^{+}=\det(A)\cdot E. Therefore

A\cdot\!\left(\frac{1}{\det(A)}\cdot A^{+}\right)= \frac{1}{\det(A)}\cdot AA^{+}= \frac{1}{\det(A)}\cdot\det(A)\cdot E=E,

which is what needed to be shown. The second equality is proved similarly. Therefore, under the condition \det(A)\ne0, the matrix A has the inverse

A^{-1}=\frac{1}{\det(A)}\cdot A^{+}.

We prove the uniqueness of the inverse matrix by contradiction. Suppose that, in addition to the matrix A^{-1}, there is another inverse matrix B\,(B\ne A^{-1}) such that AB=E. Multiplying both sides of this equality on the left by the matrix A^{-1}, we get \underbrace{A^{-1}AB}_{E}=A^{-1}E. Hence B=A^{-1}, which contradicts the assumption B\ne A^{-1}. Therefore, the inverse matrix is unique.

Notes 4.1

1. From the definition it follows that the matrices A and A^(-1) commute.

2. The inverse of a non-singular diagonal matrix is also diagonal:

\Bigl[\operatorname{diag}(a_{11},a_{22},\ldots,a_{nn})\Bigr]^{-1}= \operatorname{diag}\!\left(\frac{1}{a_{11}},\,\frac{1}{a_{22}},\,\ldots,\,\frac{1}{a_{nn}}\right)\!.

3. The inverse of a non-singular lower (upper) triangular matrix is ​​lower (upper) triangular.

4. Elementary matrices have inverses, which are also elementary (see paragraph 1 of remarks 1.11).

Properties of an inverse matrix

The matrix inversion operation has the following properties:

\begin{aligned}\bold{1.}&~~ (A^{-1})^{-1}=A\,;\\ \bold{2.}&~~ (AB)^{-1}=B^{-1}A^{-1}\,;\\ \bold{3.}&~~ (A^T)^{-1}=(A^{-1})^T\,;\\ \bold{4.}&~~ \det(A^{-1})=\frac{1}{\det(A)}\,;\\ \bold{5.}&~~ E^{-1}=E\,,\end{aligned}

provided the operations specified in equalities 1-4 make sense.
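Property 2 can be checked numerically; a sketch with hypothetical 2×2 matrices whose determinants equal 1, so all the inverses are exact integer matrices:

```python
# Checking property 2, (AB)^{-1} = B^{-1} A^{-1}, on concrete matrices.
def matmul(p, q):
    """Product of two matrices given as nested lists."""
    return [[sum(p[i][k] * q[k][j] for k in range(len(q)))
             for j in range(len(q[0]))] for i in range(len(p))]

A, A_inv = [[2, 1], [5, 3]], [[3, -1], [-5, 2]]    # det(A) = 1
B, B_inv = [[1, 1], [1, 2]], [[2, -1], [-1, 1]]    # det(B) = 1

AB  = matmul(A, B)            # [[3, 4], [8, 11]], det(AB) = 1
lhs = [[11, -4], [-8, 3]]     # (AB)^{-1}: the adjugate, since det(AB) = 1
rhs = matmul(B_inv, A_inv)    # B^{-1} A^{-1}

print(lhs == rhs)                              # True
print(matmul(AB, rhs) == [[1, 0], [0, 1]])     # True
```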

Let's prove property 2: the product AB of non-singular square matrices of the same order has an inverse matrix, and (AB)^{-1}=B^{-1}A^{-1}.

Indeed, the determinant of the product of matrices AB is not equal to zero, since

\det(A\cdot B)=\det(A)\cdot\det(B), where \det(A)\ne0,~\det(B)\ne0.

Therefore, the inverse matrix (AB)^{-1} exists and is unique. Let us show by definition that the matrix B^{-1}A^{-1} is the inverse of the matrix AB. Indeed, (B^{-1}A^{-1})(AB)=B^{-1}(A^{-1}A)B=B^{-1}EB=B^{-1}B=E.

Finding the inverse matrix is a problem most often solved by two methods:

  • the method of algebraic complements, which requires finding determinants and transposing matrices;
  • the Gaussian method of eliminating unknowns, which requires performing elementary transformations of matrices (adding rows, multiplying a row by a number, etc.).

For those who are especially curious, there are other methods, for example, the method of linear transformations. In this lesson we will analyze the three mentioned methods and algorithms for finding the inverse matrix using these methods.


The inverse matrix that needs to be found for a given square matrix A is the matrix whose product with the matrix A on the right is the identity matrix, i.e.

. (1)

An identity matrix is ​​a diagonal matrix in which all diagonal elements are equal to one.

Theorem. For every non-singular (non-degenerate) square matrix one can find an inverse matrix, and only one. For a singular (degenerate) square matrix no inverse matrix exists.

A square matrix is called non-singular (non-degenerate) if its determinant is nonzero, and singular (degenerate) if its determinant equals zero.

The inverse of a matrix can only be found for a square matrix. Naturally, the inverse matrix will also be square and of the same order as the given matrix. A matrix for which an inverse matrix can be found is called an invertible matrix.

For the inverse matrix there is a close analogy with the reciprocal of a number. For every number a not equal to zero there exists a number b such that the product ab equals 1. The number b is called the reciprocal of the number a. For example, the reciprocal of 7 is 1/7, since 7·(1/7) = 1.

Finding the inverse matrix using the method of algebraic complements (adjugate matrix)

For a non-singular square matrix A the inverse is the matrix

where the first factor is the reciprocal of the determinant of the matrix A and the second is the adjugate of the matrix A.

The adjugate of a square matrix A is the matrix of the same order whose elements are the algebraic complements of the corresponding elements of the determinant of the matrix transposed with respect to A. Thus, if

then

and

Algorithm for finding the inverse matrix using the method of algebraic complements

1. Find the determinant of the given matrix A. If the determinant equals zero, finding the inverse matrix stops, since the matrix is singular and no inverse exists.

2. Find the matrix transposed with respect to A.

3. Calculate the elements of the adjugate matrix as the algebraic complements of the matrix found in step 2.

4. Apply formula (2): multiply the reciprocal of the determinant of the matrix A by the adjugate matrix found in step 3.

5. Check the result obtained in step 4 by multiplying the given matrix A by the inverse matrix. If the product of these matrices equals the identity matrix, the inverse matrix was found correctly; otherwise, start the solution process again.
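Steps 1-4 can be sketched for a 3×3 matrix in Python. The matrix is made up, and the sketch computes the cofactors first and transposes afterwards, which yields the same adjugate matrix.

```python
# Inverse of a 3x3 matrix via determinant, cofactors and the adjugate.
def minor(m, i, j):
    """The 2x2 matrix left after deleting row i and column j."""
    return [[m[r][c] for c in range(3) if c != j] for r in range(3) if r != i]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def det3(m):
    """3x3 determinant by expansion along the first row."""
    return sum((-1) ** j * m[0][j] * det2(minor(m, 0, j)) for j in range(3))

def inverse_3x3(m):
    d = det3(m)                               # step 1: the determinant
    if d == 0:
        raise ValueError("singular matrix has no inverse")
    # steps 2-3: cofactor matrix, then transpose -> the adjugate
    cof = [[(-1) ** (i + j) * det2(minor(m, i, j)) for j in range(3)]
           for i in range(3)]
    adj = [[cof[j][i] for j in range(3)] for i in range(3)]
    # step 4: divide every element by the determinant
    return [[adj[i][j] / d for j in range(3)] for i in range(3)]

A = [[1, 2, 0],
     [0, 1, 0],
     [0, 0, 1]]
print(inverse_3x3(A))  # [[1.0, -2.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
```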

Example 1. For matrix

find the inverse matrix.

Solution. To find the inverse matrix, we need the determinant of the matrix A, which we find by the rule of triangles:

Therefore, the matrix A is non-singular (non-degenerate) and an inverse exists for it.

Let's find the adjugate of the matrix A.

First we find the matrix transposed with respect to the matrix A:

We calculate the elements of the adjugate matrix as the algebraic complements of the matrix transposed with respect to A:

Therefore, the adjugate of the matrix A has the form

Comment. The order of operations may differ: you can first calculate the algebraic complements of the matrix A and then transpose the resulting matrix of algebraic complements. The result is the same adjugate matrix.

Applying formula (2), we find the matrix inverse to the matrix A:

Finding the inverse matrix using the Gaussian unknown elimination method

The first step in finding the inverse of a matrix by the Gaussian elimination method is to append to the matrix A the identity matrix of the same order, separating them with a vertical bar; the result is the dual matrix (A|E). Elementary row transformations that reduce its left half to the identity matrix simultaneously turn the right half into the inverse matrix.

Algorithm for finding the inverse matrix using the Gaussian unknown elimination method

1. Append to the matrix A an identity matrix of the same order.

2. Transform the resulting dual matrix so that its left side becomes the identity matrix; the right side, in place of the identity matrix, then automatically becomes the inverse matrix. The matrix A on the left side is reduced to the identity matrix by elementary matrix transformations.

3. If in the process of transforming the matrix A into the identity matrix some row or column contains only zeros, then the determinant of the matrix equals zero; consequently, the matrix A is singular and has no inverse, and the search for the inverse matrix stops.

Example 2. For matrix

find the inverse matrix.

Solution. We compile the dual matrix and transform it so that the identity matrix appears on the left side. We begin the transformation.

Multiply the first row of both the left and the right matrix by (−3) and add it to the second row, and then multiply the first row by (−4) and add it to the third row; we then get

.

To avoid fractional numbers in subsequent transformations, let us first create a one in the second row on the left side of the dual matrix. To do this, multiply the second row by 2 and subtract the third row from it; then we get

.

Let's add the second row to the first, and then multiply the second row by (−9) and add it to the third row. Then we get

.

Divide the third line by 8, then

.

Multiply the third line by 2 and add it to the second line. It turns out:

.

Let's swap the second and third lines, then we finally get:

.

We see that on the left side we have the identity matrix, therefore, on the right side we have the inverse matrix. Thus:

.

You can check the correctness of the calculations by multiplying the original matrix by the found inverse matrix:

The result should be the identity matrix.

Example 3. For matrix

find the inverse matrix.

Solution. We compile the dual matrix

and we will transform it.

We multiply the first row by 3 and the second row by 2 and subtract the first from the second; then we multiply the first row by 5 and the third row by 2 and subtract the first from the third row; we get

.

We multiply the first line by 2 and add it to the second, and then subtract the second from the third line, then we get

.

We see that in the third row on the left side all elements equal zero. Therefore, the matrix is singular and has no inverse matrix. We stop the further search for the inverse matrix.
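The singular case can also be recognized ahead of time from the determinant; a sketch with a made-up matrix whose second row is twice the first:

```python
# Linearly dependent rows -> zero determinant -> no inverse exists,
# which is exactly why elimination produces a zero row on the left side.
def det3(m):
    """3x3 determinant by expansion along the first row."""
    def d2(a, b, c, d):
        return a * d - b * c
    return (m[0][0] * d2(m[1][1], m[1][2], m[2][1], m[2][2])
          - m[0][1] * d2(m[1][0], m[1][2], m[2][0], m[2][2])
          + m[0][2] * d2(m[1][0], m[1][1], m[2][0], m[2][1]))

A = [[1, 2, 3],
     [2, 4, 6],   # = 2 * first row
     [1, 1, 1]]
print(det3(A))  # 0
```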