Linear and algebraic equations can be found in almost all branches of numerical analysis. But their most natural application in engineering is in the analysis of linear systems. The broad class of linear systems includes structures, elastic solids, heat flows, electromagnetic fields, electrical circuits and much more.

Modeling linear systems gives rise to linear equations of the form Ax = b, where b is the input vector and the vector x represents the response of the system. The matrix of coefficients A, reflecting the intrinsic characteristics of the system, is independent of the input. That is, if we modify the input, the system of linear equations to be solved will have a different vector b but the same matrix of coefficients A.
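As a small illustration of this point (using NumPy's `linalg.solve` and a hypothetical 2×2 system invented for the example), changing the input changes b but not A:

```python
import numpy as np

# A hypothetical 2x2 system: A encodes the system, b the input
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

b1 = np.array([3.0, 4.0])   # first input
b2 = np.array([5.0, 5.0])   # a different input, same system A

# The same coefficient matrix serves both inputs
x1 = np.linalg.solve(A, b1)  # response to b1
x2 = np.linalg.solve(A, b2)  # response to b2
print(x1)
print(x2)
```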

## Methods of solving systems of linear equations

In addition to iterative methods, which we will not talk about here, there are so-called direct methods. Their common feature is that they try to transform the original equations into an equivalent system that is easier to solve.

This transformation can be obtained by applying 3 fundamental operations:

- Exchange two rows of A (the determinant of A changes sign);
- Multiply a row of A by a non-zero scalar λ (the determinant of A is multiplied by the same scalar);
- Replace a row of A with the sum of that row and another row multiplied by a scalar (this leaves the determinant of A unchanged).

Of course, these operations do not affect the solutions of the system, which remain the same, but they can change the coefficients of A and its determinant.
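The effect of each operation on the determinant can be checked numerically; here is a minimal sketch with NumPy, using the same 3×3 matrix solved later in the article:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [1.0, -1.0, -1.0],
              [1.0, -2.0, 3.0]])
d0 = np.linalg.det(A)

# 1) exchanging two rows changes the sign of the determinant
B = A.copy()
B[[0, 1]] = B[[1, 0]]
print(np.linalg.det(B), -d0)

# 2) multiplying a row by a non-zero scalar lam multiplies det(A) by lam
lam = 2.0
C = A.copy()
C[1] = lam * C[1]
print(np.linalg.det(C), lam * d0)

# 3) adding a multiple of one row to another leaves det(A) unchanged
D = A.copy()
D[2] = D[2] + 3.0 * D[0]
print(np.linalg.det(D), d0)
```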

The following table summarizes the 3 main direct methods of solution:

Method | Initial form | Final form
---|---|---
Gauss elimination | Ax=b | Ux=c
LU decomposition | Ax=b | LUx=b
Gauss-Jordan elimination | Ax=b | Ix=c

In the final-form column of the table, 3 standard matrices appear:

- U, upper triangular;
- L, lower triangular;
- I, the identity matrix.

The reason why direct methods try to transform the matrix A into a triangular matrix is immediately evident when writing out, for example, Ux = c, which for a 3×3 system means solving the following system of linear equations:

$$
\begin{aligned}
U_{11}x_1 + U_{12}x_2 + U_{13}x_3 &= c_1 \\
U_{22}x_2 + U_{23}x_3 &= c_2 \\
U_{33}x_3 &= c_3
\end{aligned}
$$

As you can see, it is easy to solve starting from the last equation: x_3 follows immediately, and each preceding equation then contains a single new unknown.
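The "start from the last equation" idea can be sketched as a small back-substitution routine; the function name and the values of U and c below are illustrative, not part of the worked example:

```python
import numpy as np

def back_substitution(U, c):
    """Solve Ux = c for an upper triangular U, starting from the last equation."""
    n = len(c)
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):
        # subtract the contribution of the already-known unknowns,
        # then divide by the diagonal coefficient
        x[k] = (c[k] - np.dot(U[k, k+1:], x[k+1:])) / U[k, k]
    return x

# Illustrative upper triangular system
U = np.array([[2.0, 1.0, 1.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 4.0]])
c = np.array([7.0, 5.0, 8.0])
print(back_substitution(U, c))  # -> [2. 1. 2.]
```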

## Gauss elimination method

It is the most familiar method for solving systems of linear equations. It consists of two phases: the elimination phase and the backward substitution phase. The purpose of the first phase, as indicated in the previous table, is to transform the equations from the form Ax = b into the immediately solvable form Ux = c. The elimination phase proceeds by repeatedly applying the 3rd fundamental operation of the list previously discussed. Without getting lost in the complications of notation for the general case, we will proceed with an example. We will solve the following system of linear equations:

$$
\begin{aligned}
x_1 + x_2 + x_3 &= 1 \\
x_1 - x_2 - x_3 &= 1 \\
x_1 - 2x_2 + 3x_3 &= -5
\end{aligned}
$$

## Manual solution by Gauss’s method

Before we see the algorithm coded in Python, we will carry out the Gauss elimination method by hand. The matrix A of the system is:

$$A = \begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & -1 \\ 1 & -2 & 3 \end{pmatrix}$$

and the vector of constant terms b is:

$$b = \begin{pmatrix} 1 \\ 1 \\ -5 \end{pmatrix}$$

As mentioned, the purpose of the first phase of Gauss's method is to transform the matrix A into an upper triangular matrix U. As can be seen from A, the first step is to make the terms a_{21} and a_{31} zero. To do this we must find, for each row, a constant λ such that λ times row R_1, added to row R_2 (and then to R_3), cancels the desired coefficient.

1. To cancel a_{21} we multiply R_1 by λ = -1 and then add it to R_2:

$$R_2 \leftarrow R_2 - R_1 = (0, -2, -2)$$

Note that the same operation is also performed on the corresponding element of vector b: b_2 ← b_2 - b_1 = 0. Similarly, for the cancellation of the coefficient a_{31} we again take λ = -1:

$$R_3 \leftarrow R_3 - R_1 = (0, -3, 2), \qquad b_3 \leftarrow b_3 - b_1 = -6$$

Replacing the new transformed rows in matrix A, we have:

$$A' = \begin{pmatrix} 1 & 1 & 1 \\ 0 & -2 & -2 \\ 0 & -3 & 2 \end{pmatrix}$$

and the transformed vector b is now:

$$b' = \begin{pmatrix} 1 \\ 0 \\ -6 \end{pmatrix}$$

2. Now let’s cancel the coefficient a_{32}. Here λ = -a_{32}/a_{22} = -(-3)/(-2) = -3/2, so we multiply R_2 by -3/2 and add it to R_3:

$$R_3 \leftarrow R_3 - \tfrac{3}{2} R_2 = (0, 0, 5)$$

Substituting the new row, the matrix becomes:

$$U = \begin{pmatrix} 1 & 1 & 1 \\ 0 & -2 & -2 \\ 0 & 0 & 5 \end{pmatrix}$$

and vector b, whose third component b_3 - (3/2)·b_2 = -6 - 0 = -6, remains unchanged:

$$c = \begin{pmatrix} 1 \\ 0 \\ -6 \end{pmatrix}$$

The elimination process is concluded: as can be seen, the matrix **A** has been transformed into an upper triangular matrix **U**, and the vector b has been transformed into the new vector that in the table we called **c**. At this point, starting from the bottom, it is easy to find the solutions of the linear system:

$$x_3 = \frac{-6}{5} = -1.2, \qquad x_2 = \frac{0 - (-2)\,x_3}{-2} = 1.2, \qquad x_1 = 1 - x_2 - x_3 = 1$$

The solutions then are x = (1, 1.2, -1.2).
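The manual result can be cross-checked with NumPy's built-in solver, a quick sanity check using the same A and b:

```python
import numpy as np

# Cross-check of the manual Gauss elimination with np.linalg.solve
A = np.array([[1.0, 1.0, 1.0],
              [1.0, -1.0, -1.0],
              [1.0, -2.0, 3.0]])
b = np.array([1.0, 1.0, -5.0])

x = np.linalg.solve(A, b)
print(x)  # -> [ 1.   1.2 -1.2]
```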

## Gauss elimination algorithm in Python

There are two variations with respect to the manual procedure: the first is that the rows are transformed by subtraction instead of by addition; the second is that the transformed rows are not written back in full: only the elements belonging to the upper triangular matrix are replaced. In fact, the elements below the diagonal of the transformed matrix are irrelevant to the calculation of the solutions.

```
import numpy as np

def gaussElim(a, b):
    n = len(b)
    # Elimination phase
    for k in range(0, n-1):
        for i in range(k+1, n):
            if a[i,k] != 0.0:
                # if not already zero, define lambda
                lam = a[i,k]/a[k,k]
                # compute the new row of the matrix (columns k+1..n-1 only)
                a[i,k+1:n] = a[i,k+1:n] - lam*a[k,k+1:n]
                # update vector b
                b[i] = b[i] - lam*b[k]
    # Backward substitution
    for k in range(n-1, -1, -1):
        b[k] = (b[k] - np.dot(a[k,k+1:n], b[k+1:n]))/a[k,k]
    return b

# initial coefficients
a = np.array([[1.0, 1.0, 1.0], [1.0, -1.0, -1.0], [1.0, -2.0, 3.0]])
b = np.array([1.0, 1.0, -5.0])
aOrig = a.copy()  # save original matrix A
bOrig = b.copy()  # save original vector b
x = gaussElim(a, b)
# print the transformed A for checking
print("A =\n", a)
print("x =\n", x)
#det = np.prod(np.diagonal(a))  # determinant
#print("\ndet =", det)
# check the result and the numerical precision
print("\nCheck result: [a]{x} - b =\n", np.dot(aOrig, x) - bOrig)
```

## Execution

```
Python 3.8.8
[evaluate ex-gauss.py]
A=
[[ 1. 1. 1.]
[ 1. -2. -2.]
[ 1. -3. 5.]]
x =
[ 1. 1.2 -1.2]
Check result: [a]{x} - b =
[2.22044605e-16 0.00000000e+00 0.00000000e+00]
```

As we said, matrix A does not appear fully triangularized, but only because the elements below the diagonal are not actually replaced, being irrelevant for the calculation of the solutions. If we look instead at the elements of U on and above the diagonal:

$$U = \begin{pmatrix} 1 & 1 & 1 \\ 0 & -2 & -2 \\ 0 & 0 & 5 \end{pmatrix}$$

they correspond exactly to those obtained in our manual process.
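The stale sub-diagonal entries can be discarded in one step with NumPy's `np.triu`; a short sketch, starting from the matrix exactly as the program printed it:

```python
import numpy as np

# matrix `a` as printed after gaussElim: the sub-diagonal
# entries are leftovers from the original A
a = np.array([[1.0, 1.0, 1.0],
              [1.0, -2.0, -2.0],
              [1.0, -3.0, 5.0]])

# np.triu zeroes everything below the diagonal, recovering U
U = np.triu(a)
print(U)
```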