This post features two examples of algebraic manipulation and their relation to the symmetry of the matrices representing the expressions. The algebraic expressions we investigate are quadratic and multivariate, and can be written in the matrix form X·M·Xᵀ,
where X is a row matrix containing all the variables, e.g. X = [x y z], and M is a square matrix of order n when X has n variables. Let's see an example:
So this two-variable quadratic expression can be represented by a square matrix of order 2.
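As a concrete sketch of this correspondence, with coefficients of my own choosing (the post's original example is not reproduced here), the matrix form can be checked symbolically:

```python
# Illustrative only: the expression 5x^2 + 8xy + 5y^2 is my own choice.
import sympy as sp

x, y = sp.symbols("x y")
X = sp.Matrix([[x, y]])                # row matrix of the variables
M = sp.Matrix([[5, 4], [4, 5]])        # symmetric coefficient matrix of order 2
expr = (X * M * X.T)[0, 0]             # the quadratic form X M X^T (a scalar)

print(sp.expand(expr))                 # 5*x**2 + 8*x*y + 5*y**2
```

The off-diagonal entries of M each carry half of a mixed term's coefficient, which is why a symmetric M always exists for a quadratic expression.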
We will see in the examples below how to take advantage of some properties of these matrices by using them for specific kinds of factorization, and how to reach algebraic manipulation by the rearrangement of the factors.
The first example has theoretical significance only, because the effort required to reach the matrix form is nearly the same as factoring the expression directly by noticing a pattern. The method described in the second example is used regularly in mechanics.
1. Symmetric matrices of rank 1
Every rank 1 matrix can be written as the product of a column and a row matrix. In addition to this, symmetric matrices of rank 1 can be expressed as the product of a column matrix and its transpose. Example:
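This fact can be sketched numerically; the column [2 6 7]ᵀ below uses the values that appear later in this example:

```python
import numpy as np

# A column matrix times its own transpose is always symmetric and has rank 1.
v = np.array([[2], [6], [7]])          # column matrix (values from this example)
M = v @ v.T                            # 3x3 symmetric matrix

print(np.linalg.matrix_rank(M))        # 1
print(np.array_equal(M, M.T))          # True
```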
The 3×3 matrix in the example above has two properties:
- it is symmetric
- its rank is 1
So we can factor it as explained above. The elements of the factor matrix are obtained as the square roots of the values in the diagonal of the 3×3 matrix: N = [2 6 7]. With this, the following steps are possible:
So we have factored the algebraic expression, using the properties of the matrix M.
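Assuming the expression is the one generated by N = [2 6 7], the whole factorization X M Xᵀ = (X Nᵀ)(N Xᵀ) can be verified symbolically:

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
X = sp.Matrix([[x, y, z]])
N = sp.Matrix([[2, 6, 7]])             # factor row matrix from this example
M = N.T * N                            # symmetric rank-1 matrix

quadratic = sp.expand((X * M * X.T)[0, 0])
factored = sp.factor(quadratic)
print(factored)                        # (2*x + 6*y + 7*z)**2
```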
2. Sum of squares
Every quadratic expression whose matrix form uses a symmetric matrix can be transformed into a sum of squares, using the spectral theorem of linear algebra. Example:
Notice that this 2×2 matrix is symmetric, but its rank is 2, not 1, so we could not use the method from the first example; but we don't need to. The manipulation in this example requires only one property: the symmetry of the matrix.
Then we have to calculate the eigenvalues and eigenvectors of the matrix:
- The eigenvalues are: 9, 1
- And the eigenvectors:
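As a sketch, [[5, 4], [4, 5]] is one symmetric 2×2 matrix whose eigenvalues are exactly 9 and 1; it is an assumption of mine, not necessarily the post's matrix, but it lets the computation be reproduced:

```python
import numpy as np

# Assumed stand-in matrix: symmetric, with eigenvalues 9 and 1.
M = np.array([[5.0, 4.0], [4.0, 5.0]])

# eigh is the NumPy routine for symmetric matrices; it returns the
# eigenvalues in ascending order and orthonormal eigenvectors as columns.
eigenvalues, Q = np.linalg.eigh(M)

print(eigenvalues)                     # [1. 9.]
print(Q)                               # normalized eigenvectors as columns
```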
Now it is possible to factor the matrix by the spectral theorem, which states:
Where Q is the matrix whose columns are the eigenvectors normalized to unit length, and Λ (upper-case lambda) is the eigenvalue matrix, which contains all eigenvalues in its diagonal while its other values are zero.
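The decomposition can be verified numerically; [[5, 4], [4, 5]] below is an assumed example matrix of mine with eigenvalues 9 and 1:

```python
import numpy as np

M = np.array([[5.0, 4.0], [4.0, 5.0]])   # assumed symmetric example matrix
eigenvalues, Q = np.linalg.eigh(M)
Lam = np.diag(eigenvalues)               # Λ: eigenvalues on the diagonal

# Spectral theorem: M = Q Λ Q^T, with Q orthogonal (Q^T = Q^-1)
print(np.allclose(M, Q @ Lam @ Q.T))     # True
print(np.allclose(Q.T @ Q, np.eye(2)))   # True
```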
Now the following steps are possible:
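These steps can be carried out symbolically. I again assume the matrix [[5, 4], [4, 5]] (eigenvalues 9 and 1); substituting U = X·Q turns X M Xᵀ = (XQ) Λ (XQ)ᵀ into a sum of squares:

```python
import sympy as sp

x, y = sp.symbols("x y")
X = sp.Matrix([[x, y]])
M = sp.Matrix([[5, 4], [4, 5]])        # assumed matrix with eigenvalues 9 and 1

# Orthonormal eigenvectors as columns of Q; eigenvalues on the diagonal of Lam
Q = sp.Matrix([[1, 1], [1, -1]]) / sp.sqrt(2)
Lam = sp.diag(9, 1)
assert sp.simplify(M - Q * Lam * Q.T) == sp.zeros(2, 2)

# New coordinates u, v along the eigenvector directions
u, v = (X * Q)[0, 0], (X * Q)[0, 1]
sum_of_squares = 9 * u**2 + 1 * v**2
print(sp.expand(sum_of_squares))       # 5*x**2 + 8*x*y + 5*y**2
```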
Note: In the final step we could easily move the coefficients under the squares, but leaving it this way reflects the original use of this theorem: finding the axes of an ellipse.
An ellipse is defined by an equation of this form: the eigenvectors give the directions of its axes, and the eigenvalues determine their lengths. This is why the spectral theorem also has the name principal axis theorem, which was discovered by James Joseph Sylvester.
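As a sketch of this geometric reading, assuming the symmetric matrix [[5, 4], [4, 5]] (eigenvalues 9 and 1) and a right-hand-side constant of 9, both my own choices: for an ellipse X M Xᵀ = c, each semi-axis has length √(c/λ) along the corresponding eigenvector.

```python
import numpy as np

M = np.array([[5.0, 4.0], [4.0, 5.0]])   # assumed example matrix
c = 9.0                                  # assumed right-hand-side constant
eigenvalues, Q = np.linalg.eigh(M)

# Semi-axis lengths sqrt(c / eigenvalue), directions given by columns of Q
semi_axes = np.sqrt(c / eigenvalues)
print(semi_axes)                         # [3. 1.]
```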