All the data you need.

Tag: Linear Algebra

Powers of a 2×2 matrix in closed form
Here’s something I found surprising: the powers of a 2×2 matrix have a fairly simple closed form. Also, the derivation is only one page [1]. Let A be a 2×2 matrix with eigenvalues α and β. (3Blue1Brown made a nice jingle for finding the eigenvalues of a 2×2 matrix.) If …
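The excerpt cuts off before the formula itself, but one common form of the closed formula (assuming the eigenvalues α and β are distinct) follows from the Cayley–Hamilton theorem and is easy to sanity-check numerically. The matrix below is an arbitrary example of mine, not one from the post:

```python
import numpy as np

# One common closed form, assuming distinct eigenvalues a and b:
#   A^n = (a^n - b^n)/(a - b) * A  -  a*b*(a^(n-1) - b^(n-1))/(a - b) * I
A = np.array([[2., 1.],
              [1., 3.]])          # arbitrary example matrix
a, b = np.linalg.eigvals(A)
n = 7
closed_form = ((a**n - b**n) / (a - b)) * A \
            - a * b * ((a**(n-1) - b**(n-1)) / (a - b)) * np.eye(2)
assert np.allclose(np.linalg.matrix_power(A, n), closed_form)
```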
Bibliography histogram
I recently noticed something in a book I’ve had for five years: the bibliography section ends with a histogram of publication dates for references. I’ve used the book over the last few years, but maybe I haven’t needed to look at the bibliography before. This is taken from Bernstein’s Matrix …
Cofactors, determinants, and adjugates
Let A be an n × n matrix over a field F. The cofactor of an element Aij is the matrix formed by removing the ith row and jth column, denoted A[i, j]. This terminology is less than ideal. The matrix just described is called the cofactor of Aij, but …
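Whatever name one prefers for the signed minors, the computation behind cofactors, determinants, and adjugates is easy to sketch in NumPy. This is an illustrative (and deliberately inefficient) version, not code from the post:

```python
import numpy as np

def adjugate(A):
    """Adjugate via signed determinants of minors (illustrative, not efficient)."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T  # adjugate = transpose of the matrix of signed cofactors

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])
# Key identity: adj(A) A = det(A) I
assert np.allclose(adjugate(A) @ A, np.linalg.det(A) * np.eye(3))
```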
Circulant matrices commute
A few days ago I wrote that circulant matrices all have the same eigenvectors. This post will show that it follows that circulant matrices commute with each other. Recall that a circulant matrix is a square matrix in which the rows are cyclic permutations of each other. If we number …
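A quick numerical illustration of the claim, using scipy.linalg.circulant to build the matrices (a sketch of mine, not the post's own code):

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(1)
A = circulant(rng.standard_normal(6))   # circulant matrix from a random first column
B = circulant(rng.standard_normal(6))

# Circulant matrices share eigenvectors, so they commute:
assert np.allclose(A @ B, B @ A)
```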
Circulant matrices, eigenvectors, and the FFT
A circulant matrix is a square matrix in which each row is a rotation of the previous row. This post will illustrate a connection between circulant matrices and the FFT (Fast Fourier Transform). Circulant matrices: color in the first row however you want. Then move the last element to the …
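The connection, roughly: the discrete Fourier vectors are eigenvectors of every circulant matrix, and the eigenvalues are the FFT of the defining column. A small check of my own, assuming SciPy's convention that circulant(c) has first column c:

```python
import numpy as np
from scipy.linalg import circulant

c = np.random.default_rng(0).standard_normal(8)
C = circulant(c)                  # first column is c
lam = np.fft.fft(c)               # claimed eigenvalues

n = len(c)
j = np.arange(n)
for m in range(n):
    v = np.exp(2j * np.pi * j * m / n)      # m-th Fourier vector
    assert np.allclose(C @ v, lam[m] * v)   # eigenvector with eigenvalue lam[m]
```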
What does rotating a matrix do to its determinant?
This post will look at rotating a matrix 90° and what that does to the determinant. This post was motivated by the previous post. There I quoted a paper that had a determinant with 1s in the right column. I debated rotating the matrix so that the 1s would be …
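The excerpt stops before the answer; rather than state it, here is the experiment one could run with NumPy to see the effect for a few sizes (np.rot90 rotates counterclockwise):

```python
import numpy as np

rng = np.random.default_rng(4)
for n in range(2, 7):
    A = rng.standard_normal((n, n))
    # Compare the determinant before and after a 90 degree rotation
    print(n, np.linalg.det(A), np.linalg.det(np.rot90(A)))
```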
Gaussian elimination
When you solve systems of linear equations, you probably use Gaussian elimination, even if you don’t call it that. You may learn Gaussian elimination before you see it formalized in terms of matrices. So if you’ve had a course in linear algebra, and you sign up for a course in …
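For concreteness, a minimal sketch of Gaussian elimination with partial pivoting; this is illustrative code, not from the post, and not numerically industrial-strength:

```python
import numpy as np

def gaussian_solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting (illustrative sketch)."""
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)
    # Forward elimination
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))          # pivot row
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]  # swap rows k and p
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2., 1., 1.], [4., 3., 3.], [8., 7., 9.]])
b = np.array([1., 2., 3.])
assert np.allclose(gaussian_solve(A, b), np.linalg.solve(A, b))
```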
Self-orthogonal vectors and coding
One of the surprising things about linear algebra over a finite field is that a non-zero vector can be orthogonal to itself. When you take the inner product of a real vector with itself, you get a sum of squares of real numbers. If any element in the sum is …
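A tiny example over GF(2), where 1 + 1 = 0, so any vector with an even number of 1s is orthogonal to itself:

```python
import numpy as np

# Over GF(2), inner products are taken mod 2.
v = np.array([1, 1, 1, 1])
print((v @ v) % 2)   # 0: a nonzero vector orthogonal to itself
```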
Ternary Golay code in Python
Marcel Golay discovered two “perfect” error-correcting codes: one binary and one ternary. These two codes stick out in the classification of perfect codes [1]. The ternary code is a linear code over GF(3), the field with three elements. You can encode a list of 5 base-three digits by multiplying the …
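The excerpt is cut off before the generator matrix, so here is only the mechanics of encoding a linear code over GF(3), shown with a small made-up generator matrix (hypothetical, not the Golay generator):

```python
import numpy as np

# Hypothetical toy generator matrix over GF(3) -- NOT the ternary Golay generator.
G = np.array([[1, 0, 2, 1],
              [0, 1, 1, 2]])

message = np.array([2, 1])          # base-three digits
codeword = (message @ G) % 3        # encoding is matrix multiplication mod 3
print(codeword)                     # [2 1 2 1]
```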
Non-associative multiplication
There are five ways to parenthesize a product of four things: ((ab)c)d, (ab)(cd), a(b(cd)), (a(bc))d, a((bc)d). In a context where multiplication is not associative, the five products above are not necessarily the same. Maybe all five are different. This post will give two examples where the products above are all …
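The post's own two examples are cut off above; as a quick unrelated illustration of the phenomenon, the cross product on R³ is not associative, and the five parenthesizations of a product of four vectors generally differ:

```python
import numpy as np

rng = np.random.default_rng(5)
a, b, c, d = (rng.standard_normal(3) for _ in range(4))
x = np.cross  # a familiar non-associative "multiplication"

products = [
    x(x(x(a, b), c), d),   # ((ab)c)d
    x(x(a, b), x(c, d)),   # (ab)(cd)
    x(a, x(b, x(c, d))),   # a(b(cd))
    x(x(a, x(b, c)), d),   # (a(bc))d
    x(a, x(x(b, c), d)),   # a((bc)d)
]
for p in products:
    print(np.round(p, 3))  # typically five different vectors
```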
Vector spaces and subspaces over finite fields
A surprising amount of linear algebra doesn’t depend on the field you’re working over. You can implicitly assume you’re working over the real numbers R and prove all the basic theorems—say all the theorems that come before getting into eigenvalues in a typical course—and all or nearly all of the …
Face Recognition using Principal Component Analysis
Recent advances in machine learning have made face recognition a much easier problem. But […]
Using Singular Value Decomposition to Build a Recommender System
Singular value decomposition is a very popular linear algebra technique to break down a […]
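As a sketch of the general idea (my toy example, not the article's data or code): keep only the top k singular values to get a low-rank approximation of a ratings matrix, then rank items by the smoothed scores.

```python
import numpy as np

# Hypothetical users-by-items ratings matrix
R = np.array([[5., 4., 0., 1.],
              [4., 5., 1., 0.],
              [1., 0., 5., 4.],
              [0., 1., 4., 5.]])

U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # rank-2 approximation
print(np.round(R_hat, 2))                       # smoothed scores for recommendation
```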
A Gentle Introduction to Vector Space Models
Vector space models represent data as vectors and consider the relationships between them. They are popular in information […]
Principal Component Analysis for Visualization
Principal component analysis (PCA) is an unsupervised machine learning technique. Perhaps the most popular […]
Cyclic permutations and trace
The trace of a square matrix is the sum of the elements on its main diagonal. The order in which you multiply matrices matters: in general, matrix multiplication is not commutative. But the trace of a product of matrices may or may not depend on the order of multiplication. Specifically, …
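The truncated sentence presumably states the standard fact that the trace is invariant under cyclic permutations of a product, though not under arbitrary ones. A quick check:

```python
import numpy as np

rng = np.random.default_rng(3)
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))

# Trace is unchanged by cyclic permutations of the factors...
print(np.trace(A @ B @ C), np.trace(B @ C @ A), np.trace(C @ A @ B))
# ...but in general not by other permutations:
print(np.trace(A @ C @ B))
```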
Illustrating Gershgorin disks with NumPy
Gershgorin’s theorem gives bounds on the locations of eigenvalues for an arbitrary square complex matrix. The eigenvalues are contained in disks, known as Gershgorin disks, centered on the diagonal elements of the matrix. The radius of the disk centered on the kth diagonal element is the sum of the absolute …
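A minimal NumPy sketch of the disks and the containment claim (not necessarily the post's code):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

centers = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centers)   # off-diagonal absolute row sums

# Every eigenvalue lies in the union of the Gershgorin disks
for lam in np.linalg.eigvals(A):
    assert np.any(np.abs(lam - centers) <= radii)
```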
Moving between vectors and diagonal matrices
This is the first of two posts on moving between vectors and diagonal matrices. The next post is Broadcasting and functors. Motivation When I first saw the product of two vectors in R, I was confused. If x and y are vectors, what does x*y mean? An R programmer would …
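The NumPy analogue of the point the excerpt is heading toward: elementwise multiplication of vectors is the same as multiplying by the corresponding diagonal matrix (a sketch, not the post's code):

```python
import numpy as np

x = np.array([1., 2., 3.])
y = np.array([4., 5., 6.])

# x * y (elementwise) is the same as diag(x) applied to y
assert np.allclose(x * y, np.diag(x) @ y)
```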