#machine learning WIP

We can solve a system with two variables like this:

$3x + 2y = 12 \\ x + 5y = 20$

by elimination or substitution – no problem. But as soon as you have more than three variables, this manual, error-prone approach will put you in a world of pain.

That's where Linear Algebra saves the day – among other things, it gives us tools to batch mathematical operations and represent them as a single transform.
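
For a taste of what that batching looks like, here is a minimal numpy sketch (not part of the original notes): the two equations above become one matrix $A$ acting on $[x, y]$, and a single call solves the system.

```python
import numpy as np

# The system  3x + 2y = 12,  x + 5y = 20  written as one transform A acting on [x, y]
A = np.array([[3, 2],
              [1, 5]])
b = np.array([12, 20])

# One call instead of manual substitution; the same line works for any number of variables
x = np.linalg.solve(A, b)
print(x)   # [1.53846154 3.69230769]
```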

Notation
Cheatsheet with Definitions
Matrices are caps $A$, scalars $a$, vectors $\vec{a}$.

Movie Analogy

  • Vectors, lines in n-dimensional space, are the actors in the movie.
  • Matrices squish, move, flip, rotate or collapse vector spaces. They move the plot forward.
  • {Determinants, Eigenthings, Dot Products, Spans, …} talk about relationships of vectors among themselves and to matrices – they write the reviews. For example, a determinant can tell you how much a transform squishes or distorts a vector space, and eigenvectors tell you which vectors keep their span under a transform (rotations and shears knock most vectors off their span) – and much more.
[src]
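
To make the reviewer analogy concrete, here is a small illustrative numpy sketch (the matrices are my own examples, not from the source): the determinant reports the area-scaling of a transform, and a rotation's lack of real eigenvectors shows that it knocks every vector off its span.

```python
import numpy as np

# Stretch x by 2, squish y by 0.5: the determinant is the area-scaling factor
A = np.array([[2.0, 0.0],
              [0.0, 0.5]])
print(np.linalg.det(A))        # 1.0: areas are preserved even though the space is distorted

# A 90-degree rotation knocks every (real) vector off its span ...
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])
print(np.linalg.eigvals(R))    # [0.+1.j 0.-1.j] -> no real eigenvectors
```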

Overview

3Blue1Brown's Essence of Linear Algebra is an absolute must-see.
Interesting Take From the Trenches

Some Explorables:

  • Coursera Interactive – great interactive on transforms, but you will need a Coursera login. Gives a good feeling for dilations, rotations, shears, reflections, projections (sketched as matrices below).
  • Determinant, Eigenvector, Span
  • Matrix Multiplication
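
As a rough companion to the explorable, here is a sketch of those transform types written out as $2 \times 2$ matrices. The concrete values and the example angle are illustrative choices, not taken from the Coursera interactive.

```python
import numpy as np

theta = np.pi / 4  # 45 degrees, just an example angle

transforms = {
    "dilation":   np.array([[2, 0], [0, 2]]),                    # scale everything by 2
    "rotation":   np.array([[np.cos(theta), -np.sin(theta)],
                            [np.sin(theta),  np.cos(theta)]]),   # rotate by theta
    "shear":      np.array([[1, 1], [0, 1]]),                    # slide x proportional to y
    "reflection": np.array([[1, 0], [0, -1]]),                   # flip across the x-axis
    "projection": np.array([[1, 0], [0, 0]]),                    # flatten onto the x-axis
}

# Apply each transform to the same vector to see what it does
v = np.array([1, 1])
for name, M in transforms.items():
    print(f"{name:10s} {M @ v}")
```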

| Term | Describe | Formalize |
| --- | --- | --- |
| Vector | **Newsflash**: a vector is secretly a transform. | $\vec{x}$ |
| Matrix | A transform in the making. A matrix isn't something sitting somewhere in vector space – it's a function that hasn't been instantiated yet. Intuition: vector $\vec{x}$ lands on $\vec{v}$ after we apply transform $A$ (*also*: $T_A$). | $A\vec{x} = \vec{v}$ |
| Inverse | Ctrl + Z for transforms – it reverts a transform. | $I(A) = A$, $\ A^{-1}A = I$ |
| Unit Vector | Unit vectors describe the axes of a coordinate system, denoted $\hat{i}, \hat{j}, \hat{k}, \dots$ for the x, y, z, … axes. A good way to imagine what a matrix $A$ does is to see each column $C$ as where the respective unit vector ($C_1 \to \hat{i}$) ends up after the transform. | $\hat{i}, \hat{j}, \hat{k}$ |
| Identity | Unit vectors lead to the *Identity* transform (*and vice versa*). If $\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \hat{i}$ and $\begin{bmatrix} 0 \\ 1 \end{bmatrix} = \hat{j}$ are the unit vectors, then $I$ is $\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$. | $I$ of $A^{2 \times 2} = [\hat{i} \ \hat{j}]$ |
| Tensor | Generalizes: $Matrix \to Tensor_{rank\,2}$, $Vector \to Tensor_{rank\,1}$. | |
| Dot Product | Tells you how two vectors' directions relate – opposed, co-linear, orthogonal (right angle). | $\vec{a} \cdot \vec{b} = x$ |
| Determinant | Describes whether a transform preserves (*positive*), reverses (*negative*), or collapses (*0*) the orientation of n-space. | $\lvert A \rvert$ or $\det(A)$; $\det\begin{bmatrix} a & c \\ b & d \end{bmatrix} = ad - bc$ |
| Rank | The determinant tells us *if* a matrix collapses n-space, but not *how many* dimensions it collapses. Full rank means the matrix doesn't reduce the number of dimensions $D$; rank 1 means the output is $D = 1$, and so on. | |
| Span of Vector | The line that passes through a vector's origin and tip – or: the infinite extension of the vector. Most transforms knock most vectors off their span, but the vectors that stay on their span are called ***eigenvectors***. | |
| Eigen | It's said that the most frequent use of Eigen-x is when we want to reduce complexity. This compression goes by many names depending on the field, like PCA, SVD, … | $Av = \lambda v$, $\ Av - \lambda I v = 0$, $\ (A - \lambda I)v = 0$, $\ \det(A - \lambda I) = 0$ |
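
The concepts in the table can all be poked at with numpy. The following is a minimal sketch with a made-up example matrix $A$ (not from the notes); the comments map each call back to a row of the table.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# Matrix / Unit Vector rows: the columns of A are where i-hat and j-hat land
i_hat, j_hat = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(A @ i_hat, A @ j_hat)      # [3. 0.] [0. 2.]

# Inverse / Identity rows: the inverse reverts the transform, A^-1 A = I
print(np.linalg.inv(A) @ A)      # identity (up to floating-point noise)

# Determinant row: positive -> orientation preserved, 0 -> space collapses
print(np.linalg.det(A))          # ~6.0, areas get scaled by 6

# Rank row: a projection onto the x-axis collapses 2D down to a line
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(P))   # 2 1

# Dot Product row: 0 for orthogonal vectors, negative for opposed directions
print(np.dot(i_hat, j_hat), np.dot(i_hat, -i_hat))          # 0.0 -1.0

# Eigen row: solutions of A v = lambda v, vectors that stay on their span
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)               # eigenvalues 3 and 2 (order may vary)
print(eigenvectors)              # columns are the corresponding eigenvectors
```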

Transform in Code

Look at the transforms $A_1, A_2, A_3$. What happens to the unit vectors? What is the determinant? What would invert the transform? Try to find the solutions and play with the code.

Plot: each dot is a vector. Information design critique: the matrices should match their plots vertically.
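
Since the concrete matrices $A_1, A_2, A_3$ and the original plotting code aren't included here, the following is a hypothetical sketch: it picks three stand-in transforms, applies each to a grid of vectors (one dot per vector), and plots the dots before and after, with the determinant in each title.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical stand-ins for A1, A2, A3 -- swap in the actual matrices
A1 = np.array([[2.0, 0.0], [0.0, 1.0]])    # stretch along x
A2 = np.array([[0.0, -1.0], [1.0, 0.0]])   # rotate 90 degrees
A3 = np.array([[1.0, 1.0], [1.0, 1.0]])    # collapses the plane (det = 0)

# A grid of vectors, one dot per vector
xs, ys = np.meshgrid(np.linspace(-1, 1, 11), np.linspace(-1, 1, 11))
points = np.vstack([xs.ravel(), ys.ravel()])   # shape (2, n)

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, A, name in zip(axes, [A1, A2, A3], ["A1", "A2", "A3"]):
    transformed = A @ points
    ax.scatter(points[0], points[1], s=8, alpha=0.3, label="before")
    ax.scatter(transformed[0], transformed[1], s=8, label="after")
    ax.set_title(f"{name}, det = {np.linalg.det(A):.1f}")
    ax.set_aspect("equal")
    ax.legend()
plt.show()
```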

Python Play

TODO