# Linear Map

A linear map is a mapping between two vector spaces that preserves their linear structure (vector addition and scalar multiplication).

Formally, a map $L:U\to V$ from vector space $U$ to vector space $V$ is linear if the following equation holds for all vectors $u_1,u_2\in U$ and scalar $\lambda$:

$L(u_1 +\lambda u_2) = L(u_1) + \lambda L(u_2).$

Of course, $U$ and $V$ should have the same underlying scalar field for this to make sense.

A linear map may sometimes be called a linear transformation, especially if $U=V$. The term linear operator is also common, especially in physics.

### Rank-Nullity Theorem

Given a linear map $L:U\to V$, one can construct a number of relevant vector subspaces of $U$ or $V$:

• The kernel of $L$, denoted by $\text{ker }L$, is defined to be the subspace of $U$ consisting of vectors $u$ such that $L(u)=0$. The dimension of the kernel is the nullity of $L$.
• The image of $L$, denoted by $\text{im }L$, consists of vectors of the form $L(u)$ for $u\in U$. The image of $L$ is a subspace of $V$. The dimension of the image of $L$ is called the rank of $L$.

The rank-nullity theorem states that the nullity and rank of $L$ sum to the dimension of the domain $U$:

$\text{dim}(\text{ker }L) + \text{dim}(\text{im }L) = \text{dim}(U).$

The rank-nullity theorem is essentially a manifestation of the first isomorphism theorem for groups.
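
As a quick numerical check, a numpy sketch (the matrix below is arbitrary; its third column is the sum of the first two):

```python
import numpy as np

# An arbitrary matrix standing in for a linear map L : R^3 -> R^4.
M = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 4.0],
              [2.0, 4.0, 6.0]])

rank = np.linalg.matrix_rank(M)      # dim(im L) = 2 here
nullity = M.shape[1] - rank          # dim(ker L), by rank-nullity
print(rank + nullity == M.shape[1])  # True: rank + nullity = dim U = 3
```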

### Matrix Representation

Consider a linear map of the form $L: K^N \to K^M$ for some scalar field $K$. Then there uniquely exists an $M \times N$ matrix $[L]$ such that $L(u) = [L]u$ for all $u\in K^N$, with $u$ being treated as a "column vector" ($N\times 1$ matrix).

More generally, let $L:U\to V$ where $U \simeq K^N$ and $V \simeq K^M$, and consider isomorphisms $B_U:K^N\to U$ and $B_V:K^M\to V$. These isomorphisms may be identified with bases on $U$ and $V$. With respect to the bases $B_U$ and $B_V$, one can define the matrix $[L]_{B_U,B_V}$ as the matrix corresponding to $B_V^{-1} \circ L\circ B_U$. See Change of Basis for more details.

The algebra of linear maps corresponds directly to matrix algebra. Let $L$ and $L'$ be linear maps between finite-dimensional vector spaces. Then, with respect to some given basis:

\begin{aligned} [L + L'] &= [L] + [L'] \\ [\lambda L] &= \lambda [L] \\ [L \circ L'] &= [L][L'] \end{aligned}

Here, $\lambda$ is a scalar.
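
These correspondences are easy to verify numerically. A small numpy sketch with arbitrary matrices standing in for $[L]$ and $[L']$:

```python
import numpy as np

# Arbitrary matrices representing L and L' in some fixed basis.
L = np.array([[1.0, 2.0],
              [3.0, 4.0]])
Lp = np.array([[0.0, 1.0],
               [1.0, 0.0]])
u = np.array([1.0, -2.0])

assert np.allclose((L @ Lp) @ u, L @ (Lp @ u))    # [L o L'] u = [L]([L'] u)
assert np.allclose((L + Lp) @ u, L @ u + Lp @ u)  # [L + L'] u = [L]u + [L']u
```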

# Miso Soup

500 ml of broth/stock. I use 10g of Better than Bouillon concentrated mushroom stock. Traditionally, fish stock (Dashi) would be used.

Boil the broth with other desired toppings until the toppings are soft. For the additional ingredients, just about anything works: green onions, tofu, seaweed, fried bean curd, egg, spinach, &c. I like mine plain and consumed like tea.

Turn the heat off, temper about 20g of miso paste with some of the warm broth, and add it back to the soup. Serve hot.

# Standard Basis

Let $K$ be a field of scalars, and $K^n$ be the corresponding $n$ dimensional cartesian vector space. The standard basis $\{e_1,\ldots,e_n\}$ is the basis of $K^n$ defined such that the $j$ th component of $e_{i}$ is 1 if $i=j$ and 0 otherwise.

# Vector Space

Vector spaces are used to model a collection of objects – vectors – that can be combined in a linear way to form more vectors. Vector spaces may also be called linear spaces.

### Definition

Formally, a vector space $V$ over a field $K$ (with $K$ usually being either $\mathbf R$ or $\mathbf C$) of "scalars" is a set together with two operations:

1. Vector Addition. A vector space forms an additive abelian group. That is, $V$ is equipped with a binary operator $+$ satisfying the following properties for all $u,v,w\in V$:

• $(u + v) + w = u + (v + w)$
• There exists a $\mathbf 0\in V$ such that $v+ \mathbf 0=\mathbf 0 + v= v$.
• $u+v=v+u$.
2. Scalar multiplication. Combining the scalar $\lambda$ with the vector $v$ results in a vector $\lambda v$. Scalar multiplication satisfies the following properties for all scalars $\lambda,\mu$ and vectors $u,v$:

• $(\lambda \mu)v = \lambda (\mu v)$
• $(\mu + \lambda) v = \mu v + \lambda v$
• $\lambda(u+ v)=\lambda u + \lambda v$
• $1 v=v$

### Linear Maps

A linear map is a homomorphism between two vector spaces. That is, it is a map from one vector space to another vector space that preserves its linear structure (vector addition and scalar multiplication).

### Dimensionality

A vector space may be equipped with a basis. If this basis is finite, then the vector space is said to have a finite dimension. The dimension of the vector space is the cardinality of any of its bases (invariant across bases).

If $V$ is $n$ dimensional, then it is isomorphic to the cartesian space $K^n$ (see below).

### Subspaces

A vector subspace $W$ is a subset of some vector space $V$ that is itself a vector space when inheriting vector addition and scalar multiplication from $V$. Equivalently, a non-empty subset $W$ is a vector subspace whenever it is closed under vector addition and scalar multiplication.

### Examples

Some common examples of vector spaces include:

• $K^n$, where $K$ is a field and $n$ a natural number. This is the set of all $n$-tuples of elements of the field $K$. Vector addition and scalar multiplication are defined component-wise. Special examples include $\mathbf R^n$ and $\mathbf C^n$.
• The set of all polynomial functions, possibly up to a maximal degree.
• The set of all $n$-times differentiable functions ($n=0$ for continuous) over the set $U$ forms the vector space $C^{n}(U)$.
• The set of linear maps between two vector spaces is itself a vector space. In particular, the set of all linear maps from a vector space $V$ (defined over the field $K$) to the field $K$ is the dual space $V^{\star}$.

# Change of Basis

A change of basis is a process of converting the components of a vector (or matrix or tensor) with respect to one basis to components in another basis.

Let $V$ be a finite-dimensional vector space with dimension $n$. And let $B$ and $B'$ be two bases on $V$. Then there uniquely exists an $n\times n$ matrix $T_{B'\gets B}$ called a transition matrix or change of basis matrix such that

$[v]_{B'} = T_{B'\gets B} [v]_B$

for all $v\in V$. Here, $[v]_B$ and $[v]_{B'}$ are the $n\times 1$ column vectors written with respect to $B$ and $B'$, respectively.

Transition matrices have a number of properties:

• $T_{B'\gets B}$ and $T_{B\gets B'}$ are inverses of each other.
• For any three bases $A,B,C$: $T_{A\gets C} = T_{A\gets B} T_{B\gets C}$.

Each basis on $V$ is uniquely associated with a linear isomorphism $K^n\to V$. Let $\phi_B:K^n\to V$ and $\phi_{B'}:K^n\to V$ be isomorphisms corresponding to $B$ and $B'$ respectively. Then the matrix $T_{B'\gets B}$ is the matrix representation of the $K^n\to K^n$ map given by $\phi_{B'}^{-1}\circ\phi_{B}$.
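
A numerical sketch (numpy; two arbitrary bases of $\mathbf R^2$ are written as the columns of matrices, so each matrix plays the role of $\phi_B$):

```python
import numpy as np

# Columns of B and Bp are two example bases of R^2; as matrices they
# are the isomorphisms phi_B, phi_B' : K^n -> V in coordinates.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
Bp = np.array([[2.0, 0.0],
               [0.0, 1.0]])

# T_{B' <- B} is the matrix of phi_{B'}^{-1} o phi_B.
T = np.linalg.inv(Bp) @ B

v_B = np.array([3.0, -1.0])            # coordinates of v in basis B
v = B @ v_B                            # the underlying vector
assert np.allclose(Bp @ (T @ v_B), v)  # T converts B-coords to B'-coords
```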

# Matrix Algebra

Matrices may be produced from binary and unary operations on other matrices.

In the following, suppose that $A$ and $B$ are matrices with elements $a_{ij}$ and $b_{ij}$, respectively.

**Addition.** Matrices can be added element-wise. Suppose that the dimensions of $A$ and $B$ are both $m\times n$. Then $A+B$ is an $m\times n$ matrix whose $ij$ th element is $a_{ij} + b_{ij}$. Matrices must have equal dimensions to be added.

**Scalar Multiplication.** Matrices are scaled element-wise. That is, for some scalar $\lambda$, $\lambda A$ is a matrix with the same dimensions as $A$ and whose $ij$ th element is $\lambda a_{ij}$.

**Matrix Multiplication.** Suppose the dimensions of $A$ are $m\times k$ and the dimensions of $B$ are $k\times n$. Then their product $C=AB$ is an $m\times n$ matrix with components given by the following formula: $c_{ij} = \sum_{l=1}^{k} a_{il} b_{lj}.$

**Transposition.** Suppose $A$ is an $m\times n$ matrix. Then the transpose of $A$, written as either $A^T$ or as $A'$, is an $n\times m$ matrix whose $ij$ th component is $a_{ji}$.

**Conjugate Transposition.** The conjugate transpose $A^{\dagger}$ of a matrix $A$ is the matrix consisting of the complex conjugates of all components of $A^T$. This operation is only meaningful for complex matrices.

**Inversion.** Let $A$ be an $m \times m$ matrix for which there exists a matrix $A^{-1}$ with $A A^{-1}$ equal to the identity matrix. Then $A^{-1}$ is the inverse of $A$ and is unique.
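
These operations map directly onto numpy (a small sketch; the matrices are arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 1.0]])

S = A + B                    # element-wise addition
P = 2.5 * A                  # scalar multiplication
C = A @ B                    # matrix multiplication
T = A.T                      # transpose
H = (A + 1j * B).conj().T    # conjugate transpose of a complex matrix
I = np.linalg.inv(A) @ A     # approximately the 2x2 identity
```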

# Calorie Tables (Grains)

### Cereal Grains (Raw and Parboiled)

| Item | Variant | kcal / 100g |
|------|---------|-------------|
| Barley | Raw | 353 |
| Barley | Cooked | 123 |
| Rice | White, Raw | 365 |
| Rice | White, Cooked | 130 |
| Rice | Brown, Raw | 370 |
| Rice | Brown, Cooked | 111 |
| Oats | Raw | 389 |
| Oats | Cooked | 63 |

### Baked Wheat Products

| Item | kcal / 100g |
|------|-------------|
| Rye | 260 |
| Multigrain | 265 |
| Pita | 275 |
| Bagel | 250 |
| English Muffin | 235 |

# Quotient Vector Space

A quotient (vector) space is an extension of the concept of a quotient group for vector spaces.

Let $W$ be a vector subspace of $V$. Since $W$ is a normal subgroup of $V$ with respect to vector addition, one can construct the quotient group $V/W$ consisting of cosets of $W$ (affine hyperplanes parallel to $W$).

Moreover, $V/W$ also inherits a form of scalar multiplication, making itself a vector space. Let $v\in V/W$ be a vector in the quotient space; that is, $v$ is a coset of $W$. Scalar multiplication of every point in $v$ yields another coset of $W$. Hence, scalar multiplication is well-defined on $V/W$.

The definition of a quotient vector space can be readily generalized to a module. That is, one can construct quotient modules in a similar fashion.

# Algebraic Dual Space

The (algebraic) dual space $V^{\star}$ of the vector space $V$ over the scalar field $K$ consists of all linear mappings of the form $V \to K$. $V^\star$ is itself a vector space over $K$. Elements of $V^\star$ may be referred to as linear functionals or covectors or dual vectors. The notion of a dual space is integral in the theory of tensors.

$V$ and $V^\star$ are isomorphic if they are finite-dimensional. That is, they share the same dimension. Moreover, for any non-degenerate bilinear form $\langle\cdot, \cdot\rangle: V \times V \to K$, the map $v \mapsto \langle v, \cdot\rangle$ is an isomorphism between $V$ and $V^\star$. Thus, a finite-dimensional inner product space has a "natural" isomorphism to its dual.

### Double Duals

Regardless of the dimensionality of $V$, there exists a natural homomorphism from $V$ to the dual of its dual, $V^{\star \star} = (V^\star)^\star$. This natural homomorphism maps each $v\in V$ to the "evaluation map" defined by $f \mapsto f(v)$. For finite-dimensional $V$, this homomorphism is also an isomorphism.

The existence of a natural isomorphism means that $V$ and $V^{\star\star}$ may roughly be treated as equivalent.

### Dual Basis

Let $e_1,\ldots,e_n$ form a basis of $V$. Define the corresponding dual basis $e^1,\ldots,e^n\in V^\star$ as the sequence of covectors satisfying the following:

$e^{i}(e_{j}) = \delta^i_j.$

Here, $\delta^i_j=1$ if $i=j$, otherwise $\delta^i_j=0$ (this quantity is sometimes called the Kronecker delta). The dual basis is a basis of $V^\star$.
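
Identifying $V$ with $K^n$ so that the basis vectors are the columns of a matrix $E$, the dual basis is given by the rows of $E^{-1}$. A numpy sketch (the basis here is arbitrary):

```python
import numpy as np

# Columns of E are an example basis e_1, ..., e_n of R^n.
E = np.array([[1.0, 1.0],
              [0.0, 2.0]])

# Rows of inv(E) are the dual basis: row i applied to column j
# yields the Kronecker delta.
E_dual = np.linalg.inv(E)
assert np.allclose(E_dual @ E, np.eye(2))
```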

# Group Homomorphism

A homomorphism is a mapping that preserves algebraic structure. In the context of group theory, a homomorphism between a group $(G, \cdot)$ and $(H, \star)$ is a mapping $\phi: G \to H$ with the following property for all $g,g'\in G$:

$\phi(g\cdot g') = \phi(g)\star\phi(g').$

$\phi$ may be categorized as a special "type" of homomorphism according to additional properties it may hold:

• $\phi$ is an isomorphism if it is bijective and its inverse is a homomorphism. In this case, $(G, \cdot)$ and $(H, \star)$ are isomorphic. An isomorphism from a group to itself is called an automorphism.
• $\phi$ is an epimorphism if it is surjective. A homomorphism from a group to itself is called an endomorphism.
• $\phi$ is a monomorphism if it is injective.

# Eigendecomposition

In linear algebra, the eigendecomposition of a square matrix $M$, also known as diagonalization, is the representation of $M$ in the following factored form:

$M=Q\Lambda Q^{-1},$

where $\Lambda$ is a diagonal matrix whose diagonal entries are eigenvalues of $M$, and $Q$ is a matrix whose corresponding columns are eigenvectors. Here, $M$, $\Lambda$, and $Q$ all have the same dimensions.

A matrix can be eigendecomposed if and only if it is diagonalizable.

### Evaluation of Matrix Functions

Let $Q\Lambda Q^{-1}$ be the eigendecomposition of $M$ and let $f(x)=\sum_{n=0}^{\infty}a_n x^n$ be some analytic function. Then it can be shown that

$f(M) = Q f(\Lambda) Q^{-1}.$

Moreover, the calculation of $f(\Lambda)$ is straightforward. It is a diagonal matrix whose $(i,i)$th element is $f(\lambda_i)$, where $\lambda_i$ is the eigenvalue located at the $(i,i)$th index of $\Lambda$.
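
A numpy sketch of this for the polynomial $f(x)=x^3$ (the matrix is an arbitrary symmetric example):

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

evals, Q = np.linalg.eig(M)

# Apply f(x) = x**3 to the eigenvalues only.
f_M = Q @ np.diag(evals**3) @ np.linalg.inv(Q)

assert np.allclose(f_M, M @ M @ M)  # agrees with the direct computation
```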

# Topological Space

A topological space is a set equipped with a topological structure. Essentially, a topological space is defined so that the "limit" of a sequence of "points" in the space may be defined.

### Topology

A topological space may be defined as a set $\Omega$ together with a topology $\mathcal T$, which consists of subsets of $\Omega$. Elements of this topology are called open sets. A topology satisfies the following axioms:

1. The empty set belongs to $\mathcal T$.
2. $\Omega \in \mathcal T$
3. If $O_1,\ldots,O_n$ is a finite sequence of open sets, then their intersection $O_1 \cap\ldots\cap O_n$ is also open.
4. If $\{O_{\alpha}\}$ is a (potentially infinite) set of open sets (with $\alpha$ being some indexing parameter), then their union $\bigcup_{\alpha} O_{\alpha}$ is also open.

A set $C\subset\Omega$ is said to be closed if its complement $\Omega - C$ is open. The set of closed sets uniquely specifies the space's topology.

### Construction of Topological Spaces

Topologies are rarely defined by specifying their open sets directly. Rather, they are usually generated from simpler constructs, for example from a system of neighborhoods (see Neighborhood) or from the open balls of a metric (see Metric Space).

# Bilinear Form

A bilinear form over a vector space $V$ (over some scalar field $K$) is a mapping $V \times V \to K$ that is linear in each argument.

Bilinear forms may be characterized by additional properties they possess. Let $B$ be a bilinear form. Then $B$ is:

• non-degenerate if $B(w,v)=0$ for all $v\in V$ only when $w=0$. Equivalently, the mapping $v\mapsto(w\mapsto B(w,v))$ is an isomorphism from $V$ to its algebraic dual $V^\star$.
• symmetric if $B(v,w)=B(w,v)$ for all $w, v\in V$,
• skew-symmetric if $B(v,w)+B(w,v)=0$ for all $w,v \in V$,
• alternating if $B(v, v) = 0$ for all $v \in V$ (alternating implies skew-symmetry).
• reflexive if $B(v, w)=0$ implies that $B(w, v)=0$.

Bilinear forms may be represented as matrices for finite-dimensional vector spaces. Let $V=K^{n}$. Then every bilinear form can be written in the following fashion:

\begin{aligned} B(u,v) &= \sum_{i,j} u_{i} B_{ij} v_{j} \\ &= \mathbf{u}^{T} \mathbf B \mathbf v, \end{aligned}

where $\mathbf B$ is the matrix representation of $B$, and $\mathbf u$ and $\mathbf v$ are column vector representations of $u$ and $v$.
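
A numpy sketch checking the two expressions against each other (all values arbitrary):

```python
import numpy as np

B = np.array([[1.0, 2.0],
              [0.0, 3.0]])   # an arbitrary matrix representation
u = np.array([1.0, -1.0])
v = np.array([2.0, 5.0])

direct = sum(u[i] * B[i, j] * v[j] for i in range(2) for j in range(2))
assert np.allclose(direct, u @ B @ v)
```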

# Chicken Rice Porridge

A combination of Chinese congee and Greek avgolemono. A good use of leftover chicken.

Combine one part jasmine rice to about ten parts chicken stock, by weight. Half a cup of rice (or 115g) and four cups of stock should be good for four servings. Season with salt and add coarsely chopped ginger to taste. Cook in a pressure cooker until thick and hot. Remove the ginger. Optionally thicken further with an immersion blender.

Add cooked chicken meat (thighs work best), shredded or chopped into small pieces. Add in one large egg yolk per two cups of stock. Temper the yolks with the porridge to prevent curdling. Add lemon juice to taste, but be generous. Add in some rice flake noodles for texture. Wait for rice noodles to soften before serving. Garnish with whatever you like, such as: green onion, fried scallions, fermented soy beans, chili paste, hot sauce, cilantro, mint, dill, youtiao, a poached egg.

Vegetarian Alternative: Replace chicken and chicken stock with cooked mushrooms and mushroom stock.

# Vector Basis

A basis of a vector space $V$ over a scalar field $K$ is a set of vectors that are linearly independent and span $V$.

More explicitly, let $B\subset V$ be a basis. Then every $v\in V$ can be uniquely written in the form $\lambda_1 v_1 + \ldots + \lambda_n v_n$ for some vectors $v_1, \ldots, v_n \in B$ and scalars $\lambda_1,\ldots,\lambda_n$.

If $B$ is finite, then $V$ is said to have a finite dimension, and the dimension of $V$ is the cardinality of $B$. Otherwise, $V$ is said to be infinite-dimensional.

All bases of a vector space share the same cardinality.

### Coordinate Systems

Let $V$ be $n$ dimensional ($n$ finite). Then a basis $B=\{v_1,\ldots,v_n\}$ may be uniquely identified with a linear isomorphism $\phi_B:K^n \to V$ via the following construction:

$\phi_B(\lambda) = \lambda_1 v_1 + \ldots + \lambda_n v_n.$

Here, $\lambda_i$ is the $i$ th component of $\lambda\in K^n$. $\phi_B$ is a coordinate system on $V$.

Converting from one basis to another may be done using transition matrices.

# Affine Space

An affine space can be thought of as a vector space that has "lost its origin".

Formally, an affine space $A$ is a set of points together with a vector space $V$ of "displacements". Points in $A$ can be subtracted to form a vector; that is, $a,b\in A\implies a-b \in V$. Subtraction must satisfy the following properties:

1. For all $v \in V$ and $b \in A$, there exists a unique $a\in A$ such that $a - b = v$.
2. For all $a,b,c \in A$, $(c-b) + (b-a) = c-a$.

These conditions are called the Weyl axioms. An alternative formulation is to assert that $V$ acts freely and transitively on $A$ (treating $V$ as a group under vector addition).

When $V$ is equipped with an inner product, $A$ becomes a metric space with the metric $d(a,b) = \sqrt{\langle a-b, a-b \rangle}$. In particular, if $V = \mathbb{R}^N$ is equipped with an inner product, then $A$ is said to be euclidean.

# Linear Independence

Let $X$ be a set of vectors in some vector space $V$. These vectors are mutually linearly dependent if there exists some finite subset $\{v_1,\ldots,v_n\}\subset X$ and sequence of non-zero scalars $\lambda_1,\ldots,\lambda_n$ such that

$\sum_{i=1}^{n} \lambda_i v_i = 0.$

If $X$ is not linearly dependent, then its constituent vectors are linearly independent.
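
Numerically, linear independence of finitely many vectors can be tested by stacking them as columns of a matrix and computing its rank (a numpy sketch; the vectors are arbitrary):

```python
import numpy as np

# Stack candidate vectors as the columns of a matrix.
X = np.column_stack([[1.0, 0.0, 1.0],
                     [2.0, 1.0, 0.0],
                     [3.0, 1.0, 1.0]])   # third = first + second

independent = np.linalg.matrix_rank(X) == X.shape[1]
print(independent)   # False: the columns are linearly dependent
```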

# Linear Combination

A linear combination is an expression of the form

$\lambda_1 v_1 + \ldots + \lambda_n v_n,$

where $v_1,\ldots,v_n$ are vectors belonging to some vector space and $\lambda_1,\ldots,\lambda_n$ are scalars.

# Identity Matrix

An identity matrix is a square matrix with 1's on the diagonal and 0's everywhere else. The $n\times n$ identity matrix is often written as $I_n$. For example,

$I_3 = \left[\begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{matrix}\right].$

The identity matrix $I_n$ acts as the identity element of the general linear group $GL(n,F)$. More generally, for any $n\times n$ matrix $A$, $AI_n = I_n A = A$.

# Group

A group is a simple algebraic structure that represents a composable set of permutations. There are two popular definitions of a group: one as a set of permutations and one as a set equipped with a binary operator. Both definitions are essentially equivalent by Cayley's Theorem.

### Permutation Group

One formulation of a group is as a set of permutations. That is, a permutation group is a set $G$ of bijections of some set $X$ with the following closure properties:

1. The identity map belongs to $G$,
2. If $g\in G$, then $g^{-1}\in G$, and
3. If $g,h\in G$, then $g\circ h\in G$.

### Abstract Group

An abstract group is a set $G$ together with a mapping $\cdot:G\times G \to G$ called a binary operator. We write $\cdot (g, h)$ as $g \cdot h$ or simply as $gh$.

An abstract group must satisfy the following properties:

1. There exists an identity element $e \in G$ such that $eg=ge=g$ for all $g\in G$.
2. For all $g \in G$, there exists an inverse element $g^{-1} \in G$ such that $g g^{-1}= g^{-1} g = e$.
3. For all $f,g,h \in G$, $(fg)h=f(gh)$.

It is clear that a permutation group is an abstract group by having composition as the chosen binary operator.

Note: For notational convenience, the binary operator is usually assumed and a group is identified by the underlying set. So, for example, claiming that "$g$ is an element of the group $G$" means that "$g$ is an element of the underlying set of the group $G$".

# Linear Algebra

Linear Algebra is the study of linear structure.

### Vector Spaces

The most basic objects are vector spaces defined over some given field. Vector spaces are basically spaces of elements (vectors) that can be combined linearly. Examples of vector spaces include:

• The space of geometric displacements. As such, vectors are a useful tool in studying euclidean geometry. However, vectors may be more generally applied to affine geometry. The study of displacements over time is known as kinematics.
• Spaces of functions, such as a space of integrable functions. This particular application is called functional analysis.

Vector spaces are often made into metric spaces via the inclusion of inner products. The archetypal example of an inner product space is a euclidean space.

### Linear Maps

Morphisms of vector spaces – that is, maps between vector spaces that preserve linearity – are called linear maps. Linear maps can be represented as matrices, rectangular numerical arrays, via a basis. Matrices may be combined and manipulated numerically according to the rules of matrix algebra.

Linear maps that are also isomorphisms are called linear transformations. Linear transformations that are also isometries (preserving an inner product) are called unitary (or orthogonal for real scalar fields).

An important operation on linear transformations is their eigendecomposition to some canonical form using eigenvalues. Eigenvalues are usually defined using determinants.

### Multilinear Algebra

Linear maps take in a single vector argument. Maps that take in multiple vector arguments and are linear in each argument are called tensors. The study of tensors belongs to multilinear algebra, a sub-field of linear algebra. This includes exterior algebra, which has extensive applications in physics and geometry.

### Applications

Linear algebra has notable applications in the following:

# Trace

Let $M$ be an $n \times n$ matrix. The trace of this matrix is the sum of its diagonal elements:

$\text{tr}(M)=\sum_{i=1}^{n} M_{ii}.$

The trace can also be calculated as the sum of $M$'s eigenvalues (adjusting for multiplicities). That is, let the characteristic equation of $M$ be given by $P(\lambda)=\prod_i (\lambda-\lambda_i)^{m_i}$. Then

$\text{tr}(M)=\sum_{i} m_i \lambda_i.$

Since the eigenvalues of a matrix are invariant under coordinate transforms, the trace of a linear transformation of a finite-dimensional space can be defined as the trace of one of its matrix representations.

Traces satisfy the following properties:

• Traces are linear functionals of matrices. That is, let $L$ and $M$ be square matrices of the same dimension and $\lambda$ a scalar. Then $\text{tr}(L+\lambda M) = \text{tr}(L) + \lambda \text{tr}(M)$.
• Let $L$ and $M$ be square matrices of the same dimension. Then $\text{tr}(LM)=\text{tr}(ML)$.
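
Both properties are easy to verify numerically (a numpy sketch with arbitrary matrices):

```python
import numpy as np

L = np.array([[1.0, 2.0],
              [3.0, 4.0]])
M = np.array([[0.0, 1.0],
              [1.0, 1.0]])

# Linearity of the trace.
assert np.isclose(np.trace(L + 2.5 * M), np.trace(L) + 2.5 * np.trace(M))
# Cyclic property: tr(LM) = tr(ML).
assert np.isclose(np.trace(L @ M), np.trace(M @ L))
```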

# Determinant

Let $V$ be a finite-dimensional real or complex vector space and $L:V\to V$ be a linear transformation. The determinant of $L$, notated as either $\text{det }L$ or as $|L|$, is a number that may be defined in various ways.

### General Properties

Before defining the determinant, it is helpful to list some of its properties:

1. Determinants are multiplicative. If $L:V\to V$ and $M:V\to V$ are linear transformations, then $\text{det}(L \circ M) = \text{det }L \cdot \text{det }M$.
2. The determinant of the identity map is one.
3. Determinants are invariant under similarity transformations. That is, if $L=B^{-1}M B$ for some $L,M,B:V\to V$, then $\text{det } L = \text{det }M$.
4. $\det L^{-1} = (\det L)^{-1}$. Consequently, $L$ is invertible (non-singular) only if it has a nonzero determinant.
5. Determinants are invariant under transposition.

Let $L$ specifically be a matrix, then:

1. Swapping two columns of $L$ changes the sign of the determinant.
2. If the column vectors are linearly dependent, then the determinant is zero.
3. The determinant is linear with respect to each column. That is, $\det L$ can be thought of as a multilinear map of its column vectors.

### Definition via Eigenvalues

Let $\lambda_{1},\ldots,\lambda_{n}$ be the algebraically distinct eigenvalues of $L$. The determinant of $L$ is defined as their product:

$\text{det }L = \lambda_{1} \ldots \lambda_n.$

Here, "algebraically distinct" refers to the fact that the eigenvalues may not be numerically distinct, but act as distinct factors of the characteristic polynomial $P(\lambda)$ of $L$:

$P(\lambda) = a\prod_{i} (\lambda-\lambda_{i}).$

This definition may seem circular since the characteristic polynomial is itself usually defined by the determinant ($P(\lambda) = \text{det} (L-\lambda I)$). However, it is possible to define this polynomial in other ways. This is the approach taken, for example, in Axler's Linear Algebra Done Right.

### Definition via Grassmann Algebra

It is possible to define a determinant in the context of Grassmann algebra.

Let $V$ be $n$-dimensional. Then the exterior power $\Lambda^n V$ is one dimensional. The corresponding exterior power of $L$, $\Lambda^n L:\Lambda^n V \to \Lambda^n V$, can be written in the form $x\mapsto\alpha x$. And $\alpha$ can thus be defined as the determinant of $L$.

### Definition via Matrix Computation

Let $[L]$ be the matrix representation of $L$ with respect to some basis of $V$. Then it is possible to define the determinant of $L$ as a function of the corresponding matrix elements $l_{ij}$.

Suppose $V$ is $n$-dimensional. A bijection of the form $\sigma:\{1,\ldots,n\}\to\{1,\ldots,n\}$ is called an ($n$-fold) permutation. The sign of this permutation, denoted as $\text{sgn }\sigma$, is equal to 1 if $\sigma$ can be written as an even number of "swaps" (permutations that swap two elements but leave the others unchanged) and -1 if the number of swaps is odd.

Then the determinant of $L$ and $[L]$ is:

$\text{det }L = \sum_{\sigma} \text{sgn } \sigma \prod_{i=1}^{n} l_{i,\sigma(i)}.$

Here, the above summation is over all possible $n$-fold permutations.
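
This formula translates directly into code. A Python sketch (inefficient at $O(n!)$ terms, but it follows the definition) checked against numpy's determinant:

```python
import numpy as np
from itertools import permutations

def sign(sigma):
    # Count inversions; an even count gives +1, an odd count gives -1.
    s = 1
    for i in range(len(sigma)):
        for j in range(i + 1, len(sigma)):
            if sigma[i] > sigma[j]:
                s = -s
    return s

def det(L):
    n = L.shape[0]
    return sum(sign(p) * np.prod([L[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

L = np.array([[1.0, 2.0],
              [3.0, 4.0]])
assert np.isclose(det(L), np.linalg.det(L))  # both give -2.0
```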

# Tensor

Tensors are the fundamental tools of multilinear algebra, describing the relationship between multiple vectors and covectors.

Let $V$ be a finite dimensional vector space over some scalar field $K$. And let $V^\star$ be its corresponding dual space. A tensor is a multilinear map of the form

$(V^\star)^p \times V^q \to K.$

Such a tensor is said to be of type $(p,q)$ (or $(q,p)$ in some literature). If $p=0$, the tensor is said to be covariant. If $q=0$, the tensor is said to be contravariant. Otherwise, the tensor is said to be mixed.

### Basis

Let $e_1, \ldots, e_n$ be a basis of $V$. And let $e^1, \ldots, e^n$ be the corresponding basis in $V^\star$ (that is, $e^i(e_j)=\delta^i_j$). Then a natural basis for the space of $(p,q)$-typed tensors on $V$ is given by

$e^{i_1}\otimes\ldots\otimes e^{i_p} \otimes e_{j_1} \otimes\ldots\otimes e_{j_q}$

for all possible sequences $i_{1},\ldots,i_p$ and $j_1,\ldots,j_q$ in $1,\ldots,n$. The dimension of the space of $(p,q)$-typed tensors is thus $n^{p+q}$.

# Inner Product Space

An inner product space is a vector space $V$ over a scalar field $K\in \{\mathbf R, \mathbf C\}$ together with a form $\langle\cdot ,\cdot\rangle: V \times V \to K$, called an inner product (bilinear when $K=\mathbf R$; sesquilinear when $K=\mathbf C$), that satisfies the following properties:

• Positive-definite: $\langle v, v\rangle >0$ for all non-zero $v\in V$.
• Conjugate-symmetric: $\langle u, v\rangle = \overline{\langle v, u\rangle}$

The vector space $\mathbf R^N$ together with an inner product is called a euclidean space. An inner product space with complex scalars is sometimes called a unitary space.

The function $\lVert \cdot \rVert: V\to \mathbf R$ defined by $\lVert v\rVert = \sqrt{\langle v, v \rangle}$ satisfies the properties of a norm (thus an inner product space is a normed vector space). The inner product can be recovered from its norm using the polarization identity.

The inner product norm further satisfies the Cauchy-Schwarz Inequality and Triangle Inequality.

A linear transformation is unitary if the inner product is preserved.

# General Linear Group

The general linear group of degree $n$ and field $F$ is the set of $n \times n$ invertible matrices over the field $F$. The general linear group is a Lie group with respect to matrix multiplication. It may be denoted as $GL(n, F)$.

# Cayley's Theorem

Cayley's theorem asserts that a permutation group and an abstract group are essentially equivalent concepts. It is clear that a permutation group meets the definition of an abstract group. Cayley's theorem states that an abstract group is isomorphic to some permutation group.

### Proof

For a given $g \in G$, the bijection $G \to G$ defined by $h \mapsto gh$ is the left action by $g$. Similarly, the map $h\mapsto hg$ is the right action by $g$. The mapping from $g$ to the corresponding left action is an isomorphism from $G$ onto a group of permutations of $G$.

# Neighborhood

In a topological space, a neighborhood is a set of points surrounding some particular point. Formally, a neighborhood around a given point is a set of points containing an open set that itself contains the given point. The given point is said to be in the interior of the neighborhood.

### Neighborhood Systems

Consider a set $\Omega$. For each $x\in\Omega$, suppose $\mathcal N_x \subset 2^{\Omega}$ is a collection of subsets of $\Omega$ obeying the following axioms:

1. $N\in\mathcal N_x\implies x\in N$
2. $N\in\mathcal N_x$ and $N \subset M$ implies $M\in\mathcal N_x$.
3. $N, M \in\mathcal N_x \implies N\cap M\in\mathcal N_x$
4. If $N\in\mathcal N_x$, there exists a $M\in\mathcal N_x$ such that $N\in\mathcal N_y$ for all $y\in M$.

Then $x \mapsto\mathcal N_x$ is said to be a system of neighborhoods for $\Omega$.

Given such a system of neighborhoods, a set $O\subset\Omega$ is said to be open if $x\in O\implies O\in\mathcal N_x$. The collection of such sets forms a topology. More importantly, every topological space can be formed in this manner. It can be shown that $\mathcal N_x$ is then the collection of all neighborhoods around the point $x$.

Some authors use the above fact to define a topological space using such a system of neighborhoods instead of by the properties of its open sets. The neighborhood formulation, while less verbose, is arguably more intuitive.

Every topological space can be uniquely specified from the system of neighborhoods of each of its points.

# Galilean Relativity

Galilean Relativity refers to a theory of relativity consistent with Newton's laws.

It is named after Galileo's thought experiment involving a vessel traveling with a perfectly uniform and linear motion. An observer contained within the vessel, observing only phenomena also contained within the vessel, will have no means of determining the vessel's speed or direction of travel.

### Galilean Spacetime

A galilean space(time) is a four-dimensional affine space $A^4$. Points in this space are called "events". The set of displacements forms a four-dimensional, real vector space $V\simeq\mathbb R^4$.

There exists a rank-1 linear map $\tau : V\to\mathbb R$ mapping spatio-temporal displacements to time intervals. Two events $a$ and $b$ are simultaneous if $\tau(a-b)=0$.

The three-dimensional quotient space $V/\text{ker }\tau$ is euclidean (that is, equipped with an inner product $\langle\cdot,\cdot\rangle$). From this, the distance between simultaneous events $a$ and $b$ is defined as

$d(a,b) = \sqrt{\langle \pi(a-b), \pi(a-b) \rangle},$

where $\pi$ is the natural projection (epimorphism) from $V$ to $V/\text{ker }\tau$.

### Galilean Transformations

An isomorphism between galilean spaces is a bijection that preserves galilean structure (affinity, euclidean distance between simultaneous events, and time intervals).

All galilean spaces are isomorphic to $\mathbb R \times \mathbb R^3$, where the $\mathbb R$ and $\mathbb R^3$ components are each equipped with the standard inner product. Isomorphisms of spacetime onto $\mathbb R\times\mathbb R^3$ are called inertial reference frames or galilean coordinates.

Automorphisms of $\mathbb R \times \mathbb R^3$ are called galilean transformations, and together they form the galilean group. The galilean group is generated by the following galilean transformations:

1. Rectilinear Motion: $(t, \mathbf{x}) \mapsto (t, \mathbf{x}+t\mathbf{v})$ for some $\mathbf{v}\in\mathbb R^3$.
2. Spatial Translations: $(t, \mathbf{x})\mapsto(t, \mathbf{x}+\mathbf{y})$ for some $\mathbf{y}\in\mathbb R^3$.
3. Temporal Translations: $(t, \mathbf x)\mapsto(t+s,\mathbf x)$ for some $s\in\mathbb R$.
4. Rotations and Reflections: $(t, \mathbf x) \mapsto (t, \mathbf{G x})$, where $\mathbf{G}\in O(3)$. That is, $\mathbf{G}$ is a real, invertible $3\times 3$ matrix such that $\mathbf G^T \mathbf G = \mathbf G \mathbf G^T = \mathbf{I}_3$.
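
A small numerical sketch (numpy; the boost velocity and rotation below are arbitrary) composing two of these generators:

```python
import numpy as np

def boost(v):
    """Rectilinear motion: (t, x) -> (t, x + t v)."""
    return lambda t, x: (t, x + t * v)

def rotate(G):
    """Rotation/reflection: (t, x) -> (t, G x)."""
    return lambda t, x: (t, G @ x)

v = np.array([1.0, 0.0, 0.0])
G = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])   # rotation by 90 degrees about z

t, x = 2.0, np.array([0.0, 1.0, 0.0])
t, x = boost(v)(t, x)
t, x = rotate(G)(t, x)
print(t, x)   # 2.0 [-1. 2. 0.]
```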

# Group Representation

Let $G$ be a group and $V$ be a vector space over the field $K$. A representation with respect to these objects is a homomorphism $\pi$ from $G$ to $\text{GL}(V)$, the general linear group of $V$. That is, a representation is just a group action consisting of linear transformations. The study of group representations forms much of representation theory.

Usually, $V \simeq\mathbf C^n$ and $G$ is finite. In this case, $G$ (or, rather, $G/\text{ker }\pi$) may be identified as a finite set of matrices under some coordinate system.

### Irreducible Representations

The subspace $W \subset V$ is said to be invariant with respect to $\pi$ if $\pi(g)w\in W$ for all $w\in W$ and $g\in G$.

The representation $\pi$ is said to be irreducible over $V$ if the only invariant subspaces are $V$ and the subspace consisting of just the zero element.

An important problem of representation theory is decomposing $V$ into irreducible components. That is, writing $V=V_1\oplus\ldots\oplus V_n$ where each $V_i$ is invariant and the restriction of $\pi$ to each $V_i$ is irreducible. If this is possible, then the representation is said to be fully reducible.

# Matrix

A matrix is a rectangular array of numbers, commonly used in applications of linear algebra.

For example, the following is a two-by-three ($2\times 3$) matrix of real numbers:

$\left[\begin{matrix} 1.1 & 8.6 & -5.42 \\ 0 & 5.73 & 0 \end{matrix}\right]$

This matrix has two rows and three columns.

A matrix may generally contain any number of rows or columns. The entries of a matrix belong to some specified field, usually $\mathbf R$ or $\mathbf C$.

Matrix variables are often denoted by capital letters ($A,B,C,\ldots$), sometimes bolded ($\mathbf{A}, \mathbf{B},\ldots$).

An entry of a matrix may be located by specifying the row and column to which the entry belongs. This is done by supplying an "index" for the desired row and column. For example, denote the entries of the aforementioned $2\times 3$ matrix as $a_{ij}$. Then $a_{12} = a_{1,2} = 8.6$ is the entry in the first row (counting top-to-bottom) and second column (counting left-to-right).

Matrices may be combined and transformed according to the conventions of matrix algebra.

In linear algebra, an $N\times M$ matrix is a representation of a linear map of the form $K^M\to K^N$ for some field $K$.

# Polarization Identity

For any two vectors $u$ and $v$ in a real or complex inner product space, the polarization identity asserts that:

\begin{aligned} \text{Re }(\langle u, v \rangle) &= \frac{1}{4}(\lVert u+v\rVert^2 - \lVert u- v \rVert^2) \\ \text{Im }(\langle u, v \rangle) &= \frac{1}{4}(\lVert u-i v\rVert^2 - \lVert u+i v \rVert^2). \end{aligned}

Essentially, this means that inner products can be recovered from norms.
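
A quick numerical check (numpy sketch; note np.vdot conjugates its first argument, so it is linear in the second argument, matching the convention implicit in the identity above):

```python
import numpy as np

u = np.array([1.0 + 2.0j, -0.5j])
v = np.array([0.3, 2.0 - 1.0j])

ip = np.vdot(u, v)   # inner product, linear in the second argument
norm = np.linalg.norm

assert np.isclose(ip.real, (norm(u + v)**2 - norm(u - v)**2) / 4)
assert np.isclose(ip.imag, (norm(u - 1j*v)**2 - norm(u + 1j*v)**2) / 4)
```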

# procfs

On Linux, procfs is a virtual filesystem, mounted at /proc, containing information on processes, threads, and the overall system.

Directories of the form /proc/[0-9]+ contain process-specific information. Paths of the form /proc/[a-z]+ contain system-specific information.

### Notable Paths In /proc

Sourced from the RHEL 6 Deployment Guide.

Process-specific data:

• /proc/$pid/ is a directory containing process information for the process with identifier $pid. The path /proc/self/ links to the directory corresponding to the calling process.
• /proc/$pid/cwd links to the process's working directory.
• /proc/$pid/fd/ is a directory containing links to the file descriptors opened by the process.
• /proc/$pid/environ contains the process's environment variables.
• /proc/$pid/exe links to the process's executable.
• /proc/$pid/maps contains the process's memory maps.
• /proc/$pid/mem contains a mapping to the process's memory. This file is not normally accessible without attaching via ptrace.
• /proc/$pid/task/$tid/ is a directory for the process's thread with identifier $tid.

System-wide data:

• /proc/bus/ contains information about available buses. In particular, /proc/bus/pci contains information about available PCI devices.
• /proc/cpuinfo contains information about the system's CPU (model name, cache size, feature flags, …).
• /proc/filesystems contains a list of filesystems.
• /proc/iomem maps memory regions to physical devices.
• /proc/kcore contains a view into the system's memory.
• /proc/loadavg shows the system's load averages (over 1, 5, and 15 minutes).
• /proc/locks lists the file locks held by the kernel.
• /proc/meminfo displays statistics on memory usage.
• /proc/modules contains a list of kernel modules.
• /proc/mounts contains a list of filesystem mounts.
• /proc/stat contains a large number of statistics collected since the system was last restarted.
• /proc/uptime shows how long the system has been running since last restart.
• /proc/version shows the kernel version, including compiler info.
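
Since procfs exposes plain text files, entries can be read with ordinary file I/O. A minimal Python sketch (assuming a Linux system with /proc mounted):

```python
# Read system uptime from /proc/uptime (two floats: seconds up, idle seconds).
with open("/proc/uptime") as f:
    uptime, idle = (float(field) for field in f.read().split())

print(f"up {uptime / 3600:.1f} hours")

# Read the command line of the current process from /proc/self/cmdline
# (arguments are NUL-separated).
with open("/proc/self/cmdline", "rb") as f:
    argv = f.read().split(b"\0")[:-1]

print(argv)
```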

# Cauchy-Schwarz Inequality

Let $u$, $v$ be vectors in an inner product space. The Cauchy-Schwarz inequality asserts that

$\lvert\langle u, v \rangle\rvert \leq\lVert u \rVert\lVert v \rVert.$

# Normed Vector Space

A normed vector space is a vector space $V$ over $K\in\{\mathbf R, \mathbf C\}$ together with a function $\lVert \cdot \rVert: V \to \mathbf R$, the norm, satisfying the following properties for all $u, v\in V$ and $\lambda\in K$:

• $\lVert v \rVert \geq 0$ with equality if and only if $v=0$.
• $\lVert \lambda v\rVert = \lvert \lambda \rvert \lVert v \rVert$
• $\lVert u+v \rVert \leq \lVert u\rVert +\lVert v \rVert$, also known as the triangle inequality.

A normed vector space is also a metric space under the metric $d(u, v) = \lVert u - v \rVert$.

A seminorm satisfies the same properties as a norm, except that $\lVert v \rVert$ may be zero for nonzero $v$.

# Unitary Transformation

A unitary transformation is an isomorphism of inner product spaces. That is, it is a linear transformation $T:V\to V'$ preserving inner products:

$\langle u,v \rangle_{V} = \langle T(u), T(v) \rangle_{V'}$.

For euclidean vector spaces ($V \simeq \mathbf R^N$), unitary transformations are called orthogonal transformations.

The eigenspaces of a unitary transformation corresponding to distinct eigenvalues are mutually orthogonal.

### Unitary Matrix

The matrix representation of a unitary transformation is called a unitary matrix. Such matrices $U$ have the following properties:

1. $U^{-1} = U^{\dagger}$ (necessary and sufficient for unitarity)
2. $\left| \text{det } U \right| = 1$
3. $U = VDV^{\dagger}$ for some diagonal matrix $D$. That is, unitary matrices are diagonalizable.
4. $U=e^{iH}$ for some $H=H^{\dagger}$ ($H$ is hermitian).
5. The column vectors of $U$ are orthonormal (necessary and sufficient for unitarity)

Here, $U^{\dagger}$ denotes the conjugate transpose of $U$ (obtained from $U$ by taking the complex conjugate of each element and then transposing the matrix).

For euclidean spaces, a unitary matrix is specifically called an orthogonal matrix.

# Dual Number

The dual numbers can be thought of as an extension of the real numbers with an infinitesimal offset. A dual number may be written as $a+b\epsilon$, where $a$ and $b$ are real numbers and $\epsilon$ may be thought of as "infinitesimal". That is, $\epsilon$ is distinct from zero even though $\epsilon^2=0$.

The expression $a+b\epsilon$ is linear in $a$ and $b$ and obeys the following rule of multiplication:

$(a+b\epsilon)(a'+b'\epsilon) = aa' + (ab' + a'b)\epsilon.$

### Use in Automatic Differentiation

For any polynomial $P$, it may be readily shown that:

$P(a+b\epsilon) = P(a) + bP'(a)\epsilon.$

And for any analytic function $f:R\to R$, one can similarly extend $f$ to the dual numbers:

\begin{aligned} f(a+b\epsilon) &= \sum_{n=0}^{\infty}\frac{1}{n!} f^{(n)}(a) b^n \epsilon^n \\ &= f(a) + bf'(a)\epsilon \end{aligned}

With this, it is possible to "automatically differentiate" a function $f:R\to R$ by calculating the "epsilon" component of $f(x+\epsilon)$.

The ForwardDiff.jl Julia package follows this exact approach, utilizing a multidimensional analogue of dual numbers to calculate gradients.
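
A minimal Python sketch of the same idea (only addition and multiplication are implemented; a full implementation would cover the remaining arithmetic operations):

```python
class Dual:
    """Minimal dual number a + b*eps with eps**2 == 0."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b eps)(a' + b' eps) = aa' + (ab' + a'b) eps
        return Dual(self.a * other.a, self.a * other.b + other.a * self.b)

    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

y = f(Dual(2.0, 1.0))   # evaluate at x = 2 + eps
print(y.a, y.b)         # 17.0 14.0: f(2) and f'(2)
```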

# Euclidean Space

A euclidean space is a mathematical formalization and abstraction of physical space. The study of objects in euclidean space is known as euclidean geometry.

Formally, a euclidean space is an affine space whose underlying vector space is $\bf{R}^N$ equipped with an inner product. In particular, a euclidean space is a metric space.

# Quotient Group

A quotient group $G/N$ for a given group $G$ and a "normal" subgroup $N$ of $G$ is a group that has a "coarser" or "more relaxed" algebraic structure relative to $G$. The notion of a quotient group is essential in a number of isomorphism theorems.

### Cosets

Let $(G, \cdot)$ be a group with subgroup $(H, \cdot)$.

A left coset of $(H, \cdot)$, denoted as $g\cdot H$ or $gH$ for some $g\in G$, is the set consisting of elements of the form $gh$ for $h\in H$. That is, $gH$ is the image of $H$ under the left action induced by $g$.

Similarly, a right coset of $(H, \cdot)$, denoted $Hg$, is the image of $H$ under the right action induced by $g$.

### Normal Subgroups

The subgroup $H$ is said to be normal if $gH = Hg$ for all $g\in G$. In this case, there is no distinction between "right" and "left" cosets.

The cosets of a normal subgroup themselves form a group.

### Lagrange's Theorem

Let $G$ and $H$ be finite and $H$ normal. Lagrange's theorem states that

$\lvert G/H \rvert = \lvert G\rvert / \lvert H \rvert .$

# Metric Space

A metric space is a space equipped with an abstract notion of distance, called a metric. Metric spaces form an important class of topological spaces. The archetypal example of a metric space is a euclidean space.

Formally, a metric space is a set $X$ equipped with a metric or distance function $d:X\times X\to \mathbf R$ satisfying the following properties for all $x,y,z\in X$:

1. $d(x,y)=d(y,x)$,
2. $d(x,y)=0$ if and only if $x=y$,
3. $d(x,z)\leq d(x,y)+d(y,z)$

The last of these properties is known as the triangle inequality. An important corollary of these properties is that $d(x,y)\geq 0$ for all $x,y\in X$.

# Abelian Group

A group $G$ is abelian if its binary operator is commutative. That is, if $gh=hg$ for all $g,h\in G$.

# Vector Span

The vector span or linear span of a finite set of vectors $\{v_1,\ldots,v_n\}$ is the set of vectors that can be written as

$\lambda_1 v_1 + \ldots + \lambda_n v_n$

for some sequence of scalars $\lambda_1,\ldots,\lambda_n$. A vector span is itself a vector space. If $v_1,\ldots,v_n$ are linearly independent, then they form a basis of the span.

# Subgroup

Let $G$ be a group and $H \subset G$. Then $(H, \cdot)$ is a subgroup of $(G, \cdot)$ if $(H, \cdot)$ is itself a group. Equivalently, a non-empty $H$ is a subgroup of $G$ if $hg^{-1}\in H$ whenever $h,g\in H$.

# Sigma Algebra

A sigma algebra is a space of mathematical statements that may be combined using a countable number of boolean operations.

More precisely, a sigma algebra $\Sigma$ over a set $\Omega$ is a collection of subsets in $\Omega$ with the following closure properties:

1. $\Omega\in\Sigma$
2. $B\in\Sigma\implies \Omega-B\in\Sigma$
3. $B_1,B_2,\ldots\in\Sigma\implies\bigcup B_n\in\Sigma$

The tuple $(\Omega,\Sigma)$ is called a measurable space. $\Sigma'$ is said to be a sub-sigma algebra of $\Sigma$ if $(\Omega,\Sigma')$ is measurable and $\Sigma'\subset\Sigma$. In this case, $\Sigma'$ is said to be "coarser" than $\Sigma$.

Elements of a sigma algebra are called measurable sets.

### Generating Sigma Algebras

Sigma algebras are often generated from other sigma algebras:

• Intersections: The intersection of any number (including uncountably infinite!) of sigma algebras over some set is also a sigma algebra. This means it is always possible to define a "coarsest" sigma algebra satisfying some desired property.
• Products: Suppose $(X,\Sigma_X)$ and $(Y,\Sigma_Y)$ are measurable spaces. Then it is possible to construct the product sigma algebra $\Sigma_X \times \Sigma_Y$, generated by sets of the form $B_X \times B_Y$ for $B_X\in\Sigma_X$ and $B_Y\in\Sigma_Y$.
• Pre-images: Let $f:X\to Y$ be some map with $(Y,\Sigma)$ being measurable. Then the preimages $\{f^{-1}(\xi) \mid \xi\in\Sigma \}$ of $\Sigma$-measurable sets form a sigma algebra on $X$. This sigma algebra, denoted $\sigma (f)$, happens to be the coarsest sigma algebra for which $f$ is "measurable".

# Eigenvalue

Let $V$ be a vector space and $L:V\to V$ a linear operator. An eigenvalue $\lambda$ of $L$ is a scalar such that $L(v)=\lambda v$ for some non-zero $v$, called an eigenvector. The set of eigenvalues of $L$ forms the eigenspectrum of $L$. The set of all linear combinations of eigenvectors corresponding to a given eigenvalue forms an eigenspace.

### Properties of Eigenvalues

The following properties hold regardless of the dimensionality of $V$:

• All non-zero vectors belonging to an eigenspace are eigenvectors.
• Let $v_1, \ldots, v_n$ be eigenvectors of $L$ with corresponding distinct eigenvalues $\lambda_1,\ldots,\lambda_n$. Then $v_1,\ldots,v_n$ are linearly independent. Correspondingly, the number of distinct eigenvalues cannot exceed the dimension of $V$.

The following properties apply when $V$ is finite-dimensional and the underlying field is real or complex.

• The determinant of $L$ can be calculated as the product of $L$'s algebraically distinct eigenvalues.
• Similarly, the trace of $L$ can be calculated as the sum of $L$'s algebraically distinct eigenvalues.
• Let $[L]$ be a matrix representation of $L$ (with respect to any basis). Then the eigenvalues of $L$ are the roots of the characteristic polynomial $P(\lambda) = \text{det}([L]-\lambda I)$. The dimension of an eigenspace (the geometric multiplicity) is at most the multiplicity of the corresponding root (the algebraic multiplicity).

Here, "algebraically distinct" eigenvalues means that the eigenvalues appear as algebraically distinct roots in the charachteristic polynomial. That is, let the characteristic polynomial of $L$ be written as

$P(\lambda) = a \prod (\lambda-\lambda_i).$

Then

\begin{aligned} \text{det }L&=\prod\lambda_i \\ \text{tr }L&=\sum\lambda_i. \end{aligned}
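
A quick numerical check of the eigenvector definition and these two identities (a numpy sketch with an arbitrary matrix):

```python
import numpy as np

L = np.array([[2.0, 1.0],
              [1.0, 2.0]])

evals, evecs = np.linalg.eig(L)
for lam, v in zip(evals, evecs.T):     # columns of evecs are eigenvectors
    assert np.allclose(L @ v, lam * v)

assert np.isclose(np.prod(evals), np.linalg.det(L))  # det = product
assert np.isclose(np.sum(evals), np.trace(L))        # trace = sum
```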

# Characteristic Polynomial (Matrix)

Let $M$ be a square matrix with real or complex entries. The characteristic polynomial $P$ of this matrix is defined as:

$P(\lambda) = \text{det } (M - \lambda I).$

Here, $I$ is the identity matrix of the same dimension as $M$. Of interest is this polynomial written in factored form:

$P(\lambda) = a \prod_i (\lambda - \lambda_i).$

Here, the $\lambda_i$'s are the "algebraically distinct" eigenvalues of $M$ if $M$ is a complex matrix. Otherwise, if $M$ is a real matrix, then only the real roots are considered eigenvalues.