On Linux, procfs
is a virtual filesystem, mounted at /proc
, containing information on processes, threads, and the overall system.
Directories of the form /proc/[0-9]+
contain process-specific information. Paths of the form /proc/[a-z]+
contain system-specific information.
Sourced from the RHEL 6 Deployment Guide.
Process-specific data:

- /proc/$pid/ is a directory containing process information for the process with identifier $pid.
- /proc/self/ links to this directory for the calling process.
- /proc/$pid/cwd links to the process's working directory.
- /proc/$pid/fd/ is a directory containing links to the file descriptors opened by the process.
- /proc/$pid/environ contains the process's environment variables.
- /proc/$pid/exe links to the process's executable.
- /proc/$pid/maps contains the process's memory maps.
- /proc/$pid/mem provides a mapping to the process's memory. This file is not normally readable without attaching to the process via ptrace.
- /proc/$pid/task/$tid/ is a directory containing information on the process's thread with identifier $tid.
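Here is a minimal Python sketch (Linux-only, and not part of the original notes) that reads a few of these entries for the calling process via /proc/self:

```python
import os

pid = os.getpid()
exe = os.readlink("/proc/self/exe")  # the running executable
cwd = os.readlink("/proc/self/cwd")  # the working directory

# environ is NUL-delimited KEY=VALUE pairs.
with open("/proc/self/environ", "rb") as f:
    env = [e.decode() for e in f.read().split(b"\0") if e]

# fd/ holds one symlink per open file descriptor.
fds = os.listdir("/proc/self/fd")

print(f"pid={pid} exe={exe} cwd={cwd}")
print(f"{len(env)} environment variables, {len(fds)} open file descriptors")
```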
System-wide data:

- /proc/bus/ contains information about available buses. In particular, /proc/bus/pci contains information about available PCI devices.
- /proc/cpuinfo contains information about the system's CPUs (model name, cache size, feature flags, …).
- /proc/filesystems contains a list of filesystems supported by the kernel.
- /proc/iomem maps memory regions to physical devices.
- /proc/kcore contains a view into the system's memory.
- /proc/loadavg shows the system load averaged over the last 1, 5, and 15 minutes.
- /proc/locks lists the file locks held by the kernel.
- /proc/meminfo displays statistics on memory usage.
- /proc/modules contains a list of loaded kernel modules.
- /proc/mounts contains a list of filesystem mounts.
- /proc/stat contains a large collection of statistics gathered since the system was last restarted.
- /proc/uptime shows how long the system has been running since the last restart.
- /proc/version shows the kernel version, including compiler info.
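Similarly, a minimal sketch (Linux-only; the MemAvailable field assumes a reasonably modern kernel) that parses two of the system-wide files:

```python
def read_meminfo():
    """Parse /proc/meminfo into a dict of field name -> value in kB."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            info[key] = int(rest.split()[0])  # first token is the number
    return info

with open("/proc/uptime") as f:
    uptime_seconds = float(f.read().split()[0])

mem = read_meminfo()
print(f"up {uptime_seconds:.0f}s, "
      f"{mem['MemAvailable']} / {mem['MemTotal']} kB available")
```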
Matrices may be produced from binary and unary operations on other matrices. In the following, suppose that $A$ and $B$ are matrices with elements $a_{ij}$ and $b_{ij}$, respectively.
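The original list of operations is elided here; as an illustration, the following numpy sketch (numpy being an assumption, not part of the note) shows a few common examples:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])

S = A + B    # elementwise sum: s_ij = a_ij + b_ij
P = A @ B    # matrix product: p_ij = sum_k a_ik * b_kj
T = A.T      # transpose: t_ij = a_ji
C = 2.0 * A  # scalar multiple: c_ij = 2 * a_ij
```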
An identity matrix is a square matrix with 1's on the diagonal and 0's everywhere else. The identity matrix is often written as $I$ (or $I_n$ for the $n \times n$ case). For example,

$$I_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

The identity matrix acts as the identity element of the general linear group $GL_n$. That is, for any matrix $A \in GL_n$, $AI = IA = A$.
The vector span or linear span of a finite set of vectors $v_1, \dots, v_n$ is the set of vectors that can be written as

$$a_1 v_1 + a_2 v_2 + \cdots + a_n v_n$$

for some sequence of scalars $a_1, \dots, a_n$. A vector span is itself a vector space. If $v_1, \dots, v_n$ are linearly independent, then they form a basis of the span.
Tensors are the fundamental tools of multilinear algebra, describing the relationship between multiple vectors and covectors.
Let $V$ be a finite-dimensional vector space over some scalar field $F$. And let $V^*$ be its corresponding dual space. A tensor is a multilinear map of the form

$$T : \underbrace{V^* \times \cdots \times V^*}_{p} \times \underbrace{V \times \cdots \times V}_{q} \to F$$

Such a tensor is said to be of type $(p, q)$ (or $\binom{p}{q}$ in some literature). If $p = 0$, the tensor is said to be covariant. If $q = 0$, the tensor is said to be contravariant. Otherwise, the tensor is said to be mixed.

Let $e_1, \dots, e_n$ be a basis of $V$. And let $\sigma_1, \dots, \sigma_n$ be the corresponding basis in $V^*$ (that is, $\sigma_i(e_j) = \delta_{ij}$). Then a natural basis for the space of $(p, q)$-typed tensors on $V$ is given by

$$e_{i_1} \otimes \cdots \otimes e_{i_p} \otimes \sigma_{j_1} \otimes \cdots \otimes \sigma_{j_q}$$

for all possible sequences $i_1, \dots, i_p$ and $j_1, \dots, j_q$ in $\{1, \dots, n\}$. The dimension of the space of $(p, q)$-typed tensors is thus $n^{p+q}$.
Cayley's theorem asserts that a permutation group and an abstract group are essentially equivalent concepts. It is clear that a permutation group meets the definition of an abstract group. Cayley's theorem states that an abstract group is isomorphic to some permutation group.

For a given $g \in G$, the bijection $L_g : G \to G$ defined by $L_g(x) = gx$ is the left action by $g$. Similarly, the map $R_g(x) = xg$ is the right action by $g$. The mapping from $g$ to the corresponding left action (or right action) is an isomorphism onto a group of permutations of $G$.
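As a concrete (hypothetical) illustration of the Cayley embedding, the following Python snippet maps each element of $\mathbb{Z}_5$ under addition mod 5 to its left-action permutation:

```python
def left_actions(elements, op):
    """Map each group element g to the permutation x -> op(g, x),
    realizing the Cayley embedding of the group into Sym(elements)."""
    return {g: tuple(op(g, x) for x in elements) for g in elements}

# Z_5 under addition modulo 5.
table = left_actions(range(5), lambda g, x: (g + x) % 5)

# Each value is a permutation of (0, 1, 2, 3, 4); distinct elements
# yield distinct permutations, so the embedding is faithful.
for g, perm in table.items():
    print(g, perm)
```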
Minimalist, non-traditional miso soup.
500 ml of broth/stock. I use 10g of Better than Bouillon concentrated mushroom stock. Traditionally, fish stock (Dashi) would be used.
Boil the broth with other desired toppings until the toppings are soft. For the additional ingredients, just about anything would work: green onions, tofu, seaweed, fried bean curd, egg, spinach, &c. I like mine plain and consumed like tea.
Turn the heat off and temper in about 20g of miso paste with some of the warm broth, then add it back to the soup. Serve hot.
The (algebraic) dual space $V^*$ of the vector space $V$ over the scalar field $F$ consists of all linear mappings of the form $f : V \to F$. $V^*$ is itself a vector space over $F$. Elements of $V^*$ are referred to as linear functionals or covectors or dual vectors. The notion of a dual space is integral in the theory of tensors.

If $V$ is finite-dimensional, then $V^*$ shares the same dimension. That is, $V$ and $V^*$ are isomorphic. Moreover, for any non-degenerate bilinear form $B$ on $V$, the bijection $v \mapsto B(v, \cdot)$ is an isomorphism between $V$ and $V^*$. Thus, a finite-dimensional inner product space has a "natural" isomorphism to its dual.

Regardless of the dimensionality of $V$, there is a natural homomorphism from $V$ to the dual of its dual, $V^{**}$. This is because the "evaluation map" corresponding to $v \in V$, defined by $\mathrm{ev}_v(f) = f(v)$, is clearly an element of $V^{**}$. And the mapping from a vector to its evaluation map is clearly homomorphic (linear). Moreover, this homomorphism is an isomorphism for finite-dimensional $V$. Because this isomorphism is natural, double duals and other "higher order duals" are usually ignored.
Let $e_1, \dots, e_n$ form a basis of $V$. Define the dual basis as the sequence of linear functionals $\sigma_1, \dots, \sigma_n$ with the property:

$$\sigma_i(e_j) = \delta_{ij}$$

Here, $\delta_{ij} = 1$ if $i = j$, and $\delta_{ij} = 0$ otherwise (this quantity is sometimes called the Kronecker delta). The dual basis is a basis on $V^*$.
| Item | Variant | kcal / 100g |
|---|---|---|
| Barley | Raw | 353 |
| | Cooked | 123 |
| Rice | White, Raw | 365 |
| | White, Cooked | 130 |
| | Brown, Raw | 370 |
| | Brown, Cooked | 111 |
| Oats | Raw | 389 |
| | Cooked | 63 |
| Item | Variant | kcal / 100g |
|---|---|---|
| Bread | White | 263 |
| | Rye | 260 |
| | Multigrain | 265 |
| | Pita | 275 |
| | Bagel | 250 |
| | English Muffin | 235 |
A sigma algebra is a space of mathematical statements that may be combined using a countable number of boolean operations.
More precisely, a sigma algebra $\mathcal{F}$ over a set $X$ is a collection of subsets of $X$ with the following closure properties:

- $X$ itself is in $\mathcal{F}$,
- if $A \in \mathcal{F}$, then so is its complement $X \setminus A$,
- if $A_1, A_2, \ldots \in \mathcal{F}$, then so is the countable union $\bigcup_i A_i$.
The tuple $(X, \mathcal{F})$ is called a measurable space. $\mathcal{G}$ is said to be a sub-sigma algebra of $\mathcal{F}$ if $(X, \mathcal{G})$ is measurable and $\mathcal{G} \subseteq \mathcal{F}$. $\mathcal{G}$ is said to be "coarser" than $\mathcal{F}$.
Elements of a sigma algebra are called measurable sets.
Sigma algebras are often generated from other sigma algebras:
Let $V$ be a finite-dimensional real or complex vector space and $f : V \to V$ be a linear transformation. The determinant of $f$, notated as either $\det f$ or as $|f|$, is a number that may be defined in various ways.
Before defining the determinant, it is useful to list some of its properties:

- $\det(f \circ g) = (\det f)(\det g)$,
- $\det \mathrm{id} = 1$,
- $f$ is invertible if and only if $\det f \neq 0$.

Let $A$ specifically be an $n \times n$ matrix; then:

- $\det A^\top = \det A$,
- $\det(cA) = c^n \det A$,
- the determinant of a triangular matrix is the product of its diagonal entries.
Let $\lambda_1, \dots, \lambda_n$ be the algebraically distinct eigenvalues of $f$. The determinant of $f$ is defined as their product:

$$\det f = \lambda_1 \lambda_2 \cdots \lambda_n$$

Here, "algebraically distinct" refers to the fact that the eigenvalues may not be numerically distinct, but act as distinct roots of the characteristic polynomial of $f$:

$$p(\lambda) = (\lambda - \lambda_1)(\lambda - \lambda_2) \cdots (\lambda - \lambda_n)$$

This definition may seem circular since the characteristic polynomial is itself usually defined by the determinant ($p(\lambda) = \det(\lambda I - f)$). However, it is possible to define this polynomial in other ways. This is the approach taken, for example, in Axler's Linear Algebra Done Right.
It is possible to define a determinant in the context of Grassmann algebra.

Let $V$ be $n$-dimensional. Then the exterior power $\Lambda^n V$ is one-dimensional. The corresponding exterior power of $f$, $\Lambda^n f : \Lambda^n V \to \Lambda^n V$, can be written in the form $c \cdot \mathrm{id}$ for some scalar $c$. And $c$ can thus be defined as the determinant of $f$.
Let $A$ be the matrix representation of $f$ with respect to some basis of $V$. Then it is possible to define the determinant of $f$ as a function of the corresponding matrix elements $a_{ij}$.

Suppose $V$ is $n$-dimensional. A bijection of the form $\sigma : \{1, \dots, n\} \to \{1, \dots, n\}$ is called an ($n$-fold) permutation. The sign of this permutation, denoted as $\operatorname{sgn} \sigma$, is equal to 1 if $\sigma$ can be written as an even number of "swaps" (permutations that swap two elements but leave the others unchanged) and -1 if the number of swaps is odd.

Then the determinant of $f$ and $A$ is:

$$\det A = \sum_{\sigma} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i\sigma(i)}$$

Here, the above summation is over all possible $n$-fold permutations $\sigma$.
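A direct (if impractical, being $O(n \cdot n!)$) Python transcription of this formula:

```python
from itertools import permutations

def sign(perm):
    """Sign of a permutation, computed by counting inversions."""
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det(A):
    """Sum sgn(sigma) * prod_i a[i][sigma(i)] over all permutations."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        term = sign(sigma)
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total

print(det([[1, 2], [3, 4]]))  # -2, matching 1*4 - 2*3
```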
Let $A$ be an $n \times n$ matrix. The trace of this matrix is the sum of its diagonal elements:

$$\operatorname{tr} A = \sum_{i=1}^{n} a_{ii}$$

The trace can also be calculated as the sum of $A$'s eigenvalues (adjusting for multiplicities). That is, let the characteristic polynomial of $A$ be given by $p(\lambda) = (\lambda - \lambda_1)(\lambda - \lambda_2) \cdots (\lambda - \lambda_n)$. Then

$$\operatorname{tr} A = \lambda_1 + \lambda_2 + \cdots + \lambda_n$$

Since the eigenvalues of a matrix are invariant under coordinate transforms, the trace of a linear transformation of a finite-dimensional space can be defined as the trace of any of its matrix representations.
Traces satisfy the following properties:

- linearity: $\operatorname{tr}(A + B) = \operatorname{tr} A + \operatorname{tr} B$ and $\operatorname{tr}(cA) = c \operatorname{tr} A$,
- $\operatorname{tr}(AB) = \operatorname{tr}(BA)$,
- $\operatorname{tr} A^\top = \operatorname{tr} A$,
- similarity invariance: $\operatorname{tr}(P^{-1} A P) = \operatorname{tr} A$.
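A quick numerical check of these facts with numpy (an assumed dependency):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Trace as the diagonal sum vs. the sum of eigenvalues.
print(np.isclose(np.trace(A), np.linalg.eigvals(A).sum().real))

# tr(AB) == tr(BA), even though AB != BA in general.
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))
```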
A linear map is a mapping between two vector spaces that preserves linearity.
Formally, a map $f$ from vector space $V$ to vector space $W$ is linear if the following equation holds for all vectors $u, v \in V$ and scalars $a$:

$$f(au + v) = a f(u) + f(v)$$
Of course, $V$ and $W$ should have the same underlying scalar field for this to make sense.

A linear map may sometimes be called a linear transformation, especially if $V = W$. The term linear operator is also common, especially in physics.
Given a linear map $f : V \to W$, one can construct a number of relevant vector subspaces of $V$ or $W$:

- the kernel (or null space) $\ker f = \{v \in V : f(v) = 0\}$, whose dimension is called the nullity of $f$,
- the image $\operatorname{im} f = \{f(v) : v \in V\} \subseteq W$, whose dimension is called the rank of $f$.

The rank-nullity theorem states that the nullity and rank of $f$ sum to the dimension of $V$:

$$\dim \ker f + \dim \operatorname{im} f = \dim V$$
The rank-nullity theorem is basically a manifestation of the first isomorphism theorem for groups.
Consider a linear map of the form $f : F^n \to F^m$ for some scalar field $F$. Then there uniquely exists an $m \times n$ matrix $A$ such that $f(v) = Av$ for all $v \in F^n$, with $v$ being treated as a "column vector" (an $n \times 1$ matrix).

More generally, let $f : V \to W$ where $\dim V = n$ and $\dim W = m$. And consider isomorphisms $\phi : F^n \to V$ and $\psi : F^m \to W$. These isomorphisms may be identified as a basis on $V$ and a basis on $W$. With respect to the bases $\phi$ and $\psi$, one can define the matrix of $f$ as the matrix corresponding to $\psi^{-1} \circ f \circ \phi$. See Change of Basis for more details.
The algebra of linear maps corresponds directly to matrix algebra. Let $f$ and $g$ be linear maps between finite-dimensional vector spaces, and let $[\,\cdot\,]$ denote the matrix representation. Then, with respect to some given basis:

- $[f + g] = [f] + [g]$,
- $[af] = a[f]$,
- $[f \circ g] = [f][g]$.

Here, $a$ is a scalar.
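For example, with numpy (assumed for illustration), composition of maps corresponds to matrix multiplication:

```python
import numpy as np

# Matrix representations of two linear maps f, g : R^2 -> R^2
# with respect to the standard basis.
F = np.array([[1.0, 2.0], [0.0, 1.0]])   # a shear
G = np.array([[0.0, -1.0], [1.0, 0.0]])  # rotation by 90 degrees

v = np.array([3.0, 4.0])

# (f . g)(v) == F @ (G @ v) == (F @ G) @ v
assert np.allclose(F @ (G @ v), (F @ G) @ v)
```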
Let $G$ be a group and $V$ be a vector space over the field $F$. A representation with respect to these objects is a homomorphism $\rho$ from $G$ to $GL(V)$, the group of isomorphisms of $V$. That is, a representation is just a group action consisting of linear transformations. The study of group representations forms much of representation theory.

Usually, $F = \mathbb{C}$ and $G$ is finite. In this case, $GL(V)$ (or, rather, the image $\rho(G)$) may be identified as a finite set of matrices under some coordinate system.
The subspace $W \subseteq V$ is said to be invariant with respect to $\rho$ if $\rho(g)w \in W$ for all $g \in G$ and $w \in W$.

The representation is said to be irreducible over $V$ if the only invariant subspaces are $V$ and the subspace consisting of just the zero element.

An important problem of representation theory is decomposing $V$ into irreducible components. That is, writing $V = V_1 \oplus V_2 \oplus \cdots \oplus V_k$ with each $V_i$ invariant and irreducible and none of the $V_i$ equal to $V$ itself. If this is possible, then the representation is said to be fully reducible.
A homomorphism is a mapping that preserves algebraic structure. In the context of group theory, a homomorphism between a group $G$ and $H$ is a mapping $\phi : G \to H$ with the following property for all $a, b \in G$:

$$\phi(ab) = \phi(a)\phi(b)$$
$\phi$ may be categorized as a special "type" of homomorphism according to additional properties it may hold:

- a monomorphism is an injective homomorphism,
- an epimorphism is a surjective homomorphism,
- an isomorphism is a bijective homomorphism,
- an endomorphism is a homomorphism from a group to itself,
- an automorphism is a bijective endomorphism.
A quotient group $G/N$ for a given group $G$ and a "normal" subgroup $N$ of $G$ is a group that has a "coarser" or "more relaxed" algebraic structure relative to $G$. The notion of a quotient group is essential in a number of isomorphism theorems.
Let $G$ be a group with subgroup $H$.

A left coset of $H$, denoted as $gH$ for some $g \in G$, is the set consisting of elements of the form $gh$ for $h \in H$. That is, $gH$ is the image of $H$ under the left action induced by $g$.

Similarly, a right coset of $H$, denoted $Hg$, is the image of $H$ under the right action induced by $g$.

The subgroup $N$ is said to be normal if $gN = Ng$ for all $g \in G$. In this case, there is no distinction between "right" and "left" cosets.

The cosets of a normal subgroup $N$ themselves form a group $G/N$, with multiplication given by $(aN)(bN) = (ab)N$.

Let $G$ be finite and $N$ be normal. Lagrange's theorem states that

$$|G/N| = \frac{|G|}{|N|}$$
Linear Algebra is the study of linear structure.
The most basic objects are vector spaces defined over some given field. Vector spaces are basically spaces of elements (vectors) that can be combined linearly. Examples of vector spaces include the cartesian spaces $\mathbb{R}^n$, spaces of polynomials, and spaces of functions.
Vector spaces are often metric spaces via the inclusion of inner products. The archetypal example of an inner product space is a euclidean space.
Morphisms of vector spaces – that is, maps between vector spaces that preserve linearity – are called linear maps. Linear maps can be represented as matrices, rectangular numerical arrays, via a choice of basis. Matrices may be combined and manipulated numerically.
Linear maps that are also isomorphisms are called linear transformations. Linear transformations that are also isometries (preserving an inner product) are called unitary (or orthogonal for real scalar fields).
An important operation with linear transformations is their eigendecomposition to some canonical form using eigenvalues. Eigenvalues are usually defined using determinants.

Linear maps take in a single vector argument. Maps that take in multiple vector arguments and are linear componentwise are called tensors. The study of tensors belongs to multilinear algebra, a sub-field of linear algebra. This includes exterior algebra, which has extensive applications in physics and geometry.
Linear algebra has notable applications in the following:
A change of basis is a process of converting the components (or coordinates) of a vector (or matrix) with respect to one basis to components of another basis.
Let $V$ be a finite-dimensional vector space with dimension $n$. And let $B$ and $B'$ be two bases on $V$. Then there uniquely exists an $n \times n$ matrix $P$, called a transition matrix or change of basis matrix, such that

$$[v]_{B'} = P\,[v]_B$$

for all $v \in V$. Here, $[v]_B$ and $[v]_{B'}$ are the column vectors of components of $v$ with respect to $B$ and $B'$, respectively.
Transition matrices have a number of properties:

- $P$ is invertible; $P^{-1}$ is the transition matrix in the reverse direction, from $B'$ to $B$,
- the $j$th column of $P$ contains the components, with respect to $B'$, of the $j$th basis vector of $B$.

Each basis on $V$ is uniquely associated with a linear isomorphism $F^n \to V$. Let $\phi$ and $\phi'$ be the isomorphisms corresponding to $B$ and $B'$ respectively. Then the matrix $P$ is a representation of the map given by $\phi'^{-1} \circ \phi$.
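A small numpy sketch (the particular basis vectors are hypothetical, chosen for illustration):

```python
import numpy as np

# Columns are the basis vectors of B and B', expressed in the standard basis.
M_B  = np.array([[1.0, 1.0], [0.0, 1.0]])
M_B2 = np.array([[2.0, 0.0], [0.0, 1.0]])

# Transition matrix taking B-components to B'-components.
P = np.linalg.inv(M_B2) @ M_B

v_B = np.array([3.0, 4.0])                  # components of v w.r.t. B
v_B2 = P @ v_B                              # components of v w.r.t. B'
assert np.allclose(M_B2 @ v_B2, M_B @ v_B)  # same underlying vector
```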
In a topological space, a neighborhood is a set of points surrounding some particular point. Formally, a neighborhood around a given point is a set of points containing an open set that itself contains the given point. The given point is said to be in the interior of the neighborhood.
Consider a set $X$. For each $x \in X$, suppose $N(x)$ is a collection of subsets of $X$ obeying the following axioms:

- every $U \in N(x)$ contains $x$ itself,
- if $U \in N(x)$ and $U \subseteq V \subseteq X$, then $V \in N(x)$,
- if $U, V \in N(x)$, then $U \cap V \in N(x)$,
- every $U \in N(x)$ contains some $V \in N(x)$ such that $U \in N(y)$ for each $y \in V$.

Then $N$ is said to be a system of neighborhoods for $X$.

Given such a system of neighborhoods, a set $U$ is said to be open if $U \in N(x)$ for every $x \in U$. The collection of such sets forms a topological space. More importantly, every topological space can be formed in this manner. It can be shown that $N(x)$ is the collection of all neighborhoods around the point $x$.
Some authors use the above fact to define a topological space using such a system of neighborhoods instead of by the properties of its open sets. The neighborhood formulation, while less verbose, is arguably more intuitive.
Every topological space can be uniquely specified from the system of neighborhoods of each of its points.
Vector spaces are used to model a collection of objects – vectors – that can be combined in a linear way to form more vectors. Vector spaces may also be called linear spaces.
Formally, a vector space over a field $F$ of "scalars" (with $F$ usually being either $\mathbb{R}$ or $\mathbb{C}$) is a set $V$ together with two operations:

Vector Addition. A vector space forms an additive abelian group. That is, $V$ is equipped with a binary operator $+ : V \times V \to V$ satisfying the following properties for all $u, v, w \in V$:

- associativity: $(u + v) + w = u + (v + w)$,
- commutativity: $u + v = v + u$,
- identity: there exists a zero vector $0 \in V$ such that $v + 0 = v$,
- inverses: for each $v \in V$, there exists $-v \in V$ such that $v + (-v) = 0$.

Scalar multiplication. Combining the scalar $a \in F$ with the vector $v \in V$ results in a vector $av \in V$. Scalar multiplication satisfies the following properties for all scalars $a, b \in F$ and vectors $u, v \in V$:

- compatibility: $a(bv) = (ab)v$,
- identity: $1v = v$,
- distributivity over vector addition: $a(u + v) = au + av$,
- distributivity over scalar addition: $(a + b)v = av + bv$.
A linear map is a homomorphism between two vector spaces. That is, it is a map from one vector space to another vector space that preserves its linear structure (vector addition and scalar multiplication).
A vector space may be equipped with a basis. If this basis is finite, then the vector space is said to have a finite dimension. The dimension of the vector space is the cardinality of any of its bases (this cardinality is invariant across bases).

If $V$ is $n$-dimensional, then it is isomorphic to the cartesian space $F^n$ (see below).

A vector subspace is a subset $W$ of some vector space $V$ that is itself a vector space when inheriting vector addition and scalar multiplication from $V$. Equivalently, $W$ is a vector subspace whenever it is closed under vector addition and scalar multiplication.
Some common examples of vector spaces include the cartesian spaces $F^n$, spaces of matrices, spaces of polynomials, and spaces of real- or complex-valued functions.
A basis of a vector space $V$ over a scalar field $F$ is a set of vectors $B$ that are linearly independent and span $V$.

More explicitly, let $B = \{b_1, \dots, b_n\}$ be a (finite) basis. Then, every $v \in V$ can be uniquely written in the form $v = a_1 b_1 + \cdots + a_n b_n$ for some scalars $a_1, \dots, a_n \in F$.

If $B$ is finite, then $V$ is said to have a finite dimension. And the dimension of $V$ is the cardinality of $B$. Otherwise, $V$ is said to be infinite-dimensional.
All bases of a vector space share the same cardinality.
Let $V$ be $n$-dimensional ($n$ finite). Then a basis $b_1, \dots, b_n$ may be uniquely identified with a linear isomorphism $\phi : F^n \to V$ via the following construction:

$$\phi(a) = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n$$

Here, $a_i$ is the $i$th component of $a \in F^n$. $\phi^{-1}$ is an $F^n$-valued coordinate system on $V$.
Converting from one basis to another may be done using transition matrices.
Galilean Relativity refers to a theory of relativity consistent with Newton's laws.
It is named after Galileo's thought experiment involving a vessel traveling with a perfectly uniform and linear motion. An observer contained within the vessel, observing only phenomena also contained within the vessel, will have no means of determining the vessel's speed or direction of travel.
A galilean space(time) is a four-dimensional affine space $A^4$. Points in this space are called "events". The set of displacements forms a four-dimensional, real vector space $\mathbb{R}^4$.

There exists a rank-1 linear map $t : \mathbb{R}^4 \to \mathbb{R}$ mapping spatio-temporal displacements to time intervals. Two events $a$ and $b$ are simultaneous if $t(b - a) = 0$.
The three-dimensional quotient space of spatial displacements is euclidean (that is, equipped with an inner product $\langle \cdot, \cdot \rangle$). From this, the distance between simultaneous events $a$ and $b$ is defined as

$$d(a, b) = \sqrt{\langle \pi(b - a), \pi(b - a) \rangle}$$

where $\pi$ is the natural projection (epimorphism) from $\mathbb{R}^4$ onto this quotient space.
An isomorphism between galilean spaces is a bijection that preserves galilean structure (affinity, euclidean distance between simultaneous events, and time intervals).
All galilean spaces are isomorphic to $\mathbb{R} \times \mathbb{R}^3$, where the $\mathbb{R}$ (time) and $\mathbb{R}^3$ (space) components are each equipped with the standard inner product. Isomorphisms of spacetime onto $\mathbb{R} \times \mathbb{R}^3$ are called inertial reference frames or galilean coordinates.

Automorphisms of $\mathbb{R} \times \mathbb{R}^3$ are called galilean transformations, which form the galilean group. The galilean group is generated from the following galilean transformations:

- uniform motion: $(t, x) \mapsto (t, x + vt)$ for some velocity $v$,
- translations: $(t, x) \mapsto (t + s, x + w)$,
- rotations: $(t, x) \mapsto (t, Rx)$ for some orthogonal transformation $R$.
Let $A$ be a linear operator on a vector space $V$. An eigenvalue of $A$ is a scalar $\lambda$ such that $Av = \lambda v$ for some non-zero $v \in V$, called an eigenvector. The set of eigenvalues of $A$ forms the eigenspectrum of $A$. The linear combinations of all eigenvectors corresponding to a given eigenvalue form an eigenspace.

The following properties hold regardless of the dimensionality of $V$:

The following properties apply when $V$ is finite-dimensional and the underlying field is real or complex.
An inner product space is a vector space $V$ over a scalar field $F$ together with a bilinear form $\langle \cdot, \cdot \rangle : V \times V \to F$, called an inner product, that satisfies the following properties:

- symmetry (conjugate symmetry for complex scalars): $\langle u, v \rangle = \overline{\langle v, u \rangle}$,
- linearity in the first argument: $\langle au + w, v \rangle = a \langle u, v \rangle + \langle w, v \rangle$,
- positive-definiteness: $\langle v, v \rangle > 0$ for all $v \neq 0$.

A real vector space together with an inner product is called a euclidean space. An inner product space with complex scalars is sometimes called a unitary space.

The function $\|\cdot\| : V \to \mathbb{R}$ defined by $\|v\| = \sqrt{\langle v, v \rangle}$ satisfies the properties of a norm (thus an inner product space is a normed vector space). The inner product can be recovered from its norm using the polarization identity (stated here for real scalars):

$$\langle u, v \rangle = \frac{1}{4} \left( \|u + v\|^2 - \|u - v\|^2 \right)$$

The inner product norm further satisfies the famous Cauchy-Schwarz inequality:

$$|\langle u, v \rangle| \leq \|u\| \, \|v\|$$
A linear transformation is unitary if the inner product is preserved.
Let $S$ be a set of vectors in some vector space $V$. These vectors are mutually linearly dependent if there exists some finite subset $\{v_1, \dots, v_k\} \subseteq S$ and a sequence of non-zero scalars $a_1, \dots, a_k$ such that

$$a_1 v_1 + a_2 v_2 + \cdots + a_k v_k = 0$$

If $S$ is not linearly dependent, then its constituent vectors are linearly independent.
The dual numbers can be thought of as an extension of the real numbers with an infinitesimal offset. A dual number may be written as $a + b\varepsilon$, where $a$ and $b$ are real numbers. This expression is linear in $a$ and $b$ and is said to obey the following rule of multiplication:

$$(a + b\varepsilon)(c + d\varepsilon) = ac + (ad + bc)\varepsilon$$

In particular, $\varepsilon^2 = 0$.

For a polynomial $p$, it can be readily shown that:

$$p(a + b\varepsilon) = p(a) + b\,p'(a)\,\varepsilon$$

And for any analytic function $f$, one can similarly extend $f$ to the dual numbers:

$$f(a + b\varepsilon) = f(a) + b\,f'(a)\,\varepsilon$$

With this, it is possible to "automatically differentiate" a function $f$ by calculating the "epsilon" component of $f(x + \varepsilon)$. The practicality of this approach depends on how $f$ is written.
The ForwardDiff.jl Julia package follows this exact approach, utilizing a multidimensional analogue of dual numbers to calculate gradients.
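A minimal Python sketch of the same idea (the Dual class and its operator coverage are illustrative, not any particular library's API):

```python
from dataclasses import dataclass
import math

@dataclass
class Dual:
    a: float  # real part
    b: float  # epsilon (derivative) part

    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

def sin(x):
    # f(a + b*eps) = f(a) + b*f'(a)*eps
    return Dual(math.sin(x.a), x.b * math.cos(x.a))

def f(x):
    return x * x + sin(x)  # f(x) = x^2 + sin(x)

result = f(Dual(2.0, 1.0))  # seed dx = 1 at x = 2
print(result.a, result.b)   # f(2) and f'(2) = 2*2 + cos(2)
```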
A unitary transformation is an isomorphism of an inner product space. That is, it is a linear transformation $U$ preserving inner products:

$$\langle Uv, Uw \rangle = \langle v, w \rangle$$
For euclidean vector spaces ($F = \mathbb{R}$), unitary transformations are called orthonormal transformations.

The eigenspaces of a unitary transformation corresponding to distinct eigenvalues are orthogonal.
The matrix representation of a unitary transformation is called a unitary matrix. Such matrices have the following properties:

- $U^\dagger U = U U^\dagger = I$; equivalently, $U^{-1} = U^\dagger$,
- the rows (and likewise the columns) of $U$ form an orthonormal set,
- $|\det U| = 1$.

Here, $U^\dagger$ denotes the conjugate transpose of $U$ (obtained from $U$ by taking the complex conjugate of each element and then transposing the matrix).
For euclidean spaces, a unitary matrix is specifically called an orthonormal matrix.
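A numpy check of these properties on a plane rotation (a real, i.e. orthonormal, unitary matrix):

```python
import numpy as np

theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v, w = np.array([1.0, 2.0]), np.array([-3.0, 0.5])

print(np.allclose(U.T @ U, np.eye(2)))         # U^dagger U = I (real case)
print(np.isclose((U @ v) @ (U @ w), v @ w))    # inner product preserved
print(np.isclose(abs(np.linalg.det(U)), 1.0))  # |det U| = 1
```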
An affine space can be thought of as a vector space that has "lost its origin".

Formally, an affine space is a set $A$ of points together with a vector space $V$ of "displacements". Points in $A$ can be subtracted to form a vector: $q - p \in V$ for $p, q \in A$. Subtraction must satisfy the following properties:

- $(p - q) + (q - r) = p - r$ for all $p, q, r \in A$,
- for every $p \in A$ and $v \in V$, there exists a unique $q \in A$ such that $q - p = v$.

These conditions are called the Weyl axioms. An alternative formulation is to assert that $V$ acts on $A$ as a free and transitive action (treating $V$ as a group under vector addition).

When $V$ is equipped with an inner product, then $A$ becomes a metric space with the metric $d(p, q) = \|p - q\|$. With this, $A$ and $V$ are clearly isometric.
A group is a simple algebraic structure that represents a composable set of permutations. There are two popular definitions of a group: one as a set of permutations and one as a set equipped with a binary operator. Both definitions are essentially equivalent by Cayley's Theorem.
One formulation of a group is as a set of permutations. That is, a permutation group is a set $G$ of bijections of some set $X$ with the following closure properties:

- if $f, g \in G$, then the composition $f \circ g \in G$,
- if $f \in G$, then the inverse $f^{-1} \in G$.
An abstract group is a set $G$ together with a mapping $\cdot : G \times G \to G$ called a binary operator. We write $\cdot(a, b)$ as $a \cdot b$ or simply as $ab$.

An abstract group must satisfy the following properties:

- associativity: $(ab)c = a(bc)$ for all $a, b, c \in G$,
- identity: there exists an $e \in G$ such that $ea = ae = a$ for all $a \in G$,
- inverses: for each $a \in G$, there exists an $a^{-1} \in G$ such that $aa^{-1} = a^{-1}a = e$.
It is clear that a permutation group is an abstract group by having composition as the chosen binary operator. Cayley's theorem states that the reverse is true in a sense.
Note: For notational convenience, the binary operator is usually assumed and a group is identified by the underlying set. So, for example, claiming that "$a$ is an element of the group $G$" means that "$a$ is an element of the underlying set of the group $(G, \cdot)$".

If $H \subseteq G$, then $H$ is a subgroup of $G$ if $H$ is itself a group under the same binary operator. Equivalently, $H$ is a subgroup whenever it is closed under inversion and composition.

A group is abelian if its binary operator is commutative. That is, whenever $ab = ba$ for all $a, b \in G$.

The general linear group of degree $n$ and field $F$ is the set of invertible $n \times n$ matrices over the field $F$. The general linear group is a Lie group with respect to matrix multiplication. It may be denoted as $GL_n(F)$ or $GL(n, F)$.
A combination of Chinese congee and Greek avgolemono. Good use of leftover chicken.
Combine one part jasmine rice to about ten parts chicken stock, by weight. Half a cup of rice (or 115g) and four cups of stock should be good for four servings. Season with salt and add coarsely chopped ginger to taste. Cook in a pressure cooker until thick and hot. Remove the ginger. Optionally thicken further with an immersion blender.
Add cooked chicken meat (thighs work best), shredded or chopped into small pieces. Add in one large egg yolk per two cups of stock. Temper the yolks with the porridge to prevent curdling. Add lemon juice to taste, but be generous. Add in some rice flake noodles for texture. Wait for rice noodles to soften before serving. Garnish with whatever you like, such as: green onion, fried scallions, fermented soy beans, chili paste, hot sauce, cilantro, mint, dill, youtiao, a poached egg.
Vegetarian Alternative: Replace chicken and chicken stock with cooked mushrooms and mushroom stock.
A quotient (vector) space is an extension of the concept of a quotient group for vector spaces.
Let $W$ be a vector subspace of $V$. Since $W$ is a normal subgroup of $V$ with respect to vector addition, one can construct the quotient group $V/W$ consisting of the cosets of $W$ (affine subspaces parallel to $W$).

Moreover, $V/W$ also inherits a form of scalar multiplication, making $V/W$ itself a vector space. Let $U = v + W$ be a vector in the quotient space. That is, $U$ is a coset of $W$. Scalar multiplication of every point in $U$ yields another coset: $aU = av + W$. Hence, scalar multiplication is well-defined on the quotient space.
The definition of a quotient vector space can be readily generalized to a module. That is, one can construct quotient modules in a similar fashion.
A topological space is a set equipped with topological structure. Essentially, a topological space is defined so that the "limit" of a sequence of "points" in the space may be defined.

A topological space may be defined as a set $X$ together with a topology $\tau$, which consists of subsets of $X$. Elements of this topology are called open sets. A topology satisfies the following axioms:

- $\varnothing$ and $X$ itself are open,
- arbitrary unions of open sets are open,
- finite intersections of open sets are open.

A set is said to be closed if its complement is open. The set of closed sets uniquely specifies the space's topology.

Topologies are rarely defined by specifying the open sets directly. Rather, they are usually generated from simpler constructs. For example:
Let $F$ be a field of scalars, and let $F^n$ be the corresponding $n$-dimensional cartesian vector space. The standard basis is the basis $e_1, \dots, e_n$ of $F^n$ defined such that the $j$th component of $e_i$ is 1 if $i = j$ and 0 otherwise.
A matrix is a rectangular array of numbers, commonly used in applications of linear algebra.
For example, the following is a two-by-four ($2 \times 4$) matrix of real numbers:

$$\begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \end{pmatrix}$$
This matrix has two rows and four columns.
A matrix may generally contain any number of rows or columns. The entries of a matrix usually belong to some specified field, usually $\mathbb{R}$ or $\mathbb{C}$.

Matrix variables are often denoted by capital letters ($A$), sometimes bolded ($\mathbf{A}$).

An entry of a matrix may be located by specifying which row and column the entry belongs to. This can be done by supplying an "index" for the desired row and column. For example, denote the entries of the aforementioned matrix as $a_{ij}$. Then $a_{12}$ is the entry in the first row (counting top-to-bottom) and second column (counting left-to-right).
Matrices may be combined and transformed according to the conventions of matrix algebra.
In linear algebra, an $m \times n$ matrix is a representation of a linear map of the form $f : F^n \to F^m$ for some field $F$.
A bilinear form over a vector space $V$ (over some scalar field $F$) is a mapping $B : V \times V \to F$ that is linear in each argument.

Bilinear forms may be characterized by additional properties they possess. Let $B$ be a bilinear form. Then $B$ is:
- non-degenerate if $B(u, v) = 0$ for all $v$ only when $u = 0$,
- symmetric if $B(u, v) = B(v, u)$ for all $u, v$,
- skew-symmetric if $B(u, v) = -B(v, u)$ for all $u, v$,
- alternating if $B(v, v) = 0$ for all $v$ (alternating implies skew-symmetric),
- reflexive if $B(u, v) = 0$ implies that $B(v, u) = 0$.
Bilinear forms may be represented as matrices for finite-dimensional vector spaces. Let $V = F^n$. Then every bilinear form $B$ can be defined in the following fashion:

$$B(u, v) = u^{\top} M v$$

where $M$ is the matrix representation of $B$, and $u$ and $v$ are treated as column vectors.
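For instance, in numpy (with a hypothetical symmetric matrix $M$):

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 3.0]])  # matrix representation of a bilinear form

def B(u, v):
    return u @ M @ v  # u^T M v

u, v = np.array([1.0, 0.0]), np.array([2.0, -1.0])
print(B(u, v), B(v, u))  # equal here, since M is symmetric
```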
A normed vector space is a vector space $V$ over $F$ together with a function $\|\cdot\| : V \to \mathbb{R}$, the norm, satisfying the following properties for all $u, v \in V$ and $a \in F$:

- positivity: $\|v\| \geq 0$, with $\|v\| = 0$ if and only if $v = 0$,
- absolute homogeneity: $\|av\| = |a| \|v\|$,
- the triangle inequality: $\|u + v\| \leq \|u\| + \|v\|$.

A normed vector space is also a metric space under the metric $d(u, v) = \|u - v\|$.

A seminorm has the properties of a norm, except that $\|v\|$ may be zero for nonzero $v$.