What is Geometric Algebra?

Basics of Geometric Algebra

Geometric algebra is the Clifford algebra of a real finite-dimensional vector space, or the algebra that results when the vector space is extended with a product of vectors (the geometric product) that is associative, left and right distributive, and yields a real number for the square of any vector [HS84], [DL03]. The elements of the geometric algebra are called multivectors and consist of linear combinations of scalars, vectors, and geometric products of two or more vectors. The additional axioms for the geometric algebra are that for any vectors \(a\), \(b\), and \(c\) in the base vector space ([DL03],p85):

\[\begin{split}\begin{array}{c} a\lp bc \rp = \lp ab \rp c \\ a\lp b+c \rp = ab+ac \\ \lp a + b \rp c = ac+bc \\ aa = a^{2} \in \Re. \end{array}\end{split}\]

If the dot (inner) product of two vectors is defined by ([DL03],p86)

\[\be a\cdot b \equiv (ab+ba)/2, \ee\]

then we have

\[\begin{split}\begin{aligned} c &= a+b \\ c^{2} &= (a+b)^{2} \\ c^{2} &= a^{2}+ab+ba+b^{2} \\ a\cdot b &= (c^{2}-a^{2}-b^{2})/2 \in \Re \end{aligned}\end{split}\]

Thus \(a\cdot b\) is real. The objects generated from linear combinations of the geometric products of vectors are called multivectors. If a basis for the underlying vector space is the set of vectors \({\left \{{{{\eb}}_{1},\dots,{{\eb}}_{n}} \rbrc}\) (we use boldface \(\eb\)’s to denote basis vectors), a complete basis for the geometric algebra is given by the scalar \(1\), the vectors \({{\eb}}_{1},\dots,{{\eb}}_{n}\), and all geometric products of vectors

\[\be {{\eb}}_{i_{1}}{{\eb}}_{i_{2}}\dots {{\eb}}_{i_{r}} \mbox{ where } 0\le r \le n\mbox{, }1 \le i_{j} \le n \mbox{ and } i_{1}<i_{2}<\dots<i_{r} \ee\]

Each element of the complete basis is represented by a non-commutative symbol (except for the scalar \(1\)) with name \({{\eb}}_{i_{1}}\dots {{\eb}}_{i_{r}}\), so that the general multivector \({\boldsymbol{A}}\) is represented by (\(A\) is the scalar part of the multivector and the \(A^{i_{1},\dots,i_{r}}\) are scalars)

\[\begin{split}\be {\boldsymbol{A}} = A + \sum_{r=1}^{n}\sum_{\substack{i_{1},\dots,i_{r}\\ 1\le i_{j}<i_{j+1} \le n}} A^{i_{1},\dots,i_{r}}{{\eb}}_{i_{1}}{{\eb}}_{i_{2}}\dots {{\eb}}_{i_{r}} \ee\end{split}\]
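As a concrete illustration, the algebra and a general multivector can be instantiated with galgebra (a minimal sketch, assuming the galgebra and sympy packages; the Ga class and the mv() constructor are discussed later under Instantiating a Multivector):

```python
from sympy import symbols
from galgebra.ga import Ga

# Instantiate the geometric algebra of Euclidean 3-space.
xyz = symbols('x y z', real=True)
o3d = Ga('e_x e_y e_z', g=[1, 1, 1], coords=xyz)

# A general multivector: a linear combination of all 2**3 = 8 basis blades
# (1 scalar, 3 vectors, 3 bivectors, 1 trivector) with symbolic coefficients.
A = o3d.mv('A', 'mv')
print(A)
```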

The critical operation in setting up the geometric algebra is reducing the geometric product of any two bases to a linear combination of bases so that we can calculate a multiplication table for the bases. Since the geometric product is associative we can use the relation (which follows from the definition \(a\cdot b \equiv (ab+ba)/2\) for two vectors, and is a scalar)

\[\be \label{reduce} {{\eb}}_{i_{j+1}}{{\eb}}_{i_{j}} = 2{{\eb}}_{i_{j+1}}\cdot {{\eb}}_{i_{j}} - {{\eb}}_{i_{j}}{{\eb}}_{i_{j+1}} \ee\]

These processes are repeated until every basis list in \({\boldsymbol{A}}\) is in normal (ascending) order with no repeated elements. As an example consider the following

\[\begin{split}\begin{aligned} {{\eb}}_{3}{{\eb}}_{2}{{\eb}}_{1} &= (2({{\eb}}_{2}\cdot {{\eb}}_{3}) - {{\eb}}_{2}{{\eb}}_{3}){{\eb}}_{1} \\ &= 2{\lp {{{\eb}}_{2}\cdot {{\eb}}_{3}} \rp }{{\eb}}_{1} - {{\eb}}_{2}{{\eb}}_{3}{{\eb}}_{1} \\ &= 2{\lp {{{\eb}}_{2}\cdot {{\eb}}_{3}} \rp }{{\eb}}_{1} - {{\eb}}_{2}{\lp {2{\lp {{{\eb}}_{1}\cdot {{\eb}}_{3}} \rp }-{{\eb}}_{1}{{\eb}}_{3}} \rp } \\ &= 2{\lp {{\lp {{{\eb}}_{2}\cdot {{\eb}}_{3}} \rp }{{\eb}}_{1}-{\lp {{{\eb}}_{1}\cdot {{\eb}}_{3}} \rp }{{\eb}}_{2}} \rp }+{{\eb}}_{2}{{\eb}}_{1}{{\eb}}_{3} \\ &= 2{\lp {{\lp {{{\eb}}_{2}\cdot {{\eb}}_{3}} \rp }{{\eb}}_{1}-{\lp {{{\eb}}_{1}\cdot {{\eb}}_{3}} \rp }{{\eb}}_{2}+ {\lp {{{\eb}}_{1}\cdot {{\eb}}_{2}} \rp }{{\eb}}_{3}} \rp }-{{\eb}}_{1}{{\eb}}_{2}{{\eb}}_{3} \end{aligned}\end{split}\]

which results from repeated application of eq. (\(\ref{reduce}\)). If the product of basis vectors contains repeated factors, eq. (\(\ref{reduce}\)) can be used to bring the repeated factors next to one another so that if \({{\eb}}_{i_{j}} = {{\eb}}_{i_{j+1}}\) then \({{\eb}}_{i_{j}}{{\eb}}_{i_{j+1}} = {{\eb}}_{i_{j}}\cdot {{\eb}}_{i_{j+1}}\), which is a scalar that commutes with all the terms in the product and can be brought to the front of the product. Each repeated pair of vectors thus reduces a geometric product of \(r\) factors to a scalar multiple of a product of \(r-2\) factors. The number of bases in the multivector algebra is \(2^{n}\) and the number containing \(r\) factors is \({n\choose r}\), which is the number of combinations of \(n\) things taken \(r\) at a time (the binomial coefficient).
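This reduction is easy to sketch in plain Python for the special case of a diagonal metric, where \({{\eb}}_{i}\cdot{{\eb}}_{j} = 0\) for \(i \ne j\) and eq (\(\ref{reduce}\)) reduces to a sign flip for each swap of adjacent distinct factors (a toy sketch only; galgebra performs the reduction for a general metric symbolically):

```python
def reduce_basis(indices, metric):
    """Reduce a product of basis vectors over a diagonal metric to a signed,
    normally ordered basis blade, applying e_i e_j = 2(e_i . e_j) - e_j e_i.

    indices: list of basis-vector indices; metric: dict i -> e_i**2.
    Returns (coefficient, tuple of strictly ascending indices)."""
    coef, idx, i = 1, list(indices), 0
    while i < len(idx) - 1:
        if idx[i] == idx[i + 1]:               # repeated factor: e_i e_i = g_ii
            coef *= metric[idx[i]]
            del idx[i:i + 2]
            i = max(i - 1, 0)
        elif idx[i] > idx[i + 1]:              # e_j e_i = -e_i e_j for i != j
            idx[i], idx[i + 1] = idx[i + 1], idx[i]
            coef, i = -coef, max(i - 1, 0)
        else:
            i += 1
    return coef, tuple(idx)

# The example above: e_3 e_2 e_1 = -e_1 e_2 e_3 in an orthonormal algebra.
print(reduce_basis([3, 2, 1], {1: 1, 2: 1, 3: 1}))   # (-1, (1, 2, 3))
```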

The other construction required for formulating the geometric algebra is the outer or wedge product (symbol \({\wedge}\)) of \(r\) vectors denoted by \(a_{1}{\wedge}\dots{\wedge}a_{r}\). The wedge product of \(r\) vectors is called an \(r\)-blade and is defined by ([DL03],p86)

\[\be a_{1}{\wedge}\dots{\wedge}a_{r} \equiv \frac{1}{r!}\sum_{i_{j_{1}}\dots i_{j_{r}}} \epsilon^{i_{j_{1}}\dots i_{j_{r}}}a_{i_{j_{1}}}\dots a_{i_{j_{r}}} \ee\]

where \(\epsilon^{i_{j_{1}}\dots i_{j_{r}}}\) is the contravariant permutation symbol which is \(+1\) for an even permutation of the superscripts, \(0\) if any superscripts are repeated, and \(-1\) for an odd permutation of the superscripts. From the definition \(a_{1}{\wedge}\dots{\wedge}a_{r}\) is antisymmetric in all its arguments and the following relation for the wedge product of a vector \(a\) and an \(r\)-blade \(B_{r}\) can be derived

\[\be \label{wedge} a{\wedge}B_{r} = (aB_{r}+(-1)^{r}B_{r}a)/2 \ee\]

Using eq. (\(\ref{wedge}\)) one can represent the wedge product of all the basis vectors in terms of the geometric product of all the basis vectors, so that one can solve (the system of equations is lower triangular) for the geometric product of all the basis vectors in terms of the wedge product of all the basis vectors. Thus a general multivector \({\boldsymbol{B}}\) can be represented as a linear combination of a scalar and the basis blades.

\[\be {\boldsymbol{B}} = B + \sum_{r=1}^{n}\sum_{\substack{i_{1},\dots,i_{r}\\ 1\le i_{j}<i_{j+1} \le n}} B^{i_{1},\dots,i_{r}}{{\eb}}_{i_{1}}{\wedge}{{\eb}}_{i_{2}}{\wedge}\dots{\wedge}{{\eb}}_{i_{r}} \ee\]

Using the blades \({{\eb}}_{i_{1}}{\wedge}{{\eb}}_{i_{2}}{\wedge}\dots{\wedge}{{\eb}}_{i_{r}}\) creates a graded algebra where \(r\) is the grade of the basis blades. The grade-\(r\) part of \({\boldsymbol{B}}\) is the linear combination of all terms with grade \(r\) basis blades.

Grade Projection

The scalar part of \({\boldsymbol{B}}\) is defined to be grade-\(0\). Now that the blade expansion of \({\boldsymbol{B}}\) is defined we can also define the grade projection operator \({\left <{{\boldsymbol{B}}} \right >_{r}}\) by

\[\be {\left <{{\boldsymbol{B}}} \right >_{r}} = \sum_{\substack{i_{1},\dots,i_{r}\\ 1\le i_{j}<i_{j+1} \le n}} B^{i_{1},\dots,i_{r}}{{\eb}}_{i_{1}}{\wedge}{{\eb}}_{i_{2}}{\wedge}\dots{\wedge}{{\eb}}_{i_{r}} \ee\]

and

\[\be {\left <{{\boldsymbol{B}}} \right >_{}} \equiv {\left <{{\boldsymbol{B}}} \right >_{0}} = B \ee\]
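As a hedged galgebra example (assuming the Mv.grade() method), the grade projection operator applied to a general multivector:

```python
from sympy import symbols
from galgebra.ga import Ga

o3d = Ga('e_x e_y e_z', g=[1, 1, 1], coords=symbols('x y z', real=True))
B = o3d.mv('B', 'mv')       # general multivector

print(B.grade(0))            # <B>_0 = B, the scalar part
print(B.grade(2))            # <B>_2, the bivector part
```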

Multivector Products

Then if \({\boldsymbol{A}}_{r}\) is an \(r\)-grade multivector and \({\boldsymbol{B}}_{s}\) is an \(s\)-grade multivector we have

\[\be {\boldsymbol{A}}_{r}{\boldsymbol{B}}_{s} = {\left <{{\boldsymbol{A}}_{r}{\boldsymbol{B}}_{s}} \right >_{{\left |{r-s}\right |}}}+{\left <{{\boldsymbol{A}}_{r}{\boldsymbol{B}}_{s}} \right >_{{\left |{r-s}\right |}+2}}+\cdots+{\left <{{\boldsymbol{A}}_{r}{\boldsymbol{B}}_{s}} \right >_{r+s}} \ee\]

and define ([HS84],p6)

\[\begin{split}\begin{aligned} {\boldsymbol{A}}_{r}{\wedge}{\boldsymbol{B}}_{s} &\equiv {\left <{{\boldsymbol{A}}_{r}{\boldsymbol{B}}_{s}} \right >_{r+s}} \\ {\boldsymbol{A}}_{r}\cdot{\boldsymbol{B}}_{s} &\equiv {\left \{ { \begin{array}{cc} r\mbox{ and }s \ne 0: & {\left <{{\boldsymbol{A}}_{r}{\boldsymbol{B}}_{s}} \right >_{{\left |{r-s}\right |}}} \\ r\mbox{ or }s = 0: & 0 \end{array}} \right \}} \end{aligned}\end{split}\]

where \({\boldsymbol{A}}_{r}\cdot{\boldsymbol{B}}_{s}\) is called the dot or inner product of two pure grade multivectors. For the case of two non-pure grade multivectors

\[\begin{split}\begin{aligned} {\boldsymbol{A}}{\wedge}{\boldsymbol{B}} &= \sum_{r,s}{\left <{{\boldsymbol{A}}} \right >_{r}}{\wedge}{\left <{{\boldsymbol{B}}} \right >_{{s}}} \\ {\boldsymbol{A}}\cdot{\boldsymbol{B}} &= \sum_{r,s\ne 0}{\left <{{\boldsymbol{A}}} \right >_{r}}\cdot{\left <{{\boldsymbol{B}}} \right >_{{s}}} \end{aligned}\end{split}\]

Two other products, the left (\(\rfloor\)) and right (\(\lfloor\)) contractions, are defined by

\[\begin{split}\begin{aligned} {\boldsymbol{A}}\lfloor{\boldsymbol{B}} &\equiv \sum_{r,s}{\left \{ {\begin{array}{cc} {\left <{{\boldsymbol{A}}_r{\boldsymbol{B}}_{s}} \right >_{r-s}} & r \ge s \\ 0 & r < s \end{array}} \right \}} \\ {\boldsymbol{A}}\rfloor{\boldsymbol{B}} &\equiv \sum_{r,s}{\left \{ {\begin{array}{cc} {\left <{{\boldsymbol{A}}_{r}{\boldsymbol{B}}_{s}} \right >_{s-r}} & s \ge r \\ 0 & s < r\end{array}} \right \}} \end{aligned}\end{split}\]
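In galgebra these products are bound to Python operators: * is the geometric product, ^ the outer product, | the inner product, and < and > the left and right contractions. A short sketch, assuming those conventions:

```python
from sympy import symbols
from galgebra.ga import Ga

o3d = Ga('e_x e_y e_z', g=[1, 1, 1], coords=symbols('x y z', real=True))
a = o3d.mv('a', 'vector')
b = o3d.mv('b', 'vector')

print(a * b)    # geometric product: (a.b) + (a^b)
print(a ^ b)    # outer (wedge) product
print(a | b)    # inner (dot) product
print(a < b)    # left contraction
print(a > b)    # right contraction
```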

Reverse of Multivector

A final operation for multivectors is the reverse. If a multivector \({\boldsymbol{A}}\) is the geometric product of \(r\) vectors (versor) so that \({\boldsymbol{A}} = a_{1}\dots a_{r}\) the reverse is defined by

\[\begin{aligned} {\boldsymbol{A}}^{{\dagger}} \equiv a_{r}\dots a_{1} \end{aligned}\]

where for a general multivector we have (the sum of the reverses of versors)

\[\be {\boldsymbol{A}}^{{\dagger}} = A + \sum_{r=1}^{n}(-1)^{r(r-1)/2}\sum_{\substack{i_{1},\dots,i_{r}\\ 1\le i_{j}<i_{j+1} \le n}} A^{i_{1},\dots,i_{r}}{{\eb}}_{i_{1}}{\wedge}{{\eb}}_{i_{2}}{\wedge}\dots{\wedge}{{\eb}}_{i_{r}} \ee\]

Note that if \({\boldsymbol{A}}\) is a versor then \({\boldsymbol{A}}{\boldsymbol{A}}^{{\dagger}}\in\Re\) and, provided \({\boldsymbol{A}}{\boldsymbol{A}}^{{\dagger}} \ne 0\),

\[\be {\boldsymbol{A}}^{-1} = {\displaystyle\frac{{\boldsymbol{A}}^{{\dagger}}}{{\boldsymbol{AA}}^{{\dagger}}}} \ee\]

The reverse is important in the theory of rotations in \(n\)-dimensions. If \(R\) is the product of an even number of vectors and \(RR^{{\dagger}} = 1\) then \(RaR^{{\dagger}}\) is a composition of rotations of the vector \(a\). If \(R\) is the product of two vectors then the plane that \(R\) defines is the plane of the rotation. That is to say that \(RaR^{{\dagger}}\) rotates the component of \(a\) that is projected into the plane defined by \(u\) and \(v\) where \(R=uv\). \(R\) may be written \(R = e^{\frac{\theta}{2}U}\), where \(\theta\) is the angle of rotation and \(U\) is a unit blade \(\lp U^{2} = \pm 1\rp\) that defines the plane of rotation.
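A hedged rotor sketch (assuming the Mv.rev() method returns the reverse): with \(U = {{\eb}}_{x}{\wedge}{{\eb}}_{y}\), so that \(U^{2} = -1\), the rotor is built from its scalar and bivector parts:

```python
from sympy import symbols, pi, cos, sin
from galgebra.ga import Ga

o3d = Ga('e_x e_y e_z', g=[1, 1, 1], coords=symbols('x y z', real=True))
ex, ey, ez = o3d.mv()

theta = pi / 2
U = ex ^ ey                                  # unit blade, U*U = -1
R = cos(theta / 2) + U * sin(theta / 2)      # rotor e^{theta U / 2}

print(R * R.rev())          # 1: R R^dagger = 1
print(R * ex * R.rev())     # -e_y: e_x rotated by theta in the U plane
```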

Reciprocal Frames

If we have \(M\) linearly independent vectors (a frame), \(a_{1},\dots,a_{M}\), then the reciprocal frame is \(a^{1},\dots,a^{M}\) where \(a_{i}\cdot a^{j} = \delta_{i}^{j}\), \(\delta_{i}^{j}\) is the Kronecker delta (zero if \(i \ne j\) and one if \(i = j\)). The reciprocal frame is constructed as follows:

\[\be E_{M} = a_{1}{\wedge}\dots{\wedge}a_{M} \ee\]
\[\be E_{M}^{-1} = {\displaystyle\frac{E_{M}}{E_{M}^{2}}} \ee\]

Then

\[\be a^{i} = \lp -1\rp ^{i-1}\lp a_{1}{\wedge}\dots{\wedge}\breve{a}_{i} {\wedge}\dots{\wedge}a_{M}\rp E_{M}^{-1} \ee\]

where \(\breve{a}_{i}\) indicates that \(a_{i}\) is to be deleted from the product. In the standard notation, if a vector is denoted with a subscript the reciprocal vector is denoted with a superscript. If a coordinate set is given when a geometric algebra is instantiated, the reciprocal basis vectors are calculated automatically, since they are required for geometric differentiation; the Ga member function Ga.mvr() returns the reciprocal basis in terms of the basis vectors.
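A short sketch of Ga.mvr() (for a Euclidean metric the reciprocal basis coincides with the basis):

```python
from sympy import symbols
from galgebra.ga import Ga

o3d = Ga('e_x e_y e_z', g=[1, 1, 1], coords=symbols('x y z', real=True))
ex, ey, ez = o3d.mv()       # basis vectors e_i
rx, ry, rz = o3d.mvr()      # reciprocal basis vectors e^j

print(ex | rx)   # 1: e_i . e^j = delta_i^j
print(ex | ry)   # 0
```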

Manifolds and Submanifolds

An \(m\)-dimensional vector manifold [4], \(\mathcal{M}\), is defined by a coordinate tuple (tuples are indicated by the vector accent “\(\vec{\;\;\;}\)”)

\[\be \vec{x} = \paren{x^{1},\dots,x^{m}}, \ee\]

and the differentiable mapping (\(U^{m}\) is an \(m\)-dimensional subset of \(\Re^{m}\))

\[\be \f{\bm{e}^{\mathcal{M}}}{\vec{x}}\colon U^{m}\subseteq\Re^{m}\rightarrow \mathcal{V}, \ee\]

where \(\mathcal{V}\) is a vector space with an inner product [5] (\(\cdot\)) and \({{\dim}\lp {\mathcal{V}} \rp } \ge m\).

Then a set of basis vectors for the tangent space of \(\mathcal{M}\) at \(\vec{x}\), \({{{\mathcal{T}_{\vec{x}}}\lp {\mathcal{M}} \rp }}\), are

\[\be \bm{e}_{i}^{\mathcal{M}} = \pdiff{\bm{e}^{\mathcal{M}}}{x^{i}} \ee\]

and

\[\be \f{g_{ij}^{\mathcal{M}}}{\vec{x}} = \bm{e}_{i}^{\mathcal{M}}\cdot\bm{e}_{j}^{\mathcal{M}}. \ee\]

An \(n\)-dimensional (\(n\le m\)) submanifold \(\mathcal{N}\) of \(\mathcal{M}\) is defined by a coordinate tuple

\[\be \vec{u} = \paren{u^{1},\dots,u^{n}}, \ee\]

and a differentiable mapping

\[\be \label{eq_79} \f{\vec{x}}{\vec{u}}\colon U^{n}\subseteq\Re^{n}\rightarrow U^{m}\subseteq\Re^{m}. \ee\]

Then the basis vectors for the tangent space \({{{\mathcal{T}_{\vec{u}}}\lp {\mathcal{N}} \rp }}\) are (using \({{{{\eb}}^{\mathcal{N}}}\lp {\vec{u}} \rp } = {{{{\eb}}^{\mathcal{M}}}\lp {{{\vec{x}}\lp {\vec{u}} \rp }} \rp }\) and the chain rule) [6]

\[\be \f{\bm{e}_{i}^{\mathcal{N}}}{\vec{u}} = \pdiff{\f{\bm{e}^{\mathcal{N}}}{\vec{u}}}{u^{i}} = \pdiff{\f{\bm{e}^{\mathcal{M}}}{\vec{x}}}{x^{j}}\pdiff{x^{j}}{u^{i}} = \f{\bm{e}_{j}^{\mathcal{M}}}{\f{\vec{x}}{\vec{u}}}\pdiff{x^{j}}{u^{i}}, \ee\]

and

\[\be \label{eq_81} \f{g_{ij}^{\mathcal{N}}}{\vec{u}} = \pdiff{x^{k}}{u^{i}}\pdiff{x^{l}}{u^{j}} \f{g_{kl}^{\mathcal{M}}}{\f{\vec{x}}{\vec{u}}}. \ee\]

Going back to the base manifold, \(\mathcal{M}\), note that the mapping \({{{\eb}^{\mathcal{M}}}\lp {\vec{x}} \rp }\colon U^{m}\subseteq\Re^{m}\rightarrow \mathcal{V}\) allows us to calculate an unnormalized pseudo-scalar for \({{{\mathcal{T}_{\vec{x}}}\lp {\mathcal{M}} \rp }}\),

\[\be \f{I^{\mathcal{M}}}{\vec{x}} = \f{\bm{e}_{1}^{\mathcal{M}}}{\vec{x}} \W\dots\W\f{\bm{e}_{m}^{\mathcal{M}}}{\vec{x}}. \ee\]

With the pseudo-scalar we can define a projection operator from \(\mathcal{V}\) to the tangent space of \(\mathcal{M}\) by

\[\be \f{P_{\vec{x}}}{\bm{v}} = (\bm{v}\cdot \f{I^{\mathcal{M}}}{\vec{x}}) \paren{\f{I^{\mathcal{M}}}{\vec{x}}}^{-1} \;\forall\; \bm{v}\in\mathcal{V}. \ee\]

In fact for each tangent space \({{{\mathcal{T}_{\vec{x}}}\lp {\mathcal{M}} \rp }}\) we can define a geometric algebra \({{\mathcal{G}}\lp {{{{\mathcal{T}_{\vec{x}}}\lp {\mathcal{M}} \rp }}} \rp }\) with pseudo-scalar \(I^{\mathcal{M}}\) so that if \(A \in {{\mathcal{G}}\lp {\mathcal{V}} \rp }\) then

\[\be \f{P_{\vec{x}}}{A} = \paren{A\cdot \f{I^{\mathcal{M}}}{\vec{x}}} \paren{\f{I^{\mathcal{M}}}{\vec{x}}}^{-1} \in \f{\mathcal{G}}{\Tn{\mathcal{M}}{\vec{x}}}\;\forall\; A \in \f{\mathcal{G}}{\mathcal{V}} \ee\]

and similarly for the submanifold \(\mathcal{N}\).

If the embedding \({{{\eb}^{\mathcal{M}}}\lp {\vec{x}} \rp }\colon U^{m}\subseteq\Re^{m}\rightarrow \mathcal{V}\) is not given, but the metric tensor \({{g_{ij}^{\mathcal{M}}}\lp {\vec{x}} \rp }\) is given, the geometric algebra of the tangent space can still be constructed. Also the derivatives of the basis vectors of the tangent space can be calculated from the metric tensor using the Christoffel symbols, \({{\Gamma_{ij}^{k}}\lp {\vec{x}} \rp }\), where the derivatives of the basis vectors are given by

\[\be \pdiff{\bm{e}_{j}^{\mathcal{M}}}{x^{i}} =\f{\Gamma_{ij}^{k}}{\vec{x}}\bm{e}_{k}^{\mathcal{M}}. \ee\]

If we have a submanifold, \(\mathcal{N}\), defined by eq. (\(\ref{eq_79}\)) we can calculate the metric of \(\mathcal{N}\) from eq. (\(\ref{eq_81}\)) and hence construct the geometric algebra and calculus of the tangent space, \({{{\mathcal{T}_{\vec{u}}}\lp {\mathcal{N}} \rp }}\subseteq {{{\mathcal{T}_{{{\vec{x}}\lp {\vec{u}} \rp }}}\lp {\mathcal{M}} \rp }}\).
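As a hedged sketch (assuming the Ga.sm() submanifold constructor described in the galgebra documentation), the unit sphere can be instantiated as a 2-dimensional submanifold of Euclidean 3-space and the induced metric of eq (\(\ref{eq_81}\)) recovered:

```python
from sympy import symbols, sin, cos
from galgebra.ga import Ga

u, v = symbols('u v', real=True)
o3d = Ga('e_x e_y e_z', g=[1, 1, 1], coords=symbols('x y z', real=True))

# The map x(u): (u, v) -> (x, y, z) of eq (eq_79) for the unit sphere.
sph = o3d.sm([sin(u) * cos(v), sin(u) * sin(v), cos(u)], [u, v])
print(sph.g)    # induced metric of eq (eq_81): diag(1, sin(u)**2)
```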

Note:

If the base manifold is normalized (use the hat symbol to denote normalized tangent vectors, \(\hat{{\eb}}_{i}^{\mathcal{M}}\), and the resulting metric tensor, \(\hat{g}_{ij}^{\mathcal{M}}\)) we have \(\hat{{\eb}}_{i}^{\mathcal{M}}\cdot\hat{{\eb}}_{i}^{\mathcal{M}} = \pm 1\) and \(\hat{g}_{ij}^{\mathcal{M}}\) does not possess enough information to calculate \(g_{ij}^{\mathcal{N}}\). In that case we need to know \(g_{ij}^{\mathcal{M}}\), the metric tensor of the base manifold before normalization. Likewise, for the case of a vector manifold, unless the mapping \({{{\eb}^{\mathcal{M}}}\lp {\vec{x}} \rp }\colon U^{m}\subseteq\Re^{m}\rightarrow \mathcal{V}\) is constant the tangent vectors and metric tensor can only be normalized after the fact (one cannot have a mapping that automatically normalizes all the tangent vectors).

Geometric Derivative

The directional derivative of a multivector field \({{F}\lp {x} \rp }\) is defined by (\(a\) is a vector and \(h\) is a scalar)

\[\be \paren{a\cdot\nabla_{x}}F \equiv \lim_{h\rightarrow 0}\bfrac{\f{F}{x+ah}-\f{F}{x}}{h}. \label{eq_50} \ee\]

Note that \(a\cdot\nabla_{x}\) is a scalar operator. It will give a result containing only those grades that are already in \(F\). \({\lp {a\cdot\nabla_{x}} \rp }F\) is the best linear approximation of \({{F}\lp {x} \rp }\) in the direction \(a\). Equation (\(\ref{eq_50}\)) also defines the operator \(\nabla_{x}\) which for the basis vectors, \({\left \{{{\eb}_{i}} \rbrc}\), has the representation (note that the \({\left \{{{\eb}^{j}} \rbrc}\) are reciprocal basis vectors)

\[\be \nabla_{x} F = {\eb}^{j}{\displaystyle\frac{\partial F}{\partial x^{j}}} \ee\]

If \(F_{r}\) is an \(r\)-grade multivector (if the independent vector, \(x\), is obvious we suppress it in the notation and just write \(\nabla\)) and \(F_{r} = F_{r}^{i_{1}\dots i_{r}}{\eb}_{i_{1}}{\wedge}\dots{\wedge}{\eb}_{i_{r}}\) then

\[\be \nabla F_{r} = {\displaystyle\frac{\partial F_{r}^{i_{1}\dots i_{r}}}{\partial x^{j}}}{\eb}^{j}\lp {\eb}_{i_{1}}{\wedge}\dots{\wedge}{\eb}_{i_{r}} \rp \ee\]

Note that \({\eb}^{j}\lp {\eb}_{i_{1}}{\wedge}\dots{\wedge}{\eb}_{i_{r}} \rp\) can only contain grades \(r-1\) and \(r+1\) so that \(\nabla F_{r}\) also can only contain those grades. For a grade-\(r\) multivector \(F_{r}\) the inner (div) and outer (curl) derivatives are

\[\be \nabla\cdot F_{r} = \left < \nabla F_{r}\right >_{r-1} = {\eb}^{j}\cdot {{\displaystyle\frac{\partial {F_{r}}}{\partial {x^{j}}}}} \ee\]

and

\[\be \nabla{\wedge}F_{r} = \left < \nabla F_{r}\right >_{r+1} = {\eb}^{j}{\wedge}{{\displaystyle\frac{\partial {F_{r}}}{\partial {x^{j}}}}} \ee\]

For a general multivector function \(F\) the inner and outer derivatives are just the sum of the inner and outer derivatives of each grade of the multivector function.
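A hedged galgebra sketch (assuming a Ga instantiated with coordinates exposes the geometric derivative as its grad attribute):

```python
from sympy import symbols
from galgebra.ga import Ga

o3d = Ga('e_x e_y e_z', g=[1, 1, 1], coords=symbols('x y z', real=True))
grad = o3d.grad
F = o3d.mv('F', 'vector', f=True)   # vector field; f=True makes the
                                    # components functions of (x, y, z)
print(grad * F)     # geometric derivative of F
print(grad | F)     # inner (div) part, grade r - 1 = 0
print(grad ^ F)     # outer (curl) part, grade r + 1 = 2
```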

Geometric Derivative on a Manifold

In the case of a manifold the derivatives of the \({\eb}_{i}\)’s are functions of the coordinates, \({\left \{{x^{i}} \rbrc}\), so that the geometric derivative of a \(r\)-grade multivector field is

\[\begin{split}\begin{aligned} \nabla F_{r} &= {\eb}^{i}{{\displaystyle\frac{\partial {F_{r}}}{\partial {x^{i}}}}} = {\eb}^{i}{{\displaystyle\frac{\partial {}}{\partial {x^{i}}}}} {\lp {F_{r}^{i_{1}\dots i_{r}}{\eb}_{i_{1}}{\wedge}\dots{\wedge}{\eb}_{i_{r}}} \rp } \nonumber \\ &= {{\displaystyle\frac{\partial {F_{r}^{i_{1}\dots i_{r}}}}{\partial {x^{i}}}}}{\eb}^{i}{\lp {{\eb}_{i_{1}}{\wedge}\dots{\wedge}{\eb}_{i_{r}}} \rp } +F_{r}^{i_{1}\dots i_{r}}{\eb}^{i}{{\displaystyle\frac{\partial {}}{\partial {x^{i}}}}}{\lp {{\eb}_{i_{1}}{\wedge}\dots{\wedge}{\eb}_{i_{r}}} \rp }\end{aligned}\end{split}\]

where the multivector functions \({\eb}^{i}{{\displaystyle\frac{\partial {}}{\partial {x^{i}}}}}{\lp {{\eb}_{i_{1}}{\wedge}\dots{\wedge}{\eb}_{i_{r}}} \rp }\) are the connection for the manifold [7].

The directional (material/convective) derivative, \({\lp {v\cdot\nabla} \rp }F_{r}\) is given by

\[\begin{split}\begin{aligned} {\lp {v\cdot\nabla} \rp } F_{r} &= v^{i}{{\displaystyle\frac{\partial {F_{r}}}{\partial {x^{i}}}}} = v^{i}{{\displaystyle\frac{\partial {}}{\partial {x^{i}}}}} {\lp {F_{r}^{i_{1}\dots i_{r}}{\eb}_{i_{1}}{\wedge}\dots{\wedge}{\eb}_{i_{r}}} \rp } \nonumber \\ &= v^{i}{{\displaystyle\frac{\partial {F_{r}^{i_{1}\dots i_{r}}}}{\partial {x^{i}}}}}{\lp {{\eb}_{i_{1}}{\wedge}\dots{\wedge}{\eb}_{i_{r}}} \rp } +v^{i}F_{r}^{i_{1}\dots i_{r}}{{\displaystyle\frac{\partial {}}{\partial {x^{i}}}}}{\lp {{\eb}_{i_{1}}{\wedge}\dots{\wedge}{\eb}_{i_{r}}} \rp },\end{aligned}\end{split}\]

so that the multivector connection functions for the directional derivative are \({{\displaystyle\frac{\partial {}}{\partial {x^{i}}}}}{\lp {{\eb}_{i_{1}}{\wedge}\dots{\wedge}{\eb}_{i_{r}}} \rp }\). Be careful and note that \({\lp {v\cdot\nabla} \rp } F_{r} \ne v\cdot {\lp {\nabla F_{r}} \rp }\) since the dot and geometric products are not associative with respect to one another (\(v\cdot\nabla\) is a scalar operator).

Normalizing Basis for Derivatives

The basis vector set, \({\left \{ {{\eb}_{i}} \rbrc}\), is not in general normalized. We define a normalized set of basis vectors, \({\left \{{{\boldsymbol{\hat{e}}}_{i}} \rbrc}\), by

\[\be {\boldsymbol{\hat{e}}}_{i} = {\displaystyle\frac{{\eb}_{i}}{\sqrt{{\left |{{\eb}_{i}^{2}}\right |}}}} = {\displaystyle\frac{{\eb}_{i}}{{\left |{{\eb}_{i}}\right |}}}. \ee\]

This works for all \({\eb}_{i}^{2} \neq 0\). Note that \({\boldsymbol{\hat{e}}}_{i}^{2} = \pm 1\).

Thus the geometric derivative for a set of normalized basis vectors is (where \(F_{r} = F_{r}^{i_{1}\dots i_{r}} \bm{\hat{e}}_{i_{1}}\W\dots\W\bm{\hat{e}}_{i_{r}}\) and [no summation] \(\hat{F}_{r}^{i_{1}\dots i_{r}} = F_{r}^{i_{1}\dots i_{r}} \abs{\bm{\hat{e}}_{i_{1}}}\dots\abs{\bm{\hat{e}}_{i_{r}}}\)).

\[\be \nabla F_{r} = \eb^{i}\pdiff{F_{r}}{x^{i}} = \pdiff{F_{r}^{i_{1}\dots i_{r}}}{x^{i}}\bm{e}^{i} \paren{\bm{\hat{e}}_{i_{1}}\W\dots\W\bm{\hat{e}}_{i_{r}}} +F_{r}^{i_{1}\dots i_{r}}\bm{e}^{i}\pdiff{}{x^{i}} \paren{\bm{\hat{e}}_{i_{1}}\W\dots\W\bm{\hat{e}}_{i_{r}}}. \ee\]

To calculate \({\eb}^{i}\) in terms of the \({\boldsymbol{\hat{e}}}_{i}\)’s we have

\[\begin{split}\begin{aligned} {\eb}^{i} &= g^{ij}{\eb}_{j} \nonumber \\ {\eb}^{i} &= g^{ij}{\left |{{\eb}_{j}}\right |}{\boldsymbol{\hat{e}}}_{j}.\end{aligned}\end{split}\]

This is the general (non-orthogonal) formula. If the basis vectors are orthogonal then (no summation over repeated indexes)

\[\begin{split}\begin{aligned} {\eb}^{i} &= g^{ii}{\left |{{\eb}_{i}}\right |}{\boldsymbol{\hat{e}}}_{i} \nonumber \\ {\eb}^{i} &= {\displaystyle\frac{{\left |{{\eb}_{i}}\right |}}{g_{ii}}}{\boldsymbol{\hat{e}}}_{i} = {\displaystyle\frac{{\left |{{\eb}_{i}}\right |}}{{\eb}_{i}^{2}}}{\boldsymbol{\hat{e}}}_{i}.\end{aligned}\end{split}\]

Additionally, one can calculate the connection of the normalized basis as follows

\[\begin{split}\begin{aligned} {{\displaystyle\frac{\partial {{\lp {{\left |{{\eb}_{i}}\right |}{\boldsymbol{\hat{e}}}_{i}} \rp }}}{\partial {x^{j}}}}} =& {{\displaystyle\frac{\partial {{\eb}_{i}}}{\partial {x^{j}}}}}, \nonumber \\ {{\displaystyle\frac{\partial {{\left |{{\eb}_{i}}\right |}}}{\partial {x^{j}}}}}{\boldsymbol{\hat{e}}}_{i} +{\left |{{\eb}_{i}}\right |}{{\displaystyle\frac{\partial {{\boldsymbol{\hat{e}}}_{i}}}{\partial {x^{j}}}}} =& {{\displaystyle\frac{\partial {{\eb}_{i}}}{\partial {x^{j}}}}}, \nonumber \\ {{\displaystyle\frac{\partial {{\boldsymbol{\hat{e}}}_{i}}}{\partial {x^{j}}}}} =& {\displaystyle\frac{1}{{\left |{{\eb}_{i}}\right |}}}{\lp {{{\displaystyle\frac{\partial {{\eb}_{i}}}{\partial {x^{j}}}}} -{{\displaystyle\frac{\partial {{\left |{{\eb}_{i}}\right |}}}{\partial {x^{j}}}}}{\boldsymbol{\hat{e}}}_{i}} \rp },\nonumber \\ =& {\displaystyle\frac{1}{{\left |{{\eb}_{i}}\right |}}}{{\displaystyle\frac{\partial {{\eb}_{i}}}{\partial {x^{j}}}}} -{\displaystyle\frac{1}{{\left |{{\eb}_{i}}\right |}}}{{\displaystyle\frac{\partial {{\left |{{\eb}_{i}}\right |}}}{\partial {x^{j}}}}}{\boldsymbol{\hat{e}}}_{i},\nonumber \\ =& {\displaystyle\frac{1}{{\left |{{\eb}_{i}}\right |}}}{{\displaystyle\frac{\partial {{\eb}_{i}}}{\partial {x^{j}}}}} -{\displaystyle\frac{1}{2g_{ii}}}{{\displaystyle\frac{\partial {g_{ii}}}{\partial {x^{j}}}}}{\boldsymbol{\hat{e}}}_{i},\end{aligned}\end{split}\]

where \({{\displaystyle\frac{\partial {{\eb}_{i}}}{\partial {x^{j}}}}}\) is expanded in terms of the \({\boldsymbol{\hat{e}}}_{i}\)’s.

Linear Differential Operators

First a note on partial derivative notation. We shall use the following notation for a partial derivative where the manifold coordinates are \(x_{1},\dots,x_{n}\):

\[\be\label{eq_66a} \bfrac{\partial^{j_{1}+\cdots+j_{n}}}{\partial x_{1}^{j_{1}}\dots\partial x_{n}^{j_{n}}} = \partial_{j_{1}\dots j_{n}}. \ee\]

If \(j_{k}=0\) the partial derivative with respect to the \(k^{th}\) coordinate is not taken. If \(j_{k} = 0\) for all \(1 \le k \le n\) then the partial derivative operator is the scalar one. Consider a partial derivative where the \(x\)’s are not in normal order, such as

\[\be {\displaystyle\frac{\partial^{j_{1}+\cdots+j_{n}}}{\partial x_{i_{1}}^{j_{1}}\dots\partial x_{i_{n}}^{j_{n}}}}, \ee\]

where the \(i_{k}\)’s are not in ascending order. The derivative can always be put in the form of eq (\(\ref{eq_66a}\)) since the order of differentiation does not change the value of the partial derivative (for the smooth functions we are considering). Additionally, using our notation the product of two partial derivative operators is given by

\[\be \partial_{i_{1}\dots i_{n}}\partial_{j_{1}\dots j_{n}} = \partial_{i_{1}+j_{1},\dots, i_{n}+j_{n}}. \ee\]
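A small sympy check of this composition rule, showing that the orders of differentiation add component-wise (the function name f is arbitrary):

```python
from sympy import symbols, Function, diff

x1, x2 = symbols('x1 x2')
f = Function('f')(x1, x2)

lhs = diff(diff(f, x1, 1, x2, 2), x1, 2, x2, 1)  # partial_{1,2} then partial_{2,1}
rhs = diff(f, x1, 3, x2, 3)                      # partial_{1+2, 2+1} = partial_{3,3}
print(lhs == rhs)                                # True
```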

A general multivector linear differential operator is a linear combination of multivectors and partial derivative operators denoted by

\[\be\label{eq_66b} D \equiv D^{i_{1}\dots i_{n}}\partial_{i_{1}\dots i_{n}}. \ee\]

Equation (\(\ref{eq_66b}\)) is the normal form of the differential operator in that the partial derivative operators are written to the right of the multivector coefficients and do not operate upon the multivector coefficients. The operator of eq (\(\ref{eq_66b}\)) can operate on a multivector function \(F\), returning a multivector function, via the following definitions: \(D\) operating on \(F\) from the left,

\[\be D\circ F = D^{j_{1}\dots j_{n}}\circ\partial_{j_{1}\dots j_{n}}F,\label{eq_67a} \ee\]

or operating from the right,

\[\be F\circ D = \partial_{j_{1}\dots j_{n}}F\circ D^{j_{1}\dots j_{n}},\label{eq_68a} \ee\]

where the \(D^{j_{1}\dots j_{n}}\) are multivector functions and \(\circ\) is any of the multivector multiplicative operations.

Equations (\(\ref{eq_67a}\)) and (\(\ref{eq_68a}\)) are not the most general multivector linear differential operators, the most general would be

\[\be D \left( F \right) = {D^{j_{1}\dots j_{n}}}\left({\partial_{j_{1}\dots j_{n}}F}\right), \ee\]

where \({{D^{j_{1}\dots j_{n}}}\lp {} \rp }\) are linear multivector functionals.

The definition of the sum of two differential operators is obvious since any multivector operator, \(\circ\), is bilinear \({\lp {{\lp {D_{A}+D_{B}} \rp }\circ F = D_{A}\circ F+D_{B}\circ F} \rp }\). The product of two differential operators \(D_{A}\) and \(D_{B}\) operating on a multivector function \(F\) is defined to be (\(\circ_{1}\) and \(\circ_{2}\) are any two multivector multiplicative operations)

\[\begin{split}\begin{aligned} {\lp {D_{A}\circ_{1}D_{B}} \rp }\circ_{2}F &\equiv {\lp {D_{A}^{i_{1}\dots i_{n}}\circ_{1} \partial_{i_{1}\dots i_{n}}{\lp {D_{B}^{j_{1}\dots j_{n}} \partial_{j_{1}\dots j_{n}}} \rp }} \rp }\circ_{2}F \nonumber \\ &= {\lp {D_{A}^{i_{1}\dots i_{n}}\circ_{1} {\lp {{\lp {\partial_{i_{1}\dots i_{n}}D_{B}^{j_{1}\dots j_{n}}} \rp } \partial_{j_{1}\dots j_{n}}+ D_{B}^{j_{1}\dots j_{n}} \partial_{i_{1}+j_{1},\dots, i_{n}+j_{n}}} \rp }} \rp }\circ_{2}F \nonumber \\ &= {\lp {D_{A}^{i_{1}\dots i_{n}}\circ_{1}{\lp {\partial_{i_{1}\dots i_{n}}D_{B}^{j_{1}\dots j_{n}}} \rp }} \rp } \circ_{2}\partial_{j_{1}\dots j_{n}}F+ {\lp {D_{A}^{i_{1}\dots i_{n}}\circ_{1}D_{B}^{j_{1}\dots j_{n}}} \rp } \circ_{2}\partial_{i_{1}+j_{1},\dots, i_{n}+j_{n}}F,\end{aligned}\end{split}\]

where we have used the fact that the \(\partial\) operator is a scalar operator and commutes with \(\circ_{1}\) and \(\circ_{2}\).

Thus for a pure operator product \(D_{A}\circ D_{B}\) we have

\[\be D_{A}\circ D_{B} = \paren{D_{A}^{i_{1}\dots i_{n}}\circ\paren{\partial_{i_{1}\dots i_{n}}D_{B}^{j_{1}\dots j_{n}}}} \partial_{j_{1}\dots j_{n}}+ \paren{D_{A}^{i_{1}\dots i_{n}}\circ D_{B}^{j_{1}\dots j_{n}}} \partial_{i_{1}+j_{1},\dots, i_{n}+j_{n}} \label{eq_71a} \ee\]

and the form of eq (\(\ref{eq_71a}\)) is the same as eq (\(\ref{eq_67a}\)). The basis of eq (\(\ref{eq_71a}\)) is that the \(\partial\) operator operates on all objects to the right of it as products, so that the product rule must be used in all differentiations. Since eq (\(\ref{eq_71a}\)) puts the product of two differential operators in standard form, we can also evaluate \(F\circ_{2}{\lp {D_{A}\circ_{1}D_{B}} \rp }\).

We now must distinguish between the following cases. If \(D\) is a differential operator and \(F\) a multivector function, should \(D\circ F\) and \(F\circ D\) return a differential operator or a multivector? In order to be consistent with standard vector analysis we have \(D\circ F\) return a multivector and \(F\circ D\) return a differential operator. Then we define the complementary differential operator \(\bar{D}\), which is identical to \(D\) except that \(\bar{D}\circ F\) returns a differential operator according to eq (\(\ref{eq_71a}\)) [8] and \(F\circ\bar{D}\) returns a multivector according to eq (\(\ref{eq_68a}\)).

A general differential operator is built from repeated applications of the basic operator building blocks \({\lp {\bar{\nabla}\circ A} \rp }\), \({\lp {A\circ\bar{\nabla}} \rp }\), \({\lp {\bar{\nabla}\circ\bar{\nabla}} \rp }\), and \({\lp {A\pm \bar{\nabla}} \rp }\). Both \(\nabla\) and \(\bar{\nabla}\) are represented by the operator

\[\be \nabla = \bar{\nabla} = e^{i}\pdiff{}{x^{i}}, \ee\]

but are flagged to produce the appropriate result.

In our notation the directional derivative operator is \(a\cdot\nabla\), the Laplacian is \(\nabla\cdot\nabla\), and the expression for the Riemann tensor, \(R^{i}_{jkl}\), is

\[\be \paren{\nabla\W\nabla}\eb^{i} = \half R^{i}_{jkl}\paren{\eb^{j}\W\eb^{k}}\eb^{l}. \ee\]

We would use the complement if we wish a quantum mechanical type commutator defining

\[\be \com{x,\nabla} \equiv x\nabla - \bar{\nabla}x, \ee\]

or if we wish to simulate the dot notation of Doran and Lasenby

\[\be \dot{F}\dot{\nabla} = F\bar{\nabla}. \ee\]

Split Differential Operator

Implementing the general “dot” notation for differential operators in Python is not possible. Another type of symbolic notation is required. I propose what one could call the “split differential operator.” For \(\nabla\) denote the corresponding split operator by two operators \({{\nabla}_{\mathcal{G}}}\) and \({{\nabla}_{\mathcal{D}}}\) where in practice \({{\nabla}_{\mathcal{G}}}\) is a tuple of vectors and \({{\nabla}_{\mathcal{D}}}\) is a tuple of corresponding partial derivatives. Then the equivalent of the “dot” notation would be

\[\be \dot{\nabla}{\lp {A\dot{B}C} \rp } = {{\nabla}_{\mathcal{G}}}{\lp {A{\lp {{{\nabla}_{\mathcal{D}}}B} \rp }C} \rp }.\label{splitopV} \ee\]

We are using the \(\mathcal{G}\) subscript to indicate the geometric algebra parts of the multivector differential operator and the \(\mathcal{D}\) subscript to indicate the scalar differential operator parts of the multivector differential operator. An example of this notation in 3D Euclidean space is

\[\begin{split}\begin{aligned} {{\nabla}_{\mathcal{G}}} &= {\lp {{{\eb}}_{x},{{\eb}}_{y},{{\eb}}_{z}} \rp }, \\ {{\nabla}_{\mathcal{D}}} &= {\lp {{{\displaystyle\frac{\partial {}}{\partial {x}}}},{{\displaystyle\frac{\partial {}}{\partial {y}}}},{{\displaystyle\frac{\partial {}}{\partial {z}}}}} \rp }.\end{aligned}\end{split}\]

To implement \({{\nabla}_{\mathcal{G}}}\) and \({{\nabla}_{\mathcal{D}}}\) we have in the example

\[\begin{split}\begin{aligned} {{\nabla}_{\mathcal{D}}}B &= {\lp {{{\displaystyle\frac{\partial {B}}{\partial {x}}}},{{\displaystyle\frac{\partial {B}}{\partial {y}}}},{{\displaystyle\frac{\partial {B}}{\partial {z}}}}} \rp } \\ {\lp {{{\nabla}_{\mathcal{D}}}B} \rp }C &= {\lp {{{\displaystyle\frac{\partial {B}}{\partial {x}}}}C,{{\displaystyle\frac{\partial {B}}{\partial {y}}}}C,{{\displaystyle\frac{\partial {B}}{\partial {z}}}}C} \rp } \\ A{\lp {{{\nabla}_{\mathcal{D}}}B} \rp }C &= {\lp {A{{\displaystyle\frac{\partial {B}}{\partial {x}}}}C,A{{\displaystyle\frac{\partial {B}}{\partial {y}}}}C,A{{\displaystyle\frac{\partial {B}}{\partial {z}}}}C} \rp }.\end{aligned}\end{split}\]

Then the final evaluation is

\[\be {{\nabla}_{\mathcal{G}}}{\lp {A{\lp {{{\nabla}_{\mathcal{D}}}B} \rp }C} \rp } = {{\eb}}_{x}A{{\displaystyle\frac{\partial {B}}{\partial {x}}}}C+{{\eb}}_{y}A{{\displaystyle\frac{\partial {B}}{\partial {y}}}}C+{{\eb}}_{z}A{{\displaystyle\frac{\partial {B}}{\partial {z}}}}C, \ee\]

which could be called the “dot” product of two tuples. Note that \(\nabla = {{\nabla}_{\mathcal{G}}}{{\nabla}_{\mathcal{D}}}\) and \(\dot{F}\dot{\nabla} = F\bar{\nabla} = {\lp {{{\nabla}_{\mathcal{D}}}F} \rp }{{\nabla}_{\mathcal{G}}}\).
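A plain sympy sketch of the split-operator evaluation of eq (\(\ref{splitopV}\)), keeping \({{\nabla}_{\mathcal{G}}}\) and \({{\nabla}_{\mathcal{D}}}\) as separate tuples and combining them only in the final sum (here \({{\eb}}_{x},{{\eb}}_{y},{{\eb}}_{z}\) are modeled as non-commutative symbols and \(A\), \(B\), \(C\) as scalar functions):

```python
from sympy import symbols, Function, diff

x, y, z = symbols('x y z', real=True)
ex, ey, ez = symbols('e_x e_y e_z', commutative=False)   # basis-vector stand-ins
A, B, C = (Function(n)(x, y, z) for n in 'ABC')          # scalar stand-ins

nabla_G = (ex, ey, ez)                                   # geometric (vector) parts
nabla_D = tuple(lambda F, c=c: diff(F, c) for c in (x, y, z))  # scalar derivatives

# nabla-dot(A B-dot C) = sum_i e_i (A dB/dx^i C), as in eq (splitopV)
dotted = sum(e * A * d(B) * C for e, d in zip(nabla_G, nabla_D))
print(dotted)
```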

For the general multivector differential operator, \(D\), the split operator parts are \({{D}_{\mathcal{G}}}\), a tuple of basis blade multivectors and \({{D}_{\mathcal{D}}}\), a tuple of scalar differential operators that correspond to the coefficients of the basis-blades in the total operator \(D\) so that

\[\be \dot{D}{\lp {A\dot{B}C} \rp } = {{D}_{\mathcal{G}}}{\lp {A{\lp {{{D}_{\mathcal{D}}}B} \rp }C} \rp }. \label{splitopM} \ee\]

If the index set for the basis blades of a geometric algebra is denoted by \({\left \{{n} \rbrc}\), where \({\left \{{n} \rbrc}\) contains \(2^{n}\) indices for an \(n\)-dimensional geometric algebra, then the most general multivector differential operator can be written [9]

\[\be D = {{\displaystyle}\sum_{l\in{\left \{ {n} \rbrc}}{{\eb}}^{l}D_{l}} \ee\]
\[\be \dot{D}{\lp {A\dot{B}C} \rp } = {{D}_{\mathcal{G}}}{\lp {A{\lp {{{D}_{\mathcal{D}}}B} \rp }C} \rp } = {{\displaystyle}\sum_{l\in{\left \{ {n} \rbrc}}{{\eb}}^{l}{\lp {A{\lp {D_{l}B} \rp }C} \rp }} \ee\]

or

\[\be {\lp {A\dot{B}C} \rp }\dot{D} = {\lp {A{\lp {{{D}_{\mathcal{D}}}B} \rp }C} \rp }{{D}_{\mathcal{G}}} = {{\displaystyle}\sum_{l\in{\left \{ {n} \rbrc}}{\lp {A{\lp {D_{l}B} \rp }C} \rp }{{\eb}}^{l}}. \ee\]

The implementation of equations \(\ref{splitopV}\) and \(\ref{splitopM}\) is described in sections Instantiating a Multivector and Multivector Derivatives.

Linear Transformations/Outermorphisms

In the tangent space of a manifold, \(\mathcal{M}\), (which is a vector space) a linear transformation is the mapping \(\underline{T}\colon{{{\mathcal{T}_{\vec{x}}}\lp {\mathcal{M}} \rp }}\rightarrow{{{\mathcal{T}_{\vec{x}}}\lp {\mathcal{M}} \rp }}\) (we use an underline to indicate a linear transformation) where for all \(x,y\in {{{\mathcal{T}_{\vec{x}}}\lp {\mathcal{M}} \rp }}\) and \(\alpha\in\Re\) we have

\[\begin{split}\begin{aligned} {{\underline{T}}\lp {x+y} \rp } =& {{\underline{T}}\lp {x} \rp } + {{\underline{T}}\lp {y} \rp } \\ {{\underline{T}}\lp {\alpha x} \rp } =& \alpha{{\underline{T}}\lp {x} \rp }\end{aligned}\end{split}\]

The outermorphism induced by \(\underline{T}\) is defined for \(x_{1},\dots,x_{r}\in{{{\mathcal{T}_{\vec{x}}}\lp {\mathcal{M}} \rp }}\), where \(r\le{{\dim}\lp {{{{\mathcal{T}_{\vec{x}}}\lp {\mathcal{M}} \rp }}} \rp }\), by

\[\be {{\underline{T}}\lp {x_{1}{\wedge}\dots{\wedge}x_{r}} \rp } \equiv {{\underline{T}}\lp {x_{1}} \rp }{\wedge}\dots{\wedge}{{\underline{T}}\lp {x_{r}} \rp } \ee\]

If \(I\) is the pseudo-scalar for \({{{\mathcal{T}_{\vec{x}}}\lp {\mathcal{M}} \rp }}\) we also have the following definitions for the determinant, trace, and adjoint (\(\overline{T}\)) of \(\underline{T}\)

\[\begin{split}\begin{align} \f{\underline{T}}{I} \equiv&\; \f{\det}{\underline{T}}I\text{,} \label{eq_82}\\ \f{\tr}{\underline{T}} \equiv&\; \nabla_{y}\cdot\f{\underline{T}}{y}\text{,} \label{eq_83}\\ x\cdot \f{\overline{T}}{y} \equiv&\; y\cdot \f{\underline{T}}{x}.\ \label{eq_84}\\ \end{align}\end{split}\]

If \({\left \{{{{\eb}}_{i}} \rbrc}\) is a basis for \({{{\mathcal{T}_{\vec{x}}}\lp {\mathcal{M}} \rp }}\) then we can represent \(\underline{T}\) with the matrix \(\underline{T}_{i}^{j}\) used as follows (Einstein summation convention as usual):

\[\be \f{\underline{T}}{\eb_{i}} = \underline{T}_{i}^{j}\eb_{j}. \label{eq_85} \ee\]

Let \({\lp {\underline{T}^{-1}} \rp }_{m}^{n}\) be the inverse matrix of \(\underline{T}_{i}^{j}\), so that \({\lp {\underline{T}^{-1}} \rp }_{m}^{k}\underline{T}_{k}^{j} = \delta^{j}_{m}\) and

\[\be \underline{T}^{-1}{\lp {a^{i}{{\eb}}_{i}} \rp } = a^{i}{\lp {\underline{T}^{-1}} \rp }_{i}^{j}{{\eb}}_{j} \label{eq_85a} \ee\]

and calculate

\[\begin{split}\begin{aligned} \underline{T}^{-1}{\lp {\underline{T}{\lp {a} \rp }} \rp } &= \underline{T}^{-1}{\lp {\underline{T}{\lp {a^{i}{{\eb}}_{i}} \rp }} \rp } \nonumber \\ &= \underline{T}^{-1}{\lp {a^{i}\underline{T}_{i}^{j}{{\eb}}_{j}} \rp } \nonumber \\ &= a^{i}{\lp {\underline{T}^{-1}} \rp }_{i}^{j} \underline{T}_{j}^{k}{{\eb}}_{k} \nonumber \\ &= a^{i}\delta_{i}^{j}{{\eb}}_{j} = a^{i}{{\eb}}_{i} = a.\end{aligned}\end{split}\]

Thus if eq (\(\ref{eq_85}\)) is used to define the \(\underline{T}_{i}^{j}\), then the linear transformation defined via eq (\(\ref{eq_85a}\)) by the matrix \({\lp {\underline{T}^{-1}} \rp }_{m}^{n}\) is the inverse of \(\underline{T}\).

In eq. (\(\ref{eq_85}\)) the matrix, \(\underline{T}_{i}^{j}\), only has its usual meaning if the \({\left \{{{{\eb}}_{i}} \rbrc}\) form an orthonormal Euclidean basis (Minkowski spaces not allowed). Equations (\(\ref{eq_82}\)) through (\(\ref{eq_84}\)) become

\[\begin{split}\begin{aligned} {{\det}\lp {\underline{T}} \rp } =&\; {{\underline{T}}\lp {{{\eb}}_{1}{\wedge}\dots{\wedge}{{\eb}}_{n}} \rp }{\lp {{{\eb}}_{1}{\wedge}\dots{\wedge}{{\eb}}_{n}} \rp }^{-1},\\ {{{\mbox{tr}}}\lp {\underline{T}} \rp } =&\; \underline{T}_{i}^{i},\\ \overline{T}_{j}^{i} =&\; g^{il}g_{jp}\underline{T}_{l}^{p}.\end{aligned}\end{split}\]
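A hedged galgebra sketch (assuming the Ga.lt() linear-transformation constructor and its det(), tr(), and adj() methods as described in the package documentation):

```python
from sympy import symbols
from galgebra.ga import Ga

o3d = Ga('e_x e_y e_z', g=[1, 1, 1], coords=symbols('x y z', real=True))
T = o3d.lt('T')     # general linear transformation of the tangent space

print(T.det())      # det(T), from T(I) = det(T) I
print(T.tr())       # tr(T) = grad_y . T(y)
Tbar = T.adj()      # adjoint: x . Tbar(y) = y . T(x)
```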

An important form of linear transformation with a simple representation is the spinor transformation. If \(S\) is an even multivector we have \(SS^{{\dagger}} = \rho^{2}\), where \(\rho^{2}\) is a scalar. Then the spinor transformation is given by (\(v\) is a vector)

\[\be {{S}\lp {v} \rp } = SvS^{{\dagger}} \ee\]

if \({{S}\lp {v} \rp }\) is a vector and

\[\be {{S^{-1}}\lp {v} \rp } = \frac{S^{{\dagger}}vS}{\rho^{4}}. \ee\]

Thus

\[\begin{split}\begin{aligned} {{S^{-1}}\lp {{{S}\lp {v} \rp }} \rp } &= \frac{S^{{\dagger}}SvS^{{\dagger}}S}{\rho^{4}} \nonumber \\ &= \frac{\rho^{2}v\rho^{2}}{\rho^{4}} \nonumber \\ &= v. \end{aligned}\end{split}\]

One more topic to consider is whether or not \(T^{i}_{j}\) should be called the matrix representation of \(T\). The reason this is a question is that, for a general metric \(g_{ij}\), the dot product depends on the metric, so \(T^{i}_{j}\) does not necessarily show the symmetries of the underlying transformation \(T\). Consider the expression

\[\begin{split}\begin{aligned} a\cdot{{T}\lp {b} \rp } &= a^{i}{{\eb}}_{i}\cdot{{T}\lp {b^{j}{{\eb}}_{j}} \rp } \nonumber \\ &= a^{i}{{\eb}}_{i}\cdot {{T}\lp {{{\eb}}_{j}} \rp }b^{j} \nonumber \\ &= a^{i}{{\eb}}_{i}\cdot{{\eb}}_{k} T_{j}^{k}b^{j} \nonumber \\ &= a^{i}g_{ik}T_{j}^{k}b^{j}.\end{aligned}\end{split}\]

It is

\[\be T_{ij} = g_{ik}T_{j}^{k} \ee\]

that has the proper symmetry for self adjoint transformations \((a\cdot{{T}\lp {b} \rp } = b\cdot{{T}\lp {a} \rp })\) in the sense that if \(T = \overline{T}\) then \(T_{ij} = T_{ji}\). Of course if we are dealing with a manifold where the \(g_{ij}\)’s are functions of the coordinates then the matrix representation of a linear transformation will also be a function of the coordinates. Assuming we use \(T_{ij}\) for the matrix representation of the linear transformation, \(T\), then if we are given the matrix representation, \(T_{ij}\), we can construct the linear transformation given by \(T^{i}_{j}\) as follows

\[\begin{split}\begin{aligned} T_{ij} &= g_{ik}T_{j}^{k} \nonumber \\ g^{li}T_{ij} &= g^{li}g_{ik}T_{j}^{k} \nonumber \\ g^{li}T_{ij} &= \delta_{k}^{l}T_{j}^{k} \nonumber \\ g^{li}T_{ij} &= T_{j}^{l}.\end{aligned}\end{split}\]

Any program/code that represents \(T\) should allow one to define \(T\) in terms of \(T_{ij}\) or \(T_{j}^{l}\) and likewise given a linear transformation \(T\) obtain both \(T_{ij}\) and \(T_{j}^{l}\) from it. Please note that these considerations come into play for any non-Euclidean metric with respect to the trace and adjoint of a linear transformation since calculating either requires a dot product.
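A small sympy sketch of recovering \(T^{l}_{j}\) from a given matrix representation \(T_{ij}\) by multiplying with the inverse metric (the sample metric is an assumption for illustration):

```python
from sympy import Matrix, Symbol

g = Matrix([[1, 0], [0, 4]])                             # assumed sample metric g_ij
Tij = Matrix(2, 2, lambda i, j: Symbol(f'T{i+1}{j+1}'))  # matrix T_ij
T_mixed = g.inv() * Tij                                  # T^l_j = g^{li} T_ij
print(T_mixed)
```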

Multilinear Functions

A multivector multilinear function [10] is a multivector function \({{T}\lp {A_{1},\dots,A_{r}} \rp }\) that is linear in each of its arguments [11] (it could be implicitly non-linearly dependent on a set of additional arguments, such as the position coordinates, but we only consider the linear arguments). \(T\) is a tensor of degree \(r\) if each variable \(A_{j}\) is restricted to the vector space \(\mathcal{V}_{n}\). More generally if each \(A_{j}\in{{\mathcal{G}}\lp {\mathcal{V}_{n}} \rp }\) (the geometric algebra of \(\mathcal{V}_{n}\)), we call \(T\) an extensor of degree-\(r\) on \({{\mathcal{G}}\lp {\mathcal{V}_{n}} \rp }\).

If the values of \({{T} \lp {a_{1},\dots,a_{r}} \rp }\) \(\lp a_{j}\in\mathcal{V}_{n}\;\forall\; 1\le j \le r \rp\) are \(s\)-vectors (pure grade \(s\) multivectors in \({{\mathcal{G}}\lp {\mathcal{V}_{n}} \rp }\)) we say that \(T\) has grade \(s\) and rank \(r+s\). A tensor of grade zero is called a multilinear form.

In the normal definition of tensors as multilinear functions the tensor is defined as a mapping

\[T\colon{\times}_{i=1}^{r}\mathcal{V}_{i}\rightarrow\Re,\]

so that the standard tensor definition is an example of a grade zero degree/rank \(r\) tensor in our definition.

Algebraic Operations

The properties of tensors are (\(\alpha\in\Re\), \(a_{j},b\in\mathcal{V}_{n}\), \(T\) and \(S\) are tensors of rank \(r\), and \(\circ\) is any multivector multiplicative operation)

\[\begin{split}\begin{aligned} {{T}\lp {a_{1},\dots,\alpha a_{j},\dots,a_{r}} \rp } =& \alpha{{T}\lp {a_{1},\dots,a_{j},\dots,a_{r}} \rp }, \\ {{T}\lp {a_{1},\dots,a_{j}+b,\dots,a_{r}} \rp } =& {{T}\lp {a_{1},\dots,a_{j},\dots,a_{r}} \rp }+ {{T}\lp {a_{1},\dots,a_{j-1},b,a_{j+1},\dots,a_{r}} \rp }, \\ {{\lp T\pm S\rp }\lp {a_{1},\dots,a_{r}} \rp } \equiv& {{T}\lp {a_{1},\dots,a_{r}} \rp }\pm{{S}\lp {a_{1},\dots,a_{r}} \rp }.\end{aligned}\end{split}\]

Now let \(T\) be of rank \(r\) and \(S\) of rank \(s\) then the product of the two tensors is

\[\be \f{\lp T\circ S\rp}{a_{1},\dots,a_{r+s}} \equiv \f{T}{a_{1},\dots,a_{r}}\circ\f{S}{a_{r+1},\dots,a_{r+s}}, \ee\]

where “\(\circ\)” is any multivector multiplicative operation.

Covariant, Contravariant, and Mixed Representations

The arguments (vectors) of the multilinear function can be represented in terms of the basis vectors or the reciprocal basis vectors

\[\begin{split}\begin{aligned} a_{j} =& a^{i_{j}}{{\eb}}_{i_{j}}, \label{vrep}\\ =& a_{i_{j}}{{\eb}}^{i_{j}}. \label{rvrep}\end{aligned}\end{split}\]

Equation (\(\ref{vrep}\)) gives \(a_{j}\) in terms of the basis vectors and eq (\(\ref{rvrep}\)) in terms of the reciprocal basis vectors. The index \(j\) refers to the argument slot and the indices \(i_{j}\) the components of the vector in terms of the basis. The covariant representation of the tensor is defined by

\[\begin{split}\begin{aligned} T\indices{_{i_{1}\dots i_{r}}} \equiv& {{T}\lp {{{\eb}}_{i_{1}},\dots,{{\eb}}_{i_{r}}} \rp } \\ {{T}\lp {a_{1},\dots,a_{r}} \rp } =& {{T}\lp {a^{i_{1}}{{\eb}}_{i_{1}},\dots,a^{i_{r}}{{\eb}}_{i_{r}}} \rp } \nonumber \\ =& {{T}\lp {{{\eb}}_{i_{1}},\dots,{{\eb}}_{i_{r}}} \rp }a^{i_{1}}\dots a^{i_{r}} \nonumber \\ =& T\indices{_{i_{1}\dots i_{r}}}a^{i_{1}}\dots a^{i_{r}}.\end{aligned}\end{split}\]

Likewise for the contravariant representation

\[\begin{split}\begin{aligned} T\indices{^{i_{1}\dots i_{r}}} \equiv& {{T}\lp {{{\eb}}^{i_{1}},\dots,{{\eb}}^{i_{r}}} \rp } \\ {{T}\lp {a_{1},\dots,a_{r}} \rp } =& {{T}\lp {a_{i_{1}}{{\eb}}^{i_{1}},\dots,a_{i_{r}}{{\eb}}^{i_{r}}} \rp } \nonumber \\ =& {{T}\lp {{{\eb}}^{i_{1}},\dots,{{\eb}}^{i_{r}}} \rp }a_{i_{1}}\dots a_{i_{r}} \nonumber \\ =& T\indices{^{i_{1}\dots i_{r}}}a_{i_{1}}\dots a_{i_{r}}.\end{aligned}\end{split}\]

One could also have a mixed representation

\[\begin{split}\begin{aligned} T\indices{_{i_{1}\dots i_{s}}^{i_{s+1}\dots i_{r}}} \equiv& {{T}\lp {{{\eb}}_{i_{1}},\dots,{{\eb}}_{i_{s}},{{\eb}}^{i_{s+1}}\dots{{\eb}}^{i_{r}}} \rp } \\ {{T}\lp {a_{1},\dots,a_{r}} \rp } =& {{T}\lp {a^{i_{1}}{{\eb}}_{i_{1}},\dots,a^{i_{s}}{{\eb}}_{i_{s}}, a_{i_{s+1}}{{\eb}}^{i_{s+1}},\dots,a_{i_{r}}{{\eb}}^{i_{r}}} \rp } \nonumber \\ =& {{T}\lp {{{\eb}}_{i_{1}},\dots,{{\eb}}_{i_{s}},{{\eb}}^{i_{s+1}},\dots,{{\eb}}^{i_{r}}} \rp } a^{i_{1}}\dots a^{i_{s}}a_{i_{s+1}}\dots a_{i_{r}} \nonumber \\ =& T\indices{_{i_{1}\dots i_{s}}^{i_{s+1}\dots i_{r}}}a^{i_{1}}\dots a^{i_{s}}a_{i_{s+1}}\dots a_{i_{r}}.\end{aligned}\end{split}\]

In the representation of \(T\) one could have any combination of covariant (lower) and contravariant (upper) indexes.

To convert a covariant index to a contravariant index simply consider

\[\begin{split}\begin{aligned} \f{T}{\eb_{i_{1}},\dots,\eb^{i_{j}},\dots,\eb_{i_{r}}} =& \f{T}{\eb_{i_{1}},\dots,g^{i_{j}k_{j}}\eb_{k_{j}},\dots,\eb_{i_{r}}} \nonumber \\ =& g^{i_{j}k_{j}}\f{T}{\eb_{i_{1}},\dots,\eb_{k_{j}},\dots,\eb_{i_{r}}} \nonumber \\ T_{i_{1}\dots}{}^{i_{j}}{}_{\dots i_{r}} =& g^{i_{j}k_{j}}T\indices{_{i_{1}\dots k_{j}\dots i_{r}}}. \end{aligned}\end{split}\]

Similarly one could lower an upper index with \(g_{i_{j}k_{j}}\).

Contraction and Differentiation

The contraction of a tensor between the \(j^{th}\) and \(k^{th}\) variables (slots) is

\[\be \f{T}{a_{1},\dots,a_{j-1},\nabla_{a_{k}},a_{j+1},\dots,a_{r}} = \nabla_{a_{j}}\cdot\lp \nabla_{a_{k}}\f{T}{a_{1},\dots,a_{r}}\rp . \ee\]

This operation reduces the rank of the tensor by two. This definition gives the standard results for metric contraction, which is proved as follows for a rank \(r\) grade zero tensor (the breve “\(\breve{\:\:}\)” indicates that a term is to be deleted from the product).

\[\begin{split}\begin{align} \f{T}{a_{1},\dots,a_{r}} =& a^{i_{1}}\dots a^{i_{r}}T_{i_{1}\dots i_{r}} \\ \nabla_{a_{j}}T =& \eb^{l_{j}} a^{i_{1}}\dots\lp\partial_{a^{l_j}}a^{i_{j}}\rp\dots a^{i_{r}}T_{i_{1}\dots i_{r}} \nonumber \\ =& \eb^{l_{j}}\delta_{l_{j}}^{i_{j}} a^{i_{1}}\dots \breve{a}^{i_{j}}\dots a^{i_{r}}T_{i_{1}\dots i_{r}} \\ \nabla_{a_{m}}\cdot\lp\nabla_{a_{j}}T\rp =& \eb^{k_{m}}\cdot\eb^{l_{j}}\delta_{l_{j}}^{i_{j}} a^{i_{1}}\dots \breve{a}^{i_{j}}\dots\lp\partial_{a^{k_m}}a^{i_{m}}\rp \dots a^{i_{r}}T_{i_{1}\dots i_{r}} \nonumber \\ =& g^{k_{m}l_{j}}\delta_{l_{j}}^{i_{j}}\delta_{k_{m}}^{i_{m}} a^{i_{1}}\dots \breve{a}^{i_{j}}\dots\breve{a}^{i_{m}} \dots a^{i_{r}}T_{i_{1}\dots i_{r}} \nonumber \\ =& g^{i_{m}i_{j}}a^{i_{1}}\dots \breve{a}^{i_{j}}\dots\breve{a}^{i_{m}} \dots a^{i_{r}}T_{i_{1}\dots i_{j}\dots i_{m}\dots i_{r}} \nonumber \\ =& g^{i_{j}i_{m}}a^{i_{1}}\dots \breve{a}^{i_{j}}\dots\breve{a}^{i_{m}} \dots a^{i_{r}}T_{i_{1}\dots i_{j}\dots i_{m}\dots i_{r}} \nonumber \\ =& \lp g^{i_{j}i_{m}}T_{i_{1}\dots i_{j}\dots i_{m}\dots i_{r}}\rp a^{i_{1}}\dots \breve{a}^{i_{j}}\dots\breve{a}^{i_{m}}\dots a^{i_{r}} \label{eq108} \end{align}\end{split}\]

Equation (\(\ref{eq108}\)) is the correct formula for the metric contraction of a tensor.

If we have a mixed representation of a tensor, \(T\indices{_{i_{1}\dots}{}^{i_{j}}{}_{\dots i_{k}\dots i_{r}}}\), and wish to contract between an upper and lower index (\(i_{j}\) and \(i_{k}\)), first lower the upper index and then use eq (\(\ref{eq108}\)) to contract the result. Remember that lowering the index does not change the tensor, only the representation of the tensor, while contraction results in a new tensor. First lower the index

\[\be T\indices{_{i_{1}\dots}{}^{i_{j}}{}_{\dots i_{k}\dots i_{r}}} \xRightarrow{\small Lower Index} g_{i_{j}k_{j}}T\indices{_{i_{1}\dots}{}^{k_{j}}{}_{\dots i_{k}\dots i_{r}}} \ee\]

Now contract between \(i_{j}\) and \(i_{k}\) and use the properties of the metric tensor.

\[\begin{split}\begin{aligned} g_{i_{j}k_{j}}T\indices{_{i_{1}\dots}{}^{k_{j}}{}_{\dots i_{k}\dots i_{r}}} \xRightarrow{\small Contract}& g^{i_{j}i_{k}}g_{i_{j}k_{j}}T\indices{_{i_{1}\dots}{}^{k_{j}}{}_{\dots i_{k}\dots i_{r}}} \nonumber \\ =& \delta_{k_{j}}^{i_{k}}T\indices{_{i_{1}\dots}{}^{k_{j}}{}_{\dots i_{k}\dots i_{r}}}. \label{114a}\end{aligned}\end{split}\]

Equation (\(\ref{114a}\)) is the standard formula for contraction between upper and lower indexes of a mixed tensor.
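A small sympy sketch of a metric contraction on explicit arrays, using tensorproduct and tensorcontraction (the 2-dimensional sample metric is an assumption for illustration):

```python
from sympy import Array, Matrix, Symbol, tensorproduct, tensorcontraction

g = Matrix([[1, 0], [0, 4]])                              # assumed sample metric g_ij
g_inv = Array(g.inv().tolist())                           # g^{ij}
T = Array([[Symbol(f'T{i}{j}') for j in (1, 2)] for i in (1, 2)])   # T_ij

mixed = tensorcontraction(tensorproduct(g_inv, T), (1, 2))   # T^i_k = g^{ij} T_jk
scalar = tensorcontraction(mixed, (0, 1))                    # T^i_i = g^{ij} T_ij
print(scalar)
```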

Finally if \({{T}\lp {a_{1},\dots,a_{r}} \rp }\) is a tensor field (implicitly a function of position) the tensor derivative is defined as

\[\begin{aligned} {{T}\lp {a_{1},\dots,a_{r};a_{r+1}} \rp } \equiv \lp a_{r+1}\cdot\nabla\rp {{T}\lp {a_{1},\dots,a_{r}} \rp },\end{aligned}\]

assuming the \(a^{i_{j}}\) coefficients are not a function of the coordinates.

This gives for a grade zero rank \(r\) tensor

\[\begin{split}\begin{aligned} \lp a_{r+1}\cdot\nabla\rp {{T}\lp {a_{1},\dots,a_{r}} \rp } =& a^{i_{r+1}}\partial_{x^{i_{r+1}}}a^{i_{1}}\dots a^{i_{r}} T_{i_{1}\dots i_{r}}, \nonumber \\ =& a^{i_{1}}\dots a^{i_{r}}a^{i_{r+1}} \partial_{x^{i_{r+1}}}T_{i_{1}\dots i_{r}}.\end{aligned}\end{split}\]

From Vector to Tensor

A rank one tensor is a vector since it satisfies all the axioms for a vector space, but a vector is not necessarily a tensor since not all vectors are multilinear functions (actually, in the case of vectors, linear functions). However, there is a simple isomorphism between vectors and rank one tensors defined by the mapping \({{v}\lp {a} \rp }:\mathcal{V}\rightarrow\Re\) such that if \(v,a \in\mathcal{V}\)

\[\be \f{v}{a} \equiv v\cdot a. \ee\]

Thus if \(v = v^{i}{{\eb}}_{i} = v_{i}{{\eb}}^{i}\), the covariant and contravariant representations of \(v\) give (using \({{\eb}}^{i}\cdot{{\eb}}_{j} = \delta^{i}_{j}\))

\[\be \f{v}{a} = v_{i}a^{i} = v^{i}a_{i}. \ee\]

Parallel Transport and Covariant Derivatives

The covariant derivative of a tensor field \({{T}\lp {a_{1},\dots,a_{r};x} \rp }\) (\(x\) is the coordinate vector, of which \(T\) can be a non-linear function) in the direction \(a_{r+1}\) is (remember \(a_{j} = a_{j}^{k}{{\eb}}_{k}\) and the \({{\eb}}_{k}\) can be functions of \(x\)) the directional derivative of \({{T}\lp {a_{1},\dots,a_{r};x} \rp }\) where all the arguments of \(T\) are parallel transported. The definition of parallel transport is that if \(a\) and \(b\) are tangent vectors in the tangent space of the manifold then

\[\be \paren{a\cdot\nabla_{x}}b = 0 \label{eq108a} \ee\]

if \(b\) is parallel transported. Since \(b = b^{i}{{\eb}}_{i}\) and the derivatives of \({{\eb}}_{i}\) are functions of the \(x^{i}\)’s then the \(b^{i}\)’s are also functions of the \(x^{i}\)’s so that in order for eq (\(\ref{eq108a}\)) to be satisfied we have

\[\begin{split}\begin{aligned} {\lp {a\cdot\nabla_{x}} \rp }b =& a^{i}\partial_{x^{i}}{\lp {b^{j}{{\eb}}_{j}} \rp } \nonumber \\ =& a^{i}{\lp {{\lp {\partial_{x^{i}}b^{j}} \rp }{{\eb}}_{j} + b^{j}\partial_{x^{i}}{{\eb}}_{j}} \rp } \nonumber \\ =& a^{i}{\lp {{\lp {\partial_{x^{i}}b^{j}} \rp }{{\eb}}_{j} + b^{j}\Gamma_{ij}^{k}{{\eb}}_{k}} \rp } \nonumber \\ =& a^{i}{\lp {{\lp {\partial_{x^{i}}b^{j}} \rp }{{\eb}}_{j} + b^{k}\Gamma_{ik}^{j}{{\eb}}_{j}} \rp }\nonumber \\ =& a^{i}{\lp {{\lp {\partial_{x^{i}}b^{j}} \rp } + b^{k}\Gamma_{ik}^{j}} \rp }{{\eb}}_{j} = 0.\end{aligned}\end{split}\]

Thus for \(b\) to be parallel transported we must have

\[\be \partial_{x^{i}}b^{j} = -b^{k}\Gamma_{ik}^{j}. \label{eq121a} \ee\]

The geometric meaning of parallel transport is that for an infinitesimal rotation and dilation of the basis vectors (caused by infinitesimal changes in the \(x^{i}\)’s) the direction and magnitude of the vector \(b\) does not change.

If we apply eq (\(\ref{eq121a}\)) along a parametric curve defined by \({{x^{j}}\lp {s} \rp }\) we have

\[\begin{split}\begin{align} \deriv{b^{j}}{s}{} =& \deriv{x^{i}}{s}{}\pdiff{b^{j}}{x^{i}} \nonumber \\ =& -b^{k}\deriv{x^{i}}{s}{}\Gamma_{ik}^{j}, \label{eq122a} \end{align}\end{split}\]

If we define the initial conditions to be \({{b^{j}}\lp {0} \rp }{{\eb}}_{j}\), then eq (\(\ref{eq122a}\)) is a system of first order linear differential equations with initial conditions, and the solution, \({{b^{j}}\lp {s} \rp }{{\eb}}_{j}\), is the parallel transport of the vector \({{b^{j}}\lp {0} \rp }{{\eb}}_{j}\).
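A numerical sketch of eq (\(\ref{eq122a}\)): Euler integration of parallel transport around a circle of latitude on the unit sphere, using the standard Christoffel symbols \(\Gamma^{\theta}_{\phi\phi} = -\sin\theta\cos\theta\) and \(\Gamma^{\phi}_{\theta\phi} = \cot\theta\) (plain Python, independent of galgebra):

```python
from math import cos, sin, tan, pi

theta0 = pi / 4                      # fixed colatitude of the transport circle
b = [1.0, 0.0]                       # components (b^theta, b^phi) at s = 0
steps = 10000
ds = 2 * pi / steps                  # one full loop in phi

for _ in range(steps):
    # eq (eq122a): db^j/ds = -b^k (dx^i/ds) Gamma^j_{ik}, with dphi/ds = 1
    dbt = sin(theta0) * cos(theta0) * b[1] * ds    # -Gamma^th_{ph,ph} b^ph
    dbp = -b[0] / tan(theta0) * ds                 # -Gamma^ph_{ph,th} b^th
    b = [b[0] + dbt, b[1] + dbp]

print(b)   # rotated relative to [1, 0]: the holonomy of the sphere
```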

An equivalent formulation for the parallel transport equation is to let \({{\gamma}\lp {s} \rp }\) be a parametric curve in the manifold defined by the tuple \({{\gamma}\lp {s} \rp } = {\lp {{{x^{1}}\lp {s} \rp },\dots,{{x^{n}}\lp {s} \rp }} \rp }\). Then the tangent to \({{\gamma}\lp {s} \rp }\) is given by

\[\be \deriv{\gamma}{s}{} \equiv \deriv{x^{i}}{s}{}\eb_{i} \ee\]

and if \({{v}\lp {x} \rp }\) is a vector field on the manifold then

\[\begin{split}\begin{align} \paren{\deriv{\gamma}{s}{}\cdot\nabla_{x}}v =& \deriv{x^{i}}{s}{}\pdiff{}{x^{i}}\paren{v^{j}\eb_{j}} \nonumber \\ =&\deriv{x^{i}}{s}{}\paren{\pdiff{v^{j}}{x^{i}}\eb_{j}+v^{j}\pdiff{\eb_{j}}{x^{i}}} \nonumber \\ =&\deriv{x^{i}}{s}{}\paren{\pdiff{v^{j}}{x^{i}}\eb_{j}+v^{j}\Gamma^{k}_{ij}\eb_{k}} \nonumber \\ =&\deriv{x^{i}}{s}{}\pdiff{v^{j}}{x^{i}}\eb_{j}+\deriv{x^{i}}{s}{}v^{k}\Gamma^{j}_{ik}\eb_{j} \nonumber \\ =&\paren{\deriv{v^{j}}{s}{}+\deriv{x^{i}}{s}{}v^{k}\Gamma^{j}_{ik}}\eb_{j} \nonumber \\ =& 0. \label{eq124a} \end{align}\end{split}\]

Thus eq (\(\ref{eq124a}\)) is equivalent to eq (\(\ref{eq122a}\)) and parallel transport of a vector field along a curve is equivalent to the directional derivative of the vector field in the direction of the tangent to the curve being zero.

If the tensor component representation is contra-variant (superscripts instead of subscripts) we must use the covariant component representation of the vector arguments of the tensor, \(a = a_{i}{{\eb}}^{i}\). Then the definition of parallel transport gives

\[\begin{split}\begin{aligned} {\lp {a\cdot\nabla_{x}} \rp }b =& a^{i}\partial_{x^{i}}{\lp {b_{j}{{\eb}}^{j}} \rp } \nonumber \\ =& a^{i}{\lp {{\lp {\partial_{x^{i}}b_{j}} \rp }{{\eb}}^{j} + b_{j}\partial_{x^{i}}{{\eb}}^{j}} \rp },\end{aligned}\end{split}\]

and we need

\[\be \paren{\partial_{x^{i}}b_{j}}\eb^{j} + b_{j}\partial_{x^{i}}\eb^{j} = 0. \label{eq111a} \ee\]

To satisfy equation (\(\ref{eq111a}\)) consider the following

\[\begin{split}\begin{aligned} \partial_{x^{i}}{\lp {{{\eb}}^{j}\cdot{{\eb}}_{k}} \rp } =& 0 \nonumber \\ {\lp {\partial_{x^{i}}{{\eb}}^{j}} \rp }\cdot{{\eb}}_{k} + {{\eb}}^{j}\cdot{\lp {\partial_{x^{i}}{{\eb}}_{k}} \rp } =& 0 \nonumber \\ {\lp {\partial_{x^{i}}{{\eb}}^{j}} \rp }\cdot{{\eb}}_{k} + {{\eb}}^{j}\cdot{{\eb}}_{l}\Gamma_{ik}^{l} =& 0 \nonumber \\ {\lp {\partial_{x^{i}}{{\eb}}^{j}} \rp }\cdot{{\eb}}_{k} + \delta_{l}^{j}\Gamma_{ik}^{l} =& 0 \nonumber \\ {\lp {\partial_{x^{i}}{{\eb}}^{j}} \rp }\cdot{{\eb}}_{k} + \Gamma_{ik}^{j} =& 0 \nonumber \\ {\lp {\partial_{x^{i}}{{\eb}}^{j}} \rp }\cdot{{\eb}}_{k} =& -\Gamma_{ik}^{j}\end{aligned}\end{split}\]

Now dot eq (\(\ref{eq111a}\)) into \({{\eb}}_{k}\) giving

\[\begin{split}\begin{aligned} {\lp {\partial_{x^{i}}b_{j}} \rp }{{\eb}}^{j}\cdot{{\eb}}_{k} + b_{j}{\lp {\partial_{x^{i}}{{\eb}}^{j}} \rp }\cdot{{\eb}}_{k} =& 0 \nonumber \\ {\lp {\partial_{x^{i}}b_{j}} \rp }\delta_{j}^{k} - b_{j}\Gamma_{ik}^{j} =& 0 \nonumber \\ {\lp {\partial_{x^{i}}b_{k}} \rp } = b_{j}\Gamma_{ik}^{j}.\end{aligned}\end{split}\]

Thus if we have a mixed representation of a tensor

\[\be \f{T}{a_{1},\dots,a_{r};x} = \f{T\indices{_{i_{1}\dots i_{s}}^{i_{s+1}\dots i_{r}}}}{x}a^{i_{1}}\dots a^{i_{s}}a_{i_{s+1}}\dots a_{i_{r}}, \ee\]

the covariant derivative of the tensor is

\[\begin{split}\begin{align} {\lp {a_{r+1}\cdot D} \rp } {{T}\lp {a_{1},\dots,a_{r};x} \rp } =& {{\displaystyle\frac{\partial {T\indices{_{i_{1}\dots i_{s}}^{i_{s+1}\dots i_{r}}}}}{\partial {x^{i_{r+1}}}}}}a^{i_{1}}\dots a^{i_{s}}a_{i_{s+1}}\dots a_{i_{r}} a^{i_{r+1}} \nonumber \\ &\hspace{-0.5in}+ \sum_{p=1}^{s}{{\displaystyle\frac{\partial {a^{i_{p}}}}{\partial {x^{i_{r+1}}}}}}T\indices{_{i_{1}\dots i_{s}}^{i_{s+1}\dots i_{r}}}a^{i_{1}}\dots \breve{a}^{i_{p}}\dots a^{i_{s}}a_{i_{s+1}}\dots a_{i_{r}}a^{i_{r+1}} \nonumber \\ &\hspace{-0.5in}+ \sum_{q=s+1}^{r}{{\displaystyle\frac{\partial {a_{i_{q}}}}{\partial {x^{i_{r+1}}}}}}T\indices{_{i_{1}\dots i_{s}}^{i_{s+1}\dots i_{r}}}a^{i_{1}}\dots a^{i_{s}}a_{i_{s+1}}\dots\breve{a}_{i_{q}}\dots a_{i_{r}}a^{i_{r+1}} \nonumber \\ =& {{\displaystyle\frac{\partial {T\indices{_{i_{1}\dots i_{s}}^{i_{s+1}\dots i_{r}}}}}{\partial {x^{i_{r+1}}}}}}a^{i_{1}}\dots a^{i_{s}}a_{i_{s+1}}\dots a_{i_{r}} a^{i_{r+1}} \nonumber \\ &\hspace{-0.5in}- \sum_{p=1}^{s}\Gamma_{i_{r+1}l_{p}}^{i_{p}}T\indices{_{i_{1}\dots i_{p}\dots i_{s}}^{i_{s+1} \dots i_{r}}}a^{i_{1}}\dots a^{l_{p}}\dots a^{i_{s}}a_{i_{s+1}}\dots a_{i_{r}}a^{i_{r+1}} \nonumber \\ &\hspace{-0.5in}+ \sum_{q=s+1}^{r}\Gamma_{i_{r+1}i_{q}}^{l_{q}}T\indices{_{i_{1}\dots i_{s}}^{i_{s+1}\dots i_{q} \dots i_{r}}}a^{i_{1}}\dots a^{i_{s}}a_{i_{s+1}}\dots a_{l_{q}}\dots a_{i_{r}}a^{i_{r+1}} . \label{eq126a} \\ \end{align}\end{split}\]

From eq (\(\ref{eq126a}\)) we obtain the components of the covariant derivative to be

\[\begin{aligned} {{\displaystyle\frac{\partial {T\indices{_{i_{1}\dots i_{s}}^{i_{s+1}\dots i_{r}}}}}{\partial {x^{i_{r+1}}}}}} - \sum_{p=1}^{s}\Gamma_{i_{r+1}l_{p}}^{i_{p}}T\indices{_{i_{1}\dots i_{p}\dots i_{s}}^{i_{s+1}\dots i_{r}}} + \sum_{q=s+1}^{r}\Gamma_{i_{r+1}i_{q}}^{l_{q}}T\indices{_{i_{1}\dots i_{s}}^{i_{s+1}\dots i_{q}\dots i_{r}}}.\end{aligned}\]

The component free form of the covariant derivative (the one used to calculate it in the code) is

\[\be \mathcal{D}_{a_{r+1}} {{T}\lp {a_{1},\dots,a_{r};x} \rp } \equiv {\lp {a_{r+1}\cdot\nabla} \rp } T - \sum_{k=1}^{r}{{T}\lp {a_{1},\dots,{\lp {a_{r+1}\cdot\nabla} \rp } a_{k},\dots,a_{r};x} \rp }. \ee\]

[4] By the manifold embedding theorem any \(m\)-dimensional manifold is isomorphic to an \(m\)-dimensional vector manifold.

[5] This product is not necessarily positive definite.

[6] In this section and all following sections we are using the Einstein summation convention unless otherwise stated.

[7] We use the Christoffel symbols of the first kind to calculate the derivatives of the basis vectors and the product rule to calculate the derivatives of the basis blades, where (http://en.wikipedia.org/wiki/Christoffel_symbols)

\[\be \Gamma_{ijk} = {\frac{1}{2}}{\lp {{{\displaystyle\frac{\partial {g_{jk}}}{\partial {x^{i}}}}}+{{\displaystyle\frac{\partial {g_{ik}}}{\partial {x^{j}}}}}-{{\displaystyle\frac{\partial {g_{ij}}}{\partial {x^{k}}}}}} \rp }, \ee\]

and

\[\be {{\displaystyle\frac{\partial {{{\eb}}_{j}}}{\partial {x^{i}}}}} = \Gamma_{ijk}{{\eb}}^{k}. \ee\]

The Christoffel symbols of the second kind,

\[\be \Gamma_{ij}^{k} = {\frac{1}{2}}g^{kl}{\lp {{{\displaystyle\frac{\partial {g_{li}}}{\partial {x^{j}}}}}+{{\displaystyle\frac{\partial {g_{lj}}}{\partial {x^{i}}}}}-{{\displaystyle\frac{\partial {g_{ij}}}{\partial {x^{l}}}}}} \rp }, \ee\]

could also be used to calculate the derivatives in term of the original basis vectors, but since we need to calculate the reciprocal basis vectors for the geometric derivative it is more efficient to use the symbols of the first kind.

[8] In this case \(D_{B}^{j_{1}\dots j_{n}} = F\) and \(\partial_{j_{1}\dots j_{n}} = 1\).

[9] For example in three dimensions \({\left \{{3} \rbrc} = (0,1,2,3,(1,2),(2,3),(1,3),(1,2,3))\) and as examples of how the superscript works with each grade: \({{\eb}}^{0}=1\), \({{\eb}}^{1}={{\eb}}^{1}\), \({{\eb}}^{{\lp {1,2} \rp }}={{\eb}}^{1}{\wedge}{{\eb}}^{2}\), and \({{\eb}}^{{\lp {1,2,3} \rp }}={{\eb}}^{1}{\wedge}{{\eb}}^{2}{\wedge}{{\eb}}^{3}\).

[10] We are following the treatment of Tensors in section 3–10 of [HS84].

[11] We assume that the arguments are elements of a vector space or more generally a geometric algebra so that the concept of linearity is meaningful.