# Linear Transformations / Outermorphisms in GAlgebra

Author: Greg Grunberg

Last updated: 2021-11-04

Original name: Linear Transformations in GAlgebra.ipynb in GSG_2021-11-05_GAlgebra_fixes.zip

## 1. Preliminaries

This Markdown cell implements certain user-defined LaTeX macros which I’m in the habit of using. (Not all are used by this notebook.) The macros ease creation of LaTeX/MathJax code, which in turn allows production of more readable Markdown cells and In[ ] cell output.
- Enter edit mode to see definitions of the macros.
- Exit edit mode (Shift+Enter) to activate the macros in this notebook.
- An example of each macro follows its definition. The typeset result of each macro appears when not in edit mode.

$\newcommand {\Rn}[1] {\mathbb{R}^{#1}}$

$$\Rn{p,q}$$ scalar product space of signature $$(p,q)$$

$\newcommand {\Gn}[1]{\mathbb{G}^{#1}}$

$$\Gn{p,q}$$ geometric algebra of signature $$(p,q)$$

$\newcommand {\op} {\wedge}$

$$A \op B$$ outer product

$\newcommand {\ip} {\cdot}$

$$A \ip B$$ dot product

$\newcommand {\lc} {\rfloor}$

$$A \lc B$$ left contraction

$\newcommand {\rc} {\lfloor}$

$$A \rc B$$ right contraction

$\newcommand {\dual} {^\star}$

$$A\dual$$ dual of multivector

$\newcommand {\undual} {^{-\star}}$

$$A\undual$$ undual of multivector

$\newcommand {\rev} {^\dagger}$

$$A\rev$$ reverse of multivector (dagger notation)

$\newcommand {\til}[1] {\widetilde{#1}}$

$$\til{A}$$ reverse of multivector (tilde notation)

$\newcommand {\ginvol}[1] {\widehat{#1}}$

$$\ginvol{A}$$ grade involute of multivector

$\newcommand {\ccon}[1] {\overline{#1}}$

$$\ccon{A}$$ Clifford conjugate of multivector

$\newcommand {\lt}[1] {\mathsf{#1}}$

$$\lt{T}$$ linear transformation / outermorphism

$\newcommand {\ad}[1] {\mathsf{#1}^\ast}$

$$\ad{T}$$ adjoint of linear transformation / outermorphism

$\newcommand {\es}[1] {\mathbf{e}_{#1}}$

$$\es{j}$$ (orthonormal) basis vector

$\newcommand {\eS}[1] {\mathbf{e}^{#1}}$

$$\eS{i}$$ (orthonormal) reciprocal basis vector

$\newcommand {\bas}[2] {\mathbf{#1}_{#2}}$

$$\bas{b}{j}$$ basis vector

$\newcommand {\baS}[2] {\mathbf{#1}^{#2}}$

$$\baS{b}{i}$$ reciprocal basis vector

$\newcommand {\oder}[3] {\dfrac{d^{#3}{#1}}{d{#2}^{#3}}}$

$$\oder{y}{x}{k}$$ $$k$$th ordinary derivative

$\newcommand {\pder}[3] {\dfrac{\partial^{#3} {#1}}{\partial{#2}^{#3}}}$
$$\pder{u}{(x^i)}{k}$$ $$k$$th partial derivative
$\newcommand {\qform}[1] {\mathscr{Q}\left({#1}\right)}$

$$\qform{A}$$ quadratic form

$\newcommand {\norm}[1] {\left\|{#1}\right\|}$

$$\norm{A}$$ norm

$\newcommand {\normsq}[1] {\left\|{#1}\right\|^2}$

$$\normsq{A}$$ normsquared

$\newcommand {\mag}[1] {\left|{#1}\right|}$

$$\mag{A}$$ magnitude

$\newcommand {\magsq}[1] {\left|{#1}\right|^2}$

$$\magsq{A}$$ magnitude squared

$\newcommand {\mbf}[1] {\mathbf{#1}}$

$$\mbf{A}$$ mathboldface font

$\newcommand {\msf}[1] {\mathsf{#1}}$

$$\msf{A}$$ mathsansserif font

$\newcommand {\mbs}[1] {\boldsymbol{#1}}$

$$\mbs{\kappa}$$ boldsymbol font

$\newcommand {\scrB} {\mathscr{B}}$

$$\scrB$$ basis

$\newcommand {\scrE} {\mathscr{E}}$

$$\scrE$$ (orthonormal) basis

[1]:

# Initialize this notebook to use SymPy and GAlgebra:
import platform
import sympy
import galgebra
from galgebra.ga import *
from galgebra.mv import *
from galgebra.lt import *
from galgebra.printer import Fmt, GaPrinter, Format
from galgebra.gprinter import gFormat, gprint
gFormat()
Ga.dual_mode('Iinv+')
gprint(r'\textsf{This notebook is now using} \\',
       r'\qquad\bullet~\textsf{Python }' + platform.python_version(),
       r'\qquad\bullet~\textsf{SymPy }' + sympy.__version__,
       r'\qquad\bullet~\textsf{GAlgebra }' + galgebra.__version__ + '.')

$\displaystyle \DeclareMathOperator{\Tr}{Tr}\DeclareMathOperator{\Adj}{Adj}\newcommand{\bfrac}[2]{\displaystyle\frac{#1}{#2}}\newcommand{\lp}{\left (}\newcommand{\rp}{\right )}\newcommand{\paren}[1]{\lp {#1} \rp}\newcommand{\half}{\frac{1}{2}}\newcommand{\llt}{\left <}\newcommand{\rgt}{\right >}\newcommand{\abs}[1]{\left |{#1}\right | }\newcommand{\pdiff}[2]{\bfrac{\partial {#1}}{\partial {#2}}}\newcommand{\npdiff}[3]{\bfrac{\partial^{#3} {#1}}{\partial {#2}^{#3}}}\newcommand{\lbrc}{\left \{}\newcommand{\rbrc}{\right \}}\newcommand{\W}{\wedge}\newcommand{\prm}[1]{{#1}^{\prime}}\newcommand{\ddt}[1]{\bfrac{d{#1}}{dt}}\newcommand{\R}{\dagger}\newcommand{\deriv}[3]{\bfrac{d^{#3}#1}{d{#2}^{#3}}}\newcommand{\grade}[2]{\left < {#1} \right >_{#2}}\newcommand{\f}[2]{{#1}\lp {#2} \rp}\newcommand{\eval}[2]{\left . {#1} \right |_{#2}}\newcommand{\bs}[1]{\boldsymbol{#1}}\newcommand{\grad}{\bs{\nabla}}$
$\displaystyle \textsf{This notebook is now using} \\\qquad\bullet~ \textsf{Python }3.11.8\qquad\bullet~ \textsf{SymPy }1.12\qquad\bullet~ \textsf{GAlgebra }0.5.1.$

Important: GAlgebra 0.5.0 is the version available as of this file date from the PyGAE GAlgebra website at https://github.com/pygae/galgebra. This notebook actually uses GAlgebra 0.5.0 but with two of its modules, lt.py and mv.py, modified. The modifications have corrected those version 0.5.0 bugs of which I’m aware. The changes have also added to the capabilities offered by those modules.

This notebook also uses module gprinter.py, provided to me by Alan Bromborsky (the original author of GAlgebra), which is not part of GAlgebra 0.5.0. That module’s gprint function is used extensively in this notebook’s In[ ] cells to produce beautifully formatted output.

## 2. m3, the geometric algebra used in examples

This notebook’s examples use m3, a model of the geometric algebra $$\Gn{1,2}$$ generated by an orthonormal basis of 3-dimensional Minkowski space.

[2]:

index_values = symbols('1 2 3', integer=True)
coords = (x1, x2, x3) = symbols('x__1, x__2, x__3', real=True)
m3 = Ga(r'\mbf{e}*1|2|3', g=[1, -1, -1], coords=index_values, wedge=False)
e1, e2, e3 = m3.mv()         # basis vectors
re1, re2, re3 = m3.mvr()     # reciprocal basis vectors
kappa = m3.mv('kappa', 0)    # generic 0-vector
x = m3.mv('x', 1)            # generic 1-vector
X = m3.mv('X', 'mv')         # generic multivector
I = m3.I()                   # unit pseudoscalar
gprint(r'\textbf{m3: a model of }\Gn{1,2}')
gprint(r'\text{basis:}~~\scrE=', m3.mv(),
       r'\qquad\text{reciprocal basis:}~~\scrE^{-1}=', m3.mvr())
gprint(r'\text{metric tensor:}~~[g_{ij}]=', m3.g,
       r'\quad\text{reciprocal metric tensor:}~~[g^{ij}]=', m3.g_inv)
gprint(r'\text{unit pseudoscalar:}~~\mbf{I}=', I,
       r'\qquad\text{square of unit pseudoscalar:}~~\mbf{I}^2=', (I*I).scalar())
gprint(r'\text{generic}')
gprint(r'\quad\text{scalar (0-vector):}~~', kappa)
gprint(r'\quad\text{vector:}~~\mbf{x}=', x)
gprint(r'\quad\text{multivector:}~~\mbf{X}=', X)

$\displaystyle \textbf{m3: a model of }\Gn{1,2}$
$\displaystyle \text{basis:}~~\scrE= \left( \boldsymbol{\mbf{e}}_{1}, \ \boldsymbol{\mbf{e}}_{2}, \ \boldsymbol{\mbf{e}}_{3}\right) \qquad\text{reciprocal basis:}~~\scrE^{-1}= \left( \boldsymbol{\mbf{e}}^{1}, \ \boldsymbol{\mbf{e}}^{2}, \ \boldsymbol{\mbf{e}}^{3}\right) = \left( \boldsymbol{\mbf{e}}_{1}, \ - \boldsymbol{\mbf{e}}_{2}, \ - \boldsymbol{\mbf{e}}_{3}\right)$
$\displaystyle \text{metric tensor:}~~[g_{ij}]= \left[\begin{array}{ccc}1 & 0 & 0\\0 & -1 & 0\\0 & 0 & -1\end{array}\right] \quad\text{reciprocal metric tensor:}~~[g^{ij}]= \left[\begin{array}{ccc}1 & 0 & 0\\0 & -1 & 0\\0 & 0 & -1\end{array}\right]$
$\displaystyle \text{unit pseudoscalar:}~~\mbf{I}= \boldsymbol{\mbf{e}}_{123} \qquad\text{square of unit pseudoscalar:}~~\mbf{I}^2= -1$
$\displaystyle \begin{equation*} \text{generic}\end{equation*}$
$\displaystyle \begin{equation*} \quad\text{scalar (0-vector):}~~ \kappa \end{equation*}$
$\displaystyle \begin{equation*} \quad\text{vector:}~~\mbf{x}= x^{1} \boldsymbol{\mbf{e}}_{1} + x^{2} \boldsymbol{\mbf{e}}_{2} + x^{3} \boldsymbol{\mbf{e}}_{3} \end{equation*}$
$\displaystyle \begin{equation*} \quad\text{multivector:}~~\mbf{X}= X + X^{1} \boldsymbol{\mbf{e}}_{1} + X^{2} \boldsymbol{\mbf{e}}_{2} + X^{3} \boldsymbol{\mbf{e}}_{3} + X^{12} \boldsymbol{\mbf{e}}_{12} + X^{13} \boldsymbol{\mbf{e}}_{13} + X^{23} \boldsymbol{\mbf{e}}_{23} + X^{123} \boldsymbol{\mbf{e}}_{123} \end{equation*}$

The wedge=False specification in m3’s instantiation means that “no wedge” notation is used, in which $$\es{i_1 \cdots i_g}$$ is an abbreviation for the basis blade $$\es{i_1} \op \cdots \op \es{i_g}$$.

It will be useful to have the following function for simplifying the coefficients in a multivector’s basis blade expansion. The function doesn’t always accomplish its purpose, but seems to do an adequate job when the coefficients are trigonometric expressions.

[3]:

from sympy.simplify.fu import fu
def mv_simplify(M: Mv, simp=simplify) -> Mv:
    """
    Returns the multivector M but with the coefficients of its basis
    blade expansion simplified.  Does not modify the original multivector.
    Uses the SymPy simplification function specified by the simp parameter.
    """
    # Create basis_blades, a list of all basis blades of the geometric
    # algebra to which multivector M belongs.  The blades are listed in
    # ascending grade, with strictly increasing indexes within each grade.
    basis_blades = [M.Ga.mv(blade) for blade in M.Ga.blades.flat]
    # Create a list coefficients of M's scalar coefficients in its basis
    # blade expansion.  Then simplify each coefficient in the list.
    coefficients = M.blade_coefs(basis_blades)
    for k in range(len(coefficients)):
        coefficients[k] = simp(coefficients[k])
    # Create and return a new version of M with simplified coefficients.
    simplified_M = 0
    for coefficient, blade in zip(coefficients, basis_blades):
        simplified_M = simplified_M + coefficient * blade
    return simplified_M


## 3. Contravariant/covariant indexing in GAlgebra’s output

Most introductory linear algebra textbooks, including Alan Macdonald’s Linear and Geometric Algebra, write the basis expansion $$\mbf{x}=\sum_{i=1}^n x_i \es{i}$$ of a vector $$\mbf{x}$$ with the scalar coefficients $$x_i$$ labelled by a subscript.

GAlgebra uses a somewhat different notational scheme, one borrowed from tensor algebra. GAlgebra places the labelling index $$i$$ of the scalar coefficient as a superscript and thus writes the basis expansion of $$\mbf{x}$$ in the form

$\mbf{x}=\sum_{i=1}^n x^i \es{i} = x^1 \es{1} + x^2 \es{2} + \dots + x^n \es{n},$

where $$n=p+q$$ is the dimension of the scalar product space $$\Rn{p,q}$$. Indexes written in superscript position, which should not be confused with exponents, are called contravariant, while those written as subscripts (as on the basis vectors) are called covariant. (Although not very relevant for GAlgebra, which uses only one basis per geometric algebra model, contravariant/covariant index positioning carries information about how an indexed quantity changes under a change of basis.)

Something similar happens with the basis blade expansion of a multivector $$\mbf{X}$$. GAlgebra labels the coefficient which multiplies basis blade $$\es{i_1 \cdots i_g} := \es{i_1} \op \cdots \op \es{i_g}$$ with contravariant indices identical to the covariant indices on the basis blade multiplied. Thus $$\mbf{X}$$’s expansion has the form

$\mbf{X} = \sum_{g=0}^n \left< {\mbf{X}} \right>_g = X + \sum_{g=1}^n \left( \sum_{1 \le i_1 < \cdots < i_g \le n} X^{i_1 \cdots i_g} \es{i_1 \cdots i_g} \right).$

$$X = \left<\mbf{X}\right>_0$$ has no indices and is displayed in normal math italic so as to distinguish it from the multivector $$\mbf{X}$$ of which it is the grade-zero part. In order to keep linearly independent the set of basis blades used in the expansion, only strictly ordered index values, $$1 \le i_1 < \cdots < i_g \le n$$, are included in the summation.

$$\lt{T}(\es{j})$$, the image by a linear transformation $$\lt{T}$$ of a basis vector $$\es{j}$$, is itself a vector, so the image’s expansion coefficients are also labelled by a contravariant index. Convention is to write the expansion coefficient of $$\lt{T}(\es{j})$$ which multiplies the $$i$$th basis vector $$\es{i}$$ as $${T^i}_j$$. Therefore

$\lt{T}(\es{j}) = \sum_{i=1}^n {T^i}_j \es{i} = {T^1}_j \es{1} + {T^2}_j \es{2} + \cdots + {T^n}_j \es{n}.$

Notice that the coefficient’s contravariant index $$i$$, which denotes which basis vector the coefficient is to multiply, is written not only as a superscript but also to the left of the covariant index $$j$$ which indicates the image vector in question. This is done so that when the coefficients of the various basis vector images are placed into a matrix, a coefficient’s left index specifies the row in which a coefficient is placed while its right index specifies the column. With indexes so placed, the image vector may be written

\begin{align}\lt{T}(\mbf{x}) = \lt{T}\left(\sum_{j=1}^n x^j \es{j}\right) = \sum_{j=1}^n x^j \lt{T}(\es{j}) = \sum_{j=1}^n x^j \sum_{i=1}^n {T^i}_j \es{i} = \sum_{i=1}^n \left( \sum_{j=1}^n {T^i}_j x^j \right) \es{i}. \end{align}

and the matrix of all expansion coefficients becomes

$\begin{split}[\lt{T}]_\scrE = \left[ {T^i}_j \right] = \left[ \begin{matrix} {T^1}_1 & \cdots & {T^1}_j & \cdots & {T^1}_n \\ \vdots & & \vdots & & \vdots \\ {T^i}_1 & \cdots & {T^i}_j & \cdots & {T^i}_n \\ \vdots & & \vdots & & \vdots \\ {T^n}_1 & \cdots & {T^n}_j & \cdots & {T^n}_n \\ \end{matrix} \right].\end{split}$

This contravariant-covariant matrix is the standard matrix of the linear transformation $$\lt{T}$$ with respect to the basis $$\scrE = \left(\es{1}, \dots, \es{n}\right)$$. The definitions employed are the same as those in introductory textbooks; the only difference is that the matrix entries have been written with the row index appearing as a left superscript rather than a left subscript.

Notice that the $$j$$th column of $$\lt{T}$$’s matrix consists of the expansion coefficients of $$\lt{T}(\es{j})$$, the $$j$$th basis image vector.
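Independently of GAlgebra, the coordinate computations above can be sketched in plain SymPy. The matrix T and the symbol names below are illustrative, not part of m3:

```python
from sympy import Matrix, Symbol, symbols

# A hypothetical 3x3 standard matrix [T^i_j]; entry Tij sits in row i, column j.
T = Matrix(3, 3, lambda i, j: Symbol(f'T{i+1}{j+1}'))

# Coordinates of T(x) are obtained by applying [T] to x's coordinate column.
xc = Matrix(symbols('x1:4'))     # column of coordinates (x1, x2, x3)
Tx = T * xc                      # i-th entry is sum over j of T^i_j x^j

# The j-th column of [T] consists of the coordinates of T(e_j).
e2 = Matrix([0, 1, 0])           # coordinate column of the 2nd basis vector
assert T * e2 == T[:, 1]
```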

A GAlgebra user must keep in mind that Python indexing starts at $$0$$, while it’s traditional in mathematics to start at $$1$$. The difference in start values means that for the SymPy matrix T.matrix(), it’s the quantity T.matrix()[i-1, j-1] which returns $${T^i}_j$$. It may not display as such, however, as we will see when we examine the matrices of symmetric and antisymmetric transformations.

Besides the standard matrix $$\left[{T^i}_j\right]$$ (the contravariant-covariant matrix) associated with $$\lt{T}$$, there exists a second matrix, useful when discussing symmetric and antisymmetric transformations, which we will call the covariant-covariant matrix $$\left[T_{ij}\right]$$. The entries of the two matrices are related by the formulas

$T_{ij} = \sum_{k=1}^n g_{ik}{T^k}_j \qquad\text{and}\qquad {T^i}_j = \sum_{k=1}^n g^{ik} T_{kj}.$

Use of Euclidean metrics and orthonormal bases is common in introductory textbooks. Since $$\left[g_{ij}\right]$$ is the identity matrix in such situations, the above relations then reduce to $$T_{ij} = {T^i}_j$$, which is one reason introductory textbooks do not make the contravariant/covariant distinction. But if the metric is non-Euclidean or the basis is not orthonormal, the distinction is essential.
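For m3’s metric the index raising and lowering can be checked in plain SymPy; the matrix T_mixed and its symbolic entries below are illustrative:

```python
from sympy import Matrix, Symbol

# m3's metric [g_ij] with respect to the orthonormal basis.
g = Matrix([[1, 0, 0], [0, -1, 0], [0, 0, -1]])

# A hypothetical contravariant-covariant (standard) matrix [T^i_j].
T_mixed = Matrix(3, 3, lambda i, j: Symbol(f'T{i+1}{j+1}'))

# Lower the row index: T_ij = sum over k of g_ik T^k_j.
T_cov = g * T_mixed

# Raise it back: T^i_j = sum over k of g^ik T_kj.  Since this g is its
# own inverse, the round trip recovers the standard matrix exactly.
assert g.inv() * T_cov == T_mixed
```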

## 4. General symbolic transformations; transformation operations

As illustrated by the next In[ ] cell, GAlgebra may be used to:

- Instantiate a general symbolic linear transformation $$\lt{G}$$.
- Display how $$\lt{G}$$ maps each basis vector to a linear combination of basis vectors.
- Find the matrix of $$\lt{G}$$.
- Compute $$\lt{G}$$’s action on a generic vector $$\mbf{x}$$.
- Compute $$\lt{G}$$’s action on a generic multivector $$\mbf{X}$$.
- Find the determinant of $$\lt{G}$$.
- Find the trace of $$\lt{G}$$.

[4]:

G = m3.lt('G')                            # instantiate general symbolic transformation
gprint(r'\mbf{G}:~', G)                   # transformation's action on basis vectors
gprint(r'[\lt{G}]_\scrE =', G.matrix())   # matrix with respect to basis
gprint(r'\text{G.matrix()[3-1, 1-1]}=', G.matrix()[3-1, 1-1])   # (3,1) matrix entry
gprint(r'\lt{G}(\mbf{x})=', G(x).Fmt(3))  # action on generic vector
gprint(r'\lt{G}(\mbf{X})=', G(X).Fmt(3))  # action on generic multivector
gprint(r'\det(\lt{G})=', G.det())         # transformation's determinant
gprint(r'\text{tr}(\lt{G})=', G.tr())     # transformation's trace

$\displaystyle \mbf{G}:~ \left\{ \begin{aligned} \boldsymbol{\mbf{e}}_{1} &\mapsto {G^{1}}_{1} \boldsymbol{\mbf{e}}_{1} + {G^{2}}_{1} \boldsymbol{\mbf{e}}_{2} + {G^{3}}_{1} \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{2} &\mapsto {G^{1}}_{2} \boldsymbol{\mbf{e}}_{1} + {G^{2}}_{2} \boldsymbol{\mbf{e}}_{2} + {G^{3}}_{2} \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{3} &\mapsto {G^{1}}_{3} \boldsymbol{\mbf{e}}_{1} + {G^{2}}_{3} \boldsymbol{\mbf{e}}_{2} + {G^{3}}_{3} \boldsymbol{\mbf{e}}_{3} \end{aligned} \right\}$
$\displaystyle [\lt{G}]_\scrE = \left[\begin{array}{ccc}{G^{1}}_{1} & {G^{1}}_{2} & {G^{1}}_{3}\\{G^{2}}_{1} & {G^{2}}_{2} & {G^{2}}_{3}\\{G^{3}}_{1} & {G^{3}}_{2} & {G^{3}}_{3}\end{array}\right]$
$\displaystyle \text{G.matrix()[3-1, 1-1]}= {G^{3}}_{1}$
$\displaystyle \lt{G}(\mbf{x})= \begin{aligned}[t] & \left ( x^{1} {G^{1}}_{1} + x^{2} {G^{1}}_{2} + x^{3} {G^{1}}_{3}\right ) \boldsymbol{\mbf{e}}_{1} \\ & + \left ( x^{1} {G^{2}}_{1} + x^{2} {G^{2}}_{2} + x^{3} {G^{2}}_{3}\right ) \boldsymbol{\mbf{e}}_{2} \\ & + \left ( x^{1} {G^{3}}_{1} + x^{2} {G^{3}}_{2} + x^{3} {G^{3}}_{3}\right ) \boldsymbol{\mbf{e}}_{3} \end{aligned}$
$\displaystyle \lt{G}(\mbf{X})= \begin{aligned}[t] & X \\ & + \left ( X^{1} {G^{1}}_{1} + X^{2} {G^{1}}_{2} + X^{3} {G^{1}}_{3}\right ) \boldsymbol{\mbf{e}}_{1} \\ & + \left ( X^{1} {G^{2}}_{1} + X^{2} {G^{2}}_{2} + X^{3} {G^{2}}_{3}\right ) \boldsymbol{\mbf{e}}_{2} \\ & + \left ( X^{1} {G^{3}}_{1} + X^{2} {G^{3}}_{2} + X^{3} {G^{3}}_{3}\right ) \boldsymbol{\mbf{e}}_{3} \\ & + \left ( X^{12} {G^{1}}_{1} {G^{2}}_{2} - X^{12} {G^{1}}_{2} {G^{2}}_{1} + X^{13} {G^{1}}_{1} {G^{2}}_{3} - X^{13} {G^{1}}_{3} {G^{2}}_{1} + X^{23} {G^{1}}_{2} {G^{2}}_{3} - X^{23} {G^{1}}_{3} {G^{2}}_{2}\right ) \boldsymbol{\mbf{e}}_{12} \\ & + \left ( X^{12} {G^{1}}_{1} {G^{3}}_{2} - X^{12} {G^{1}}_{2} {G^{3}}_{1} + X^{13} {G^{1}}_{1} {G^{3}}_{3} - X^{13} {G^{1}}_{3} {G^{3}}_{1} + X^{23} {G^{1}}_{2} {G^{3}}_{3} - X^{23} {G^{1}}_{3} {G^{3}}_{2}\right ) \boldsymbol{\mbf{e}}_{13} \\ & + \left ( X^{12} {G^{2}}_{1} {G^{3}}_{2} - X^{12} {G^{2}}_{2} {G^{3}}_{1} + X^{13} {G^{2}}_{1} {G^{3}}_{3} - X^{13} {G^{2}}_{3} {G^{3}}_{1} + X^{23} {G^{2}}_{2} {G^{3}}_{3} - X^{23} {G^{2}}_{3} {G^{3}}_{2}\right ) \boldsymbol{\mbf{e}}_{23} \\ & + X^{123} \left({G^{1}}_{1} {G^{2}}_{2} {G^{3}}_{3} - {G^{1}}_{1} {G^{2}}_{3} {G^{3}}_{2} - {G^{1}}_{2} {G^{2}}_{1} {G^{3}}_{3} + {G^{1}}_{2} {G^{2}}_{3} {G^{3}}_{1} + {G^{1}}_{3} {G^{2}}_{1} {G^{3}}_{2} - {G^{1}}_{3} {G^{2}}_{2} {G^{3}}_{1}\right) \boldsymbol{\mbf{e}}_{123} \end{aligned}$
$\displaystyle \det(\lt{G})= {G^{1}}_{1} {G^{2}}_{2} {G^{3}}_{3} - {G^{1}}_{1} {G^{2}}_{3} {G^{3}}_{2} - {G^{1}}_{2} {G^{2}}_{1} {G^{3}}_{3} + {G^{1}}_{2} {G^{2}}_{3} {G^{3}}_{1} + {G^{1}}_{3} {G^{2}}_{1} {G^{3}}_{2} - {G^{1}}_{3} {G^{2}}_{2} {G^{3}}_{1}$
$\displaystyle \text{tr}(\lt{G})= {G^{1}}_{1} + {G^{2}}_{2} + {G^{3}}_{3}$
• GAlgebra can multiply a linear transformation by a scalar, provided it’s a SymPy scalar. To multiply a transformation by a 0-vector (i.e. a GAlgebra scalar), the 0-vector must first be converted into a SymPy scalar by use of the multivector method .scalar().

[5]:

kappa = m3.mv('kappa', 0)             # a 0-vector, not a SymPy scalar
gprint(kappa, r'\text{ belongs to }', type(kappa))
gprint(r'\text{converted }', kappa.scalar(), r'\text{ belongs to }', type(kappa.scalar()))
gprint(r'\kappa\lt{G}:~', kappa.scalar()*G)   # product of 0-vector and transformation
gprint()
lamda = symbols('lambda', real=True)  # a SymPy scalar, not a 0-vector
gprint(lamda, r'\text{ belongs to }', type(lamda))
gprint(r'\lambda\lt{G}:~', lamda*G)   # product of SymPy scalar and transformation

$\displaystyle \kappa \text{ belongs to } \text{<class 'galgebra.mv.Mv'>}$
$\displaystyle \text{converted } \kappa \text{ belongs to } \text{<class 'sympy.core.symbol.Symbol'>}$
$\displaystyle \kappa\lt{G}:~ \left\{ \begin{aligned} \boldsymbol{\mbf{e}}_{1} &\mapsto \kappa {G^{1}}_{1} \boldsymbol{\mbf{e}}_{1} + \kappa {G^{2}}_{1} \boldsymbol{\mbf{e}}_{2} + \kappa {G^{3}}_{1} \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{2} &\mapsto \kappa {G^{1}}_{2} \boldsymbol{\mbf{e}}_{1} + \kappa {G^{2}}_{2} \boldsymbol{\mbf{e}}_{2} + \kappa {G^{3}}_{2} \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{3} &\mapsto \kappa {G^{1}}_{3} \boldsymbol{\mbf{e}}_{1} + \kappa {G^{2}}_{3} \boldsymbol{\mbf{e}}_{2} + \kappa {G^{3}}_{3} \boldsymbol{\mbf{e}}_{3} \end{aligned} \right\}$
$\displaystyle$
$\displaystyle \lambda \text{ belongs to } \text{<class 'sympy.core.symbol.Symbol'>}$
$\displaystyle \lambda\lt{G}:~ \left\{ \begin{aligned} \boldsymbol{\mbf{e}}_{1} &\mapsto \lambda {G^{1}}_{1} \boldsymbol{\mbf{e}}_{1} + \lambda {G^{2}}_{1} \boldsymbol{\mbf{e}}_{2} + \lambda {G^{3}}_{1} \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{2} &\mapsto \lambda {G^{1}}_{2} \boldsymbol{\mbf{e}}_{1} + \lambda {G^{2}}_{2} \boldsymbol{\mbf{e}}_{2} + \lambda {G^{3}}_{2} \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{3} &\mapsto \lambda {G^{1}}_{3} \boldsymbol{\mbf{e}}_{1} + \lambda {G^{2}}_{3} \boldsymbol{\mbf{e}}_{2} + \lambda {G^{3}}_{3} \boldsymbol{\mbf{e}}_{3} \end{aligned} \right\}$
GAlgebra can combine transformations $$\lt{F}$$ and $$\lt{G}$$ using

- addition, $$\lt{F} + \lt{G}$$,
- subtraction, $$\lt{F} - \lt{G}$$, and/or
- compositional multiplication, $$\lt{F}\lt{G}$$.

The compositional product $$\lt{FG} \equiv \lt{F}\circ\lt{G}$$ is obtained from the GAlgebra expression F*G.

[6]:

F = m3.lt('F')    # a symbolic linear transformation
G = m3.lt('G')    # another symbolic transformation
gprint(r'\lt{F} + \lt{G}:~', F + G,
       r'\qquad[\lt{F}+\lt{G}]_\scrE =', (F + G).matrix())
gprint(r'\lt{F} - \lt{G}:~', F - G,
       r'\qquad[\lt{F}-\lt{G}]_\scrE =', (F - G).matrix())
gprint(r'\lt{F}\lt{G}:~', F * G,
       r'\qquad[\lt{F}\lt{G}]_\scrE =', (F * G).matrix())
# Test:  Is the matrix of two transformations' compositional product
# the same as the product of the transformations' matrices?
gprint(r'\text{(F*G).matrix() == F.matrix() * G.matrix()}:~',
(F*G).matrix() == F.matrix() * G.matrix())

$\displaystyle \lt{F} + \lt{G}:~ \left\{ \begin{aligned} \boldsymbol{\mbf{e}}_{1} &\mapsto \left ( {F^{1}}_{1} + {G^{1}}_{1}\right ) \boldsymbol{\mbf{e}}_{1} + \left ( {F^{2}}_{1} + {G^{2}}_{1}\right ) \boldsymbol{\mbf{e}}_{2} + \left ( {F^{3}}_{1} + {G^{3}}_{1}\right ) \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{2} &\mapsto \left ( {F^{1}}_{2} + {G^{1}}_{2}\right ) \boldsymbol{\mbf{e}}_{1} + \left ( {F^{2}}_{2} + {G^{2}}_{2}\right ) \boldsymbol{\mbf{e}}_{2} + \left ( {F^{3}}_{2} + {G^{3}}_{2}\right ) \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{3} &\mapsto \left ( {F^{1}}_{3} + {G^{1}}_{3}\right ) \boldsymbol{\mbf{e}}_{1} + \left ( {F^{2}}_{3} + {G^{2}}_{3}\right ) \boldsymbol{\mbf{e}}_{2} + \left ( {F^{3}}_{3} + {G^{3}}_{3}\right ) \boldsymbol{\mbf{e}}_{3} \end{aligned} \right\} \qquad[\lt{F}+\lt{G}]_\scrE = \left[\begin{array}{ccc}{F^{1}}_{1} + {G^{1}}_{1} & {F^{1}}_{2} + {G^{1}}_{2} & {F^{1}}_{3} + {G^{1}}_{3}\\{F^{2}}_{1} + {G^{2}}_{1} & {F^{2}}_{2} + {G^{2}}_{2} & {F^{2}}_{3} + {G^{2}}_{3}\\{F^{3}}_{1} + {G^{3}}_{1} & {F^{3}}_{2} + {G^{3}}_{2} & {F^{3}}_{3} + {G^{3}}_{3}\end{array}\right]$
$\displaystyle \lt{F} - \lt{G}:~ \left\{ \begin{aligned} \boldsymbol{\mbf{e}}_{1} &\mapsto \left ( {F^{1}}_{1} - {G^{1}}_{1}\right ) \boldsymbol{\mbf{e}}_{1} + \left ( {F^{2}}_{1} - {G^{2}}_{1}\right ) \boldsymbol{\mbf{e}}_{2} + \left ( {F^{3}}_{1} - {G^{3}}_{1}\right ) \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{2} &\mapsto \left ( {F^{1}}_{2} - {G^{1}}_{2}\right ) \boldsymbol{\mbf{e}}_{1} + \left ( {F^{2}}_{2} - {G^{2}}_{2}\right ) \boldsymbol{\mbf{e}}_{2} + \left ( {F^{3}}_{2} - {G^{3}}_{2}\right ) \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{3} &\mapsto \left ( {F^{1}}_{3} - {G^{1}}_{3}\right ) \boldsymbol{\mbf{e}}_{1} + \left ( {F^{2}}_{3} - {G^{2}}_{3}\right ) \boldsymbol{\mbf{e}}_{2} + \left ( {F^{3}}_{3} - {G^{3}}_{3}\right ) \boldsymbol{\mbf{e}}_{3} \end{aligned} \right\} \qquad[\lt{F}-\lt{G}]_\scrE = \left[\begin{array}{ccc}{F^{1}}_{1} - {G^{1}}_{1} & {F^{1}}_{2} - {G^{1}}_{2} & {F^{1}}_{3} - {G^{1}}_{3}\\{F^{2}}_{1} - {G^{2}}_{1} & {F^{2}}_{2} - {G^{2}}_{2} & {F^{2}}_{3} - {G^{2}}_{3}\\{F^{3}}_{1} - {G^{3}}_{1} & {F^{3}}_{2} - {G^{3}}_{2} & {F^{3}}_{3} - {G^{3}}_{3}\end{array}\right]$
$\displaystyle \lt{F}\lt{G}:~ \left\{ \begin{aligned} \boldsymbol{\mbf{e}}_{1} &\mapsto \left ( {F^{1}}_{1} {G^{1}}_{1} + {F^{1}}_{2} {G^{2}}_{1} + {F^{1}}_{3} {G^{3}}_{1}\right ) \boldsymbol{\mbf{e}}_{1} + \left ( {F^{2}}_{1} {G^{1}}_{1} + {F^{2}}_{2} {G^{2}}_{1} + {F^{2}}_{3} {G^{3}}_{1}\right ) \boldsymbol{\mbf{e}}_{2} + \left ( {F^{3}}_{1} {G^{1}}_{1} + {F^{3}}_{2} {G^{2}}_{1} + {F^{3}}_{3} {G^{3}}_{1}\right ) \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{2} &\mapsto \left ( {F^{1}}_{1} {G^{1}}_{2} + {F^{1}}_{2} {G^{2}}_{2} + {F^{1}}_{3} {G^{3}}_{2}\right ) \boldsymbol{\mbf{e}}_{1} + \left ( {F^{2}}_{1} {G^{1}}_{2} + {F^{2}}_{2} {G^{2}}_{2} + {F^{2}}_{3} {G^{3}}_{2}\right ) \boldsymbol{\mbf{e}}_{2} + \left ( {F^{3}}_{1} {G^{1}}_{2} + {F^{3}}_{2} {G^{2}}_{2} + {F^{3}}_{3} {G^{3}}_{2}\right ) \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{3} &\mapsto \left ( {F^{1}}_{1} {G^{1}}_{3} + {F^{1}}_{2} {G^{2}}_{3} + {F^{1}}_{3} {G^{3}}_{3}\right ) \boldsymbol{\mbf{e}}_{1} + \left ( {F^{2}}_{1} {G^{1}}_{3} + {F^{2}}_{2} {G^{2}}_{3} + {F^{2}}_{3} {G^{3}}_{3}\right ) \boldsymbol{\mbf{e}}_{2} + \left ( {F^{3}}_{1} {G^{1}}_{3} + {F^{3}}_{2} {G^{2}}_{3} + {F^{3}}_{3} {G^{3}}_{3}\right ) \boldsymbol{\mbf{e}}_{3} \end{aligned} \right\} \qquad[\lt{F}\lt{G}]_\scrE = \left[\begin{array}{ccc}{F^{1}}_{1} {G^{1}}_{1} + {F^{1}}_{2} {G^{2}}_{1} + {F^{1}}_{3} {G^{3}}_{1} & {F^{1}}_{1} {G^{1}}_{2} + {F^{1}}_{2} {G^{2}}_{2} + {F^{1}}_{3} {G^{3}}_{2} & {F^{1}}_{1} {G^{1}}_{3} + {F^{1}}_{2} {G^{2}}_{3} + {F^{1}}_{3} {G^{3}}_{3}\\{F^{2}}_{1} {G^{1}}_{1} + {F^{2}}_{2} {G^{2}}_{1} + {F^{2}}_{3} {G^{3}}_{1} & {F^{2}}_{1} {G^{1}}_{2} + {F^{2}}_{2} {G^{2}}_{2} + {F^{2}}_{3} {G^{3}}_{2} & {F^{2}}_{1} {G^{1}}_{3} + {F^{2}}_{2} {G^{2}}_{3} + {F^{2}}_{3} {G^{3}}_{3}\\{F^{3}}_{1} {G^{1}}_{1} + {F^{3}}_{2} {G^{2}}_{1} + {F^{3}}_{3} {G^{3}}_{1} & {F^{3}}_{1} {G^{1}}_{2} + {F^{3}}_{2} {G^{2}}_{2} + {F^{3}}_{3} {G^{3}}_{2} & {F^{3}}_{1} {G^{1}}_{3} + {F^{3}}_{2} {G^{2}}_{3} + {F^{3}}_{3} {G^{3}}_{3}\end{array}\right]$
$\displaystyle \text{(F*G).matrix() == F.matrix() * G.matrix()}:~ \text{True}$

Observe that the $$(i,j)$$ entry in the above output for $$[\lt{F}\lt{G}]_\scrE$$ is given by $$\sum_{k=1}^n {F^i}_k {G^k}_j$$ (where $$n=3$$ for $$\Gn{1,2}$$), as it should be.

• The GAlgebra expression G.adj() returns the adjoint $$\ad{G}$$ of transformation $$\lt{G}$$.

[7]:

gprint(r'\text{adjoint }\ad{G}\text{ of general symbolic transformation }\lt{G}:')
gprint(r'\qquad\ad{G}:', G.adj(),
       r'\qquad[\ad{G}]_\scrE=', G.adj().matrix())
gprint(r'\qquad\ad{G}(\mbf{x})=', G.adj()(x).Fmt(3))
gprint(r'\qquad {{G^*}^1}_2=(1,2)\text{ entry of matrix of }\ad{G}=',
       G.adj().matrix()[1-1, 2-1])

$\displaystyle \text{adjoint }\ad{G}\text{ of general symbolic transformation }\lt{G}:$
$\displaystyle \begin{equation*} \qquad\ad{G}: \left\{ \begin{aligned} \boldsymbol{\mbf{e}}_{1} &\mapsto {G^{1}}_{1} \boldsymbol{\mbf{e}}_{1} - {G^{1}}_{2} \boldsymbol{\mbf{e}}_{2} - {G^{1}}_{3} \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{2} &\mapsto - {G^{2}}_{1} \boldsymbol{\mbf{e}}_{1} + {G^{2}}_{2} \boldsymbol{\mbf{e}}_{2} + {G^{2}}_{3} \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{3} &\mapsto - {G^{3}}_{1} \boldsymbol{\mbf{e}}_{1} + {G^{3}}_{2} \boldsymbol{\mbf{e}}_{2} + {G^{3}}_{3} \boldsymbol{\mbf{e}}_{3} \end{aligned} \right\} \qquad[\ad{G}]_\scrE= \left[\begin{array}{ccc}{G^{1}}_{1} & - {G^{2}}_{1} & - {G^{3}}_{1}\\- {G^{1}}_{2} & {G^{2}}_{2} & {G^{3}}_{2}\\- {G^{1}}_{3} & {G^{2}}_{3} & {G^{3}}_{3}\end{array}\right] \end{equation*}$
$\displaystyle \begin{equation*} \qquad\ad{G}(\mbf{x})= \begin{aligned}[t] & \left ( x^{1} {G^{1}}_{1} - x^{2} {G^{2}}_{1} - x^{3} {G^{3}}_{1}\right ) \boldsymbol{\mbf{e}}_{1} \\ & + \left ( - x^{1} {G^{1}}_{2} + x^{2} {G^{2}}_{2} + x^{3} {G^{3}}_{2}\right ) \boldsymbol{\mbf{e}}_{2} \\ & + \left ( - x^{1} {G^{1}}_{3} + x^{2} {G^{2}}_{3} + x^{3} {G^{3}}_{3}\right ) \boldsymbol{\mbf{e}}_{3} \end{aligned} \end{equation*}$
$\displaystyle \qquad {{G^*}^1}_2=(1,2)\text{ entry of matrix of }\ad{G}= - {G^{2}}_{1}$

Remark: As the above output shows, the adjoint’s matrix is not necessarily the transpose of the transformation’s matrix. The two are necessarily equal only when the basis used is orthonormal and the metric has a Euclidean signature. The algebra being used, m3, fails the second of those two requirements.
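The remark can be checked at the matrix level in plain SymPy, assuming the standard relation $$[\ad{T}]_\scrE = [g]^{-1}\,[\lt{T}]_\scrE^{\mathsf{T}}\,[g]$$ for a basis with metric matrix $$[g]$$; the entry names below are illustrative:

```python
from sympy import Matrix, Symbol, eye

g = Matrix([[1, 0, 0], [0, -1, 0], [0, 0, -1]])    # m3's metric
G_mat = Matrix(3, 3, lambda i, j: Symbol(f'G{i+1}{j+1}'))

# Matrix of the adjoint with respect to the same basis.
G_adj = g.inv() * G_mat.T * g

# The (1,2) entry is -G21, matching the output above, not the transpose's G21.
assert G_adj[0, 1] == -Symbol('G21')
assert G_adj != G_mat.T

# With a Euclidean metric (identity g) the adjoint's matrix IS the transpose.
assert eye(3).inv() * G_mat.T * eye(3) == G_mat.T
```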

Internal representation of a linear transformation. The model G of a transformation $$\lt{G}$$ is stored internally as a Python dictionary, accessible as the attribute G.lt_dict. When printed, the dictionary’s key: value pairs appear to have the form basis vector: $$\lt{G}$$(basis vector), but appearances are deceiving. The keys are actually basis symbols, i.e. objects which are mapped to the basis vectors by application of the method m3.mv(), and the values are linear combinations of the basis symbols, i.e. objects which m3.mv() maps to the basis image vectors. This distinction will be important in Section 6, when we explore the different ways one can specify a non-symbolic linear transformation.

[8]:

gprint(r'\text{basis vectors}:~~\text{m3.mv()}=', m3.mv())
gprint(r'\text{basis symbols}:~~\text{m3.basis}=', m3.basis)
gprint(r'\text{internal dictionary}:~~\text{G.lt_dict}=', G.lt_dict)
for base in m3.basis:
    gprint(base, r'\text{ from m3.basis belongs to }', type(base))
    gprint(G.lt_dict[base], r'\text{ belongs to }',
           type(G.lt_dict[base]))

$\displaystyle \text{basis vectors}:~~\text{m3.mv()}= \left( \boldsymbol{\mbf{e}}_{1}, \ \boldsymbol{\mbf{e}}_{2}, \ \boldsymbol{\mbf{e}}_{3}\right)$
$\displaystyle \text{basis symbols}:~~\text{m3.basis}= \left[ \boldsymbol{\mbf{e}}_{1}, \ \boldsymbol{\mbf{e}}_{2}, \ \boldsymbol{\mbf{e}}_{3}\right]$
$\displaystyle \text{internal dictionary}:~~\text{G.lt_dict}= \left\{ \boldsymbol{\mbf{e}}_{1} : {G^{1}}_{1} \boldsymbol{\mbf{e}}_{1} + {G^{2}}_{1} \boldsymbol{\mbf{e}}_{2} + {G^{3}}_{1} \boldsymbol{\mbf{e}}_{3}, \ \boldsymbol{\mbf{e}}_{2} : {G^{1}}_{2} \boldsymbol{\mbf{e}}_{1} + {G^{2}}_{2} \boldsymbol{\mbf{e}}_{2} + {G^{3}}_{2} \boldsymbol{\mbf{e}}_{3}, \ \boldsymbol{\mbf{e}}_{3} : {G^{1}}_{3} \boldsymbol{\mbf{e}}_{1} + {G^{2}}_{3} \boldsymbol{\mbf{e}}_{2} + {G^{3}}_{3} \boldsymbol{\mbf{e}}_{3}\right\}$
$\displaystyle \boldsymbol{\mbf{e}}_{1} \text{ from m3.basis belongs to } \text{<class 'galgebra.atoms.BasisVectorSymbol'>}$
$\displaystyle {G^{1}}_{1} \boldsymbol{\mbf{e}}_{1} + {G^{2}}_{1} \boldsymbol{\mbf{e}}_{2} + {G^{3}}_{1} \boldsymbol{\mbf{e}}_{3} \text{ belongs to } \text{<class 'sympy.core.add.Add'>}$
$\displaystyle \boldsymbol{\mbf{e}}_{2} \text{ from m3.basis belongs to } \text{<class 'galgebra.atoms.BasisVectorSymbol'>}$
$\displaystyle {G^{1}}_{2} \boldsymbol{\mbf{e}}_{1} + {G^{2}}_{2} \boldsymbol{\mbf{e}}_{2} + {G^{3}}_{2} \boldsymbol{\mbf{e}}_{3} \text{ belongs to } \text{<class 'sympy.core.add.Add'>}$
$\displaystyle \boldsymbol{\mbf{e}}_{3} \text{ from m3.basis belongs to } \text{<class 'galgebra.atoms.BasisVectorSymbol'>}$
$\displaystyle {G^{1}}_{3} \boldsymbol{\mbf{e}}_{1} + {G^{2}}_{3} \boldsymbol{\mbf{e}}_{2} + {G^{3}}_{3} \boldsymbol{\mbf{e}}_{3} \text{ belongs to } \text{<class 'sympy.core.add.Add'>}$

Versor-based transformations will be discussed in Section 7. Such a transformation stores a versor internally in its .V attribute. Each transformation, whether dictionary based or versor based, has a boolean .versor attribute which, when True, signals the existence of the .V attribute.

## 5. Symmetric and antisymmetric symbolic transformations

Besides general symbolic transformations, encountered in the previous section, GAlgebra can create symmetric (a.k.a. self-adjoint) and antisymmetric (a.k.a. skew) symbolic transformations.

[9]:

G = m3.lt('G', mode='g')    # instantiate a general symbolic transformation
# specification mode='g' is not strictly necessary as 'g' is the default
# value of the mode parameter
S = m3.lt('S', mode='s')    # instantiate a symmetric symbolic transformation
A = m3.lt('A', mode='a')    # instantiate an antisymmetric symbolic transformation


Unlike a general symbolic transformation $$\lt{G}$$, the entries in S.matrix() for a symmetric symbolic transformation $$\lt{S}$$ will not appear to be entries in the standard (contravariant-covariant) matrix for that transformation, although they are. Instead they will appear as linear combinations of doubly-subscripted symbols $$S_{ij}$$, where $$1 \le i \le j \le n$$. Those symbols have the significance of being the entries on or above the diagonal of $$\lt{S}$$’s covariant-covariant matrix.

Similarly, the entries in A.matrix() for an antisymmetric symbolic transformation $$\lt{A}$$ will appear as linear combinations of doubly-subscripted symbols $$A_{ij}$$, where $$1 \le i < j \le n$$. Those symbols have the significance of being the entries above the diagonal of $$\lt{A}$$’s covariant-covariant matrix.
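The relationship between the two matrices of a symmetric transformation can be sketched in plain SymPy (the symbol names are illustrative): the covariant-covariant matrix is symmetric, while the standard matrix obtained by raising an index generally is not.

```python
from sympy import Matrix, Symbol

g = Matrix([[1, 0, 0], [0, -1, 0], [0, 0, -1]])    # m3's metric

# Covariant-covariant matrix of a symmetric transformation: S_ij = S_ji,
# built from the entries S_ij with i <= j, on or above the diagonal.
S_cov = Matrix(3, 3, lambda i, j: Symbol(f'S{min(i, j)+1}{max(i, j)+1}'))
assert S_cov == S_cov.T

# Standard (contravariant-covariant) matrix: raise the first index.
S_mixed = g.inv() * S_cov

# With m3's indefinite metric, the standard matrix is not symmetric.
assert S_mixed != S_mixed.T
```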

[10]:

gprint(r'\text{general symbolic transformation }\lt{G}:')
gprint(r'\qquad(3,1)\text{ entry of the contravariant-covariant matrix is }',
G.matrix()[3-1,1-1])               # example matrix entry
gprint(r'\qquad\text{covariant-covariant matrix of }\lt{G}=', m3.g * G.matrix())
gprint()
gprint(r'\text{symmetric symbolic transformation }\lt{S}:')
gprint(r'\qquad(3,1)\text{ entry of the contravariant-covariant matrix is }',
S.matrix()[3-1,1-1])               # example matrix entry
gprint(r'\qquad\text{covariant-covariant matrix of }\lt{S}=', m3.g * S.matrix())
gprint()
gprint(r'\text{antisymmetric symbolic transformation }\lt{A}:')
gprint(r'\qquad(3,1)\text{ entry of the contravariant-covariant matrix is }',
A.matrix()[3-1,1-1])               # example matrix entry
gprint(r'\qquad\text{covariant-covariant matrix of }\lt{A}=', m3.g * A.matrix())

$\displaystyle \text{general symbolic transformation }\lt{G}:$
$\displaystyle \begin{equation*} \qquad\lt{G}: \left\{ \begin{aligned} \boldsymbol{\mbf{e}}_{1} &\mapsto {G^{1}}_{1} \boldsymbol{\mbf{e}}_{1} + {G^{2}}_{1} \boldsymbol{\mbf{e}}_{2} + {G^{3}}_{1} \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{2} &\mapsto {G^{1}}_{2} \boldsymbol{\mbf{e}}_{1} + {G^{2}}_{2} \boldsymbol{\mbf{e}}_{2} + {G^{3}}_{2} \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{3} &\mapsto {G^{1}}_{3} \boldsymbol{\mbf{e}}_{1} + {G^{2}}_{3} \boldsymbol{\mbf{e}}_{2} + {G^{3}}_{3} \boldsymbol{\mbf{e}}_{3} \end{aligned} \right\} \qquad[\lt{G}]_\scrE= \left[\begin{array}{ccc}{G^{1}}_{1} & {G^{1}}_{2} & {G^{1}}_{3}\\{G^{2}}_{1} & {G^{2}}_{2} & {G^{2}}_{3}\\{G^{3}}_{1} & {G^{3}}_{2} & {G^{3}}_{3}\end{array}\right] \end{equation*}$
$\displaystyle \begin{equation*} \qquad\lt{G}(\mbf{x})= \left ( x^{1} {G^{1}}_{1} + x^{2} {G^{1}}_{2} + x^{3} {G^{1}}_{3}\right ) \boldsymbol{\mbf{e}}_{1} + \left ( x^{1} {G^{2}}_{1} + x^{2} {G^{2}}_{2} + x^{3} {G^{2}}_{3}\right ) \boldsymbol{\mbf{e}}_{2} + \left ( x^{1} {G^{3}}_{1} + x^{2} {G^{3}}_{2} + x^{3} {G^{3}}_{3}\right ) \boldsymbol{\mbf{e}}_{3} \end{equation*}$
$\displaystyle \qquad(3,1)\text{ entry of the contravariant-covariant matrix is } {G^{3}}_{1}$
$\displaystyle \qquad\text{covariant-covariant matrix of }\lt{G}= \left[\begin{array}{ccc}{G^{1}}_{1} & {G^{1}}_{2} & {G^{1}}_{3}\\- {G^{2}}_{1} & - {G^{2}}_{2} & - {G^{2}}_{3}\\- {G^{3}}_{1} & - {G^{3}}_{2} & - {G^{3}}_{3}\end{array}\right]$
$\displaystyle$
$\displaystyle \text{symmetric symbolic transformation }\lt{S}:$
$\displaystyle \begin{equation*} \qquad\lt{S}: \left\{ \begin{aligned} \boldsymbol{\mbf{e}}_{1} &\mapsto S_{1} \boldsymbol{\mbf{e}}_{1} - S_{2} \boldsymbol{\mbf{e}}_{2} - S_{3} \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{2} &\mapsto S_{2} \boldsymbol{\mbf{e}}_{1} - S_{4} \boldsymbol{\mbf{e}}_{2} - S_{5} \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{3} &\mapsto S_{3} \boldsymbol{\mbf{e}}_{1} - S_{5} \boldsymbol{\mbf{e}}_{2} - S_{6} \boldsymbol{\mbf{e}}_{3} \end{aligned} \right\} \qquad[\lt{S}]_\scrE= \left[\begin{array}{ccc}S_{1} & S_{2} & S_{3}\\- S_{2} & - S_{4} & - S_{5}\\- S_{3} & - S_{5} & - S_{6}\end{array}\right] \end{equation*}$
$\displaystyle \begin{equation*} \qquad\lt{S}(\mbf{x})= \left ( S_{1} x^{1} + S_{2} x^{2} + S_{3} x^{3}\right ) \boldsymbol{\mbf{e}}_{1} + \left ( - S_{2} x^{1} - S_{4} x^{2} - S_{5} x^{3}\right ) \boldsymbol{\mbf{e}}_{2} + \left ( - S_{3} x^{1} - S_{5} x^{2} - S_{6} x^{3}\right ) \boldsymbol{\mbf{e}}_{3} \end{equation*}$
$\displaystyle \qquad(3,1)\text{ entry of the contravariant-covariant matrix is } - S_{3}$
$\displaystyle \qquad\text{covariant-covariant matrix of }\lt{S}= \left[\begin{array}{ccc}S_{1} & S_{2} & S_{3}\\S_{2} & S_{4} & S_{5}\\S_{3} & S_{5} & S_{6}\end{array}\right]$
$\displaystyle$
$\displaystyle \text{antisymmetric symbolic transformation }\lt{A}:$
$\displaystyle \begin{equation*} \qquad\lt{A}: \left\{ \begin{aligned} \boldsymbol{\mbf{e}}_{1} &\mapsto A_{1} \boldsymbol{\mbf{e}}_{2} + A_{2} \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{2} &\mapsto A_{1} \boldsymbol{\mbf{e}}_{1} + A_{3} \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{3} &\mapsto A_{2} \boldsymbol{\mbf{e}}_{1} - A_{3} \boldsymbol{\mbf{e}}_{2} \end{aligned} \right\} \qquad[\lt{A}]_\scrE= \left[\begin{array}{ccc}0 & A_{1} & A_{2}\\A_{1} & 0 & - A_{3}\\A_{2} & A_{3} & 0\end{array}\right] \end{equation*}$
$\displaystyle \begin{equation*} \qquad\lt{A}(\mbf{x})= \left ( A_{1} x^{2} + A_{2} x^{3}\right ) \boldsymbol{\mbf{e}}_{1} + \left ( A_{1} x^{1} - A_{3} x^{3}\right ) \boldsymbol{\mbf{e}}_{2} + \left ( A_{2} x^{1} + A_{3} x^{2}\right ) \boldsymbol{\mbf{e}}_{3} \end{equation*}$
$\displaystyle \qquad(3,1)\text{ entry of the contravariant-covariant matrix is } A_{2}$
$\displaystyle \qquad\text{covariant-covariant matrix of }\lt{A}= \left[\begin{array}{ccc}0 & A_{1} & A_{2}\\- A_{1} & 0 & A_{3}\\- A_{2} & - A_{3} & 0\end{array}\right]$

Notice in the above output that the covariant-covariant matrix of the general transformation $$\lt{G}$$ is neither symmetric nor antisymmetric, that of the symmetric transformation $$\lt{S}$$ is symmetric, and that of the antisymmetric transformation $$\lt{A}$$ is antisymmetric. None of the transformations displays symmetry or antisymmetry in its standard (contravariant-covariant) matrix.

## 6. Non-symbolic, dictionary-based transformations

All examples in Sections 4 and 5 were of symbolic transformations. To create such a transformation, all that was needed was a one-letter string, which specifies the kernel symbol for the matrix entries, and a specification of whether the transformation is to be general (mode='g'), symmetric (mode='s'), or antisymmetric (mode='a').

Besides a one-letter string, the transformation constructor m3.lt can accept an object of several other types as its instantiation parameter. Such an object can be:

• lt_list, a list of lists: its $$j$$th entry lt_list[j] is the list of expansion coefficients of the $$j$$th image vector.

• lt_matrix, a SymPy matrix, identical to the standard matrix of the desired transformation.

• lt_dict, a Python dictionary, consists of key:value pairs, where each key is a basis symbol (entry in m3.basis) and each value is a linear combination of basis symbols.

• lt_func, a linear vector-valued function of a vector argument, has the same action on vectors as the desired linear transformation/outermorphism.

Each such parameter is a different way of specifying the images of the basis vectors under the desired transformation/outermorphism. The mode parameter is not specified, since symmetry or antisymmetry, if either, is determined by the basis vector images.

An example. Suppose we want to instantiate a transformation $$\lt{T}$$ whose actions on the basis vectors are

$\begin{split}\left\{~ \begin{array}{llrrr} \lt{T}(\es{1}) &= &(0)\es{1} + &(-1)\es{2} + &(2)\es{3} \\ \lt{T}(\es{2}) &= &(5)\es{1} + &(2)\es{2} + &(3)\es{3} \\ \lt{T}(\es{3}) &= &(2)\es{1} + &(0)\es{2} + &(0)\es{3} \\ \end{array} ~\right\}.\end{split}$

We create different type objects each of which encodes the information in the above set of equations:

[11]:

# type list (of lists)
lt_list = [[0, -1, 2], [5, 2, 3], [2, 0, 0]]

# type Matrix
lt_matrix = Matrix([[0, 5, 2], [-1, 2, 0], [2, 3, 0]])

# type dict
lt_dict = {}          # create empty dictionary, then add key:value pairs
b1,b2,b3 = m3.basis   # basis symbols corresponding to basis vectors
lt_dict[b1] = -b2 + 2*b3          # 1st linear combination of basis symbols
lt_dict[b2] = 5*b1 + 2*b2 + 3*b3  # 2nd linear combination of basis symbols
lt_dict[b3] = 2*b1                # 3rd linear combination of basis symbols

# type function
lt_func = lambda x: (-e2 + 2*e3)*(re1<x)\
                  + (5*e1 + 2*e2 + 3*e3)*(re2<x)\
                  + (2*e1)*(re3<x)


Note: lt_list is different from the list of lists provided to Matrix when creating lt_matrix. lt_list organizes the entries of the transformation’s matrix by column, but the argument given to Matrix organizes them by row.
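The column-versus-row distinction in that note can be checked mechanically. A plain-Python sketch (no SymPy needed): the two nestings are transposes of one another.

```python
lt_list = [[0, -1, 2], [5, 2, 3], [2, 0, 0]]       # entry j: coefficients of T(e_j), a column
matrix_rows = [[0, 5, 2], [-1, 2, 0], [2, 3, 0]]   # entry i: row i of the standard matrix [T]

def transpose(m):
    """Transpose a matrix given as a list of lists."""
    return [list(col) for col in zip(*m)]

print(transpose(lt_list) == matrix_rows)   # True: columns of [T] are the rows of lt_list
```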

Note: The construction of lt_func was motivated by the formula $\mbf{x} \mapsto \sum_{j=1}^n x^j \,\lt{T}(\es{j}) = \sum_{j=1}^n (\eS{j}\cdot\mbf{x}) \left( \sum_{i=1}^n {T^i}_j \es{i}\right)$.

Next we instantiate linear transformations $$\lt{T}_1$$, $$\lt{T}_2$$, $$\lt{T}_3$$, and $$\lt{T}_4$$ for each of the respective objects lt_list, lt_matrix, lt_dict, and lt_func. Each transformation is then examined for its action on the basis, its standard matrix, and its action on a generic vector $$\mbf{x}$$.

[12]:

T1 = m3.lt(lt_list)    # instantiate transformation using list-type object
gprint(r'\textbf{T1 = m3.lt(lt_list)}, \quad\text{where lt_list}=', lt_list, ':')
gprint()

T2 = m3.lt(lt_matrix)  # instantiate transformation using Matrix-type object
gprint(r'\textbf{T2 = m3.lt(lt_matrix)}, \quad\text{where lt_matrix}=', lt_matrix, ':')
gprint()

T3 = m3.lt(lt_dict)    # instantiate transformation using dict-type object
gprint(r'\textbf{T3 = m3.lt(lt_dict)}, \quad\text{where lt_dict}=', lt_dict, ':')
gprint()

T4 = m3.lt(lt_func)    # instantiate transformation using function-type object
gprint(r'\textbf{T4 = m3.lt(lt_func)}\qquad\text{where lt_func is defined by}\\'
       + r'\qquad\text{lambda x: (-e2+2*e3)*(re1<x) + (5*e1+2*e2+3*e3)*(re2<x) + (2*e1)*(re3<x)}:')

$\displaystyle \textbf{T1 = m3.lt(lt_list)}, \quad\text{where lt_list}= \left[ \left[ 0, \ -1, \ 2\right], \ \left[ 5, \ 2, \ 3\right], \ \left[ 2, \ 0, \ 0\right]\right] :$
$\displaystyle \begin{equation*} \qquad\lt{T}_1:~ \left\{ \begin{aligned} \boldsymbol{\mbf{e}}_{1} &\mapsto - \boldsymbol{\mbf{e}}_{2} + 2 \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{2} &\mapsto 5 \boldsymbol{\mbf{e}}_{1} + 2 \boldsymbol{\mbf{e}}_{2} + 3 \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{3} &\mapsto 2 \boldsymbol{\mbf{e}}_{1} \end{aligned} \right\} \qquad\left[ \lt{T}_1 \right]= \left[\begin{array}{ccc}0 & 5 & 2\\-1 & 2 & 0\\2 & 3 & 0\end{array}\right] \end{equation*}$
$\displaystyle \begin{equation*} \qquad\lt{T}_1(\mbf{x})= \left ( 5 x^{2} + 2 x^{3}\right ) \boldsymbol{\mbf{e}}_{1} + \left ( - x^{1} + 2 x^{2}\right ) \boldsymbol{\mbf{e}}_{2} + \left ( 2 x^{1} + 3 x^{2}\right ) \boldsymbol{\mbf{e}}_{3} \end{equation*}$
$\displaystyle$
$\displaystyle \textbf{T2 = m3.lt(lt_matrix)}, \quad\text{where lt_matrix}= \left[\begin{array}{ccc}0 & 5 & 2\\-1 & 2 & 0\\2 & 3 & 0\end{array}\right] :$
$\displaystyle \begin{equation*} \qquad\lt{T}_2:~ \left\{ \begin{aligned} \boldsymbol{\mbf{e}}_{1} &\mapsto - \boldsymbol{\mbf{e}}_{2} + 2 \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{2} &\mapsto 5 \boldsymbol{\mbf{e}}_{1} + 2 \boldsymbol{\mbf{e}}_{2} + 3 \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{3} &\mapsto 2 \boldsymbol{\mbf{e}}_{1} \end{aligned} \right\} \qquad\left[ \lt{T}_2 \right]= \left[\begin{array}{ccc}0 & 5 & 2\\-1 & 2 & 0\\2 & 3 & 0\end{array}\right] \end{equation*}$
$\displaystyle \begin{equation*} \qquad\lt{T}_2(\mbf{x})= \left ( 5 x^{2} + 2 x^{3}\right ) \boldsymbol{\mbf{e}}_{1} + \left ( - x^{1} + 2 x^{2}\right ) \boldsymbol{\mbf{e}}_{2} + \left ( 2 x^{1} + 3 x^{2}\right ) \boldsymbol{\mbf{e}}_{3} \end{equation*}$
$\displaystyle$
$\displaystyle \textbf{T3 = m3.lt(lt_dict)}, \quad\text{where lt_dict}= \left\{ \boldsymbol{\mbf{e}}_{1} : - \boldsymbol{\mbf{e}}_{2} + 2 \boldsymbol{\mbf{e}}_{3}, \ \boldsymbol{\mbf{e}}_{2} : 5 \boldsymbol{\mbf{e}}_{1} + 2 \boldsymbol{\mbf{e}}_{2} + 3 \boldsymbol{\mbf{e}}_{3}, \ \boldsymbol{\mbf{e}}_{3} : 2 \boldsymbol{\mbf{e}}_{1}\right\} :$
$\displaystyle \begin{equation*} \qquad\lt{T}_3:~ \left\{ \begin{aligned} \boldsymbol{\mbf{e}}_{1} &\mapsto - \boldsymbol{\mbf{e}}_{2} + 2 \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{2} &\mapsto 5 \boldsymbol{\mbf{e}}_{1} + 2 \boldsymbol{\mbf{e}}_{2} + 3 \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{3} &\mapsto 2 \boldsymbol{\mbf{e}}_{1} \end{aligned} \right\} \qquad\left[ \lt{T}_3 \right]= \left[\begin{array}{ccc}0 & 5 & 2\\-1 & 2 & 0\\2 & 3 & 0\end{array}\right] \end{equation*}$
$\displaystyle \begin{equation*} \qquad\lt{T}_3(\mbf{x})= \left ( 5 x^{2} + 2 x^{3}\right ) \boldsymbol{\mbf{e}}_{1} + \left ( - x^{1} + 2 x^{2}\right ) \boldsymbol{\mbf{e}}_{2} + \left ( 2 x^{1} + 3 x^{2}\right ) \boldsymbol{\mbf{e}}_{3} \end{equation*}$
$\displaystyle$
$\displaystyle \textbf{T4 = m3.lt(lt_func)}\qquad\text{where lt_func is defined by}\\\qquad\text{lambda x: (-e2+2*e3)*(re1<x) + (5*e1+2*e2+3*e3)*(re2<x) + (2*e1)*(re3<x)}:$
$\displaystyle \begin{equation*} \qquad\lt{T}_4:~ \left\{ \begin{aligned} \boldsymbol{\mbf{e}}_{1} &\mapsto - \boldsymbol{\mbf{e}}_{2} + 2 \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{2} &\mapsto 5 \boldsymbol{\mbf{e}}_{1} + 2 \boldsymbol{\mbf{e}}_{2} + 3 \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{3} &\mapsto 2 \boldsymbol{\mbf{e}}_{1} \end{aligned} \right\} \qquad\left[ \lt{T}_4 \right]= \left[\begin{array}{ccc}0 & 5 & 2\\-1 & 2 & 0\\2 & 3 & 0\end{array}\right] \end{equation*}$
$\displaystyle \begin{equation*} \qquad\lt{T}_4(\mbf{x})= \left ( 5 x^{2} + 2 x^{3}\right ) \boldsymbol{\mbf{e}}_{1} + \left ( - x^{1} + 2 x^{2}\right ) \boldsymbol{\mbf{e}}_{2} + \left ( 2 x^{1} + 3 x^{2}\right ) \boldsymbol{\mbf{e}}_{3} \end{equation*}$

Comparison of the outputs shows that the four differently instantiated transformations are all the same transformation.

## 7. Versor-based orthogonal transformations

A versor $$\mbf{V} = \mbf{v}_k \cdots \mbf{v}_1$$ is the geometric product of a sequence of invertible vectors. The $$i$$th vector defines a reflection $$\mbf{x} \mapsto -\mbf{v}_i \mbf{x} \mbf{v}_i^{-1}$$ in the hyperplane with normal vector $$\mbf{v}_i$$. Consequently

$\mbf{x} \mapsto \lt{T}_\mbf{V}(\mbf{x}) =-{\mbf{v}_k}(\cdots(-{\mbf{v}_1}\mbf{x}{\mbf{v}_1}^{-1})\cdots){\mbf{v}_k}^{-1} = (-1)^k \mbf{V}\mbf{x} \mbf{V}^{-1} = \ginvol{\mbf{V}}\mbf{x} \mbf{V}^{-1}$

defines the orthogonal transformation $$\lt{T}_\mbf{V}$$ which arises by composing the $$k$$ hyperplane reflections. The Cartan-Dieudonné Theorem says that any orthogonal transformation can be so expressed.
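The two-reflections-make-a-rotation picture can be checked numerically. The following is a plain-Python sketch (illustrative only, not GAlgebra's API) in ordinary Euclidean $$\Rn{3}$$, where the vector form of the reflection $$\mbf{x} \mapsto -\mbf{v}\mbf{x}\mbf{v}^{-1}$$ is $$\mbf{x} \mapsto \mbf{x} - 2\,(\mbf{x}\ip\mbf{v})\,\mbf{v}/(\mbf{v}\ip\mbf{v})$$: composing two reflections whose unit normals are separated by angle $$\theta/2$$ yields a rotation by $$\theta$$.

```python
import math

def reflect(x, v):
    """Hyperplane reflection: x -> x - 2 (x.v) v / (v.v)."""
    dot = sum(xi * vi for xi, vi in zip(x, v))
    vv = sum(vi * vi for vi in v)
    return [xi - 2 * dot * vi / vv for xi, vi in zip(x, v)]

theta = 0.7
v1 = [0.0, 1.0, 0.0]                               # normal e2
v2 = [0.0, math.cos(theta/2), math.sin(theta/2)]   # normal tilted by theta/2 toward e3

x = [1.0, 2.0, 3.0]
y = reflect(reflect(x, v1), v2)                    # T_V(x) for V = v2 v1

# a rotation by theta in the e2-e3 plane, leaving e1 fixed:
expected = [x[0],
            math.cos(theta)*x[1] - math.sin(theta)*x[2],
            math.sin(theta)*x[1] + math.cos(theta)*x[2]]
print(all(abs(a - b) < 1e-12 for a, b in zip(y, expected)))   # True
```

This is exactly the rotation construction used in the example of Section 7 below, restricted to a positive-definite metric for simplicity.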

When the transformation is extended to an outermorphism, its action on a multivector is described by

$\begin{split}\lt{T}_\mbf{V}(\mbf{X}) = \begin{cases} \mbf{V} \mbf{X} \mbf{V}^{-1} & \text{if versor }\mbf{V}\text{ is even} \\ \mbf{V} \ginvol{\mbf{X}} \mbf{V}^{-1} & \text{if versor }\mbf{V}\text{ is odd} \end{cases}.\end{split}$

The mapping $$\mbf{V} \mapsto \lt{T}_\mbf{V}$$ from the group of versors to the group of orthogonal transformations is a group morphism, so

$\lt{T}_{\mbf{V}_2} \lt{T}_{\mbf{V}_1} = \lt{T}_{\mbf{V}_2 \mbf{V}_1}.$

Furthermore the morphism’s kernel is the versor subgroup of all invertible 0-vectors; if $$\kappa$$ is a nonzero 0-vector, then $$\lt{T}_\kappa$$ will be the identity transformation $$\lt{Id}_{\Gn{p,q}}$$. Take $$\kappa = \mbf{V}\til{\mbf{V}} \in \Rn{\ne 0}$$. By the group morphism’s product preservation property we have

$\lt{T}_\mbf{V} \lt{T}_{\til{\mbf{V}}} = \lt{T}_{\mbf{V}\til{\mbf{V}}} = \lt{T}_{\kappa} = \lt{Id}_{\Gn{p,q}},$

whence $$\lt{T}_{\til{\mbf{V}}} = {\lt{T}_\mbf{V}}^{-1}$$. (Another way of obtaining the inverse of $$\lt{T}_\mbf{V}$$ is $$\lt{T}_{\mbf{V}^{-1}}$$, as follows from $$\mbf{V}^{-1}$$ and $$\til{\mbf{V}}$$ being scalar multiples of one another.)
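Since each hyperplane reflection is its own inverse, reflecting in $$\mbf{v}_1, \dots, \mbf{v}_k$$ and then in $$\mbf{v}_k, \dots, \mbf{v}_1$$ returns every vector to where it started; that is the content of $$\lt{T}_{\til{\mbf{V}}} = {\lt{T}_\mbf{V}}^{-1}$$. A plain-Python sketch in Euclidean $$\Rn{3}$$ (illustrative only, not GAlgebra's API):

```python
def reflect(x, v):
    """Hyperplane reflection x -> x - 2 (x.v) v / (v.v), i.e. x -> -v x v^(-1)."""
    dot = sum(xi * vi for xi, vi in zip(x, v))
    vv = sum(vi * vi for vi in v)
    return [xi - 2 * dot * vi / vv for xi, vi in zip(x, v)]

vs = [[1.0, 0.0, 0.0], [0.0, 2.0, 1.0], [1.0, 1.0, 1.0]]   # arbitrary invertible vectors
x = [0.5, -1.0, 2.0]

y = x
for v in vs:             # T_V: reflect in v1, then v2, then v3  (V = v3 v2 v1)
    y = reflect(y, v)
for v in reversed(vs):   # T_reverse(V): the same reflections in reverse order
    y = reflect(y, v)

print(all(abs(a - b) < 1e-12 for a, b in zip(y, x)))   # True: back to the original x
```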

Provided the desired transformation is orthogonal, there’s one final type of parameter that can be used to instantiate a model of the transformation:

• lt_versor, a versor, defines the composition of a sequence of hyperplane reflections.

Example. Suppose we want an orthogonal transformation that is the composition of a rotation through angle $$\theta$$ in the (spacelike) $$x^2 x^3$$ plane and of a reflection in that plane, which has normal vector $$\es{1}$$. Reflecting first through the plane orthogonal to $$\mbf{v}_1 = \es{2}$$ and then through the plane orthogonal to $$\mbf{v}_2 = \cos(\theta/2) \es{2} + \sin(\theta/2) \es{3}$$ accomplishes the rotation. A third reflection through the plane orthogonal to $$\mbf{v}_3 = \es{1}$$ completes the overall transformation.

[13]:

theta = symbols('theta', real=True)
v1 = e2
v2 = cos(theta/2)*e2 + sin(theta/2)*e3
v3 = e1
lt_versor = v3 * v2 * v1
T_V = m3.lt(lt_versor)    # instantiate versor-based orthogonal transformation

gprint(r'\textbf{T_V = m3.lt(lt_versor)}, \quad\text{where lt_versor}=', lt_versor, ':')
gprint(r'\qquad\mathscr{Q}(\lt{T}_\mbf{V}(\mbf{x})) = \mathscr{Q}(\mbf{x}):~',
       fu(qform(T_V(x))) == fu(qform(x)))

$\displaystyle \textbf{T_V = m3.lt(lt_versor)}, \quad\text{where lt_versor}= - \cos{\left (\frac{\theta }{2} \right )} \boldsymbol{\mbf{e}}_{1} - \sin{\left (\frac{\theta }{2} \right )} \boldsymbol{\mbf{e}}_{123} :$
$\displaystyle \begin{equation*} \qquad\lt{T}_\mbf{V}:~ \left\{ \begin{aligned} \boldsymbol{\mbf{e}}_{1} &\mapsto - \boldsymbol{\mbf{e}}_{1} \\ \boldsymbol{\mbf{e}}_{2} &\mapsto \cos{\left (\theta \right )} \boldsymbol{\mbf{e}}_{2} + \sin{\left (\theta \right )} \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{3} &\mapsto - \sin{\left (\theta \right )} \boldsymbol{\mbf{e}}_{2} + \cos{\left (\theta \right )} \boldsymbol{\mbf{e}}_{3} \end{aligned} \right\} \qquad\left[\lt{T}_\mbf{V}\right]= \left[\begin{array}{ccc}- \frac{{\sin{\left (\frac{\theta }{2} \right )}}^{2}}{{\sin{\left (\frac{\theta }{2} \right )}}^{2} + {\cos{\left (\frac{\theta }{2} \right )}}^{2}} - \frac{{\cos{\left (\frac{\theta }{2} \right )}}^{2}}{{\sin{\left (\frac{\theta }{2} \right )}}^{2} + {\cos{\left (\frac{\theta }{2} \right )}}^{2}} & 0 & 0\\0 & - \frac{{\sin{\left (\frac{\theta }{2} \right )}}^{2}}{{\sin{\left (\frac{\theta }{2} \right )}}^{2} + {\cos{\left (\frac{\theta }{2} \right )}}^{2}} + \frac{{\cos{\left (\frac{\theta }{2} \right )}}^{2}}{{\sin{\left (\frac{\theta }{2} \right )}}^{2} + {\cos{\left (\frac{\theta }{2} \right )}}^{2}} & - \frac{2 \sin{\left (\frac{\theta }{2} \right )} \cos{\left (\frac{\theta }{2} \right )}}{{\sin{\left (\frac{\theta }{2} \right )}}^{2} + {\cos{\left (\frac{\theta }{2} \right )}}^{2}}\\0 & \frac{2 \sin{\left (\frac{\theta }{2} \right )} \cos{\left (\frac{\theta }{2} \right )}}{{\sin{\left (\frac{\theta }{2} \right )}}^{2} + {\cos{\left (\frac{\theta }{2} \right )}}^{2}} & - \frac{{\sin{\left (\frac{\theta }{2} \right )}}^{2}}{{\sin{\left (\frac{\theta }{2} \right )}}^{2} + {\cos{\left (\frac{\theta }{2} \right )}}^{2}} + \frac{{\cos{\left (\frac{\theta }{2} \right )}}^{2}}{{\sin{\left (\frac{\theta }{2} \right )}}^{2} + {\cos{\left (\frac{\theta }{2} \right )}}^{2}}\end{array}\right] \end{equation*}$
$\displaystyle \begin{equation*} \qquad\lt{T}_\mbf{V}(\mbf{x})= - x^{1} \boldsymbol{\mbf{e}}_{1} + \left ( x^{2} \cos{\left (\theta \right )} - x^{3} \sin{\left (\theta \right )}\right ) \boldsymbol{\mbf{e}}_{2} + \left ( x^{2} \sin{\left (\theta \right )} + x^{3} \cos{\left (\theta \right )}\right ) \boldsymbol{\mbf{e}}_{3} \end{equation*}$
$\displaystyle \qquad\mathscr{Q}(\lt{T}_\mbf{V}(\mbf{x}))= {\left ( x^{1} \right )}^{2} - {\left ( x^{2} \right )}^{2} - {\left ( x^{3} \right )}^{2} \qquad\mathscr{Q}(\mbf{x})= {\left ( x^{1} \right )}^{2} - {\left ( x^{2} \right )}^{2} - {\left ( x^{3} \right )}^{2}$
$\displaystyle \qquad\mathscr{Q}(\lt{T}_\mbf{V}(\mbf{x})) = \mathscr{Q}(\mbf{x}):~ \text{True}$
$\displaystyle \qquad\det(\lt{T}_\mbf{V})= -1$

The output’s penultimate line confirms that $$\lt{T}_\mbf{V}$$ is orthogonal, for a transformation is orthogonal if and only if it preserves the quadratic form $$\mbf{x} \mapsto \mathscr{Q}(\mbf{x}) = \mbf{x}^2$$. That $$\det(\lt{T}_\mbf{V}) = -1$$ is a consequence of the transformation being a composition of an odd number (three) of orientation-reversing reflections.
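At the matrix level, preservation of the quadratic form is the condition $$M^{\mathsf{T}} g M = g$$. A plain-Python numerical check (illustrative only, not GAlgebra's API) on the simplified matrix $$\left[\lt{T}_\mbf{V}\right]$$ displayed above, with metric $$g = \operatorname{diag}(1,-1,-1)$$ and a sample numeric $$\theta$$:

```python
import math

theta = 0.3
c, s = math.cos(theta), math.sin(theta)
M = [[-1, 0, 0], [0, c, -s], [0, s, c]]   # simplified [T_V] from the output above
g = [[1, 0, 0], [0, -1, 0], [0, 0, -1]]   # metric of signature (1,2)

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(m):
    return [list(r) for r in zip(*m)]

MTgM = matmul(transpose(M), matmul(g, M))   # should reproduce g exactly
print(all(abs(MTgM[i][j] - g[i][j]) < 1e-12 for i in range(3) for j in range(3)))  # True

# det of the block-diagonal M: (-1) * (c^2 + s^2) = -1, matching the output
det = -1 * (c*c + s*s)
print(abs(det + 1) < 1e-12)   # True
```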

The compositional inverse method .inv() is implemented in GAlgebra only for versor-based transformations. The next In[ ] cell computes the compositional inverse of the transformation T_V found in the previous cell.

[14]:

T_Vinv = T_V.inv()    # compositional inverse of versor-based transformation
gprint(r'\textbf{T_Vinv = T_V.inv()}:')
gprint(r'\qquad\mathscr{Q}({\lt{T}_\mbf{V}}^{-1}(\mbf{x})) = \mathscr{Q}(\mbf{x}):~',
       fu(qform(T_Vinv(x))) == fu(qform(x)))
gprint(r'\qquad\left(\lt{T}_\mbf{V}{\lt{T}_\mbf{V}}^{-1}\right)(\mbf{x})=',
       (T_V*T_Vinv)(x))
gprint(r'\qquad\left(\lt{T}_\mbf{V}{\lt{T}_\mbf{V}}^{-1}\right)(\mbf{x})==\mbf{x}:~',
       (T_V*T_Vinv)(x) == x)

$\displaystyle \textbf{T_Vinv = T_V.inv()}:$
$\displaystyle \begin{equation*} \qquad{\lt{T}_\mbf{V}}^{-1}:~ \left\{ \begin{aligned} \boldsymbol{\mbf{e}}_{1} &\mapsto - \boldsymbol{\mbf{e}}_{1} \\ \boldsymbol{\mbf{e}}_{2} &\mapsto \cos{\left (\theta \right )} \boldsymbol{\mbf{e}}_{2} - \sin{\left (\theta \right )} \boldsymbol{\mbf{e}}_{3} \\ \boldsymbol{\mbf{e}}_{3} &\mapsto \sin{\left (\theta \right )} \boldsymbol{\mbf{e}}_{2} + \cos{\left (\theta \right )} \boldsymbol{\mbf{e}}_{3} \end{aligned} \right\} \qquad\left[{\lt{T}_\mbf{V}}^{-1}\right]= \left[\begin{array}{ccc}-1 & 0 & 0\\0 & - \frac{{\sin{\left (\frac{\theta }{2} \right )}}^{2}}{{\sin{\left (\frac{\theta }{2} \right )}}^{2} + {\cos{\left (\frac{\theta }{2} \right )}}^{2}} + \frac{{\cos{\left (\frac{\theta }{2} \right )}}^{2}}{{\sin{\left (\frac{\theta }{2} \right )}}^{2} + {\cos{\left (\frac{\theta }{2} \right )}}^{2}} & \frac{2 \sin{\left (\frac{\theta }{2} \right )} \cos{\left (\frac{\theta }{2} \right )}}{{\sin{\left (\frac{\theta }{2} \right )}}^{2} + {\cos{\left (\frac{\theta }{2} \right )}}^{2}}\\0 & - \frac{2 \sin{\left (\frac{\theta }{2} \right )} \cos{\left (\frac{\theta }{2} \right )}}{{\sin{\left (\frac{\theta }{2} \right )}}^{2} + {\cos{\left (\frac{\theta }{2} \right )}}^{2}} & - \frac{{\sin{\left (\frac{\theta }{2} \right )}}^{2}}{{\sin{\left (\frac{\theta }{2} \right )}}^{2} + {\cos{\left (\frac{\theta }{2} \right )}}^{2}} + \frac{{\cos{\left (\frac{\theta }{2} \right )}}^{2}}{{\sin{\left (\frac{\theta }{2} \right )}}^{2} + {\cos{\left (\frac{\theta }{2} \right )}}^{2}}\end{array}\right] \end{equation*}$
$\displaystyle \begin{equation*} \qquad{\lt{T}_\mbf{V}}^{-1}(\mbf{x})= - x^{1} \boldsymbol{\mbf{e}}_{1} + \left ( x^{2} \cos{\left (\theta \right )} + x^{3} \sin{\left (\theta \right )}\right ) \boldsymbol{\mbf{e}}_{2} + \left ( - x^{2} \sin{\left (\theta \right )} + x^{3} \cos{\left (\theta \right )}\right ) \boldsymbol{\mbf{e}}_{3} \end{equation*}$
$\displaystyle \qquad\mathscr{Q}({\lt{T}_\mbf{V}}^{-1}(\mbf{x}))= {\left ( x^{1} \right )}^{2} - {\left ( x^{2} \right )}^{2} - {\left ( x^{3} \right )}^{2} \qquad\mathscr{Q}(\mbf{x})= {\left ( x^{1} \right )}^{2} - {\left ( x^{2} \right )}^{2} - {\left ( x^{3} \right )}^{2}$
$\displaystyle \qquad\mathscr{Q}({\lt{T}_\mbf{V}}^{-1}(\mbf{x})) = \mathscr{Q}(\mbf{x}):~ \text{True}$
$\displaystyle \qquad\det({\lt{T}_\mbf{V}}^{-1})= - \frac{\left(1 - \cos{\left (\theta \right )}\right)^{2}}{2} - \cos{\left (\theta \right )} + \frac{\cos{\left (2 \theta \right )}}{4} - \frac{1}{4}$
$\displaystyle \qquad\left(\lt{T}_\mbf{V}{\lt{T}_\mbf{V}}^{-1}\right)(\mbf{{x}})= x^{1} \boldsymbol{\mbf{e}}_{1} + x^{2} \boldsymbol{\mbf{e}}_{2} + x^{3} \boldsymbol{\mbf{e}}_{3}$
$\displaystyle \qquad\left(\lt{T}_\mbf{V}{\lt{T}_\mbf{V}}^{-1}\right)(\mbf{{x}})==\mbf{x}:~ \text{True}$