LOG#096. Group theory(XVI).

Eugene Wigner, Hungarian physicist

Given any physical system, we can perform certain “operations” or “transformations” on it. Some examples are well known: rotations, translations, scale transformations, conformal transformations, Lorentz transformations,… The ultimate quest of physics is to find the most general “symmetry group” leaving a given system invariant. In Classical Mechanics we deal with (point) particles, and in Classical Field Theories we deal with “fields”, that is, functions acting on (generally point-like) particles. Depending on the concrete physical system, some invariant properties are “interesting”.

Similarly, we can leave the system invariant and change the reference frame instead, i.e., change the “viewpoint” with respect to which we perform our physical measurements. To every type of transformation in space-time (or in “internal spaces”, in the case of gauge/quantum systems) there corresponds some mathematical transformation F acting on states/observables. Generally speaking, we have:

1) At level of states: \vert \Psi\rangle \longrightarrow F\vert \Psi\rangle =\vert \Psi'\rangle

2) At level of observables: O\longrightarrow F(O)=O'

These general transformations should preserve some kind of relations in order to be called “symmetry transformations”. In particular, we have conditions on 3 different objects:

A) Spectrum of observables:

O\vert \phi_n\rangle =O_n\vert \phi_n\rangle \leftrightarrow O'\vert \phi'_n\rangle=O_n\vert \phi'_n\rangle

These operators O, O' must represent observables that are “identical”. Generally, these operators must be “isospectral” and they will have the same “spectrum” or “set of eigenvalues”.

B) In Quantum Mechanics, the probabilities for equivalent events must be the same before and after the transformations and “measurements”. In fact, measurements can be understood as “operations” on observables/states of physical systems in this general framework. Therefore, if

\displaystyle{\vert\Psi\rangle =\sum_n c_n\vert \phi_n\rangle}

where \vert\phi_n\rangle is a set of eigenvectors of O, and

\displaystyle{\vert\Psi'\rangle =\sum_n c'_n\vert\phi'_n\rangle}

where \vert\phi'_n\rangle is a set of eigenvectors of O’, then we must verify

\vert c_n\vert^2=\vert c'_n\vert^2\longleftrightarrow \vert \langle \phi_n\vert \Psi\rangle\vert^2=\vert\langle\phi'_n\vert \Psi'\rangle\vert^2
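As a quick numerical illustration of this condition (just a minimal sketch, assuming Python with numpy is available; the random matrices and vectors below are placeholders of my own, not anything specific from the text), we can check that a unitary map acting simultaneously on the state and on the eigenbasis preserves the probabilities:

import numpy as np

rng = np.random.default_rng(0)
# a random normalized state |Psi> and a random orthonormal eigenbasis {|phi_n>} (the columns of phi)
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)
phi, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
# a random unitary transformation U acting on both the state and the basis
U, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
psi_p = U @ psi     # |Psi'> = U|Psi>
phi_p = U @ phi     # |phi'_n> = U|phi_n>
probs   = np.abs(phi.conj().T @ psi) ** 2        # |<phi_n|Psi>|^2
probs_p = np.abs(phi_p.conj().T @ psi_p) ** 2    # |<phi'_n|Psi'>|^2
print(np.allclose(probs, probs_p))               # True: probabilities are preserved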

C) Conservation of commutators. In Classical Mechanics, there are some “gadgets” called “canonical transformations” leaving invariant the so-called Poisson brackets. There are some analogue “brackets” in Quantum Mechanics: the commutators are preserved by symmetry transformations in the same way that canonical transformations leave invariant the classical Poisson brackets.

These 3 conditions constrain the allowed symmetries in Classical Mechanics and Quantum Mechanics (based on Hilbert spaces). There is a celebrated theorem, due to Wigner, stating more or less the precise mathematical form that such “symmetry” transformations must take.

Let me first define two important concepts:

Antilinear operator.  Let A be an operator on a certain Hilbert space H. Let us suppose that \vert \Psi\rangle,\vert\varphi\rangle \in H and \alpha,\beta\in\mathbb{C}. The operator A is said to be antilinear if it satisfies the condition:

A\left(\alpha\vert\Psi\rangle+\beta\vert\varphi\rangle\right)=\alpha^*A\left(\vert\Psi\rangle\right)+\beta^*A\left(\vert\varphi\rangle\right)

Antiunitary operator.  Let A be an antilinear operator on a certain Hilbert space H. A is said to be antiunitary if, in addition, it satisfies

AA^+=A^+A=I\leftrightarrow A^{-1}=A^+

Any continuous family of continuous transformations can only be described by LINEAR operators. These transformations are continuously connected to the identity transformation leaving the system/object invariant, and the identity is itself a linear transformation. The product of two unitary transformations is unitary. However, the product of two ANTIUNITARY transformations is not antiunitary BUT UNITARY.
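A toy illustration of the last statement (a sketch only, assuming numpy; modelling the antiunitary map as a unitary matrix followed by complex conjugation is the standard decomposition A = VK, but the matrices V1, V2 below are arbitrary choices of mine):

import numpy as np

rng = np.random.default_rng(1)
def random_unitary(n):
    # the QR decomposition of a random complex matrix gives a unitary Q
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q
V1, V2 = random_unitary(3), random_unitary(3)
def antiunitary(V):
    # model an antiunitary operator as psi -> V psi^* (a unitary times complex conjugation)
    return lambda psi: V @ psi.conj()
A1, A2 = antiunitary(V1), antiunitary(V2)
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
# composing two antiunitaries: A1(A2(psi)) = V1 (V2 psi^*)^* = (V1 V2^*) psi, a LINEAR map...
W = V1 @ V2.conj()
print(np.allclose(A1(A2(psi)), W @ psi))        # True: the composition is linear
print(np.allclose(W @ W.conj().T, np.eye(3)))   # True: ... and unitary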

Wigner’s theorem. Let A be an operator with a basis of eigenvectors B=\left\{\vert\phi_n\rangle\right\} and A'=F(A) another operator with eigenvectors B'=\left\{\vert\phi'_n\rangle\right\}. Moreover, let us define the state vectors:

\displaystyle{\vert \Psi\rangle=\sum_n a_n\vert\phi_n\rangle} \displaystyle{\vert\varphi\rangle=\sum_n b_n\vert\phi_n\rangle}

\displaystyle{\vert\Psi'\rangle=\sum_n a'_n\vert\phi'_n\rangle} \displaystyle{\vert\varphi'\rangle=\sum_n b'_n\vert\phi'_n\rangle}

Then, every bijective transformation leaving invariant

\vert \langle \phi_n\vert \Psi\rangle\vert^2=\vert\langle\phi'_n\vert \Psi'\rangle\vert^2

can be represented in the Hilbert space using some operator. And this operator can only be UNITARY (LINEAR) or ANTIUNITARY(ANTILINEAR).

This theorem is relative to “states” but it can also be applied to maps/operators over those states, since F(A)=A' for the transformation of operators. We only have to impose

A\vert\phi_n\rangle =a_n\vert\phi_n\rangle

A'\vert\phi'_n\rangle=a_n\vert\phi'_n\rangle

Due to Wigner’s theorem, the transformation between operators must be represented by a certain operator U, unitary or antiunitary according to our deductions above, such that if \vert\phi'_n\rangle=U\vert\phi_n\rangle, then:

A'\vert\phi'_n\rangle=A'U\vert\phi_n\rangle=a_n U\vert\phi_n\rangle

U^{-1}A'U\vert\phi_n\rangle=a_n\vert\phi_n\rangle

This last relation is valid for every element \vert\phi_n\rangle of a complete basis of eigenvectors, and then it is valid for an arbitrary vector. Furthermore,

U^{-1}A'U=A

A\rightarrow A'=UAU^{-1}
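As a sanity check of the isospectrality (again a hedged sketch with numpy and made-up matrices), A and A'=UAU^{-1} indeed share the same set of eigenvalues:

import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (A + A.conj().T) / 2      # a hermitian "observable" A
U, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))   # a random unitary
A_prime = U @ A @ U.conj().T  # A' = U A U^{-1}, since U^{-1} = U^+ for unitary U
# A and A' are isospectral: they share the same (real) eigenvalues
print(np.allclose(np.linalg.eigvalsh(A), np.linalg.eigvalsh(A_prime)))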

There are some general families of transformations:

i) Discrete transformations A_i, both finite and infinite in order/number of elements.

ii) Continuous transformations A(a,b,\ldots). We can speak about uniparametric families of transformations A(\alpha) or multiparametric families of transformations A(\alpha_1,\alpha_2,\ldots,\alpha_n). Of course, we can also speak about families with an infinite number of parameters, or “infinite-dimensional groups of transformations”.

Physical transformations form a group from the mathematical viewpoint. That is why all this thread is important! How can we parametrize groups? We have provided some elementary vision in previous posts. We will focus on continuous groups. There are two main ideas:

a) Parametrization. Let U(s) be a family of unitary operators depending continuously on the parameter s. Then, we have:

i) U(0)=U(s=0)=I.

ii) U(s_1+s_2)=U(s_1)U(s_2).

b) Taylor expansion. We can expand the operator as follows:

U(s)=U(0)+s\dfrac{dU}{ds}\bigg|_{s=0}+\mathcal{O}(s^2)

or

U(s)=I+s\dfrac{dU}{ds}\bigg|_{s=0}+\mathcal{O}(s^2)

There is another important definition. We define the generator of the infinitesimal transformation U(s), denoted by K, in such a way that

\dfrac{dU}{ds}\bigg|_{s=0}\equiv iK

Moreover, K must be a hermitian operator (note that mathematicians mostly prefer the “antihermitian” convention), as unitarity of U(s) shows:

I=U(s)U^+(s)=I+s\left(\dfrac{dU}{ds}\bigg|_{s=0}+\dfrac{dU^+}{ds}\bigg|_{s=0}\right)+\mathcal{O}(s^2)

iK+(iK)^+=0

K=K^+

Q.E.D.

There is a fundamental theorem about this class of operators, called Stone’s theorem by mathematicians, which says that if K is the generator of a symmetry at the infinitesimal level, then K determines in a unique way the unitary operator U(s) for every value of s. In fact, we have already seen that

U(s)=e^{iKs}

So, the Stone theorem is an equivalent way to say the exponential of the group generator provides the group element!
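A minimal numerical sketch of this statement (assuming numpy and scipy are available; K below is a random hermitian matrix of my own choosing): exponentiating a hermitian generator produces a one-parameter family of unitary operators satisfying the composition law.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
K = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
K = (K + K.conj().T) / 2               # a hermitian generator K
def U(s):
    return expm(1j * s * K)            # U(s) = exp(iKs)
s1, s2 = 0.3, 1.1
print(np.allclose(U(s1) @ U(s1).conj().T, np.eye(3)))   # U(s) is unitary
print(np.allclose(U(s1) @ U(s2), U(s1 + s2)))           # U(s1)U(s2) = U(s1+s2)
print(np.allclose(U(0.0), np.eye(3)))                   # U(0) = I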

We can generalize the above considerations to multiparametric families of operators. For a multiparametric family of group elements G(\alpha_1,\alpha_2,\ldots,\alpha_n), the generators are defined by

iK_{\alpha_j}=\dfrac{\partial G}{\partial \alpha_j}\bigg|_{\alpha=0}

There are some fundamental properties of all this stuff:

1) Unitary transformations G(\alpha_1,\alpha_2,\ldots,\alpha_n) form a Lie group, as we have mentioned before.

2) Generators K_{\alpha_j} form a Lie algebra. The Lie algebra generators satisfy

\displaystyle{\left[K_i,K_j\right]=\sum_k c_{ijk}K_k}

3) Every element of the group or the multiparametric family G(\alpha_1,\alpha_2,\ldots,\alpha_n) can be written (likely in a non-unique way) such that:

G\left(\alpha_1,\alpha_2,\ldots,\alpha_n\right)=\exp \left( iK_{\alpha_1}\alpha_1\right)\exp \left( iK_{\alpha_2}\alpha_2\right)\cdots \exp \left( iK_{\alpha_n}\alpha_n \right)

4) Every element of the multiparametric group can be alternatively written in such a way that

e^{iK_\alpha\alpha}e^{iK_\beta\beta}=e^{iK_\alpha\omega_1(\alpha,\beta)}e^{iK_\beta\omega_2(\alpha,\beta)}

where the parameters \omega_1, \omega_2 are functions to be determined for every case.

What about the connection between symmetries and conservation laws? Well, I have not discussed Noether’s theorems and the action principle in Classical Mechanics in this blog (yet), but I have mentioned them already. However, in Quantum Mechanics we have some extra results. Let us begin with a set of unitary and linear transformations G=\left\{T_\alpha\right\}. This set can be formed by either discrete or continuous transformations depending on one or more parameters. We define an invariant observable Q under G as one that satisfies

Q=T_\alpha Q T^+_\alpha,\forall T_\alpha\in G

Moreover, invariance has two important consequences in the Quantum World (one “more” than in Classical Mechanics, somehow).

1) Invariance implies conservation laws.

Given a unitary operator T^+=T^{-1}, as Q=T_\alpha Q T^+_\alpha, then

QT_\alpha=T_\alpha Q and thus

\left[Q,T_\alpha\right]=0

If we have some set of group transformations G=\left\{T_\alpha\right\} such that the so-called hamiltonian operator H is invariant, i.e., if

\left[H,T_\alpha\right]=0,\forall T_\alpha\in G

Then, as we have seen above, these operators, for every value of their parameters, are “constants of the motion” and their “eigenvalues” can be considered “conserved quantities” under the hamiltonian evolution. Then, from first principles, we could even have an infinite family of conserved quantities/constants of motion.

This definition can be applied to discrete or continuous groups. However, if the family is continuous, we have additional conserved constants. In this case, for instance for a uniparametric group, we have

T_\alpha=T(\alpha)=\exp (i\alpha K)

and it implies that if an operator is invariant under that family of continuous transformation, it also commutes with the infinitesimal generator (or with any other generator in the multiparametric case):

Q=T_\alpha QT^+_\alpha \leftrightarrow \left[Q,K\right]=0

Every function of the operators in the set of transformations is also a motion constant/conserved constant, i.e., an observable such as the “expectation value” would remain constant in time!
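Here is a small numerical sketch of this conservation statement (assuming numpy/scipy and the usual time evolution U(t)=\exp(-iHt); the matrices H and K below are toy examples of mine, built so that they commute):

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
M = (M + M.conj().T) / 2
H = M           # the hamiltonian
K = M @ M       # any polynomial in M commutes with H, so K is an invariant observable
print(np.allclose(H @ K - K @ H, 0))   # [H, K] = 0
psi0 = rng.normal(size=3) + 1j * rng.normal(size=3)
psi0 /= np.linalg.norm(psi0)
def expect_K(t):
    psi_t = expm(-1j * H * t) @ psi0   # |psi(t)> = exp(-iHt)|psi(0)>
    return (psi_t.conj() @ K @ psi_t).real
print(np.allclose([expect_K(t) for t in (0.0, 0.7, 2.5)], expect_K(0.0)))   # constant in time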

2) Invariance implies (not always) degeneracy in the spectrum.

Imagine a hamiltonian operator H and a unitary transformation T such that \left[H,T\right]=0. If

H\vert \alpha\rangle=E_\alpha\vert \alpha\rangle

then

1) \vert\beta\rangle=T\vert\alpha\rangle is also an eigenvector of H with the same eigenvalue E_\alpha.

2) If \vert\alpha\rangle and \vert\beta\rangle are “linearly independent”, then the eigenvalue E_\alpha is degenerate.

Check:

1st step. We have

\left[H,T\right]=0\longrightarrow HT=TH

H(T\vert\alpha\rangle)=T(H\vert\alpha\rangle)=E_\alpha T\vert\alpha\rangle

2nd step. If \vert\alpha\rangle and \vert\beta\rangle=T\vert\alpha\rangle are NOT linearly independent, then T\vert\alpha\rangle=c\vert\alpha\rangle and thus

H(T\vert\alpha\rangle)=H(c\vert\alpha\rangle)=cE_\alpha\vert\alpha\rangle

so no new state appears and there is no degeneracy associated with T. If, on the contrary, \vert\alpha\rangle and \vert\beta\rangle are linearly independent, the 1st step shows that they are two different eigenvectors sharing the same eigenvalue E_\alpha, i.e., the eigenvalue E_\alpha is degenerate.
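A concrete toy example of the whole argument (a sketch assuming numpy; the “hamiltonian” is a three-site hopping matrix of my own choosing and T is the cyclic shift that commutes with it):

import numpy as np

# a toy "hamiltonian": three sites on a ring with equal hopping
H = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
# the cyclic shift T (site i -> site i+1 mod 3) is a symmetry of H
T = np.array([[0., 0., 1.],
              [1., 0., 0.],
              [0., 1., 0.]])
print(np.allclose(H @ T, T @ H))            # [H, T] = 0
print(np.round(np.linalg.eigvalsh(H), 6))   # eigenvalues [-1, -1, 2]: E = -1 is doubly degenerate
w, v = np.linalg.eigh(H)
alpha = v[:, 0]                             # an eigenvector with E = -1
beta = T @ alpha                            # T|alpha> is again an eigenvector with E = -1 ...
print(np.allclose(H @ beta, -beta))
print(np.linalg.matrix_rank(np.column_stack([alpha, beta])))   # 2: ... and linearly independent of |alpha>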

If the hamiltonian H is invariant under a transformation group, then this generally implies the existence of degeneracy in the states (whenever these states are linearly independent). The characteristic features of this degeneracy (e.g., the degeneracy degree of each of these states) are specific to the invariance group. The converse is also true (in general): the existence of a degeneracy in the spectrum implies the existence of a certain symmetry in the system. Two specific examples of this fact are the Kepler problem/“hydrogen atom” and the isotropic harmonic oscillator. But we will speak about them in another post, not today, not here, ;).


LOG#091. Group theory(XI).

Pauli, Gell-Mann, Einstein and Lorentz

Today, we are going to talk about the Lie groups U(n) and SU(n), and their respective Lie algebras, generally denoted by u(n) and su(n) by the physics community. In addition to this, we will see some properties of the orthogonal groups with euclidean “signature” and with general quadratic metrics, due to their importance in special relativity and its higher dimensional analogues.

Let us remember what kind of groups U(n) and SU(n) are:

1) The unitary group is defined by:

U(n)\equiv\left\{ U\in M_{n\times n}(\mathbb{C})/UU^+=U^+U=I\right\}

2) The special unitary group is defined by:

SU(n)\equiv\left\{ U\in M_{n\times n}(\mathbb{C})/UU^+=U^+U=I,\det (U)=1\right\}

The group operation is the usual matrix multiplication. The respective algebras are denoted, as we said above, by u(n) and su(n). Moreover, if you pick an element U\in U(n), there exists a hermitian (antihermitian, if you use the mathematicians’ “approach” to Lie algebras/groups instead of the convention used mostly by physicists) n\times n matrix H such that:

U=\exp (iH)

Some general properties of unitary and special unitary groups are:

1) U(n) and SU(n) are compact Lie groups. As a consequence, they have unitary, finite dimensional and irreducible representations. U(n) and SU(n) are subgroups of U(m) if m\geq n.

2) Generators or parameters of unitary and special unitary groups. The unitary group U(n) has n^2 parameters (its “dimension”) and it has rank n (its number of Casimir operators). The special unitary group SU(n) has n^2-1 free parameters (its dimension) and it has rank n-1 (its number of Casimir operators).

3) Lie algebra generators. The Lie algebra u(n) is generated (with the physicists’ convention) by the n^2-dimensional real vector space of hermitian n\times n matrices. The Lie algebra su(n) is generated by the (n^2-1)-dimensional space of hermitian traceless n\times n matrices.

4) Lie algebra structures. Given a basis of generators L_i of su(n) in the fundamental (defining) representation, normalized so that \mbox{Tr}(L_iL_j)=\frac{1}{2}\delta_{ij}, we define the constants C_{ijk}, f_{ijk}, d_{ijk} by the following equations:

\left[L_i,L_j\right]=C_{ijk}L_k=if_{ijk}L_k

L_iL_j+L_jL_i=\dfrac{1}{n}\delta_{ij}I+d_{ijk}L_k

These structure constants f_{ijk} are totally antisymmetric under the exchange of any two indices while the coefficients d_{ijk} are symmetric under those changes. Moreover, we also have:

d_{ijk}=2\mbox{Tr}(\left\{L_i,L_j\right\}L_k)

f_{ijk}=-2i\mbox{Tr}(\left[L_i,L_j\right]L_k)

Remark(I):   From U=e^{iH}, we get \det U=e^{i\mbox{Tr} (H)}, and from here we can prove the statement 3) above.
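Remark (I) is easy to check numerically (a minimal sketch, assuming numpy/scipy; H below is just a random hermitian matrix of mine):

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (H + H.conj().T) / 2                          # a hermitian H
U = expm(1j * H)
print(np.allclose(np.linalg.det(U), np.exp(1j * np.trace(H))))   # det U = e^{i Tr H}
# in particular, a traceless hermitian H gives det U = 1, i.e., an element of SU(n)
H0 = H - (np.trace(H) / 4) * np.eye(4)
print(np.allclose(np.linalg.det(expm(1j * H0)), 1.0))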

Remark(II): An arbitrary element of U(n) can be expressed as the product of an element of U(1) (a phase) and an element of SU(n). In fact, U(n)\cong \left(U(1)\times SU(n)\right)/\mathbb{Z}_n, where the symbol \cong means “group isomorphism”.

Example 1. The SU(2) group.

In particular, for n=2, we get

SU(2)=\left\{U\in M_{2\times 2}(\mathbb{C})/UU^+=U^+U=I_{2\times 2},\det U=1\right\}

This is an important group in physics! It appears in many contexts: angular momentum (both classical and quantum), the rotation group, spinors, quantum information theory, spin networks and black holes, the Standard Model, and many other places. So it is important to know it in depth. The number of parameters of SU(2) is equal to 3 and its rank is equal to one (1). As generators of the Lie algebra associated to this Lie group, called su(2), we can choose any 3 independent traceless (trace equal to zero) hermitian matrices. As a convenient choice, it is usual to select the so-called Pauli matrices \sigma_i:

\sigma_1=\begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}

\sigma_2=\begin{pmatrix}0 & -i\\ i & 0\end{pmatrix}

\sigma_3=\begin{pmatrix} 1 & 0\\ 0 & -1\end{pmatrix}

In general, these matrices satisfy a number of important mathematical relations. The most important are:

\left\{\sigma_i,\sigma_j\right\}=2\delta_{ij}I

and

\sigma_i\sigma_j=\delta_{ij}I+i\varepsilon_{ijk}\sigma_k

The commutators of the Pauli matrices are given by:

\left[\sigma_i,\sigma_j\right]=2i\varepsilon_{ijk}\sigma_k

Taking L_i=\sigma_i/2 as the normalized su(2) generators, the structure constants read

f_{ijk}=\varepsilon_{ijk}\qquad d_{ijk}=0

The Casimir operator/matrix related to the Pauli basis is:

\displaystyle{C=\sum_i\sigma_i^2=\sigma_1^2+\sigma_2^2+\sigma_3^2}

This matrix, by Schur’s lemma, has to be a multiple of the identity matrix (it commutes with each one of the 3 generators of the Pauli algebra, as it can be easily proved). Please, note that using the previous Pauli representation of the Pauli algebra we get:

\displaystyle{C=\sum_i\sigma_i^2=3I}

Q.E.D.

A similar relation, with a different overall prefactor, must be true for ANY other representation of the Lie algebra su(2). In fact, it can be proved in Quantum Mechanics that this number is “four times” the j(j+1) quantum number associated to the angular momentum, and it characterizes the representation completely. The general theory of the representations of the Lie group SU(2) and its algebra su(2) is known in physics as the general theory of angular momentum!
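All the su(2) relations quoted above can be verified in a few lines (a sketch assuming numpy; nothing below is specific beyond the standard Pauli matrices):

import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [s1, s2, s3]
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1, -1
I2 = np.eye(2)
ok = True
for i in range(3):
    for j in range(3):
        comm = sigma[i] @ sigma[j] - sigma[j] @ sigma[i]
        anti = sigma[i] @ sigma[j] + sigma[j] @ sigma[i]
        ok &= np.allclose(comm, 2j * sum(eps[i, j, k] * sigma[k] for k in range(3)))   # [s_i,s_j] = 2i eps_ijk s_k
        ok &= np.allclose(anti, 2 * (i == j) * I2)                                     # {s_i,s_j} = 2 delta_ij I
print(ok)
C = sum(s @ s for s in sigma)
print(np.allclose(C, 3 * I2))   # Casimir: sigma_1^2 + sigma_2^2 + sigma_3^2 = 3 I = 4*(1/2)*(1/2+1) I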

Example 2. The SU(3) group.

If n=3, the theory of SU(3) is important for Quantum Chromodynamics (QCD) and the quark theory. It is also useful in Grand Unified Theories (GUTs) and flavor physics.

SU(3)=\left\{U\in M_{3\times 3}(\mathbb{C})/UU^+=U^+U=I_{3\times 3},\det U=1\right\}

The number of parameters of SU(3) is 8 (recall that there are 8 different massless gluons in QCD) and the rank of the Lie algebra is equal to two, so there are two Casimir operators.

The analogues of the Pauli matrices for SU(3) are the so-called Gell-Mann matrices. They are 8 independent traceless hermitian matrices. There are some “different” choices in the literature, but a standard choice is the following set of matrices:

\lambda_1=\begin{pmatrix}0 & 1 & 0\\ 1 & 0 & 0\\ 0 & 0 &0\end{pmatrix}

\lambda_2=\begin{pmatrix}0 & -i & 0\\ i & 0 & 0\\ 0 & 0 &0\end{pmatrix}

\lambda_3=\begin{pmatrix}1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 &0\end{pmatrix}

\lambda_4=\begin{pmatrix}0 & 0 & 1\\ 0 & 0 & 0\\ 1 & 0 &0\end{pmatrix}

\lambda_5=\begin{pmatrix}0 & 0 & -i\\ 0 & 0 & 0\\ i & 0 &0\end{pmatrix}

\lambda_6=\begin{pmatrix}0 & 0 & 0\\ 0 & 0 & 1\\ 0 & 1 &0\end{pmatrix}

\lambda_7=\begin{pmatrix}0 & 0 & 0\\ 0 & 0 & -i\\ 0 & i &0\end{pmatrix}

\lambda_8=\dfrac{1}{\sqrt{3}}\begin{pmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 &-2\end{pmatrix}

Gell-Mann matrices above satisfy a normalization condition:

\mbox{Tr}(\lambda_i\lambda_j)=2\delta_{ij}

where \delta_{ij} is the Kronecker delta in two indices.

The two Casimir operators for Gell-Mann matrices are:

1) \displaystyle{C_1(\lambda_i)=\sum_{i=1}^8\lambda_i^2}

This operator is the natural generalization of the previously seen SU(2) Casimir operator.

2) \displaystyle{C_2(\lambda_i)=\sum_{ijk}d_{ijk}\lambda_i\lambda_j\lambda_k}

Here, the values of the structure constants f_{ijk} and d_{ijk} for the su(3) Lie algebra can be tabulated in rows as follows:

1) For ijk=123,147,156,246,257,345,367,458,678 we have f_{ijk}=1,\dfrac{1}{2},-\dfrac{1}{2},\dfrac{1}{2},\dfrac{1}{2},\dfrac{1}{2},-\dfrac{1}{2},\dfrac{\sqrt{3}}{2},\dfrac{\sqrt{3}}{2}.

2) If

ijk=118,146,157,228,247,256,338,344,355,366,377,448,558,668,778,888

then we have

d_{ijk}=\dfrac{1}{\sqrt{3}},\dfrac{1}{2},\dfrac{1}{2},\dfrac{1}{\sqrt{3}},-\dfrac{1}{2},\dfrac{1}{2},\dfrac{1}{\sqrt{3}},\dfrac{1}{2},\dfrac{1}{2},-\dfrac{1}{2},-\dfrac{1}{2},-\dfrac{1}{2\sqrt{3}},-\dfrac{1}{2\sqrt{3}},-\dfrac{1}{2\sqrt{3}},-\dfrac{1}{2\sqrt{3}},-\dfrac{1}{\sqrt{3}}
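These tabulated values can be reproduced directly from the matrices (a sketch assuming numpy; the trace formulas used below are the ones given earlier for generators L_i=\lambda_i/2, and I only spot-check a few entries):

import numpy as np

lam = np.zeros((8, 3, 3), dtype=complex)
lam[0] = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
lam[1] = [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]]
lam[2] = [[1, 0, 0], [0, -1, 0], [0, 0, 0]]
lam[3] = [[0, 0, 1], [0, 0, 0], [1, 0, 0]]
lam[4] = [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]]
lam[5] = [[0, 0, 0], [0, 0, 1], [0, 1, 0]]
lam[6] = [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]]
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)
# normalization: Tr(lambda_i lambda_j) = 2 delta_ij
gram = np.array([[np.trace(lam[i] @ lam[j]).real for j in range(8)] for i in range(8)])
print(np.allclose(gram, 2 * np.eye(8)))
# structure constants from the trace formulas, with generators L_i = lambda_i / 2:
# f_ijk = -(i/4) Tr([lam_i, lam_j] lam_k),  d_ijk = (1/4) Tr({lam_i, lam_j} lam_k)
f = lambda i, j, k: (-0.25j * np.trace((lam[i] @ lam[j] - lam[j] @ lam[i]) @ lam[k])).real
d = lambda i, j, k: (0.25 * np.trace((lam[i] @ lam[j] + lam[j] @ lam[i]) @ lam[k])).real
print(np.isclose(f(0, 1, 2), 1.0))             # f_123 = 1
print(np.isclose(f(3, 4, 7), np.sqrt(3) / 2))  # f_458 = sqrt(3)/2
print(np.isclose(d(0, 0, 7), 1 / np.sqrt(3)))  # d_118 = 1/sqrt(3)
print(np.isclose(d(1, 3, 6), -0.5))            # d_247 = -1/2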

Example 3. Euclidean groups, orthogonal groups and the Lorentz group in 4D and general D=s+t dimensional analogues.

In our third example, let us recall usual Galilean relativity. In a 3D world, physics is the same for every inertial observer (observers moving with constant relative velocity). Moreover, the fundamental invariant of “motion” in 3D space is given by the length:

L^2=x^2+y^2+z^2=\delta_{ij}x^ix^j\qquad i,j=1,2,3

In fact, with tensor notation, the above “euclidean” space can be generalized to any space dimension. For a ND space, the fundamental invariant reads:

\displaystyle{L_N^2=\sum_{i=1}^Nx_i^2=x_1^2+x_2^2+\cdots+x_N^2}

Mathematically speaking, the groups leaving the above metrics invariant are, respectively, SO(3) and SO(N). They are Lie groups with dimensions 3 and N(N-1)/2, respectively, and their Lie algebra generators are antisymmetric (hence traceless) 3\times 3 and N\times N matrices. Those metrics are special cases of quadratic forms, and it can easily be proved that orthogonal transformations leave the euclidean metric \delta_{ij} (the Kronecker delta tensor) invariant in the following sense:

A^\mu_{\;\;\; i}\delta_{\mu\nu}A^\nu_{\;\;\; j}=\delta_{ij}

or equivalently

A\delta A^T=\delta

using matrix notation. In special relativity, the (proper) Lorentz group L is composed of every real 4\times 4 matrix \Lambda^\mu_{\;\;\;\nu} connected to the identity through infinitesimal transformations, and the Lorentz group leaves invariant the Minkowski metric (we use natural units with c=1):

s^2=X^2+Y^2+Z^2-T^2 if you use the “mostly plus” 3+1 metric (\eta=\mbox{diag}(1,1,1,-1)) or, equivalently,

s^2=T^2-X^2-Y^2-Z^2 if you use a “mostly minus” 1+3 metric (\eta=\mbox{diag}(1,-1,-1,-1)).

These equations can also be generalized to arbitrary signature. Suppose there are s space-like dimensions and t time-like dimensions (D=s+t). The associated non-degenerate quadratic form is:

\displaystyle{s^2_D=\sum_{i=1}^sX_i^2-\sum_{j=1}^tT_j^2}

In matrix notation, the orthogonal transformations leaving the above quadratic metric invariant are said to belong to the group SO(3,1) (or SO(1,3) if you use the mostly minus convention), the real orthogonal group over the corresponding quadratic form. The signature of the quadratic form is said to be (3,1), or \Sigma=3-1=2 (equivalently (1,3) with the alternative convention). We are not considering “degenerate” quadratic forms, for simplicity of this example. The Lorentzian or Minkowskian metric is invariant in the same sense as in the euclidean example before:

L^\mu_{\;\;\;\alpha}\eta_{\mu\nu}L^\nu_{\;\;\;\beta}=\eta_{\alpha\beta}

L\eta L^T=\eta

The group SO(s,t) has signature (s,t), or s-t, over non-degenerate quadratic spaces. Obviously, the rotation group SO(3) is a subgroup of SO(3,1) and, more generally, SO(s) is a subgroup of SO(s,t). We are going to focus now a little bit on the special relativity group SO(3,1). This group has 6 parameters, or equivalently its group dimension is 6, and its rank is equal to 2. We can choose as parameters of the SO(3,1) group 3 spatial rotation angles \omega_i and three additional parameters that we know as rapidities \xi_i. The corresponding Lie algebra generators are the rotation generators S_i and the boost generators K_i, where the rapidity is related to the velocity \beta by

\xi=\dfrac{\beta}{\parallel\beta\parallel}\tanh^{-1}\parallel \beta\parallel

In the case of SO(3,1), a possible basis for the Lie algebra generators (with coordinates ordered as (t,x,y,z)) is the following set of matrices:

iS_1=\begin{pmatrix}0 & 0 & 0& 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & -1 & 0\end{pmatrix}

iS_2=\begin{pmatrix}0 & 0 & 0& 0\\ 0 & 0 & 0 & -1\\ 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\end{pmatrix}

iS_3=\begin{pmatrix}0 & 0 & 0& 0\\ 0 & 0 & 1 & 0\\ 0 & -1 & 0 & 0\\ 0 & 0 & 0 & 0\end{pmatrix}

iK_1=\begin{pmatrix}0 & 1 & 0& 0\\ 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\end{pmatrix}

iK_2=\begin{pmatrix}0 & 0 & 1& 0\\ 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\end{pmatrix}

iK_3=\begin{pmatrix}0 & 0 & 0& 1\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0\end{pmatrix}

And the commutation rules for these matrices are given by the basic relations:

\left[S_a,S_b\right]=i\varepsilon_{abc}S_c

\left[K_a,K_b\right]=-i\varepsilon_{abc}S_c

\left[S_a,K_b\right]=i\varepsilon_{abc}K_c
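The commutation rules and the invariance of the metric can be checked numerically (a sketch assuming numpy/scipy, with coordinates ordered (t,x,y,z) and \eta=\mbox{diag}(-1,1,1,1), which is a choice of mine consistent with the matrices above):

import numpy as np
from scipy.linalg import expm

def E(i, j):
    m = np.zeros((4, 4))
    m[i, j] = 1.0
    return m

# the real matrices iS_a (rotations) and iK_a (boosts), coordinates ordered (t, x, y, z)
iS = [E(2, 3) - E(3, 2), E(3, 1) - E(1, 3), E(1, 2) - E(2, 1)]
iK = [E(0, 1) + E(1, 0), E(0, 2) + E(2, 0), E(0, 3) + E(3, 0)]
S = [-1j * m for m in iS]
K = [-1j * m for m in iK]
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1, -1
comm = lambda A, B: A @ B - B @ A
ok = True
for a in range(3):
    for b in range(3):
        Sc = sum(eps[a, b, c] * S[c] for c in range(3))
        Kc = sum(eps[a, b, c] * K[c] for c in range(3))
        ok &= np.allclose(comm(S[a], S[b]), 1j * Sc)    # [S_a,S_b] =  i eps_abc S_c
        ok &= np.allclose(comm(K[a], K[b]), -1j * Sc)   # [K_a,K_b] = -i eps_abc S_c
        ok &= np.allclose(comm(S[a], K[b]), 1j * Kc)    # [S_a,K_b] =  i eps_abc K_c
print(ok)
# a boost along x with rapidity 0.7 preserves eta = diag(-1,1,1,1)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
L = expm(0.7 * iK[0])
print(np.allclose(L @ eta @ L.T, eta))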

Final remark: SO(s,t) are sometimes called isometry groups since they define isometries over quadratic forms, that is, they define transformations leaving invariant the “spacetime length”.


LOG#090. Group theory(X).


The converse of the first Lie theorem is also generally true.

Theorem. Second Lie Theorem. Given a set of N hermitian matrices or operators L_j, closed under commutation, these operators L_j define and specify a Lie group, and they are its generators.

Remark(I): We use hermitian generators in our definition of “group generators”. Mathematicians usually define “antihermitian” generators in order to “erase” some “i” factors from the equations of Lie algebras, especially in the group exponentiation.

Remark (II): Mathematicians usually speak about 3 different main Lie theorems, more or less related to the 2 theorems given in my thread. I am a physicist, so somehow I do prefer this somewhat informal distinction, but the essence and contents of the Lie theory are the same.

Definition(39). Lie algebra. The set of N matrices (or operators) L_j and their linear combinations, closed under commutation, is said to be a Lie algebra. Lie algebras are characterized by their structure constants.

Example: In the 3D world of space, we can define the group of 3D rotations

SO(3)=\left\{O\in M_{3\times 3}(\mathbb{R}),OO^T=I, \mbox{det}\,O=1\right\}

and the generators of this group J_1, J_2,J_3 satisfy a Lie algebra called so(3) with commutation rules

\left[J_i,J_j\right]=i\varepsilon_{ijk}J_k

Here \varepsilon_{ijk} is the completely antisymmetric symbol in the three indices. We can form linear combinations of generators:

J_{\pm}=J_1\pm iJ_2

and then the commutation rules are changed into the following relations

\left[J_3,J_\pm\right]=\pm J_\pm

\left[J_+,J_-\right]=2J_3

The structure constants are different in the two cases above but they are related through linear expressions involving the coefficients of the combinations of the generators. In summary:

1) Every Lie group provides, by direct differentiation at the identity, a Lie algebra, and conversely every Lie algebra can be exponentiated into (at least) one Lie group. The Lie algebra consists of those matrices or operators X for which \exp(tX)\in G for every real number t. The Lie bracket/commutator of the Lie algebra is given by the commutator of the two matrices.

2) Different groups COULD have the same Lie algebra, e.g., the groups SO(3) and SU(2) share the same Lie algebra. Given a Lie algebra, we can build by exponentiation, at least, one single Lie group. In fact, usually we can build different Lie groups with the same algebra.
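Point 2) can be illustrated directly (a sketch assuming numpy): the 3\times 3 rotation generators and the 2\times 2 matrices \sigma_a/2 are different representations obeying the very same so(3)\cong su(2) commutation relations.

import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1, -1
# so(3): hermitian 3x3 generators (J_a)_{jk} = -i eps_{ajk} (the rotation/adjoint representation)
J3 = [-1j * eps[a] for a in range(3)]
# su(2): J_a = sigma_a / 2 (the spin-1/2 representation)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
J2 = [s / 2 for s in sigma]
def closes(J):
    # check [J_a, J_b] = i eps_abc J_c for every pair (a, b)
    return all(np.allclose(J[a] @ J[b] - J[b] @ J[a],
                           1j * sum(eps[a, b, c] * J[c] for c in range(3)))
               for a in range(3) for b in range(3))
print(closes(J3), closes(J2))   # True True: same algebra, different representations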

Definition (40). Group rank. By definition, the rank is the largest number of generators commuting with each other. The rank of a Lie group is the same as the rank of the corresponding Lie algebra.

Definition (41). Casimir operator.  A Casimir operator is a matrix/operator which commutes with all the generators of the group, and therefore it also commutes with all the elements of the algebra and of the group.

Theorem. Racah’s theorem. The number of Casimir operators of a Lie group is equal to its rank.

Before giving a list of additional results in the theory of Lie algebras, let me provide some extra definitions about (Lie) groups.

Definition (42). Left, right and adjoint group actions.  Let L_g, R_g, \mbox{Ad}_g be the maps

L_g: G\longrightarrow G,\qquad L_g(h)=gh

R_g:G\longrightarrow G,\qquad R_g(h)=hg^{-1}

\mbox{Ad}_g:G\longrightarrow G,\qquad \mbox{Ad}_g(h)=ghg^{-1}

They are called, respectively, the left group action, the right group action and the adjoint group action. All of them are bijections of G onto itself, and the adjoint action is, moreover, a group automorphism.

In fact, we can be more precise about what a Lie algebra IS. A Lie algebra is some “vector space” equipped with an extra operation, the commutator defining the Lie algebra structure constants, with some beautiful properties.

Definition (43). Lie algebra. Let (\mathcal{A},+,\cdot) be a real (or complex) vector space and define the binary “Lie-bracket” operation

\left[,\right]:\mathcal{A}\times\mathcal{A}\longrightarrow \mathcal{A}

This Lie bracket is a bilinear and antisymmetric operation in the (Lie) algebra \mathcal{A} such as it satisfies the so-called Jacobi identity:

\left[\left[A,B\right],C\right]+\left[\left[B,C\right],A\right]+\left[\left[C,A\right],B\right]=0

In fact, if the Jacobi identity holds for some algebra \mathcal{A}, then it is a real(or complex) Lie algebra.

Remark(I): A bilinear antisymmetric operation satisfies (in our case we use the bracket notation):

Bilinearity: \left[A,B+\lambda C\right]=\left[A,B\right]+\lambda\left[A,C\right]

Antisymmetry: \left[A,B\right]=-\left[B,A\right]

Remark(II): The Jacobi identity is equivalent to the expressions:

i) \left[\left[A,B\right],C\right]+\left[B,\left[A,C\right]\right]=\left[A,\left[B,C\right]\right]

ii) Let us define a “derivation” operation with the formal equation D_A(X)\equiv \left[A,X\right]. Then, it satisfies the Leibniz rule

D_A(\left[B,C\right])=\left[D_A(B),C\right]+\left[B,D_A(C)\right]

Remark(III): From the antisymmetry, in the framework of a Lie algebra we have that \left[A,A\right]=0

The commutators of matrices/operators are examples of bilinear antisymmetric operations, but even more general operations can be devised. Ask a mathematician or a wise theoretical physicist for more details! 🙂

In the realm of Lie algebras, there are some “basic results” we should know. The most basic result is, of course:

\left[A_i,A_j\right]=c_{ijk}A_k

Since the Lie algebra is a vector space, we can form linear combinations or superpositions

B_i=\sum_j a_{ij}A_j

with a_{ij} some non-singular matrix. Thus, the new elements will satisfy commutation relations with some new “structure constants”:

\left[B_i,B_j\right]=c'_{ijk}B_k

such that

c'_{ijn}=a_{ik}a_{jl}c_{klm}(a^{-1})_{mn}

In particular, if the matrix a=\left(a_{ij}\right) is a unitary transformation (or basis change), with a^{-1}=a^+, then

c'_{ijn}=a_{ik}a_{jl}a^*_{nm}c_{klm}
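This transformation law is easy to test numerically (a sketch assuming numpy; I use the su(2) generators A_i=\sigma_i/2, for which c_{ijk}=i\varepsilon_{ijk}, and an arbitrary non-singular matrix a of my own choosing):

import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1, -1
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
A = np.array([s / 2 for s in sigma])   # generators A_i with [A_i, A_j] = i eps_ijk A_k
c = 1j * eps                            # structure constants c_ijk
a = np.array([[1.0, 2.0, 0.0],          # an arbitrary non-singular change of basis
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 3.0]])
B = np.einsum('ij,jkl->ikl', a, A)      # B_i = a_ij A_j
# transformed structure constants: c'_{ijn} = a_ik a_jl c_klm (a^{-1})_{mn}
c_new = np.einsum('ik,jl,klm,mn->ijn', a, a, c, np.linalg.inv(a))
# check that [B_i, B_j] = c'_{ijn} B_n for all i, j
ok = all(np.allclose(B[i] @ B[j] - B[j] @ B[i],
                     np.einsum('n,nab->ab', c_new[i, j], B))
         for i in range(3) for j in range(3))
print(ok)   # True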

Definition (44). Simple Lie algebra.  A simple Lie algebra is a Lie algebra that has no non-trivial ideals and that is not abelian. An ideal is a certain subalgebra generated by a subset of generators A_i such that \left[B_j,A_i\right]=\sum_k a_{ijk}A_k for every B_j in the algebra.

Definition (45). Semisimple Lie algebra. A Lie algebra is called semisimple if it does not contain any non-zero abelian ideals.

In particular, every simple Lie algebra is semisimple. Conversely, any semisimple Lie algebra is the direct sum of simple Lie algebras. Semisimplicity is related to the complete reducibility of the group representations. Semisimple Lie algebras have been completely classified by Cartan and other mathematicians using some more advanced gadgets called “root systems”. I am not going to discuss root systems in this thread (that would be too advanced for my current purposes), but I can provide the list of simple Lie algebras (the building blocks of the semisimple ones) in the next table:

Algebra Rank Dimension Group
A_{n-1}(n\geq 2) n-1 n^2-1 SL(n),SU(n)
B_{n}(n\geq 1) n n(2n+1) SO(2n+1)
C_{n}(n\geq 3) n n(2n+1) Sp(2n)
D_{n}(n\geq 4) n n(2n-1) SO(2n)
\mathfrak{g}_2 2 14 G_2
\mathfrak{f}_4 4 52 F_4
\mathfrak{e}_6 6 78 E_6
\mathfrak{e}_7 7 133 E_7
\mathfrak{e}_8 8 248 E_8

We observe that there are 4 “infinite series” of “classical groups” and 5 exceptional semisimple Lie groups. The allowed dimensions for a given rank can be worked out, and the next table provides a list up to rank 9 of the dimension of Lie algebras for the above Cartan group classification:

Rank Dimension
1 3 (SU(2))
2 8 (SU(3)), 10, 14
3 15 (SU(4)), 21, 21
4 24, 28, 36, 36, 52
5 35, 45, 55, 55
6 48, 66, 78, 78, 78
7 63, 91, 105, 105, 133
8 80, 120, 136, 136, 248
9 99, 153, 171, 171

Final remark: in general, the Lie algebra corresponding to a given Lie group is denoted with the same letters as the group, but in gothic (fraktur) or lower-case letters.

See you in the next blog post!


LOG#089. Group theory(IX).

Sophus Lie

Definition (36). An infinite group (G,\circ) is a group where the order/number of elements \vert G\vert is not finite. We distinguish two main types of groups (but there are more classes out there…):

1) Discrete groups: their elements form a countable set. Invariance under a discrete group provides multiplicative conservation laws. The elements of a discrete group are symbolized as g_i, with i=1,\ldots,\infty.

2) Continuous groups: their elements are not countable, since they depend “continuously” on some set of parameters (real, complex,…):

g=g(\alpha_1,\alpha_2,\ldots)

Note that the number of parameters can be either finite or infinite in some cases. The number of parameters defines the so-called “dimension” of the group. Please, don’t confuse the group order with its dimension: the group order is the number of elements, while the group dimension is the number of parameters we need to characterize/generate the group! Invariance under a continuous group has some consequences (due to Noether’s theorems):

1) Invariance under a finite dimensional r-parametric continuous group provides r conservation laws.

2) Invariance under an infinite dimensional continuous group (parametrized by some set of “functions”) provides some relationships between the field equations, called “dependencies” or “Noether identities” in modern language.

Definition (37). Composition rule/law for a group. Let G be a continuous group and two elements g(\alpha_1),g(\alpha_2)\in G, then

g(\alpha_1)\circ g(\alpha_2)=g(\alpha_3)

and we define the composition law of a continuous group as the function f that gives \alpha_3=f(\alpha_1,\alpha_2). Similarly, the inverse element corresponds to some value of the parameters:

g(\alpha_2)=g(\alpha_1)^{-1}

so

\alpha_2=\bar{f}(\alpha_1)

for some function \bar{f} determined by the composition law.

Theorem (Lie). Every continuous group is a Lie group. It means that whenever you have a group where the composition rule and the inverse are given by continuous functions of the parameters, then the group elements are in fact differentiable functions (analytic in the complex case) of their arguments.

Some examples of Lie groups (some of them we have already quoted in this thread):

1) The euclidean real space \mathbb{R}^n or the complex space \mathbb{C}^n with ordinary vector addition form (in either case) an n-dimensional noncompact abelian Lie group.

2) The general linear (Lie) group of non-singular matrices over the real or the complex numbers, GL_n(\mathbb{R}) or GL_n(\mathbb{C}), is a Lie group.

3) The special linear group SL(n,\mathbb{R}) or the complex analogue SL(n,\mathbb{C}) of square matrices with determinant equal to one.

4) The orthogonal group O(n) over the real numbers, made of n\times n real matrices with OO^T=I, is an n(n-1)/2 dimensional Lie group.

5) The special orthogonal group SO(n,\mathbb{R}) is the subgroup of the orthogonal group whose matrices have determinant equal to one.

6) The unitary group U(n,\mathbb{C}) of complex n\times n unitary matrices, UU^+=U^+U=\mathbb{I}_n. Its (real) dimension is equal to n^2. SU(n) is the (n^2-1)-dimensional subgroup formed by the unitary matrices with determinant equal to one.

7) The symplectic group \mbox{Sp}(2n,\mathbb{R}), with dimension n(2n+1).

8) The group of upper triangular matrices n\times n is a group with dimension n(n+1)/2.

9) The Lorentz group and the Poincaré group. They are non-compact Lie groups (the Poincaré group is non-compact, in particular, because its Lorentz subgroup is non-compact). Their dimensions in 4D spacetime are 6 and 10, respectively.

10) The Standard Model “gauge” (Lie) group U(1)\times SU(2)\times SU(3) is formed as the direct product (in the group sense) of three groups and it has dimension 1+3+8=12. The dimension of each gauge factor in the Standard Model is in direct correspondence with the number of gauge bosons: 1 massless photon, 3 vector bosons for the electroweak interactions, and 8 gluons for quantum chromodynamics (QCD).

11) The exceptional Lie groups \mathcal{G}_2,\mathcal{F}_4, E_6, E_7, E_8, the so called Cartan exceptional groups. Their dimensions are respectively 14, 52, 78, 133 and 248.

Continuous groups made of matrices (finite and infinite matrices/operators) play an important role in Physics. Moreover, as Lie groups depend continuously on their arguments AND their dependence is generally differentiable, it makes sense to take derivatives with respect to the group parameters. This fact allows us to define the idea of a group generator.

Definition(38).  Group generator. If g=U(\alpha) is a continuous family of transformations (and hence, by the Lie theorem above, a differentiable function of its parameters), then we define the generators of the group L_i in the following (hermitian) way:

-iL_j=\dfrac{\partial U(\alpha)}{\partial \alpha_j}\bigg|_{\alpha=0}

Theorem (Lie). Let U(\alpha) be a one-parameter continuous group and K its generator. Then, the following facts hold:

i) K fully determines the group U(\alpha).

ii) Group elements are obtained using “exponentiation” of generators. That is,

U(\alpha)=\exp\left(-iK\alpha\right)

The “proof” involves a group parametrization and an expansion as a series. We have U(0)=1 and U(x+y)=U(x)U(y). Therefore,

\dfrac{dU(x)}{dx}=\dfrac{d}{dy}\left(U(x+y)\right)\vert_{y=0}=\dfrac{d}{dy}(U(x)U(y))\vert_{y=0}

\dfrac{dU(x)}{dx}=U(x)\dfrac{dU(y)}{dy}\bigg|_{y=0}=\dfrac{dU(y)}{dy}\bigg|_{y=0}U(x)=-iKU(x)

so

-iKU(x)=\dfrac{dU(x)}{dx} and U(0)=\mathbb{I}

This differential equation has one and only one solution for every value of K and for all x. The general solution of this equation is the exponential:

U(x)=U(0)\exp\left(-iKx\right)

Taking into account the initial condition U(0)=\mathbb{I} (the identity element corresponds to zero value of the parameter), we have the desired result for every x=\alpha. Q.E.D.

Theorem (Lie). A multiparametric Lie group (N-dimensional) is a Lie group G with functions g=U(\alpha_j),\forall j=1,2,\ldots, N and generators L_j obtained by exponentiation. That is:

\boxed{\displaystyle{U(\alpha_j)=\prod_{j=1}^N\exp \left(-iL_j\alpha_j\right)}}

Check (easy simplified proof): Using the previous result, we only have to fix all the parameters \alpha_j except one at a time. Then, a simple mimicry of the previous one-dimensional argument provides:

U(\alpha_N)=U(0,\alpha_2,\alpha_3,\ldots,\alpha_N)\exp\left(-iL_1\alpha_1\right)

and then

U(\alpha_N)=U(0,0,\alpha_3,\ldots,\alpha_N)\exp\left(-iL_1\alpha_1\right)\exp\left(-iL_2\alpha_2\right)

and finally, iterating the process N-times we get

\displaystyle{U(\alpha_N)=U(0,0,0,\ldots,0)\prod_{j=1}^N\exp\left(-iL_j\alpha_j\right)}

The generators of any Lie group satisfy some important algebraic relations. In the case of matrix or operator groups, the generators are matrices or operators themselves. These mathematical relations can be written in terms of (ordinary) algebraic commutators. There is a very important theorem about this fact:

Theorem. First Lie theorem. Lie group generators form a closed commutator algebra under “matrix/operator” products. That is:

\boxed{\left[L_i,L_j\right]=C_{ij}^{k}L_k} or \boxed{\left[L_i,L_j\right]=C^{ijk}L_k}

without distinction of lower and upper “labels”.

Here, the commutator of two matrices/operators is defined to be \left[A,B\right]=AB-BA and the constants C_{ijk} or C^k_{ij} are the so-called structure constants of the Lie group. The structure constants of a Lie group are:

1) Antisymmetric with respect to the first two indices (or the paired ones, ij, with our notation).

2) Characteristic of the group but they do change, in a particular way, if we form linear combinations of the Lie group generators.

There is a nice formula, called the Baker-Campbell-Hausdorff identity, that relates group exponentials and group commutators. It is especially important in the theory of Lie groups and Lie algebras:

The Baker-Campbell-Hausdorff (BCH) formula. For any matrix/operator A,B, under certain very general conditions, we have:

\exp(A)\exp(B)=\exp\left(A+B+\dfrac{1}{2}\left[A,B\right]+\dfrac{1}{12}\left[\left[A,B\right],B\right]-\dfrac{1}{12}\left[\left[A,B\right],A\right]+\ldots\right)

In the case that the matrices A and B do commute, then we recover the usual ordinary exponentiation of “elements”:

\exp(A)\exp(B)=\exp(A+B)
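A quick numerical sketch of the BCH formula (assuming numpy/scipy; A and B below are small random matrices of mine, so that the neglected higher-order terms are tiny):

import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(7)
comm = lambda X, Y: X @ Y - Y @ X
A = 0.1 * rng.normal(size=(3, 3))
B = 0.1 * rng.normal(size=(3, 3))
exact = logm(expm(A) @ expm(B))
bch3 = A + B + comm(A, B) / 2 + comm(comm(A, B), B) / 12 - comm(comm(A, B), A) / 12
print(np.max(np.abs(exact - bch3)))   # tiny residual, coming only from the neglected higher-order terms
# and if A and B commute, the exponentials combine exactly:
C = np.diag([1.0, 2.0, 3.0])
D = np.diag([0.5, -1.0, 2.0])
print(np.allclose(expm(C) @ expm(D), expm(C + D)))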

A beautiful and simple application of the BCH formula is the following feature, which allows us to write ANY member of a Lie group as the exponential of a sum of the Lie group generators. Let us write the group elements, firstly, as

g=U(\alpha_j)\forall j=1,2,\ldots,N

and let us write the group generators as L_j. Then, we have

\displaystyle{U(\alpha_j)=\prod_{j=1}^N\exp\left(-i\alpha_jL_j\right)=\exp\left(-i\sum_ {j=1}^N\omega_jL_j\right)}

where the parameters \omega_j are related to the \alpha_j parameters in a simple continuous way

\omega_j=\omega_j(\alpha_k)

The specific form of this relation can be expanded and computed/calculated term by term using the BCH formula, as given before.

See you in the next blog post of this group theory thread!