LOG#120. Basic Neutrinology (V).

Supersymmetry (SUSY) is one of the most discussed ideas in theoretical physics. I will not discuss its details here (yet, in this blog). However, in this thread, some general features are worth mentioning. SUSY models generally include a symmetry called R-parity, and its breaking provides an interesting example of how we can generate neutrino masses WITHOUT using a right-handed neutrino at all. The price is simple: we have to add new particles and enlarge the Higgs sector. Of course, from a purely phenomenological point of view, the issue is to discover SUSY! On the theoretical side, we can discuss any idea that experiments do not exclude. Today, after the last LHC run at 8 TeV, we have not found SUSY particles, so the lower bounds on supersymmetric particle masses have been increased. Which path will Nature follow? SUSY, LR models -via GUTs or some preonic substructure, or something we cannot even imagine right now? Only experiment will decide in the end…

In fact, in a generic SUSY model, since the Higgs and lepton doublet superfields carry the same SU(3)_c\times SU(2)_L\times U(1)_Y quantum numbers, the so-called “superpotential” admits bilinear or trilinear pieces in the superfields that violate (global) baryon and lepton number explicitly. Thus, they lead to mass terms for the neutrino, but also to proton decay at unacceptably high rates (implying a lifetime below the current lower limit, about 10^{33} years!). To protect the proton lifetime, we have to introduce BY HAND a new symmetry forbidding the terms that give that “too high” proton decay rate. In SUSY models, this role is generally played by the R-parity I mentioned above, and it is imposed in most of the simplest models including SUSY, like the Minimal Supersymmetric Standard Model (MSSM). A general SUSY superpotential can be written in this framework as

(1) \mathcal{W}'=\lambda_{ijk}L_iL_jE_k^c+\lambda'_{ijk}L_iQ_jD_k^c+\lambda''_{ijk}D_i^cD_j^cU_k^c+\epsilon_iL_iH_2

A less radical solution is to allow for the existence in the superpotential of a bilinear term with structure \epsilon_3L_3H_2. This is the simplest way to realize the idea of generating neutrino masses without spoiling the current limits on proton decay/lifetime. The bilinear violation of R-parity implied by the \epsilon_3 term leads, by a minimization condition, to a non-zero vacuum expectation value or vev, v_3, for the tau sneutrino. In such a model, the \tau neutrino acquires a mass due to the mixing between neutrinos and neutralinos. The \nu_e, \nu_\mu neutrinos remain massless in this toy model, and it is supposed that they get masses from scalar loop corrections. The model is phenomenologically equivalent to a “3 Higgs doublet” model where one of these doublets (the sneutrino) carries a lepton number which is broken spontaneously. The mass matrix of the neutralino-neutrino sector, in a “5×5” matrix display, is:

(2) \mathbb{M}=\begin{pmatrix}G_{2\times 2} & Q_{ab}^1 & Q_{ab}^2 & Q_{ab}^3\\ Q_{ab}^{1T} & 0 & -\mu & 0\\ Q_{ab}^{2T} & -\mu & 0 & \epsilon_3\\ Q_{ab}^{3T} & 0 & \epsilon_3 & 0\end{pmatrix}

and where the matrix G_{2\times 2}=\mbox{diag}(M_1, M_2) corresponds to the two “gauginos”. The blocks Q_{ab}^i together form a 2×3 matrix containing the vevs of the two Higgses H_1,H_2 and of the sneutrino, i.e., v_d, v_u and v_3 respectively. The remaining rows and columns correspond to the two Higgsinos and the tau neutrino. It is necessary to remember that gauginos and Higgsinos are the supersymmetric fermionic partners of the gauge fields and the Higgs fields, respectively.

I should explain a little more of the supersymmetric terminology. The neutralino is a hypothetical particle predicted by supersymmetry. The neutralinos are electrically neutral fermions, the lightest of which is typically stable. They can be seen as mixtures of the bino, the wino and the neutral Higgsinos (the superpartners of the B and W gauge bosons and of the neutral Higgs fields), and they are generally Majorana particles. Because these particles only interact through the electroweak sector, they are not directly produced at hadron colliders in copious numbers. They primarily appear as particles in cascade decays of heavier particles (decays that happen in multiple steps), usually originating from colored supersymmetric particles such as squarks or gluinos. In R-parity conserving models, the lightest neutralino is stable, and all supersymmetric cascade decays end up decaying into this particle, which leaves the detector unseen; its existence can only be inferred by looking for unbalanced momentum (missing transverse energy) in a detector. As a heavy, stable particle, the lightest neutralino is an excellent candidate to comprise the universe’s cold dark matter. In many models the lightest neutralino can be produced thermally in the hot early Universe and leaves approximately the right relic abundance to account for the observed dark matter. A lightest neutralino of roughly 10-10^4 GeV is the leading weakly interacting massive particle (WIMP) dark matter candidate.

Neutralino dark matter could be observed experimentally in nature either indirectly or directly. In the former case, gamma ray and neutrino telescopes look for evidence of neutralino annihilation in regions of high dark matter density such as the galactic or solar centre. In the latter case, special purpose experiments such as the (now running) Cryogenic Dark Matter Search (CDMS)  seek to detect the rare impacts of WIMPs in terrestrial detectors. These experiments have begun to probe interesting supersymmetric parameter space, excluding some models for neutralino dark matter, and upgraded experiments with greater sensitivity are under development.

If we return to the matrix (2) above, we observe that when we diagonalize it, a “seesaw”-like mechanism is again at work. There, the role of M_D, M_R can be easily recognized. The \nu_\tau mass is provided by

m_{\nu_\tau}\propto \dfrac{(v_3')^2}{M}

where v_3'\equiv \epsilon_3v_d+\mu v_3 and M is the largest gaugino mass. However, an arbitrary SUSY model still produces (unless M is “large” enough) a too large tau neutrino mass! To get a realistic, small tau neutrino mass (much lighter than the tau lepton itself, whose mass is 1777 MeV), we have to assume some kind of “universality” between the “soft SUSY breaking” terms at the GUT scale. This solution is not “natural”, but it does the job. In this case, the tau neutrino mass is predicted to be tiny due to cancellations between the two terms, which make the vev v_3' negligible. Thus, (2) can also be written as follows

(3) \begin{pmatrix}M_1 & 0 & -\frac{1}{2}g'v_d & \frac{1}{2}g'v_u & -\frac{1}{2}g'v_3\\ 0 & M_2 & \frac{1}{2}gv_d & -\frac{1}{2}gv_u & \frac{1}{2}gv_3\\ -\frac{1}{2}g'v_d & \frac{1}{2}gv_d & 0 & -\mu & 0\\ \frac{1}{2}g'v_u& -\frac{1}{2}gv_u& -\mu & 0 & \epsilon_3\\ -\frac{1}{2}g'v_3 & \frac{1}{2}gv_3 & 0 & \epsilon_3 & 0\end{pmatrix}
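This cancellation can be made concrete by diagonalizing matrix (3) numerically. The snippet below is a minimal sketch, not a phenomenological fit: all parameter values (in GeV) are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (GeV); assumed values, not fits to data
g, gp = 0.65, 0.36          # SU(2)_L and U(1)_Y gauge couplings
M1, M2 = 250.0, 500.0       # gaugino masses
mu, eps3 = 200.0, 1e-3      # Higgsino mass and bilinear RPV parameter
vu, vd = 170.0, 25.0        # Higgs vevs

def neutralino_neutrino_matrix(v3):
    """Tree-level 5x5 mass matrix of eq. (3) for a sneutrino vev v3."""
    a, b = gp / 2, g / 2
    return np.array([
        [M1,      0.0,    -a * vd,  a * vu, -a * v3],
        [0.0,     M2,      b * vd, -b * vu,  b * v3],
        [-a * vd, b * vd,  0.0,    -mu,      0.0  ],
        [a * vu, -b * vu, -mu,      0.0,     eps3 ],
        [-a * v3, b * v3,  0.0,     eps3,    0.0  ]])

def lightest_mass(v3):
    """Smallest |eigenvalue|: the tree-level tau neutrino mass."""
    return min(abs(np.linalg.eigvalsh(neutralino_neutrino_matrix(v3))))

# Aligned vevs: v3' = eps3*vd + mu*v3 = 0 -> massless tree-level neutrino
print(lightest_mass(-eps3 * vd / mu))   # ~ 0 (numerical noise only)
# Misaligned vevs: v3' != 0 -> small but nonzero neutrino mass
print(lightest_mass(0.1))
```

For aligned vevs (v_3'=0) the smallest eigenvalue vanishes up to rounding, while any misalignment switches on a tiny \nu_\tau mass, realizing the seesaw-like suppression discussed above.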

We can now study the elementary properties of neutrinos in some elementary superstring inspired models. In some of these models, the effective theory implies a supersymmetric E_6 GUT (an exceptional group), with the matter fields belonging to the 27-dimensional representation of E_6, plus additional singlet fields. The model contains additional neutral leptons in each generation, the neutral E_6 singlets, the gauginos and the Higgsinos. As in the previous model, but now with a larger number of fields, every neutral particle can “mix”, making the understanding of the neutrino masses quite hard if no additional simplifications or assumptions are made in the theory. In fact, several such mechanisms have been proposed in the literature to understand the neutrino masses. For instance, a huge neutral mixing mass matrix is reduced drastically down to an effective “3×3” neutrino mass matrix if we mix \nu and \nu^c with an additional neutral field T whose nature depends on the particular “model building” and “mechanism” we use. In some basis (\nu, \nu^c,T), the mass matrix can be rewritten

(4) M=\begin{pmatrix}0 & m_D & 0\\ m_D & 0 & \lambda_2v_R\\ 0 & \lambda_2v_R & \mu\end{pmatrix}

and where the \mu energy scale is (likely) close to zero. We distinguish two important cases:

1st. R-parity violation.

2nd. R-parity conservation and a “mixing” with the singlet.

In both cases, the sneutrinos, the superpartners of \nu^c, are assumed to acquire a v.e.v. of size v_R. In the first case, the T field corresponds to a gaugino with a Majorana mass \mu that can be generated at two loops! Usually \mu\approx 100GeV, and if we assume \lambda_2 v_R\approx 1 TeV, then additional dangerous mixings with the Higgsinos can be “neglected” and we are led to a neutrino mass about m_\nu\sim 0.1eV, in agreement with current bounds. The important conclusion here is that we have obtained the smallness of the neutrino mass without any fine tuning of the parameters! Of course, this statement is quite subjective, but there is no doubt that this class of arguments is compelling to some SUSY defenders!

In the second case, the field T corresponds to one of the E_6 singlets. We have to rely on the symmetries that may arise in superstring theory on specific Calabi-Yau spaces to restrict the Yukawa couplings to “reasonable” values. If we have \mu=0 in the matrix (4) above, we deduce that a massless neutrino and a massive Dirac neutrino are generated from this structure. If we include a possible Majorana mass term for the sfermion at a scale \mu\approx 100GeV, we get values of the neutrino mass similar to the previous case.
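Both regimes can be checked numerically from matrix (4). The following sketch (with illustrative values in GeV, writing L for \lambda_2v_R) verifies that \mu=0 yields a massless state plus a Dirac pair \pm\sqrt{m_D^2+L^2}, while a small \mu lifts the light state to m_\nu\approx \mu m_D^2/(m_D^2+L^2):

```python
import numpy as np

# Illustrative scales (GeV): Dirac mass, singlet scale L = lambda_2 * v_R,
# and a Majorana mass for T; all assumed values
mD, L, mu = 1.0, 1000.0, 100.0

def M(mu_term):
    """Mass matrix (4) in the (nu, nu^c, T) basis."""
    return np.array([[0.0, mD,  0.0],
                     [mD,  0.0, L  ],
                     [0.0, L,   mu_term]])

# mu = 0: one exactly massless neutrino plus a Dirac pair
eig0 = np.linalg.eigvalsh(M(0.0))
print(eig0)

# small mu: the light state picks up m_nu ~ mu * mD^2 / (mD^2 + L^2)
m_light = min(abs(np.linalg.eigvalsh(M(mu))))
print(m_light)
```

With these numbers the light eigenvalue sits at about \mu(m_D/L)^2\sim 10^{-4} GeV; rescaling m_D down to MeV values brings it into the sub-eV range quoted in the text.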

Final remark: mass matrices like the ones we have studied here have also been proposed without any embedding in a supersymmetric or other deeper theoretical framework. In that case, small tree level neutrino masses can be obtained without the use of large scales. That is, the structure of the neutrino mass matrix is quite “model independent” (like the CKM quark mixing matrix) if we simply “measure it”. Models reducing to the observed neutrino or quark mass mixing matrices can be built with the use of large energy scales OR by adding new (likely “dark”) particle species to the SM (not necessarily at very high energy scales!).

LOG#110. Basic Cosmology (V).



When the Universe had cooled down to T\sim eV, the neutrinos had already decoupled from the primordial plasma (soup). Protons, electrons and photons remained tightly coupled by 2 main types of scattering processes:

1) Compton scattering: e+\gamma \leftrightarrow e+\gamma

2) Coulomb scattering: e^-+p\leftrightarrow e^-+p

At that time, there was still very little neutral hydrogen (H), even though B_H>T, due to the small baryon fraction \eta_b: the enormous number of photons per baryon keeps hydrogen ionized.

The evolution of the free electron fraction is tracked by the ratio

X_e\equiv \dfrac{n_e}{n_e+n_H}=\dfrac{n_p}{n_p+n_H}

where n_p+n_H\approx n_b and the second equality is due to the neutrality of our universe, i.e., to the fact that n_e=n_p (by charge conservation). If e^-+p\longrightarrow H+\gamma remains in thermal equilibrium, then

\dfrac{n_en_p}{n_H}=\dfrac{n_e^0n_p^0}{n_H^0}\longrightarrow \dfrac{X_e^2}{1-X_e}=\dfrac{1}{n_e+n_H}\left[\left(\dfrac{m_eT}{2\pi}\right)^{3/2}e^{-\left[m_e+m_p-m_H\right]/T}\right]

where n_e+n_H=n_p+n_H=n_b-4n_{He}\approx n_b=\eta_b n_\gamma\sim 10^{-9}T^3, and E_0\equiv m_e+m_p-m_H=13\mbox{.}6 eV is the binding energy of hydrogen. At T\sim E_0, the right-hand side is then of order

\dfrac{1}{\eta_b n_\gamma}\left(\dfrac{m_eT}{2\pi}\right)^{3/2}e^{-E_0/T}\sim 10^{9}\left(\dfrac{m_e}{2\pi T}\right)^{3/2}e^{-E_0/T}\sim 10^{15}

It means that X_e\approx 1 at T\sim E_0. To follow X_e\longrightarrow 0 (recombination), we must go beyond thermal equilibrium.
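The Saha equation can be solved explicitly for X_e(T) as a quadratic. A quick sketch in natural units (eV), where \eta_b\approx 6\cdot 10^{-10} is an assumed input value:

```python
import math

# Constants in natural units (eV)
m_e = 511.0e3          # electron mass
E0 = 13.6              # hydrogen binding energy, m_e + m_p - m_H
eta_b = 6.0e-10        # assumed baryon-to-photon ratio
zeta3 = 1.2020569      # Riemann zeta(3)

def X_e(T):
    """Equilibrium ionization fraction from the Saha equation."""
    n_gamma = (2 * zeta3 / math.pi**2) * T**3   # photon number density
    n_b = eta_b * n_gamma                       # n_e + n_H ~ n_b
    S = (1.0 / n_b) * (m_e * T / (2 * math.pi))**1.5 * math.exp(-E0 / T)
    # X_e^2 / (1 - X_e) = S  ->  take the positive root of the quadratic
    return (-S + math.sqrt(S * S + 4 * S)) / 2

# X_e drops from ~1 to ~0 around T ~ 0.3 eV, far below E0 = 13.6 eV
for T in (1.0, 0.5, 0.35, 0.3, 0.25):
    print(T, X_e(T))
```

The steep fall of X_e near T\sim 0\mbox{.}3 eV (z\sim 1300) is the quantitative statement of the “delayed” recombination caused by the tiny baryon fraction.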

From the Boltzmann equation, we also get

a^{-3}\dfrac{d(n_ea^3)}{dt}=n_e^0n_p^0\langle \sigma v\rangle \left( \dfrac{n_Hn_\gamma}{n_H^0n_\gamma^0}-\dfrac{n_e^2}{n_e^0n_p^0}\right)

or equivalently

a^{-3}\dfrac{d(n_ea^3)}{dt}=n_b\langle \sigma v\rangle \left(\dfrac{n_H}{n_b}\dfrac{n_e^0n_p^0}{n_H^0}-\dfrac{n_e^2}{n_b}\right)


a^{-3}\dfrac{d(n_ea^3)}{dt}=n_b\langle \sigma v\rangle \left( (1-X_e)\left(\dfrac{m_eT}{2\pi}\right)^{3/2}e^{-E_0/T}-X_e^2n_b\right)

Using that n_e=n_bX_e and \dfrac{d}{dt}(n_ba^3)=0, we obtain

\dfrac{dX_e}{dt}=\left[(1-X_e)\beta -X_e^2n_b\alpha^{(2)}\right]


\beta\equiv \langle \sigma v\rangle \left(\dfrac{m_eT}{2\pi}\right)^{3/2}e^{-E_0/T}, the ionization rate, and

\alpha^{(2)}\equiv \langle \sigma v\rangle, the so-called recombination rate. The recombination is taken to the n=2 state of neutral hydrogen. Note that direct recombination to the ground state is NOT relevant here, since it produces an ionizing photon which immediately ionizes another neutral atom, so the net effect is zero. In fact, a detailed calculation provides

\alpha^{(2)}=9\mbox{.}78\dfrac{\alpha^2}{m_e^2}\left(\dfrac{E_0}{T}\right)^{1/2}\ln \left(\dfrac{E_0}{T}\right)

The numerical integration produces the following qualitative figure


The decoupling of photons from the primordial plasma is explained as

\mbox{Compton scattering rate}\sim\mbox{Expansion rate}

Mathematically speaking, this fact implies that

n_e\sigma_T\sim H

where \sigma_T is the Thomson cross section. For the processes we are interested in, it takes the value

\sigma_T=0\mbox{.}665\cdot 10^{-24}cm^2

and then

n_e\sigma_T=7\mbox{.}477\cdot 10^{-30}cm^{-1}X_e\Omega_bh^2a^{-3}

Thus, we deduce that

\dfrac{n_e\sigma_T}{H}=113X_e\left(\dfrac{\Omega_bh^2}{0\mbox{.}02}\right)\left(\dfrac{0\mbox{.}15}{\Omega_mh^2}\right)^{1/2}\left(\dfrac{1+z}{1000}\right)^{3/2}

and since X_e\leq 10^{-2} after recombination, this ratio drops below one precisely then: the decoupling of photons occurs during the time of recombination! In fact, the decoupling of photons at the time of recombination is what we observe when we look at the Cosmic Microwave Background (CMB). Fascinating, isn’t it?
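Taking the standard textbook estimate n_e\sigma_T/H\approx 113X_e(\Omega_bh^2/0\mbox{.}02)(0\mbox{.}15/\Omega_mh^2)^{1/2}((1+z)/1000)^{3/2} (an assumed approximation, with fiducial \Omega_bh^2=0\mbox{.}02, \Omega_mh^2=0\mbox{.}15), we can invert it for the redshift where the scattering rate equals the expansion rate:

```python
# Assumed fiducial parameters
Omega_b_h2, Omega_m_h2 = 0.02, 0.15

def scattering_to_expansion(Xe, z):
    """Approximate ratio n_e*sigma_T / H."""
    return 113.0 * Xe * (Omega_b_h2 / 0.02) * (0.15 / Omega_m_h2) ** 0.5 \
        * ((1 + z) / 1000.0) ** 1.5

def z_decoupling(Xe):
    """Redshift where the ratio drops to 1, for a frozen-out X_e."""
    return 1000.0 * (1.0 / (113.0 * Xe)) ** (2.0 / 3.0) - 1.0

# For X_e ~ 10^-2 the photons decouple around z ~ 900-1100:
# squarely within the recombination epoch
print(z_decoupling(0.01))
```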

Dark Matter (DM)

Today, we have strong evidence and hints that non-baryonic dark matter (DM) exists (otherwise, we should modify newtonian dynamics and/or the gravitational law at large scales, but it seems that even if we do that, we require this dark matter stuff).

In fact, from cosmological observations (and some astronomical and astrophysical measurements) we get the value of the DM energy density

\Omega_{DM}\sim 0\mbox{.}2-0\mbox{.}3

The most plausible candidates for DM are the Weakly Interacting Massive Particles (WIMPs, for short). Generic WIMP scenarios provide annihilations

X_{DM}+\bar{X}_{DM}\leftrightarrow l+\bar{l}

where X_{DM} is some “heavy” DM particle and the (ultra)weak interaction above produces light particles in the form of leptons and antileptons, tightly coupled to the cosmic plasma. The Boltzmann equation gives

a^{-3}\dfrac{d(n_Xa^3)}{dt}=\langle \sigma_X v\rangle \left( n_X^{(0)2}-n_X^2\right)

Define the yield (or ratio) Y_X=\dfrac{n_X}{T^3}. It is a convenient variable since the entropy density is

s=\dfrac{2\pi^2}{45}g_{\star s}T^3

and since sa^3=\mbox{constant}, then s\propto T^3 (for constant g_{\star s}), so Y_X is proportional to the comoving number n_Xa^3. Thus,

\dfrac{dY}{dt}=T^3\langle \sigma v\rangle \left( Y_{EQ}^2-Y^2\right)



Now, we can introduce a new time variable, say

x\equiv \dfrac{m}{T}

Then, using dx/dt=Hx (valid deep in the radiation era), we calculate

\dfrac{dY}{dx}=\dfrac{T^3\langle \sigma v\rangle}{Hx}\left(Y_{EQ}^2-Y^2\right)
For a radiation dominated (RD) Universe, \rho\propto T^4 implies that H\propto T^2 and H(x)=\dfrac{H(m)}{x^2}

In this case, we obtain

\dfrac{dY}{dx}=-\dfrac{\lambda}{x^2}\left(Y^2-Y_{EQ}^2\right)

with \lambda=\dfrac{m^3\langle \sigma v\rangle}{H(m)}

The final freeze out abundance is obtained in the limit Y_\infty=Y(x\longrightarrow \infty). Typically \lambda>>1, so that Y\approx Y_{EQ} until x\sim x_f; for x>>x_f the equilibrium yield drops exponentially, Y>>Y_{EQ}, and then

\dfrac{dY}{dx}\approx -\dfrac{\lambda Y^2}{x^2}


\dfrac{dY}{Y^2}\approx -\dfrac{\lambda\, dx}{x^2}

Integrating this equation,

\displaystyle{\int_{Y_f}^{Y_\infty}\dfrac{dY}{Y^2}=-\int_{x_f}^\infty \dfrac{\lambda\, dx}{x^2}}

and then

\dfrac{1}{Y_\infty}-\dfrac{1}{Y_f}=\dfrac{\lambda}{x_f}

Generally, Y_f>>Y_\infty, so that Y_\infty\approx \dfrac{x_f}{\lambda}. The freeze out temperature for WIMPs is obtained with the aid of the condition

n\langle \sigma v\rangle= H\longrightarrow x_f\sim 10
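The whole freeze-out story can be reproduced with a few lines of numerics. The sketch below integrates dY/dx=-(\lambda/x^2)(Y^2-Y_{EQ}^2) with an implicit (backward Euler) step, since the equation is stiff near equilibrium; the prefactor 0.145 in Y_{EQ} and the value of \lambda are illustrative assumptions:

```python
import math

lam = 1.0e5    # assumed lambda = m^3 <sigma v> / H(m)

def Y_eq(x):
    # Nonrelativistic equilibrium yield, Y_EQ ~ 0.145 x^{3/2} e^{-x}
    return 0.145 * x**1.5 * math.exp(-x)

def relic_yield(lam, x_start=1.0, x_end=50.0, h=0.01):
    """Backward-Euler integration of dY/dx = -(lam/x^2)(Y^2 - Y_EQ^2)."""
    Y, x = Y_eq(x_start), x_start
    while x < x_end:
        x += h
        A = h * lam / x**2
        # Implicit step: solve A*Y1^2 + Y1 - (Y + A*Y_eq^2) = 0 for Y1 > 0
        Y = (-1.0 + math.sqrt(1.0 + 4.0 * A * (Y + A * Y_eq(x)**2))) / (2.0 * A)
    return Y

Y_inf = relic_yield(lam)
print(Y_inf)        # plateau value, close to x_f/lam with x_f ~ 10
print(Y_eq(50.0))   # the equilibrium yield has collapsed far below it
```

The plateau reproduces the analytic estimate Y_\infty\approx x_f/\lambda, while Y_{EQ} keeps collapsing exponentially: the relic abundance is frozen out.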

A qualitative numerical solution of the “WIMP” miracle (and its freeze out) is given by the following sketch

The present abundance of heavy particle relics gives

\rho_X=mY_\infty T_0^3\left(\dfrac{a_1T_1}{a_0T_0}\right)^3\approx \dfrac{mY_\infty T_0^3}{30}

and where the effect of entropy dumping after the freeze-out is encoded into the factor

\left(\dfrac{a_1T_1}{a_0T_0}\right)^3=\dfrac{g_\star (T_0)}{g_\star (T_f)}\approx \dfrac{1}{30}

Moreover, the DM energy density can also be estimated:

\Omega_X=\Omega_{DM}=\dfrac{x_f}{\lambda}\dfrac{mT_0^3}{30\rho_c}=\dfrac{H (m) x_fT_0^3}{30m^2\langle \sigma v\rangle\rho_c}


\Omega_X=\left[\dfrac{4\pi^3Gg_\star (m)}{45}\right]^{1/2}\dfrac{x_fT_0^3}{30\langle \sigma v\rangle \rho_c}=0\mbox{.}3h^{-2}\left(\dfrac{x_f}{10}\right)\left(\dfrac{g_\star (m)}{100}\right)^{1/2}\dfrac{10^{-39}cm^2}{\langle \sigma v\rangle}
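Plugging numbers into this last formula exhibits the “WIMP miracle”: a weak-scale annihilation cross section automatically lands near the observed dark matter abundance. A sketch (h=0.7 and the inputs are assumed values):

```python
def Omega_X(sigma_v_cm2, x_f=10.0, g_star=100.0, h=0.7):
    """Relic abundance from the approximate formula above
    (<sigma v> expressed in cm^2, as in the text)."""
    return 0.3 * h**-2 * (x_f / 10.0) * (g_star / 100.0) ** 0.5 \
        * 1.0e-39 / sigma_v_cm2

# A weak-scale cross section ~ 10^-39 cm^2 gives Omega_X of order 0.1-1
print(Omega_X(1.0e-39))
```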

The main (currently favourite) candidates for WIMP particles are the so-called lightest supersymmetric particles (LSP). However, there are other possible choices: for instance, Majorana neutrinos (or other sterile neutrino species), Z' bosons, and other exotic particles. We observe that there is a deep connection between particle physics, astrophysics and cosmology when we talk about the energy density and its total composition from a fundamental viewpoint.

Remark: there are also WISP particles (Weakly Interacting Slim Particles), like (superlight) axions and other exotics, that could contribute to the DM energy density and/or the “dark energy”/vacuum energy that we observe today. There are many experiments searching for these particles in laboratories, colliders, DM detection experiments and astrophysical/cosmological observations (cosmic rays and other HEP phenomena are also investigated towards that goal).

See you in a next cosmological post!

LOG#107. Basic Cosmology (II).


Evolution of the Universe: the scale factor

The Universe expands, and its expansion rate is given by the Hubble parameter (not constant in general!)

\boxed{H(t)\equiv \dfrac{\dot{a}(t)}{a(t)}}

Remark (I): The Hubble “parameter” is only a “constant” at a given time/cosmological age; its present value is the Hubble constant, H_0=H(t_0).

Remark (II): The Hubble time defines a Hubble length about L_H=H^{-1}, and it defines the time scale of the Universe and its expansion “rate”.

The critical density of matter is a vital quantity as well:

\boxed{\rho_c\equiv \dfrac{3H^2}{8\pi G}}
We can also define the density parameters

\Omega_i\equiv \dfrac{\rho_i}{\rho_c}

This quantity represents the amount of substance for a certain particle species. The total composition of the Universe is the total density, or equivalently, the sum over all particle species of the density parameters, that is:

\Omega\equiv \sum_i\Omega_i=\dfrac{\rho}{\rho_c}
There is a nice correspondence between the sign of the curvature k and that of \Omega-1. Using the Friedmann’s equation

H^2=\dfrac{8\pi G}{3}\rho-\dfrac{k}{a^2}

then we have

\Omega-1=\dfrac{k}{H^2a^2}
Thus, we observe that

1st. \Omega>1 if and only if (iff) k=+1, i.e., iff the Universe is spatially closed (spherical/elliptical geometry).

2nd. \Omega=1 if and only if (iff) k=0, i.e., iff the Universe is spatially “flat” (euclidean geometry).

3rd. \Omega<1 if and only if (iff) k=-1, i.e., iff the Universe is spatially “open” (hyperbolic geometry).

In the early Universe, the curvature term is negligible (as far as we know). The reason is as follows:

\dfrac{k}{a^2}\propto a^{-2}<<\dfrac{8\pi G\rho}{3}\propto a^{-3}(MD),\; a^{-4}(RD)

as a goes to zero. MD means matter dominated Universe, and RD means radiation dominated Universe. Then, the Friedmann’s equation at early times reduces to

H^2\approx \dfrac{8\pi G}{3}\rho
Furthermore, the evolution of the curvature term

\Omega_k\equiv \Omega-1

is given by

\Omega-1=\dfrac{k}{H^2a^2}\propto \dfrac{1}{\rho a^2}\propto a(MD),a^2(RD)

and thus

\vert \Omega-1\vert=\begin{cases}(1+z)^{-1}, \mbox{if MD}\\ 10^4(1+z)^{-2}, \mbox{if RD}\end{cases}
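These scalings quantify the flatness problem: running |\Omega-1| back in time shows how exquisitely close to critical the early Universe must have been. A quick sketch, normalizing |\Omega-1|\sim 1 today and switching between the MD and RD behaviours at a rough equality redshift z\sim 10^4 (an assumption of the sketch):

```python
def omega_deviation(z):
    """|Omega - 1| as a function of redshift, normalized to ~1 today,
    using the MD scaling up to z ~ 10^4 and the RD scaling beyond."""
    z_eq = 1.0e4                      # rough matter-radiation transition
    if z <= z_eq:
        return 1.0 / (1 + z)          # matter dominated
    return 1.0e4 / (1 + z) ** 2       # radiation dominated

# Around BBN (z ~ 10^9) the Universe had to be critical to ~1 part in 10^14
print(omega_deviation(1.0e9))
```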

The spatial curvature will be given by

^{(3)}R=\dfrac{6k}{a^2}
and the curvature radius will be

\boxed{R=a\vert k\vert ^{-1/2}=H^{-1}\vert \Omega-1\vert ^{-1/2}}

We have arrived at an interesting result: the early Universe was nearly “critical”. A Universe close to the critical density is very flat!

On the other hand, supposing that a_0=1, we can integrate the Friedmann’s equation easily:

H^2=\left(\dfrac{\dot{a}}{a}\right)^2=H_0^2\left(\Omega_ra^{-4}+\Omega_ma^{-3}+\Omega_\Lambda+(1-\Omega_0)a^{-2}\right)

Then, we obtain

\dot{a}^2=H_0^2\left(\dfrac{\Omega_r}{a^2}+\dfrac{\Omega_m}{a}+\Omega_\Lambda a^2+(1-\Omega_0)\right)
We can make an analogy of this equation to a simple equation from “newtonian Mechanics”:

\dfrac{1}{2}\dot{x}^2+V(x)=E

Therefore, if we identify terms, we get that the density parameters work as a “potential”, with

V(a)=-\dfrac{H_0^2}{2}\left(\dfrac{\Omega_r}{a^2}+\dfrac{\Omega_m}{a}+\Omega_\Lambda a^2+(1-\Omega_0)\right)

and the total energy is equal to zero (a “machian” behaviour indeed!). In addition to this equation, we also get the equation of motion

\ddot{a}=-\dfrac{dV}{da}
The age of the Universe can be easily calculated (symbolically and algebraically):

t_0=\displaystyle{\int_0^1\dfrac{da}{\dot{a}}=\dfrac{1}{H_0}\int_0^1\dfrac{da}{a\sqrt{\Omega_ra^{-4}+\Omega_ma^{-3}+\Omega_\Lambda+(1-\Omega_0)a^{-2}}}}
This equation can be evaluated for some general and special cases. If we write p=\omega \rho for a single component, then

a\propto t^{2/3(1+\omega)} if \omega\neq -1

Moreover, 3 common cases arise:

1) Matter dominated Universe (MD): a\propto t^{2/3}

2) Radiation dominated Universe (RD): a\propto t^{1/2}

3) Vacuum dominated Universe (VD): a\propto e^{H_0t} (\omega=-1 for the cosmological constant, vacuum energy or dark energy).


We can now find out how much energy is contributed by the different components of the Universe, i.e., by the different density parameters.

Case 1. Photons.

The CMB temperature gives us “photons” with T_\gamma=2\mbox{.}725\pm 0\mbox{.}002K

The associated energy density is given by the Planck law of the blackbody, that is

\rho_\gamma=\dfrac{\pi^2}{15}T^4 with a chemical potential compatible with zero, \mu/T<9\cdot 10^{-5}

or equivalently

\Omega_\gamma=\Omega_r=\dfrac{2\mbox{.}47\cdot 10^{-5}}{h^2a^4}
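The coefficient 2\mbox{.}47\cdot 10^{-5} can be checked directly from T_\gamma and the critical density. A sketch in pure Python; the unit-conversion constants are standard values:

```python
import math

# Unit conversions and inputs (standard values)
K_to_eV = 8.6173e-5          # Boltzmann constant in eV/K
hbar_c = 1.9732e-5           # hbar*c in eV*cm
T_cmb = 2.725 * K_to_eV      # CMB temperature in eV

# Photon energy density rho_gamma = (pi^2/15) T^4, converted to eV/cm^3
rho_gamma = (math.pi**2 / 15) * T_cmb**4 / hbar_c**3

# Critical density: rho_c ~ 1.054e4 * h^2 eV/cm^3
rho_c_over_h2 = 1.054e4

Omega_gamma_h2 = rho_gamma / rho_c_over_h2
print(Omega_gamma_h2)        # ~ 2.47e-5, as quoted above
```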

Case 2. Baryons.

There are four established ways of measuring the baryon density:

i) Baryons in galaxies: \Omega_b\sim 0\mbox{.}02

ii) Baryons through the spectra of distant quasars: \Omega_b h^{1\mbox{.}5}\approx 0\mbox{.}02

iii) CMB anisotropies: \Omega_bh^2=0\mbox{.}024^{+0\mbox{.}004}_{-0\mbox{.}003}

iv) Big Bang Nucleosynthesis: \Omega_bh^2=0\mbox{.}0205\pm 0\mbox{.}0018

Note that these results are “globally” compatible!

Case 3. (Dark) Matter/Dust.

Galactic rotation curves remain “flat” well beyond the luminous matter, and the inferred mass-to-light ratio M/L provides a value about \Omega_m=0\mbox{.}3. The same argument works for clusters and other bigger structures. Moreover, the galaxy power spectrum is sensitive to \Omega_m h. It also gives \Omega_m\sim 0\mbox{.}2. On the other hand, the cosmic velocity field of galaxies allows us to derive \Omega_m\approx 0\mbox{.}3 as well. Finally, the CMB anisotropies give us the puzzling values:

\Omega_m\sim 0\mbox{.}25

\Omega_b\sim 0\mbox{.}05

We are forced to accept that either our cosmological and gravitational theory is a bluff or flawed, or the main component of “matter” is not of baryonic nature and does not emit electromagnetic radiation, AND the Standard Model of Particle Physics has no particle candidate (matter field) to fit that non-baryonic dark matter. It could be partially formed by neutrinos, but we already know that it can NOT be fully formed by neutrinos (hot dark matter). What is dark matter? We don’t know. Some candidates from beyond Standard Model physics: axions, new (likely massive or sterile) neutrinos, supersymmetric particles (the lightest supersymmetric particle, LSP, is stable if R-parity is conserved: the gravitino, the zino, the neutralino,…), ELKO particles, continuous spin particles, unparticles, preons, new massive gauge bosons, or something even stranger that we have not thought of yet! Of course, you could modify gravity at large scales to erase the need for dark matter, but it seems it is not easy at all to build a working modified gravitational theory or modified Newtonian (Einsteinian) dynamics that avoids the need for dark matter. MOND, MOG and similar ideas are interesting, but they are not thought to be the “optimal” solution at the current time. Maybe gravitons and quantum gravity are in the air of these dark issues? We don’t know…

Case 4. Neutrinos.

Relic neutrinos have NOT been directly observed, but we understand their physics, at least in the Standard Model and its electroweak sector. We also know they undergo flavor oscillations (as kaons do). The (cosmic) neutrino temperature can be determined and related to the CMB temperature. The idea is simple: shortly after neutrino decoupling in the early Universe, electron-positron annihilation took place, dumping its entropy into the photons but not into the (already decoupled) neutrinos. This causes a difference between the neutrino and photon temperatures “today”. Please, note that we are talking about “relic” neutrinos and photons from the Big Bang! The entropy density before annihilation was:

s(a_1)=\dfrac{2\pi^2}{45}T_1^3\left[2+\dfrac{7}{8}(2\cdot 2+3\cdot 2)\right]=\dfrac{43}{90}\pi^2 T_1^3

After the annihilation, we get

s(a_2)=\dfrac{2\pi^2}{45}\left[2T_\gamma^3+\dfrac{7}{8}(3\cdot 2)T_\nu^3\right]

Therefore, equating

s(a_1)a_1^3=s(a_2)a_2^3 and a_1T_1=a_2T_\nu (a_2)

\dfrac{43}{90}\pi^2(a_1T_1)^3=\dfrac{2\pi^2}{45}\left[2\left(\dfrac{T_\gamma}{T_\nu}\right)^3+\dfrac{42}{8}\right](a_2T_\nu (a_2))^3

\dfrac{43}{2}\pi^2(a_1T_1)^3=2\pi^2\left[2\left(\dfrac{T_\gamma}{T_\nu}\right)^3+\dfrac{42}{8}\right](a_2T_\nu (a_2))^3

and then, using a_1T_1=a_2T_\nu (a_2),

\left(\dfrac{T_\gamma}{T_\nu}\right)^3=\dfrac{11}{4}

or equivalently

\boxed{T_\nu=\sqrt[3]{\dfrac{4}{11}}T_\gamma\approx 1\mbox{.}9K}
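The entropy bookkeeping above is short enough to verify directly:

```python
# Entropy degrees of freedom before e+e- annihilation:
# photons (2) + 7/8 * (e+-: 2*2, three neutrino species: 3*2)
g_before = 2 + (7.0 / 8) * (2 * 2 + 3 * 2)   # = 43/4
g_nu = (7.0 / 8) * 3 * 2                     # = 21/4, the neutrino share

# Entropy conservation, s*a^3 = const with a1*T1 = a2*T_nu(a2), gives
# g_before = 2*(T_gamma/T_nu)^3 + g_nu, hence:
ratio_cubed = (g_before - g_nu) / 2          # (T_gamma/T_nu)^3 = 11/4
T_nu = (1 / ratio_cubed) ** (1.0 / 3) * 2.725
print(ratio_cubed, T_nu)                     # 2.75 and ~1.95 K
```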

In fact, the neutrino energy density can be given in two different ways, depending on whether the neutrinos are “massless” (i.e., relativistic) or massive:

1) Massless neutrinos: \Omega_\nu=\dfrac{1\mbox{.}68\cdot 10^{-5}}{h^2}

2) Massive neutrinos: \Omega_\nu= \dfrac{\sum m_\nu}{94h^2 \; eV}

Case 5. The dark energy/Cosmological constant/Vacuum energy.

The budget of the Universe provides (from cosmological and astrophysical measurements) the shocking result

\Omega\approx 1 with \Omega_M\approx 0\mbox{.}3

Then, there is some missing smooth, unclustered energy-matter “form”/“species”. It is the “dark energy”/vacuum energy/cosmological constant! It can be understood as a “special” pressure term in the Einstein’s equations, but one with NEGATIVE pressure! Evidence for this observation comes from luminosity-distance-redshift measurements of SNae, clusters, and the CMB spectrum! The cosmological constant/vacuum energy/dark energy dominates the Universe today since, it seems, we live in a (positively!) accelerated Universe!!!!! What can dark energy be? It can not be a “normal” matter field. As with dark matter, we believe that (excepting perhaps the scalar Higgs field/s) the SM has no candidate to explain the dark energy. What field could the dark energy be? Perhaps a scalar field, or something totally new and “unknown” yet.

In short, we live in a DARK, darkly Universe! Darkness is NOT coming: darkness has arrived and, if nothing changes, it will turn our local Universe even darker and darker!

See you in the next cosmological post!

LOG#105. Einstein’s equations.


In 1905, one of Einstein’s achievements was to establish the theory of Special Relativity from 2 single postulates and correctly deduce their physical consequences (some of them confirmed only later). The essence of Special Relativity, as we have seen, is that all inertial observers must agree on the speed of light “in vacuum”, and that the physical laws (those of Mechanics and Electromagnetism) are the same for all of them. Different observers will measure (and then see) different wavelengths and frequencies, but the product of wavelength and frequency is the same. Wavelength and frequency are thus Lorentz covariant, meaning that they change for different observers according to some fixed mathematical prescription depending on their tensorial character (scalar, vector, tensor,…) with respect to Lorentz transformations. The speed of light is Lorentz invariant.

On the other hand, Newton’s law of gravity describes the motion of planets and terrestrial bodies. It is all that we need even in contemporary rocket ships, unless those devices also carry atomic clocks or other tools of exceptional accuracy. Here is Newton’s law in potential form:

4\pi G\rho = \nabla ^2 \phi

In the special relativity framework, this equation has a terrible problem: if there is a change in the mass density \rho, then it must propagate everywhere instantaneously.  If you believe in the Special Relativity rules and in the speed of light invariance, it is impossible. Therefore, “Houston, we have a problem”.
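Away from sources (\rho=0), Newton's equation reduces to Laplace's equation \nabla^2\phi=0, satisfied by the static point-mass potential \phi=-GM/r; the inconsistency lies not in the static equation itself but in the instantaneous response of \phi to changes in \rho. A finite-difference check of the vacuum equation, with G=M=1 as an assumed normalization:

```python
import math

def phi(x, y, z):
    """Point-mass Newtonian potential with G = M = 1."""
    return -1.0 / math.sqrt(x * x + y * y + z * z)

def laplacian(f, x, y, z, h=1e-3):
    """Second-order central-difference approximation of the Laplacian."""
    return (f(x + h, y, z) + f(x - h, y, z)
            + f(x, y + h, z) + f(x, y - h, z)
            + f(x, y, z + h) + f(x, y, z - h)
            - 6 * f(x, y, z)) / h**2

# In vacuum (away from r = 0) the Laplacian vanishes
print(laplacian(phi, 1.0, 2.0, 2.0))   # ~ 0 up to discretization error
```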

Einstein was aware of it and he tried to solve this inconsistency. The final solution took him ten years.

The apparently silly and easy problem is to develop and describe all physics in the same way irrespective of whether one is accelerating or not. However, it is not easy or silly at all. It requires deep physical insight and a high-end mathematical language. Indeed, the most difficult part is the details of Riemannian geometry and tensor calculus on manifolds. Einstein got private aid from a friend called Marcel Grossmann. In fact, Einstein knew that SR was not compatible with Newton’s law of gravity. He (re)discovered the equivalence principle, stated by Galileo himself long before him, but he interpreted it more deeply and sought the proper language to incorporate that principle in such a way that it was compatible (at least locally) with special relativity! His “journey” from 1907 to 1915 was a hard job and a continuous struggle with tensorial methods…

Today, we are going to derive the Einstein field equations for gravity, a set of equations for the “metric field” g_{\mu \nu}(x). Hilbert in fact arrived at Einstein’s field equations with the use of the variational method we are going to use here, but Einstein’s methods were more physical and based on physical intuition. They are in fact “complementary” approaches. I urge you to read “The Meaning of Relativity” by A. Einstein for a summary of his discoveries.

We now proceed to derive Einstein’s Field Equations (EFE) for General Relativity (more properly, a relativistic theory of gravity):

Step 1. Let us begin with the so-called Einstein-Hilbert action (an ansatz).

S = \int d^4x \sqrt{-g} \left( \dfrac{c^4}{16 \pi G} R + \mathcal{L}_{\mathcal{M}} \right)

Be aware of  the square root of the determinant of the metric as part of the volume element.  It is important since the volume element has to be invariant in curved spacetime (i.e.,in the presence of a metric).  It also plays a critical role in the derivation.

Step 2. We perform the variation with respect to the (inverse) metric field g^{\mu \nu}:

\delta S = \int d^4 x \left( \dfrac{c^4}{16 \pi G} \dfrac{\delta (\sqrt{-g}R)}{\delta g^{\mu \nu}} + \dfrac{\delta (\sqrt{-g}\mathcal{L}_{\mathcal{M}})}{\delta g^{\mu \nu}} \right) \delta g^{\mu \nu}

Step 3. Extract the square root of the determinant of the metric as a common factor and use the product rule on the term with the Ricci scalar R:

\delta S = \int d^4 x \sqrt{-g} \left( \dfrac{c^4}{16 \pi G} \left ( \dfrac{\delta R}{\delta g^{\mu \nu}} +\dfrac{R}{\sqrt{-g}}\dfrac{\delta \sqrt{-g}}{\delta g^{\mu \nu}} \right) +\dfrac{1}{\sqrt{-g}}\dfrac{\delta ( \sqrt{-g}\mathcal{L}_{\mathcal{M}})}{\delta g^{\mu\nu}}\right) \delta g^{\mu \nu}

Step 4.  Use the definition of a Ricci scalar as a contraction of the Ricci tensor to calculate the first term:

\dfrac{\delta R}{\delta g^{\mu \nu}} = \dfrac{\delta (g^{\mu \nu}R_{\mu \nu})}{\delta g^{\mu \nu} }= R_{\mu\nu} + g^{\mu \nu}\dfrac{\delta R_{\mu \nu}}{\delta g^{\mu \nu}} = R_{\mu \nu} + \mbox{total derivative}

A total derivative does not make a contribution to the variation of the action, so it can be neglected when finding the extremal point. Indeed, this is the Stokes theorem in action. To show that the variation of the Ricci tensor is a total derivative, in case you don’t believe this fact, we can proceed as follows:

Check 1. Write  the Riemann curvature tensor:

R^{\rho}_{\, \sigma \mu \nu} = \partial _{\mu} \Gamma ^{\rho}_{\, \sigma \nu} - \partial_{\nu} \Gamma^{\rho}_{\, \sigma \mu}+ \Gamma^{\rho}_{\, \lambda \mu} \Gamma^{\lambda}_{\, \sigma \nu} - \Gamma^{\rho}_{\, \lambda \nu} \Gamma^{\lambda}_{\, \sigma \mu}

Note the striking resemblance with the non-abelian YM field strength curvature two-form

F=dA+A \wedge A, whose components are F_{\mu \nu} = \partial _{\mu} A_{\nu} - \partial _{\nu} A_{\mu} + k \left[ A_\mu , A_{\nu} \right].

There are many terms with indices in the Riemann tensor calculation, but we can simplify stuff.

Check 2. We have to calculate the variation of the Riemann curvature tensor with respect to the metric tensor:

\delta R^{\rho}_{\, \sigma \mu \nu} = \partial _{\mu} \delta \Gamma^{\rho}_{\, \sigma \nu} - \partial_\nu \delta \Gamma^{\rho}_{\, \sigma \mu} + \delta \Gamma ^{\rho}_{\, \lambda \mu} \Gamma^{\lambda}_{\, \sigma \nu} - \delta \Gamma^{\rho}_{\lambda \nu}\Gamma^{\lambda}_{\, \sigma \mu} + \Gamma^{\rho}_{\, \lambda \mu}\delta \Gamma^{\lambda}_{\sigma \nu} - \Gamma^{\rho}_{\lambda \nu} \delta \Gamma^{\lambda}_{\, \sigma \mu}

One cannot calculate the covariant derivative of a connection since it does not transform like a tensor.  However, the difference of two connections does transform like a tensor.

Check 3. Calculate the covariant derivative of the variation of the connection:

\nabla_{\mu} ( \delta \Gamma^{\rho}_{\sigma \nu}) = \partial _{\mu} (\delta \Gamma^{\rho}_{\, \sigma \nu}) + \Gamma^{\rho}_{\, \lambda \mu} \delta \Gamma^{\lambda}_{\, \sigma \nu} - \delta \Gamma^{\rho}_{\, \lambda \sigma}\Gamma^{\lambda}_{\mu \nu} - \delta \Gamma^{\rho}_{\, \lambda \nu}\Gamma^{\lambda}_{\, \sigma \mu}

\nabla_{\nu} ( \delta \Gamma^{\rho}_{\sigma \mu}) = \partial _\nu (\delta \Gamma^{\rho}_{\, \sigma \mu}) + \Gamma^{\rho}_{\, \lambda \nu} \delta \Gamma^{\lambda}_{\, \sigma \mu} - \delta \Gamma^{\rho}_{\, \lambda \sigma}\Gamma^{\lambda}_{\mu \nu} - \delta \Gamma^{\rho}_{\, \lambda \mu}\Gamma^{\lambda}_{\, \sigma \nu}

Check 4. Rewrite the variation of the Riemann curvature tensor as the difference of two covariant derivatives of the variation of the connection written in Check 3, that is, subtract the two previous expressions of Check 3.

\delta R^{\rho}_{\, \sigma \mu \nu} = \nabla_{\mu} \left( \delta \Gamma^{\rho}_{\, \sigma \nu}\right) - \nabla _{\nu} \left(\delta \Gamma^{\rho}_{\, \sigma \mu}\right)
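To see this explicitly (a step spelled out here for completeness), subtract the two expressions of Check 3. Since the connection is symmetric in its lower indices, the terms -\delta \Gamma^{\rho}_{\, \lambda \sigma}\Gamma^{\lambda}_{\mu \nu} are identical in both expressions and cancel, leaving exactly the six terms of Check 2:

```latex
\nabla_{\mu}\left(\delta \Gamma^{\rho}_{\,\sigma\nu}\right)
-\nabla_{\nu}\left(\delta \Gamma^{\rho}_{\,\sigma\mu}\right)
= \partial_{\mu}\delta\Gamma^{\rho}_{\,\sigma\nu}
-\partial_{\nu}\delta\Gamma^{\rho}_{\,\sigma\mu}
+\Gamma^{\rho}_{\,\lambda\mu}\delta\Gamma^{\lambda}_{\,\sigma\nu}
-\Gamma^{\rho}_{\,\lambda\nu}\delta\Gamma^{\lambda}_{\,\sigma\mu}
-\delta\Gamma^{\rho}_{\,\lambda\nu}\Gamma^{\lambda}_{\,\sigma\mu}
+\delta\Gamma^{\rho}_{\,\lambda\mu}\Gamma^{\lambda}_{\,\sigma\nu}
= \delta R^{\rho}_{\,\sigma\mu\nu}
```

This relation is often called the Palatini identity.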

Check 5. Contract the result of Check 4.

\delta R^{\rho}_{\, \mu \rho \nu} = \delta R_{\mu \nu} = \nabla_{\rho} \left( \delta \Gamma^{\rho}_{\, \mu \nu}\right) - \nabla _{\nu} \left(\delta \Gamma^{\rho}_{\, \rho \mu}\right)

Check 6. Contract the result of Check 5:

g^{\mu \nu}\delta R_{\mu \nu} = \nabla_\rho (g^{\mu \nu} \delta \Gamma^{\rho}_{\mu\nu})-\nabla_\nu (g^{\mu \nu}\delta \Gamma^{\rho}_{\rho \mu}) = \nabla _\sigma (g^{\mu \nu}\delta \Gamma^{\sigma}_{\mu \nu}) - \nabla_\sigma (g^{\mu \sigma}\delta \Gamma ^{\rho}_{\rho \mu})

Therefore, we have

g^{\mu \nu}\delta R_{\mu \nu} = \nabla_\sigma (g^{\mu \nu}\delta \Gamma^{\sigma}_{\mu\nu}- g^{\mu \sigma}\delta \Gamma^{\rho}_{\rho\mu})=\nabla_\sigma K^\sigma
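For completeness, here is a sketch of why the \nabla_\sigma K^\sigma term drops out of the variation, assuming the variations (and hence K^\sigma) vanish on the boundary \partial \mathcal{M}; h denotes the induced metric on the boundary and n_\sigma its unit normal:

```latex
\int_{\mathcal{M}} d^4x\, \sqrt{-g}\, \nabla_\sigma K^\sigma
= \oint_{\partial \mathcal{M}} d^3x\, \sqrt{\vert h \vert}\, n_\sigma K^\sigma = 0
```

Strictly speaking, when \delta \Gamma does not vanish on the boundary this is handled by adding the Gibbons-Hawking-York boundary term to the action, but that subtlety does not change the field equations derived here.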


Step 5. The variation of the second term in the action is the next step. Using the variation of the metric determinant (worked out below) together with the chain rule:

\dfrac{R}{\sqrt{-g}} \dfrac{\delta \sqrt{-g}}{\delta g^{\mu \nu}}=\dfrac{R}{\sqrt{-g}} \dfrac{-1}{2 \sqrt{-g}}(-1) g g_{\mu \nu}\dfrac{\delta g^{\mu \nu}}{\delta g^{\mu \nu}} =- \dfrac{1}{2}g_{\mu \nu} R

The reason for the last equalities is that g^{\alpha\mu}g_{\mu \nu}=\delta^{\alpha}_{\; \nu}, and then its variation is

\delta (g^{\alpha\mu}g_{\mu \nu}) = (\delta g^{\alpha\mu}) g_{\mu \nu} + g^{\alpha\mu}(\delta g_{\mu \nu}) = 0

Thus, multiplication by the inverse metric g^{\beta \nu} produces

\delta g^{\alpha \beta} = - g^{\alpha \mu}g^{\beta \nu}\delta g_{\mu \nu}

that is,

\dfrac{\delta g^{\alpha \beta}}{\delta g_{\mu \nu}}= -g^{\alpha \mu} g^{\beta \nu}
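As a quick sanity check (not part of the original derivation), one can verify this first-order formula numerically on a toy 2x2 example in plain Python. The matrix form of the identity is \delta (g^{-1}) \approx -g^{-1}(\delta g) g^{-1}:

```python
# Illustrative numerical check: for a small symmetric perturbation dg of a
# metric g, the inverse changes as delta(g^{-1}) ~ -g^{-1} (dg) g^{-1},
# i.e. the matrix form of  delta g^{ab} = -g^{am} g^{bn} delta g_{mn}.

def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(m):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

g = [[-1.0, 0.2], [0.2, 1.5]]           # toy 2x2 "metric" with det < 0
eps = 1e-6
dg = [[eps, 2 * eps], [2 * eps, -eps]]  # small symmetric perturbation

# Exact change of the inverse under the perturbation
g_pert = [[g[i][j] + dg[i][j] for j in range(2)] for i in range(2)]
ginv, ginv_pert = inv2(g), inv2(g_pert)
exact = [[ginv_pert[i][j] - ginv[i][j] for j in range(2)] for i in range(2)]

# First-order prediction: -g^{-1} dg g^{-1}
corr = matmul(matmul(ginv, dg), ginv)
pred = [[-corr[i][j] for j in range(2)] for i in range(2)]

# The two agree up to terms of order eps^2
err = max(abs(exact[i][j] - pred[i][j]) for i in range(2) for j in range(2))
print(err)
```

The residual `err` is of order eps^2, as expected for a first-order variation formula.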

On the other hand, using Jacobi’s formula for the derivative of a determinant, we get:

\delta g = \delta g_{\mu \nu} g g^{\mu \nu}


\dfrac{\delta g}{\delta g_{\alpha \beta}}= g g^{\alpha \beta}

because of the classical identity

g^{\alpha \beta}= \left( \det g \right)^{-1} \mbox{Cof} (g)^{\alpha \beta}


\mbox{Cof} (g)^{\alpha \beta} = \dfrac{\delta g}{\delta g_{\alpha \beta}}

and moreover

\delta \sqrt{-g}=-\dfrac{\delta g}{2 \sqrt{-g}}= -g\dfrac{ \delta g_{\mu \nu} g^{\mu \nu}}{2 \sqrt{-g}}


\delta \sqrt{-g}=\dfrac{1}{2}\sqrt{-g}g^{\mu \nu}\delta g_{\mu \nu}=-\dfrac{1}{2}\sqrt{-g}g_{\mu \nu}\delta g^{\mu \nu}
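As before, a toy numerical check (illustrative only) of \delta \sqrt{-g} = \frac{1}{2}\sqrt{-g}\, g^{\mu \nu}\delta g_{\mu \nu}, using a 2x2 matrix with negative determinant in the role of the metric:

```python
# Illustrative check of  delta sqrt(-g) = (1/2) sqrt(-g) g^{mn} delta g_{mn}
# for a toy 2x2 "metric" with negative determinant.
import math

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inv2(m):
    (a, b), (c, d) = m
    det = det2(m)
    return [[d / det, -b / det], [-c / det, a / det]]

g = [[-1.0, 0.3], [0.3, 2.0]]           # det g = -2.09 < 0
eps = 1e-6
dg = [[3 * eps, eps], [eps, -2 * eps]]  # small symmetric perturbation

# Exact change of sqrt(-det g) under the perturbation
g_pert = [[g[i][j] + dg[i][j] for j in range(2)] for i in range(2)]
exact = math.sqrt(-det2(g_pert)) - math.sqrt(-det2(g))

# First-order prediction: g^{mn} delta g_{mn} = trace(g^{-1} dg)
ginv = inv2(g)
trace = sum(ginv[i][j] * dg[j][i] for i in range(2) for j in range(2))
pred = 0.5 * math.sqrt(-det2(g)) * trace

err = abs(exact - pred)
print(err)
```

Again the residual is of order eps^2, confirming the first-order formula.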


Step 6. Define the stress-energy-momentum tensor from the third term in the action (the one coming from the matter lagrangian):

T_{\mu \nu} = - \dfrac{2}{\sqrt{-g}}\dfrac{\delta (\sqrt{-g} \mathcal{L}_{\mathcal{M}})}{\delta g^{\mu \nu}}

or equivalently

-\dfrac{1}{2}T_{\mu \nu} = \dfrac{1}{\sqrt{-g}}\dfrac{\delta (\sqrt{-g} \mathcal{L}_{\mathcal{M}})}{\delta g^{\mu \nu}}

Step 7. The extremal principle. The Hilbert action is extremal when its variation vanishes for arbitrary \delta g^{\mu\nu}, i.e., when the integrand is equal to zero:

\dfrac{c^4}{16\pi G}\left( R_{\mu \nu} - \dfrac{1}{2} g_{\mu \nu}R\right) - \dfrac{1}{2} T_{\mu \nu} = 0


\boxed{R_{\mu \nu} - \dfrac{1}{2}g_{\mu \nu} R = \dfrac{8\pi G}{c^4}T_{\mu\nu}}

Usually this is recast and simplified using the Einstein tensor

G_{\mu \nu}= R_{\mu \nu} - \dfrac{1}{2}g_{\mu \nu} R


\boxed{G_{\mu\nu}=\dfrac{8\pi G}{c^4}T_{\mu\nu}}
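A useful consequence (a standard manipulation, added here for completeness): contracting the boxed equation with g^{\mu\nu} in four dimensions, where g^{\mu\nu}g_{\mu\nu}=4 and T = g^{\mu\nu}T_{\mu\nu}, gives the trace-reversed form of the field equations:

```latex
R - 2R = \dfrac{8\pi G}{c^4} T
\;\Rightarrow\;
R = -\dfrac{8\pi G}{c^4} T,
\qquad
R_{\mu\nu} = \dfrac{8\pi G}{c^4}\left( T_{\mu\nu} - \dfrac{1}{2} g_{\mu\nu} T\right)
```

In particular, in vacuum (T_{\mu\nu}=0) the field equations reduce to R_{\mu\nu}=0.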

This deduction has been purely mathematical, but there is a deep physical picture behind it, and a huge number of physics issues one could go into. For instance, the metric formulation above couples naturally to fields of integral spin, which is good for bosons, but matter fermions also participate in gravity, coupling to it. Gravity is universal. To include those fermion fields, one can consider the metric and the connection to be independent of each other. That is the so-called Palatini approach.

Final remark: you can add to the EFE above a “constant” times the metric tensor, since the covariant derivative of the metric vanishes. This constant is the cosmological constant (a.k.a. dark energy in contemporary physics). Then, the most general form of the EFE is:

\boxed{G_{\mu\nu}+\Lambda g_{\mu\nu}=\dfrac{8\pi G}{c^4}T_{\mu\nu}}
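As a simple worked consequence (spelled out here, not in the original text): in vacuum, T_{\mu\nu}=0, and tracing the equation with g^{\mu\nu} gives R - 2R + 4\Lambda = 0, i.e. R = 4\Lambda, so

```latex
G_{\mu\nu} + \Lambda g_{\mu\nu} = 0
\;\Rightarrow\;
R_{\mu\nu} = \Lambda g_{\mu\nu}
```

That is, a vacuum with \Lambda > 0 is an Einstein space of constant positive curvature, whose maximally symmetric solution is de Sitter spacetime.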

Einstein’s additional term was added in order to make the Universe “static”. After Hubble’s discovery of the expansion of the Universe, Einstein blamed himself for the introduction of such a term, since it prevented him from predicting the expanding Universe. However, perhaps ironically, in 1998 we discovered that the expansion of the Universe is accelerating instead of decelerating due to gravity, and the simplest way to understand that phenomenon is a positive cosmological constant dominating the current era of the Universe. Fascinating, and more and more so thanks to the WMAP/Planck data. The cosmological constant/dark energy and the dark matter we seem to “observe” cannot be explained with the fields of the Standard Model, and therefore… They hint at new physics. The nature of this new physics is challenging, and much work is being done in order to find some particle or model into which dark matter and dark energy fit. However, it is not easy at all!

May the Einstein’s Field Equations be with you!