# LOG#120. Basic Neutrinology(V).

**Posted:**2013/07/15

**Filed under:**Basic Neutrinology, Physmatics, The Standard Model: Basics |

**Tags:**bino, dark matter, Dirac mass term, E(6) group, exceptional group GUT, gauginos, GUT, GUT scale, Higgsino, LR models, LSP, Majorana mass term, MSSM, neutralino, neutrino masses, neutrino mixing, proton decay, proton lifetime, R-parity, R-parity violations, seesaw, sfermion, singlets, sneutrino, soft SUSY breaking terms, string inspired models, superparticle, superpartner, superpotential, SUSY models of neutrino masses, vev, WIMPs, wino, Yukawa coupling, Zinos

Supersymmetry (SUSY) is one of the most discussed ideas in theoretical physics. I will not discuss its details here (yet, in this blog). However, in this thread, some of its general features are worth mentioning. SUSY models generally include a symmetry called R-parity, and its breaking provides an interesting example of how we can generate neutrino masses WITHOUT using a right-handed neutrino at all. The price is simple: we have to add new particles, and then we enlarge the Higgs sector. Of course, from a purely phenomenological point of view, the issue is to discover SUSY! On the theoretical side, we can discuss any idea that experiments do not exclude. Today, after the last LHC run at 8 TeV, we have not found SUSY particles, so the lower bounds on the masses of supersymmetric particles have been increased. Which path will Nature follow? SUSY, LR models (via GUTs or some preonic substructure), or something we can not even imagine right now? Only experiment will decide in the end…

In fact, in a generic SUSY model, the Higgs and lepton doublet superfields carry the same quantum numbers. We also have, in the so-called “superpotential”, bilinear or trilinear terms in the superfields that violate the (global) baryon and lepton numbers explicitly. Thus, they lead to mass terms for the neutrino, but also to proton decay with unacceptably high rates (corresponding to proton lifetimes below the current experimental lower bound). To protect the proton’s experimental lifetime, we have to introduce BY HAND a new symmetry forbidding the terms that give that “too high” proton decay rate. In SUSY models, this role is generally played by the R-parity I mentioned above, and it is generally imposed in most of the simplest SUSY models, like the Minimal Supersymmetric Standard Model (MSSM). A general SUSY superpotential can be written in this framework as

(1)
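For concreteness, a standard reconstruction of such a superpotential (the notation, generation indices and normalizations below follow one common convention, not necessarily the one originally displayed here) reads:

```latex
% MSSM superpotential plus the explicit B- and L-violating pieces
W_{\mathrm{MSSM}} = y^u_{ij}\,\hat{Q}_i\hat{H}_u\hat{u}^c_j
                  + y^d_{ij}\,\hat{Q}_i\hat{H}_d\hat{d}^c_j
                  + y^e_{ij}\,\hat{L}_i\hat{H}_d\hat{e}^c_j
                  + \mu\,\hat{H}_u\hat{H}_d

W_{\not{R}} = \tfrac{1}{2}\lambda_{ijk}\,\hat{L}_i\hat{L}_j\hat{e}^c_k
            + \lambda'_{ijk}\,\hat{L}_i\hat{Q}_j\hat{d}^c_k
            + \tfrac{1}{2}\lambda''_{ijk}\,\hat{u}^c_i\hat{d}^c_j\hat{d}^c_k
            + \epsilon_i\,\hat{L}_i\hat{H}_u
```

The $\lambda''$ term violates baryon number, while the $\lambda$, $\lambda'$ and $\epsilon_i$ terms violate lepton number; allowing both types simultaneously is what destabilizes the proton, and R-parity forbids all of $W_{\not{R}}$.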

A less radical solution is to allow for the existence in the superpotential of a bilinear term with structure $\epsilon_3\hat{L}_3\hat{H}_u$. This is the simplest way to realize the idea of generating the neutrino masses without spoiling the current limits on proton decay/lifetime. The bilinear violation of R-parity implied by that term leads, through a minimization condition, to a non-zero sneutrino vacuum expectation value or vev, $v_3\equiv\langle\tilde{\nu}_\tau\rangle$. In such a model, the tau neutrino acquires a mass due to the mixing between neutrinos and neutralinos. The other two neutrinos remain massless in this toy model, and it is supposed that they get masses from the scalar loop corrections. The model is phenomenologically equivalent to a “3 Higgs doublet” model where one of these doublets (the sneutrino) carries lepton number, which is broken spontaneously. The mass matrix for the neutralino-neutrino sector, in a “5×5” matrix display, is:

(2)
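As a guide to the structure described in the text, in the basis $(-i\tilde{B},\,-i\tilde{W}_3,\,\tilde{H}_d^0,\,\tilde{H}_u^0,\,\nu_\tau)$ this matrix takes, in one common convention for bilinear R-parity breaking (overall signs and factors of $1/2$ vary between references), the schematic form:

```latex
M_N =
\begin{pmatrix}
M_1 & 0 & -\tfrac{1}{2}g'v_d & \tfrac{1}{2}g'v_u & -\tfrac{1}{2}g'v_3 \\
0 & M_2 & \tfrac{1}{2}g v_d & -\tfrac{1}{2}g v_u & \tfrac{1}{2}g v_3 \\
-\tfrac{1}{2}g'v_d & \tfrac{1}{2}g v_d & 0 & -\mu & 0 \\
\tfrac{1}{2}g'v_u & -\tfrac{1}{2}g v_u & -\mu & 0 & \epsilon_3 \\
-\tfrac{1}{2}g'v_3 & \tfrac{1}{2}g v_3 & 0 & \epsilon_3 & 0
\end{pmatrix}
```

Here $M_1$, $M_2$ are the gaugino masses, the off-diagonal 2×3 block contains the vevs $v_d$, $v_u$, $v_3$, and $\epsilon_3$ is the bilinear R-parity violating coupling.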

and where the upper-left 2×2 submatrix corresponds to the two “gauginos”, with masses $M_1$ and $M_2$. The mixing submatrix is a 2×3 matrix containing the vevs of the two Higgses plus the sneutrino, i.e., $v_d$, $v_u$ and $v_3$, respectively. The remaining two rows correspond to the Higgsinos and the tau neutrino. It is necessary to remember that gauginos and Higgsinos are the supersymmetric fermionic partners of the gauge fields and the Higgs fields, respectively.

I should explain the supersymmetric terminology a little more. The *neutralino* is a hypothetical particle predicted by supersymmetry. Neutralinos are electrically neutral fermions, the lightest of which is typically stable. They can be seen as mixtures of the bino and the neutral wino (the superpartners of the hypercharge and neutral SU(2) gauge bosons) with the neutral Higgsinos, and they are generally Majorana particles. Because these particles interact only through the electroweak sector, they are not directly produced at hadron colliders in copious numbers. They primarily appear as particles in cascade decays of heavier particles (decays that happen in multiple steps), usually originating from colored supersymmetric particles such as squarks or gluinos. In R-parity conserving models, the lightest neutralino is stable and all supersymmetric cascade decays end up decaying into this particle, which leaves the detector unseen; its existence can only be inferred by looking for unbalanced momentum (missing transverse energy) in a detector. As a heavy, stable particle, the lightest neutralino is an excellent candidate to comprise the universe’s cold dark matter. In many models the lightest neutralino can be produced thermally in the hot early Universe and leave approximately the right relic abundance to account for the observed dark matter. A lightest neutralino with a mass roughly in the range of $10$-$10^3$ GeV is the leading weakly interacting massive particle (WIMP) dark matter candidate.

**Neutralino dark matter** could be observed experimentally in nature either indirectly or directly. In the former case, gamma ray and neutrino telescopes look for evidence of neutralino annihilation in regions of high dark matter density such as the galactic or solar centre. In the latter case, special purpose experiments such as the (now running) Cryogenic Dark Matter Search (CDMS) seek to detect the rare impacts of WIMPs in terrestrial detectors. These experiments have begun to probe interesting supersymmetric parameter space, excluding some models for neutralino dark matter, and upgraded experiments with greater sensitivity are under development.

If we return to the matrix (2) above, we observe that when we diagonalize it, a “seesaw”-like mechanism is again at work, and the role of the largest gaugino mass $M$ can be easily recognized: the induced tau neutrino mass is given by a seesaw-type expression, suppressed by $M$. However, an arbitrary SUSY model still produces too large a tau neutrino mass (unless $M$ is “large” enough)! To get a realistic and small tau neutrino mass (small, say, compared with the tau lepton mass of about 1777 MeV), we have to assume some kind of “universality” between the “soft SUSY breaking” terms at the GUT scale. This solution is not “natural”, but it does the job. In this case, the tau neutrino mass is predicted to be tiny due to cancellations between the two terms, which make the vev $v_3$ negligible. Thus, (2) can be also written as follows
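The seesaw suppression that appears in this diagonalization is easy to verify numerically with a schematic 2×2 toy matrix (the numbers below are illustrative placeholders, not an actual SUSY spectrum): a heavy scale $M$ mixed with a massless state through a small entry $m$ yields one eigenvalue $\approx M$ and one $\approx -m^2/M$.

```python
import numpy as np

# Schematic seesaw: a "heavy" scale M (e.g. a gaugino mass) mixed with a
# massless state through a small entry m (coming from a vev). Values are
# illustrative only, in arbitrary units.
M = 1.0e3   # heavy mass
m = 1.0     # small mixing entry

matrix = np.array([[M, m],
                   [m, 0.0]])

# Sort eigenvalues by absolute value: the light one is suppressed as -m^2/M.
light, heavy = sorted(np.linalg.eigvalsh(matrix), key=abs)

print(light)   # ~ -m**2 / M, i.e. strongly suppressed
print(heavy)   # ~ M
```

The light eigenvalue is suppressed by a factor $m/M$ relative to the mixing entry itself, which is the essence of the mechanism.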

(3)

We can study now the elementary properties of neutrinos in some elementary superstring-inspired models. In some of these models, the effective theory implies a supersymmetric $E(6)$ (exceptional group) GUT with matter fields belonging to the 27-dimensional representation of $E(6)$, plus additional singlet fields. The model contains additional neutral leptons in each generation and the neutral singlets, the gauginos and the Higgsinos. As in the previous model, but now with a larger number of fields, every neutral particle can “mix”, making the understanding of the neutrino masses quite hard unless additional simplifications or assumptions are introduced into the theory. In fact, several such mechanisms have been proposed in the literature to understand the neutrino masses. For instance, a huge neutral mixing mass matrix can be reduced drastically down to an effective “3×3” neutrino mass matrix if we mix the ordinary neutrinos with an additional neutral field whose nature depends on the particular “model building” and “mechanism” we use. In a suitable basis, the mass matrix can be rewritten

(4)

and where the energy scale is (likely) close to zero. We distinguish two important cases:

1st. R-parity violation.

2nd. R-parity conservation and a “mixing” with the singlet.

In both cases, the sneutrinos, the superpartners of the neutrinos, are assumed to acquire a v.e.v. of some characteristic energy size. In the first case, the field corresponds to a gaugino with a Majorana mass that can be generated at two loops! If that mass is large enough, the additional dangerous mixing with the Higgsinos can be “neglected”, and we are led to a small neutrino mass, in agreement with current bounds. The important conclusion here is that we have obtained the smallness of the neutrino mass without any fine-tuning of the parameters! Of course, this statement is quite subjective, but there is no doubt that this class of arguments is compelling to some SUSY defenders!

In the second case, the field corresponds to one of the singlets. We have to rely on the symmetries that may arise in superstring theory on specific Calabi-Yau spaces to restrict the Yukawa couplings to “reasonable” values. For suitable entries in the matrix (4) above, a massless neutrino and a massive Dirac neutrino can be generated from this structure. If we include a possible Majorana mass term for the sfermion at some large scale, we get values of the neutrino mass similar to those of the previous case.

**Final remark:** mass matrices such as the ones we have studied here have been proposed without any embedding in a supersymmetric or other deeper theoretical framework. In that case, small tree-level neutrino masses can be obtained without the use of large scales. That is, the structure of the neutrino mass matrix is quite “model independent” (like that of the CKM quark mixing matrix) if we “measure it”. Models reducing to the neutrino or quark mass mixing matrices can be obtained with the use of large energy scales OR by adding new (likely “dark”) particle species to the SM (not necessarily at very high energy scales!).

# LOG#110. Basic Cosmology (V).

**Posted:**2013/06/23

**Filed under:**Cosmology, Physmatics |

**Tags:**Boltzmann equation, Compton scattering, Coulomb scattering, dark matter, free electron fraction evolution, LSP, photon decoupling, recombination, WIMP, WISP

## Recombination

When the Universe cooled down to a temperature of about $1\,\mathrm{MeV}$, the neutrinos decoupled from the primordial plasma (“soup”). Protons, electrons and photons remained tightly coupled by 2 main types of scattering processes:

1) Compton scattering: $e^-+\gamma\leftrightarrow e^-+\gamma$

2) Coulomb scattering: $e^-+p\leftrightarrow e^-+p$

At that time, there was still very little neutral hydrogen (H), even though the temperature was already below the hydrogen binding energy, due to the smallness of the baryon-to-photon ratio $\eta_b\sim 10^{-9}$.

The evolution of the free electron fraction is tracked by the ratio

$$X_e\equiv\dfrac{n_e}{n_e+n_H}=\dfrac{n_p}{n_p+n_H}$$

where the second equality is due to the neutrality of our universe, i.e., to the fact that $n_e=n_p$ (by charge conservation). If $X_e$ remains in thermal equilibrium, then the Saha equation holds:

$$\dfrac{X_e^2}{1-X_e}=\dfrac{1}{n_b}\left(\dfrac{m_eT}{2\pi}\right)^{3/2}e^{-\epsilon_0/T}$$

where $\epsilon_0=13.6\,\mathrm{eV}$ is the hydrogen binding energy and $n_b$ is the baryon number density. The right-hand side drops exponentially once the temperature falls well below $\epsilon_0$, so the equilibrium solution predicts an essentially complete recombination, $X_e\rightarrow 0$, at $T\sim 0.3\,\mathrm{eV}$. Since a small residual ionization actually survives, the electrons must fall out of thermal equilibrium around that time.
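The equilibrium (Saha) prediction can be evaluated directly. The sketch below works in natural units with all energies in eV and assumes a baryon-to-photon ratio $\eta_b\approx 6\times10^{-10}$; the numbers are illustrative, not a precision calculation.

```python
import math

# Equilibrium (Saha) free-electron fraction, natural units (energies in eV).
# Assumed inputs: eta_b ~ 6e-10 (baryon-to-photon ratio), eps0 = 13.6 eV.
M_E   = 5.11e5      # electron mass [eV]
EPS0  = 13.6        # hydrogen binding energy [eV]
ETA_B = 6.0e-10     # baryon-to-photon ratio (approximate)
ZETA3 = 1.20206     # Riemann zeta(3)

def xe_saha(T):
    """Solve X_e^2/(1-X_e) = S(T) for X_e, where
    S = (m_e*T/2pi)^(3/2) * exp(-eps0/T) / n_b  and  n_b = eta_b * n_gamma."""
    n_gamma = (2.0 * ZETA3 / math.pi**2) * T**3   # photon number density [eV^3]
    n_b = ETA_B * n_gamma
    S = (M_E * T / (2.0 * math.pi))**1.5 * math.exp(-EPS0 / T) / n_b
    # positive root of X_e^2 + S*X_e - S = 0
    return 0.5 * (-S + math.sqrt(S * S + 4.0 * S))

for T in (0.5, 0.35, 0.3, 0.25):   # temperatures in eV around recombination
    print(T, xe_saha(T))
```

The sharp drop of $X_e$ between $T\approx0.5\,$eV and $T\approx0.25\,$eV is the (equilibrium) recombination; the true residual value has to come from the Boltzmann equation.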

From the Boltzmann equation, we also get

or equivalently

i.e.

Using that $n_e=n_p=X_en_b$ and $n_H=(1-X_e)n_b$, we obtain

$$\dfrac{dX_e}{dt}=(1-X_e)\beta-X_e^2\,n_b\,\alpha^{(2)}$$

with

$$\beta=\alpha^{(2)}\left(\dfrac{m_eT}{2\pi}\right)^{3/2}e^{-\epsilon_0/T}$$

the ionization rate, and $\alpha^{(2)}$ the so-called **recombination rate**. Recombination is taken to proceed into the $n=2$ state of neutral hydrogen. Note that direct recombination into the ground state is NOT relevant here, since it produces an ionizing photon, which immediately ionizes another neutral atom, so the net effect is zero. In fact, the above equations provide

The numerical integration produces the following qualitative figure

The decoupling of photons from the primordial plasma is explained as

Mathematically speaking, this fact implies that the Compton scattering rate falls below the expansion rate,

$$\Gamma_\gamma\simeq n_e\sigma_T=X_en_b\sigma_T\lesssim H$$

where $\sigma_T$ is the Thomson cross section. For the processes we are interested in, it gives

and then

Thus, we deduce that

and where $\Gamma_\gamma\sim H$ implies that the decoupling of photons occurs during the time of recombination! In fact, the decoupling of photons at the time of recombination is what we observe when we look at the Cosmic Microwave Background (CMB). Fascinating, isn’t it?
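This condition can be checked with rough numbers (approximate present-day baryon density, Thomson cross section and Hubble parameter; every value below is an order-of-magnitude assumption, not a fitted cosmological parameter):

```python
import math

# Rough comparison of the Compton scattering rate, Gamma = X_e*n_b*sigma_T*c,
# with the Hubble rate H(z). All numbers are approximate, for illustration.
SIGMA_T = 6.65e-25   # Thomson cross section [cm^2]
C       = 3.0e10     # speed of light [cm/s]
N_B0    = 2.5e-7     # baryon number density today [cm^-3] (approximate)
H0      = 2.2e-18    # Hubble constant [1/s] (~68 km/s/Mpc)
OMEGA_M = 0.3
OMEGA_R = 9.0e-5

def gamma_compton(z, x_e):
    return x_e * N_B0 * (1 + z)**3 * SIGMA_T * C

def hubble(z):
    return H0 * math.sqrt(OMEGA_M * (1 + z)**3 + OMEGA_R * (1 + z)**4)

z = 1100.0                                   # around recombination
print(gamma_compton(z, 1.0) / hubble(z))     # fully ionized: Gamma >> H
print(gamma_compton(z, 0.01) / hubble(z))    # after recombination: Gamma ~ H
```

With full ionization the scattering rate exceeds the expansion rate by roughly two orders of magnitude at $z\sim1100$; once recombination drives $X_e$ down to the percent level, $\Gamma_\gamma\sim H$ and the photons decouple.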

## Dark Matter (DM)

Today, we have strong evidence and hints that non-baryonic dark matter (DM) exists (otherwise, we should modify Newtonian dynamics and/or the gravitational law at large scales, but it seems that even if we do that, we still require this dark matter stuff).

In fact, from cosmological observations (and some astronomical and astrophysical measurements) we get the value of the DM energy density, $\Omega_{DM}h^2\approx 0.12$.

The most plausible candidates for DM are the Weakly Interacting Massive Particles (WIMPs, for short). Generic WIMP scenarios provide annihilations

$$X\bar{X}\leftrightarrow l\bar{l}$$

where $X$ is some “heavy” DM particle and the (ultra)weak interaction above produces light particles in the form of leptons and antileptons, tightly coupled to the cosmic plasma. The Boltzmann equation gives

Define the yield (or ratio) $Y\equiv n_X/T^3$, and analogously its equilibrium value $Y_{EQ}\equiv n_X^{EQ}/T^3$. The yield is useful because, in the absence of number-changing interactions, $n_X$ simply dilutes like $T^3$, so that $Y$ remains constant. In terms of $Y$ and $Y_{EQ}$, the Boltzmann equation can be recast as an evolution equation that drives $Y$ towards $Y_{EQ}$.

Now, we can introduce a new time variable, say $x\equiv m/T$, where $m$ is the mass of the DM particle.

Then, we calculate

For a radiation dominated (RD) Universe, $H\propto T^2$ implies that

$$H(x)=\dfrac{H(m)}{x^2}$$

In this case, we obtain

$$\dfrac{dY}{dx}=-\dfrac{\lambda}{x^2}\left(Y^2-Y_{EQ}^2\right)$$

with

$$\lambda=\dfrac{m^3\langle\sigma v\rangle}{H(m)}$$

The final freeze-out abundance is obtained in the limit $x\rightarrow\infty$, $Y_\infty\equiv Y(x\rightarrow\infty)$. Typically, freeze-out happens at $x_f\sim 10$-$20$; for $x>x_f$, the equilibrium yield $Y_{EQ}$ drops exponentially,

so that $Y\gg Y_{EQ}$ and

$$\dfrac{dY}{dx}\simeq-\dfrac{\lambda Y^2}{x^2}$$

Integrating this equation between $x_f$ and $\infty$,

$$\dfrac{1}{Y_\infty}-\dfrac{1}{Y_f}=\dfrac{\lambda}{x_f}$$

and then, since $Y_f\gg Y_\infty$,

$$Y_\infty\simeq\dfrac{x_f}{\lambda}$$

Generally, $\lambda\gg 1$, and the freeze-out temperature for WIMPs is obtained with the aid of the condition that the annihilation rate matches the expansion rate, $\Gamma(T_f)\simeq H(T_f)$.

Indeed,

A qualitative numerical solution of the “WIMP” miracle (and its freeze out) is given by the following sketch
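That qualitative behaviour can be reproduced with a few lines of code: a toy integration of $dY/dx=-(\lambda/x^2)(Y^2-Y_{EQ}^2)$ with an illustrative $\lambda$ and a non-relativistic $Y_{EQ}\propto x^{3/2}e^{-x}$ (the parameter values are placeholders, not tuned to any real WIMP):

```python
import math

# Toy freeze-out: integrate dY/dx = -(lam/x^2)*(Y^2 - Yeq^2) with a simple
# explicit Euler scheme. lam and the Yeq normalization are illustrative only.
lam = 1.0e3

def y_eq(x):
    # Non-relativistic equilibrium yield, Yeq ~ a * x^(3/2) * exp(-x)
    return 0.145 * x**1.5 * math.exp(-x)

x, dx = 1.0, 1.0e-4
y = y_eq(x)                 # start in equilibrium
while x < 100.0:
    y += -dx * (lam / x**2) * (y * y - y_eq(x)**2)
    x += dx

print(y)            # frozen-out yield, roughly x_f / lam
print(y_eq(100.0))  # the equilibrium value would be utterly negligible
```

For $x\lesssim x_f$ the yield tracks $Y_{EQ}$; afterwards $Y_{EQ}$ collapses exponentially while $Y$ freezes out near $x_f/\lambda$, many orders of magnitude above the equilibrium value.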

The present abundance of heavy particle relics gives

and where the effect of entropy dumping after the freeze-out is encoded into the factor

with

Moreover, the DM energy density can also be estimated:

so
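This estimate is the celebrated “WIMP miracle”: the relic density depends mainly on the annihilation cross section, through the standard order-of-magnitude relation $\Omega_Xh^2\approx 3\times10^{-27}\,\mathrm{cm^3\,s^{-1}}/\langle\sigma v\rangle$ (an approximation, not an exact result):

```python
# "WIMP miracle" order-of-magnitude estimate: the relic density is set mainly
# by the annihilation cross section, Omega*h^2 ~ 3e-27 cm^3/s / <sigma v>.
def omega_h2(sigma_v):
    return 3.0e-27 / sigma_v

# A weak-scale cross section <sigma v> ~ 3e-26 cm^3/s gives Omega*h^2 ~ 0.1,
# i.e. roughly the observed dark matter abundance.
print(omega_h2(3.0e-26))
```

The fact that a generic electroweak-strength cross section lands so close to the measured abundance is precisely what makes WIMPs attractive.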

The main (current favourite) candidates for WIMP particles are the so-called lightest supersymmetric particles (LSP). However, there are other possible choices: for instance, Majorana neutrinos (or other sterile neutrino species), Z prime bosons, and other exotic particles. We observe here a deep connection between particle physics, astrophysics and cosmology when we talk about the energy density and its total composition, from a fundamental viewpoint.

**Remark:** there are also WISPs (Weakly Interacting Slim Particles), like (superlight) axions and other exotics, that could contribute to the DM energy density and/or to the “dark energy”/vacuum energy that we observe today. There are many experiments searching for these particles in laboratories, colliders, DM detection experiments and astrophysical/cosmological observations (cosmic rays and other HEP phenomena are also investigated towards that goal).

**See you in a next cosmological post!**

# LOG#105. Einstein’s equations.

**Posted:**2013/05/24

**Filed under:**General Relativity, Physmatics |

**Tags:**action, cosmological constant, dark energy, dark matter, Einstein, Einstein's field equations, Einstein-Hilbert action, General Relativity, Physmatics, Relativity, tensor methods, tensors, vacuum energy, variational calculus

In 1905, one of Einstein’s achievements was to establish the theory of Special Relativity from 2 single postulates and to correctly deduce their physical consequences (some of them, some time later). The essence of Special Relativity, as we have seen, is that all inertial observers must agree on the speed of light “in vacuum”, and that the physical laws (those of Mechanics and Electromagnetism) are the same for all of them. Different observers will measure (and then see) different wavelengths and frequencies, but the product of wavelength and frequency is the same. The wavelength and frequency are thus *Lorentz covariant*, meaning that they change for different observers according to some fixed mathematical prescription depending on their tensorial character (scalar, vector, tensor,…) with respect to Lorentz transformations. The speed of light is **Lorentz invariant**.

On the other hand, **Newton’s law of gravity** describes the motion of planets and terrestrial bodies. It is all that we need in contemporary rocket ships, unless those devices also carry atomic clocks or other tools of exceptional accuracy. Here is Newton’s law in potential form:

$$\nabla^2\phi=4\pi G\rho$$

In the special relativity framework, this equation has a terrible problem: if there is a change in the mass density $\rho$, then it must propagate everywhere instantaneously. If you believe in the Special Relativity rules and in the invariance of the speed of light, that is impossible. Therefore, “Houston, we have a problem”.

Einstein was aware of it and he tried to solve this inconsistency. The final solution took him ten years.

The apparently silly and easy problem is to develop and describe all of physics in the same way irrespective of whether one is accelerating or not. However, it is not easy or silly at all. It requires deep physical insight and a high-end mathematical language. Indeed, the most difficult part is the details of Riemannian geometry and tensor calculus on manifolds. Einstein got private aid from a friend, Marcel Grossmann. In fact, Einstein knew that SR was not compatible with Newton’s law of gravity. He (re)discovered the equivalence principle, stated by Galileo himself much before him, but he interpreted it more deeply and sought the proper language to incorporate that principle in such a way that it would be compatible (at least locally) with special relativity! His “journey” from 1907 to 1915 was a hard job and a continuous struggle with tensorial methods…

Today, we are going to derive the Einstein field equations for gravity, a set of equations for the “metric field” $g_{\mu\nu}(x)$. Hilbert in fact arrived at Einstein’s field equations using the variational method we are going to use here, but Einstein’s methods were more physical and based on physical intuition. They are in fact “complementary” approaches. I urge you to read “The Meaning of Relativity” by A. Einstein in order to read a summary of his discoveries.

We now proceed to derive Einstein’s Field Equations (EFE) for General Relativity (more properly, a relativistic theory of gravity):

**Step 1.** Let us begin with the so-called Einstein-Hilbert action plus a matter term (an ansatz):

$$S=\int\left[\dfrac{1}{2\kappa}R+\mathcal{L}_M\right]\sqrt{-g}\,d^4x$$

where $\kappa=8\pi G/c^4$.

Be aware of the square root of the determinant of the metric as part of the volume element. It is important since the volume element has to be invariant in curved spacetime (i.e., in the presence of a metric). It also plays a critical role in the derivation.

**Step 2.** We perform the variation with respect to the inverse metric field $g^{\mu\nu}$:

**Step 3.** Extract the square root of the determinant of the metric as a common factor and use the product rule on the term with the Ricci scalar $R$:

**Step 4.** Use the definition of the Ricci scalar as a contraction of the Ricci tensor, $R=g^{\mu\nu}R_{\mu\nu}$, to calculate the first term:

$$\delta\left(g^{\mu\nu}R_{\mu\nu}\right)=R_{\mu\nu}\,\delta g^{\mu\nu}+g^{\mu\nu}\,\delta R_{\mu\nu}$$

A total derivative does not make a contribution to the variation in the action principle, so it can be neglected to find the extremal point. Indeed, this is Stokes’ theorem in action. To show that the variation of the Ricci tensor is a total derivative, in case you don’t believe this fact, we can proceed as follows:

Check 1. Write the Riemann curvature tensor:

$$R^\rho_{\ \sigma\mu\nu}=\partial_\mu\Gamma^\rho_{\nu\sigma}-\partial_\nu\Gamma^\rho_{\mu\sigma}+\Gamma^\rho_{\mu\lambda}\Gamma^\lambda_{\nu\sigma}-\Gamma^\rho_{\nu\lambda}\Gamma^\lambda_{\mu\sigma}$$

Note the striking resemblance with the non-abelian YM field strength curvature two-form

$$F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu+\left[A_\mu,A_\nu\right]$$

There are many terms with indices in the Riemann tensor calculation, but we can simplify stuff.

Check 2. We have to calculate the variation of the Riemann curvature tensor with respect to the metric tensor:

$$\delta R^\rho_{\ \sigma\mu\nu}=\partial_\mu\delta\Gamma^\rho_{\nu\sigma}-\partial_\nu\delta\Gamma^\rho_{\mu\sigma}+\delta\Gamma^\rho_{\mu\lambda}\Gamma^\lambda_{\nu\sigma}+\Gamma^\rho_{\mu\lambda}\delta\Gamma^\lambda_{\nu\sigma}-\delta\Gamma^\rho_{\nu\lambda}\Gamma^\lambda_{\mu\sigma}-\Gamma^\rho_{\nu\lambda}\delta\Gamma^\lambda_{\mu\sigma}$$

One cannot calculate the covariant derivative of a connection since it does not transform like a tensor. However, the difference of two connections does transform like a tensor.

Check 3. Calculate the covariant derivative of the variation of the connection:

$$\nabla_\mu\left(\delta\Gamma^\rho_{\nu\sigma}\right)=\partial_\mu\delta\Gamma^\rho_{\nu\sigma}+\Gamma^\rho_{\mu\lambda}\delta\Gamma^\lambda_{\nu\sigma}-\Gamma^\lambda_{\mu\nu}\delta\Gamma^\rho_{\lambda\sigma}-\Gamma^\lambda_{\mu\sigma}\delta\Gamma^\rho_{\nu\lambda}$$

Check 4. Rewrite the variation of the Riemann curvature tensor as the difference of two covariant derivatives of the variation of the connection written in Check 3 (the terms with $\Gamma^\lambda_{\mu\nu}$-type contractions cancel between them):

$$\delta R^\rho_{\ \sigma\mu\nu}=\nabla_\mu\left(\delta\Gamma^\rho_{\nu\sigma}\right)-\nabla_\nu\left(\delta\Gamma^\rho_{\mu\sigma}\right)$$

Check 5. Contract the result of Check 4:

$$\delta R_{\sigma\nu}=\delta R^\rho_{\ \sigma\rho\nu}=\nabla_\rho\left(\delta\Gamma^\rho_{\nu\sigma}\right)-\nabla_\nu\left(\delta\Gamma^\rho_{\rho\sigma}\right)$$

Check 6. Contract the result of Check 5 with the inverse metric (which is covariantly constant):

$$g^{\sigma\nu}\delta R_{\sigma\nu}=\nabla_\mu\left(g^{\sigma\nu}\delta\Gamma^\mu_{\nu\sigma}-g^{\sigma\mu}\delta\Gamma^\rho_{\rho\sigma}\right)$$

Therefore, we have that $g^{\sigma\nu}\delta R_{\sigma\nu}=\nabla_\mu v^\mu$ is a total (covariant) divergence, whose integral against $\sqrt{-g}$ vanishes for variations that die off at the boundary.

Q.E.D.

**Step 5.** The variation of the second term in the action is the next step. Transform the coordinate system to one where the metric is diagonal and use the product rule:

The reason for the last equalities is that $g_{\mu\alpha}g^{\alpha\nu}=\delta_\mu^{\ \nu}$, and then its variation is

$$\delta g_{\mu\alpha}\,g^{\alpha\nu}+g_{\mu\alpha}\,\delta g^{\alpha\nu}=0$$

Thus, multiplication by the inverse metric produces

$$\delta g_{\mu\alpha}=-g_{\mu\beta}\,\delta g^{\beta\lambda}\,g_{\lambda\alpha}$$

that is,

$$\delta g_{\mu\nu}=-g_{\mu\alpha}g_{\nu\beta}\,\delta g^{\alpha\beta}$$

On the other hand, using the theorem for the derivative of a determinant we get that:

$$\delta\sqrt{-g}=-\dfrac{1}{2}\sqrt{-g}\,g_{\mu\nu}\,\delta g^{\mu\nu}$$

since

$$\delta g=g\,g^{\mu\nu}\,\delta g_{\mu\nu}$$

because of the classical identity

$$\det A=e^{\mathrm{tr}\,\ln A}$$

Indeed

$$\delta\left(\det A\right)=\det A\;\mathrm{tr}\left(A^{-1}\,\delta A\right)$$

and moreover

$$g_{\mu\nu}\,\delta g^{\mu\nu}=-g^{\mu\nu}\,\delta g_{\mu\nu}$$

so

$$\delta\sqrt{-g}=\dfrac{\delta(-g)}{2\sqrt{-g}}=\dfrac{1}{2}\sqrt{-g}\,g^{\mu\nu}\,\delta g_{\mu\nu}=-\dfrac{1}{2}\sqrt{-g}\,g_{\mu\nu}\,\delta g^{\mu\nu}$$

Q.E.D.
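The determinant identity used above, $\delta(\det g)=\det g\,\mathrm{tr}(g^{-1}\delta g)$, can be checked numerically with a finite difference on a random symmetric matrix standing in for the metric:

```python
import numpy as np

# Numerical check of Jacobi's formula, delta(det g) = det(g)*tr(g^-1 delta g),
# which underlies delta sqrt(-g) = -(1/2) sqrt(-g) g_{mu nu} delta g^{mu nu}.
rng = np.random.default_rng(0)

A = rng.normal(size=(4, 4))
g = A + A.T + 8.0 * np.eye(4)       # well-conditioned symmetric "metric"
dg = rng.normal(size=(4, 4))
dg = 1.0e-6 * (dg + dg.T)           # small symmetric perturbation

exact  = np.linalg.det(g + dg) - np.linalg.det(g)             # finite difference
jacobi = np.linalg.det(g) * np.trace(np.linalg.solve(g, dg))  # Jacobi's formula

print(exact, jacobi)   # the two numbers agree to first order in dg
```

The residual difference is second order in the perturbation, which is exactly what a first-order variational identity should leave behind.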

**Step 6.** Define the stress energy-momentum tensor as the third term in the action (the one coming from the matter Lagrangian):

$$T_{\mu\nu}\equiv-\dfrac{2}{\sqrt{-g}}\dfrac{\delta\left(\sqrt{-g}\,\mathcal{L}_M\right)}{\delta g^{\mu\nu}}$$

or equivalently

$$T_{\mu\nu}=-2\dfrac{\delta\mathcal{L}_M}{\delta g^{\mu\nu}}+g_{\mu\nu}\mathcal{L}_M$$

**Step 7.** The extremal principle. The variation of the Hilbert action will be an extremum when the integrand is equal to zero:

$$\dfrac{1}{2\kappa}\left(R_{\mu\nu}-\dfrac{1}{2}g_{\mu\nu}R\right)=\dfrac{1}{2}T_{\mu\nu}$$

i.e.,

$$R_{\mu\nu}-\dfrac{1}{2}g_{\mu\nu}R=\kappa T_{\mu\nu}$$

Usually this is recast and simplified using Einstein’s tensor

$$G_{\mu\nu}\equiv R_{\mu\nu}-\dfrac{1}{2}g_{\mu\nu}R$$

as

$$G_{\mu\nu}=\dfrac{8\pi G}{c^4}T_{\mu\nu}$$

This deduction has been mathematical, but there is a deep physical picture behind it. Moreover, there are a huge number of physics issues one could go into. For instance, these equations, as written, naturally couple gravity to fields with integral spin, which is good for bosons, but there are also matter fermions that participate in gravity, coupling to it. Gravity is universal. To include those fermion fields, one can consider the metric and the connection to be independent of each other. That is the so-called Palatini approach.

Final remark: you can add to the EFE above a “constant” times the metric tensor, since the “covariant derivative” of the metric vanishes. This constant $\Lambda$ is the cosmological constant (a.k.a. dark energy in contemporary physics). Then, the most general form of the EFE is:

$$R_{\mu\nu}-\dfrac{1}{2}g_{\mu\nu}R+\Lambda g_{\mu\nu}=\dfrac{8\pi G}{c^4}T_{\mu\nu}$$

Einstein’s additional term was originally added in order to make the Universe “static”. After Hubble’s discovery of the expansion of the Universe, Einstein blamed himself for the introduction of such a term, since it prevented him from predicting the expanding Universe. However, perhaps ironically, in 1998 we discovered that the Universe is accelerating instead of decelerating due to gravity, and the simplest way to understand that phenomenon is a positive cosmological constant dominating the current era of the Universe. Fascinating, and more and more so given the WMAP/Planck data. The cosmological constant/dark energy and the dark matter we seem to “observe” cannot be explained with the fields of the Standard Model, and therefore… they hint at new physics. The character of this new physics is challenging, and much work is being done in order to find some particle or model in which dark matter and dark energy fit. However, it is not easy at all!

May the Einstein’s Field Equations be with you!