# LOG#120. Basic Neutrinology(V).

**Posted:** 2013/07/15

**Filed under:**Basic Neutrinology, Physmatics, The Standard Model: Basics |

**Tags:**bino, dark matter, Dirac mass term, E(6) group, exceptional group GUT, gauginos, GUT, GUT scale, Higgsino, LR models, LSP, Majorana mass term, MSSM, neutralino, neutrino masses, neutrino mixing, proton decay, proton lifetime, R-parity, R-parity violations, seesaw, sfermion, singlets, sneutrino, soft SUSY breaking terms, string inspired models, superparticle, superpartner, superpotential, SUSY models of neutrino masses, vev, WIMPs, wino, Yukawa coupling, Zinos

Supersymmetry (SUSY) is one of the most discussed ideas in theoretical physics. I am not going to discuss its details here (yet, in this blog). However, in this thread, some general features are worth mentioning. SUSY models generally include a symmetry called R-parity, and its breaking provides an interesting example of how we can generate neutrino masses WITHOUT using a right-handed neutrino at all. The price is simple: we have to add new particles, and then we enlarge the Higgs sector. Of course, from a purely phenomenological point of view, the issue is to discover SUSY! On the theoretical side, we can discuss any idea that experiments do not exclude. Today, after the last LHC run at 8 TeV, we have not found SUSY particles, so the lower bounds on the masses of supersymmetric particles have been raised. Which path will Nature follow? SUSY, LR models (via GUTs or some preonic substructure), or something we cannot even imagine right now? Only experiment will decide in the end…

In fact, in a generic SUSY model, the Higgs and lepton doublet superfields carry the same quantum numbers. We also have, in the so-called "superpotential", bilinear or trilinear pieces in the superfields that violate the (global) baryon and lepton numbers explicitly. Thus, they lead to mass terms for the neutrino, but also to proton decays with unacceptably high rates (corresponding to lifetimes below the current experimental lower limit on the proton lifetime, about $10^{33}$ years!). To protect the experimentally observed proton lifetime, we have to introduce BY HAND a new symmetry forbidding the terms that give that "too high" proton decay rate. In SUSY models, this role is generally played by the R-parity (the R-symmetry I mentioned above), and it is imposed in most of the simplest SUSY models, like the Minimal Supersymmetric Standard Model (MSSM). A general SUSY superpotential, including the R-parity violating pieces, can be written in this framework as

$$W = W_{MSSM} + \lambda_{ijk}\, L_i L_j E_k^c + \lambda'_{ijk}\, L_i Q_j D_k^c + \lambda''_{ijk}\, U_i^c D_j^c D_k^c + \varepsilon_i\, L_i H_u \qquad (1)$$

A less radical solution is to allow for the existence in the superpotential of a bilinear term with the structure $\varepsilon_3 L_3 H_u$. This is the simplest way to realize the idea of generating neutrino masses without spoiling the current limits on proton decay and the proton lifetime. The bilinear violation of R-parity implied by the $\varepsilon_3$ term leads, through a minimization condition, to a non-zero vacuum expectation value, or vev, for the tau sneutrino, $v_3 = \langle\tilde{\nu}_\tau\rangle$. In such a model, the tau neutrino acquires a mass due to the mixing between neutrinos and neutralinos. The other two neutrinos remain massless at tree level in this toy model, and it is supposed that they get masses from the scalar loop corrections. The model is phenomenologically equivalent to a "3 Higgs doublet" model where one of these doublets (the sneutrino) carries a lepton number which is broken spontaneously. The mass matrix for the neutralino-neutrino sector, in a "5×5" matrix display, is:

$$\mathcal{M} = \begin{pmatrix} M_1 & 0 & -\frac{1}{2}g' v_d & \frac{1}{2}g' v_u & -\frac{1}{2}g' v_3 \\ 0 & M_2 & \frac{1}{2}g v_d & -\frac{1}{2}g v_u & \frac{1}{2}g v_3 \\ -\frac{1}{2}g' v_d & \frac{1}{2}g v_d & 0 & -\mu & 0 \\ \frac{1}{2}g' v_u & -\frac{1}{2}g v_u & -\mu & 0 & \varepsilon_3 \\ -\frac{1}{2}g' v_3 & \frac{1}{2}g v_3 & 0 & \varepsilon_3 & 0 \end{pmatrix} \qquad (2)$$

and where the 2×2 block $\mathrm{diag}(M_1, M_2)$ corresponds to the two "gauginos". The off-diagonal 2×3 block contains the vevs of the two Higgses plus the sneutrino, i.e., $v_u$, $v_d$ and $v_3 = \langle\tilde{\nu}_\tau\rangle$, respectively. The remaining rows correspond to the two Higgsinos and the tau neutrino. It is necessary to remember that gauginos and Higgsinos are the supersymmetric fermionic partners of the gauge fields and the Higgs fields, respectively.

I should explain a little more the supersymmetric terminology. The *neutralino* is a hypothetical particle predicted by supersymmetry. There are several neutralinos: they are fermions and electrically neutral, and the lightest of them is typically stable. They can be seen as mixtures of binos and winos (the superpartners of the B and W gauge bosons) and of neutral Higgsinos, and they are generally Majorana particles. Because these particles only interact with the weak vector bosons (and the Higgs sector), they are not directly produced at hadron colliders in copious numbers. They primarily appear as particles in cascade decays of heavier particles (decays that happen in multiple steps), usually originating from colored supersymmetric particles such as squarks or gluinos. In R-parity conserving models, the lightest neutralino is stable, and all supersymmetric cascade decays end up decaying into this particle, which leaves the detector unseen; its existence can only be inferred by looking for unbalanced momentum (missing transverse energy) in a detector. As a heavy, stable particle, the lightest neutralino is an excellent candidate to comprise the universe's cold dark matter. In many models the lightest neutralino can be produced thermally in the hot early Universe and leave approximately the right relic abundance to account for the observed dark matter. A lightest neutralino of roughly 10–10000 GeV is the leading weakly interacting massive particle (WIMP) dark matter candidate.
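To get a feeling for why a thermal neutralino can leave "approximately the right relic abundance", here is a minimal numerical sketch of the so-called "WIMP miracle". It assumes the standard order-of-magnitude formula $\Omega_\chi h^2 \approx 3\times 10^{-27}\,\mathrm{cm^3\,s^{-1}}/\langle\sigma v\rangle$; the function name and the input value are illustrative, not taken from this post:

```python
# Back-of-the-envelope "WIMP miracle": the thermal relic abundance of a
# neutralino-like WIMP scales inversely with its annihilation cross section.
# The prefactor 3e-27 cm^3/s is the standard order-of-magnitude estimate
# (it hides logarithmic freeze-out details, so treat it as a sketch).

def relic_density_h2(sigma_v_cm3_per_s):
    """Approximate Omega_chi * h^2 for a thermal relic WIMP."""
    return 3e-27 / sigma_v_cm3_per_s

# A typical weak-scale annihilation cross section:
sigma_v_weak = 3e-26  # cm^3/s

omega_h2 = relic_density_h2(sigma_v_weak)
print(f"Omega_chi h^2 ~ {omega_h2:.2f}")  # ~0.1, close to the observed dark matter density
```

The striking point is that a generic weak-scale cross section lands near the measured dark matter density without tuning, which is why the neutralino is such a popular candidate.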

**Neutralino dark matter** could be observed experimentally in nature either indirectly or directly. In the former case, gamma ray and neutrino telescopes look for evidence of neutralino annihilation in regions of high dark matter density such as the galactic or solar centre. In the latter case, special purpose experiments such as the (now running) Cryogenic Dark Matter Search (CDMS) seek to detect the rare impacts of WIMPs in terrestrial detectors. These experiments have begun to probe interesting supersymmetric parameter space, excluding some models for neutralino dark matter, and upgraded experiments with greater sensitivity are under development.

If we return to the matrix (2) above, we observe that when we diagonalize it, a "seesaw"-like mechanism is again at work. There, the role of the "Dirac mass" is played by the entries proportional to the sneutrino vev $v_3$, and the gaugino masses act as the large scale. The tau neutrino mass is provided by

$$m_{\nu_\tau} \sim \frac{(g\, v_3)^2}{M}$$

where $v_3 = \langle\tilde{\nu}_\tau\rangle$ and $M$ is the largest gaugino mass. However, an arbitrary SUSY model produces (unless $M$ is "large" enough) still too large tau neutrino masses! To get a realistically small tau neutrino mass (far below even the 1777 MeV mass of its charged partner, the tau lepton), we have to assume some kind of "universality" between the "soft SUSY breaking" terms at the GUT scale. This solution is not "natural" but it does the work. In this case, the tau neutrino mass is predicted to be tiny due to cancellations between terms which make the vev $v_3$ negligible. Thus, (2) can also be written as follows
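As a rough numerical check of this seesaw-like relation, the sketch below assumes the schematic estimate $m_{\nu_\tau}\sim (g\,v_3)^2/M$; the parameter values are purely illustrative choices, not predictions of the model:

```python
# Seesaw-like estimate for the tau neutrino mass in the bilinear R-parity
# violating model: m_nu ~ (g * v3)^2 / M, with v3 the sneutrino vev and M
# the largest gaugino mass. All numbers below are illustrative.

def seesaw_mass_eV(g, v3_eV, M_eV):
    """m_nu ~ (g v3)^2 / M, with all mass scales in eV."""
    return (g * v3_eV) ** 2 / M_eV

g = 0.65      # SU(2) gauge coupling (approximate)
v3 = 0.5e6    # sneutrino vev ~ 0.5 MeV (illustrative)
M = 100e9     # gaugino mass ~ 100 GeV (illustrative)

m_nu = seesaw_mass_eV(g, v3, M)
print(f"m_nu_tau ~ {m_nu:.2f} eV")
```

Note how a sub-MeV sneutrino vev suppressed against a 100 GeV gaugino mass already brings the tau neutrino mass down to the eV ballpark; without the cancellations mentioned above, a larger $v_3$ would overshoot the experimental bounds.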

$$\mathcal{M} = \begin{pmatrix} \mathcal{M}_{\chi^0} & m \\ m^T & 0 \end{pmatrix}, \qquad m_{\nu_\tau} \simeq - m^T \mathcal{M}_{\chi^0}^{-1}\, m \qquad (3)$$

We can now study the elementary properties of neutrinos in some elementary superstring-inspired models. In some of these models, the effective theory implies a supersymmetric $E(6)$ (exceptional group) GUT with matter fields belonging to the 27-dimensional representation of the exceptional group, plus additional singlet fields. The model contains additional neutral leptons in each generation and the neutral singlets, the gauginos and the Higgsinos. As in the previous model, but now with a larger number of fields, every neutral particle can "mix", making the understanding of the neutrino masses quite hard unless additional simplifications or assumptions are introduced into the theory. In fact, several mechanisms have been proposed in the literature to understand the neutrino masses. For instance, a huge neutral mixing mass matrix is reduced drastically down to a "3×3" neutrino mass matrix if we mix $\nu$ and $\nu^c$ with an additional neutral field $S$ whose nature depends on the particular "model building" and "mechanism" we use. In some basis $(\nu, \nu^c, S)$, the mass matrix can be rewritten as

$$\mathcal{M} = \begin{pmatrix} 0 & m_D & 0 \\ m_D & 0 & M \\ 0 & M & \mu \end{pmatrix} \qquad (4)$$

and where the energy scale $\mu$ is (likely) close to zero. We distinguish two important cases:

1st. R-parity violation.

2nd. R-parity conservation and a “mixing” with the singlet.

In both cases, the sneutrinos, superpartners of the neutrinos, are assumed to acquire a nonzero v.e.v. In the first case, the field $S$ corresponds to a gaugino with a Majorana mass that can be generated at two loops! Under mild assumptions on the parameters, the additional dangerous mixing with the Higgsinos can be "neglected", and we are led to a neutrino mass in agreement with current bounds. The important conclusion here is that we have obtained the smallness of the neutrino mass without any fine-tuning of the parameters! Of course, this is quite subjective, but there is no doubt that this class of arguments is compelling to some SUSY defenders!

In the second case, the field $S$ corresponds to one of the singlets. We have to rely on the symmetries that may arise in superstring theory on specific Calabi-Yau spaces to restrict the Yukawa couplings to "reasonable" values. If we take $\mu = 0$ in the matrix (4) above, we deduce that a massless neutrino and a massive Dirac neutrino can be generated from this structure. If we include a possible Majorana mass term for the sfermion at some large scale, we get similar values of the neutrino mass as in the previous case.
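We can check this claim numerically. The sketch below assumes the $(\nu, \nu^c, S)$ mass matrix has Dirac entries $m_D$ and $M$ and a vanishing singlet scale; diagonalizing it produces one exactly massless state and a degenerate $\pm\sqrt{m_D^2 + M^2}$ pair, i.e., a massive Dirac neutrino. The values of $m_D$ and $M$ are illustrative:

```python
import numpy as np

# Mass matrix in the (nu, nu^c, S) basis with the singlet scale mu set to
# zero. Diagonalizing gives one massless eigenstate plus a pair of opposite
# eigenvalues +/- sqrt(m_D^2 + M^2): together they form one Dirac neutrino.
m_D, M = 1.0, 100.0
matrix = np.array([[0.0, m_D, 0.0],
                   [m_D, 0.0, M],
                   [0.0, M,   0.0]])

eigenvalues = np.sort(np.linalg.eigvalsh(matrix))  # ascending order
print(eigenvalues)  # approximately [-sqrt(m_D^2+M^2), 0, +sqrt(m_D^2+M^2)]
```

The zero eigenvalue is exact (the determinant of the matrix vanishes for $\mu = 0$), which is the algebraic content of the "one massless plus one Dirac neutrino" statement.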

**Final remark:** mass matrices like the ones we have studied here have also been proposed without embedding them in a supersymmetric or any other deeper theoretical framework. In that case, small tree-level neutrino masses can be obtained without the use of large scales. That is, the structure of the neutrino mass matrix is quite "model independent" (as is the CKM quark mixing matrix) if we just "measure it". Models reproducing the neutrino or quark mass and mixing matrices can be obtained either with the use of large energy scales OR by adding new (likely "dark") particle species to the SM (not necessarily at very high energy scales!).

# LOG#057. Naturalness problems.

**Posted:** 2012/12/02

**Filed under:**Physmatics, Quantum Gravity, The Standard Model: Basics |

**Tags:**CKM matrix, cosmic coincidence, cosmological constant, cosmological constant problem, critical energy density, curvature, dark energy, dark energy density, Dirac large number hypothesis, electroweak scale, energy, energy density, flatness problem, flavour problem, gauge hierarchy problem, Higgs boson, Higgs mechanism, Hubble constant, inflation, inflationary cosmologies, little hierarchy problem, mass, matter density, naturalness, naturalness problem, neutrino mass hierarchy, neutrino masses, neutrino oscillations, NO, NOSEX, parameter, parameter space, Planck era, Planck scale, PMNS matrix, QCD, QFT, quark-lepton complementarity, SM, Standard Cosmological Model, Standard Model, strong CP problem, theta term, types of naturalness, vacuum, vacuum energy, W boson, Z boson

In this short blog post, I am going to list some of the greatest "naturalness" problems in Physics. They have nothing to do with the delicious natural dishes I like, but there is a natural beauty and sweetness related to naturalness problems in Physics. In fact, they include some hierarchy problems and additional problems related to puzzling values of the free parameters in our theories.

**Naturalness problems** arise when the "naturally expected" property of some free parameters or fundamental "constants", namely to appear as quantities of order one, is violated, so that those parameters or constants appear as very large or very small quantities. That is, naturalness problems are problems of untuned "scales" of length, energy, field strength, and so on. A value of 0.99 or 1.1, or even 0.7 or 2.3, is "more natural" than, say, a value many orders of magnitude larger or smaller than one. Equivalently, imagine that the value of every fundamental and measurable physical quantity lies in the real interval $[0, +\infty)$. Then, 1 (or values very close to it) are "natural" values of the parameters, while the two extrema $0$ or $+\infty$ are "unnatural". As we do know, in Physics, zero values are usually explained by some "fundamental symmetry", while extremely large parameters (or even $\infty$) can be shown to be "unphysical" or "unnatural". In fact, renormalization in QFT was invented to avoid quantities that are "infinite" at first sight, and regularization provides prescriptions to assign finite "natural" values to quantities that are formally ill-defined or infinite. However, naturalness goes beyond these comments, and it arises in very different scenarios and physical theories. It is quite remarkable that naturalness can be phrased in terms of 3 of the most important "numbers" in Mathematics:

$$0, \qquad 1, \qquad \infty$$

**REMEMBER: Naturalness** of X means, thus, being 1 or close to it, while values approaching $0$ or $\infty$ are unnatural. Therefore, if some day you hear a physicist talking/speaking/lecturing about "naturalness", remember the triple $(0, 1, \infty)$ and then compare "some magnitude/constant/parameter" with those numbers. If it approaches 1, the parameter is natural, and it is unnatural if it approaches either of the other two numbers, zero or infinity!

I have never seen a systematic classification of naturalness problems into types. I am going to do it here today. We could classify naturalness problems into:

1st. **Hierarchy problems**. They are naturalness problems related to the mass/energy spectrum or the energy scales of interactions and fundamental particles.

2nd. **Nullity/Smallness problems**. These are naturalness problems related to free parameters which are, surprisingly, close to zero/null value, even when we have no knowledge of a deep reason to understand why it happens.

3rd. **Large number problems (or hypotheses).** This class of problems can equivalently be thought of as the reciprocal of nullity problems, but they arise naturally in cosmological contexts, when we consider a large number of particles (e.g., in "statistical physics"), or when we compare two theories living in very different "parameter spaces". Dirac pioneered this class of hypotheses when he noticed some large number coincidences relating quantities appearing in particle physics and cosmology. His large number hypothesis is an old example of this kind of naturalness problem.

4th. **Coincidence problems**. This 4th type of problems is related to why some different parameters of the same magnitude are similar in order of magnitude.

The following list of concrete naturalness problems is not going to be complete, but it can serve as a guide of what theoretical physicists are trying to understand better:

1. **The little hierarchy problem**. From the phenomenon called neutrino oscillations (NO) and neutrino oscillation experiments (NOSEX), we can know the differences between the squared masses of neutrinos. Furthermore, cosmological measurements allow us to put tight bounds on the total mass (energy) of light neutrinos in the Universe. The most conservative estimations give $\sum m_\nu \lesssim 1\,\mathrm{eV}$ as an upper bound, and bounds a few times stronger are quite likely to be true. On the other hand, NOSEX seems to say that there are two mass differences, $\Delta m^2_{21} \approx 7.6\times 10^{-5}\,\mathrm{eV}^2$ and $\vert\Delta m^2_{31}\vert \approx 2.4\times 10^{-3}\,\mathrm{eV}^2$. However, we don't yet know what kind of spectrum neutrinos have (normal, inverted or quasi-degenerate). Taking a neutrino mass of about 1 meV as a reference, the little hierarchy problem is the question of why neutrino masses are so light when compared with the remaining leptons, quarks and gauge bosons (excepting, of course, the gluon and photon, massless due to gauge invariance).

Why is $m_\nu \ll m_e \ll M_W, M_Z$?

We don’t know! Let me quote a wonderful sentence from a very famous short story by Asimov to describe this result and problem:

*“THERE* IS AS YET INSUFFICIENT *DATA* FOR A MEANINGFUL *ANSWER*.”
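As a quick numerical aside, the measured mass-squared splittings already fix the minimal masses of the two heavier neutrinos in the normal hierarchy. A sketch, using the approximate 2012-era best-fit values quoted above:

```python
import math

# Oscillations fix only mass *splittings*, not the absolute neutrino mass
# scale. Approximate best-fit values (circa 2012):
dm2_sol = 7.6e-5   # eV^2, solar splitting
dm2_atm = 2.4e-3   # eV^2, atmospheric splitting

# Minimal masses in the normal hierarchy (lightest neutrino ~ 0):
m_heaviest = math.sqrt(dm2_atm)  # lower bound on m3
m_middle = math.sqrt(dm2_sol)    # lower bound on m2

print(f"m3 >~ {m_heaviest * 1000:.0f} meV, m2 >~ {m_middle * 1000:.0f} meV")
```

So at least one neutrino weighs about 50 meV, roughly ten million times lighter than the electron: the little hierarchy in one line.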

2. **The gauge hierarchy problem.** The electroweak (EW) scale can generally be represented by the Z or W boson mass scale. Interestingly, from this summer's results, the Higgs boson mass seems to be of the same order of magnitude, more or less, as the gauge boson masses. Then, the electroweak scale is about $M_{EW}\sim 10^2\,\mathrm{GeV}$. Likely, it is also of the order of the Higgs mass. On the other hand, the Planck scale, where we expect (naively or not, that is another question!) quantum effects of gravity to naturally arise, is provided by the Planck mass scale:

$$M_P = \sqrt{\frac{\hbar c}{G_N}} \approx 1.22\times 10^{19}\,\mathrm{GeV}$$

or, dropping the $8\pi$ factor from the Einstein equations, the reduced Planck mass $\bar{M}_P = \sqrt{\hbar c/(8\pi G_N)} \approx 2.43\times 10^{18}\,\mathrm{GeV}$.

Why is the EW mass (energy) scale so small compared to the Planck mass, i.e., why are these masses so different? The problem is hard, since we do know that EW masses, e.g., for scalar particles like the Higgs particle (not protected by any SM gauge symmetry), should receive quantum contributions of order $\delta m_H^2 \sim \mathcal{O}(\Lambda^2)$, i.e., of order $M_P^2$ if the cutoff $\Lambda$ of the theory is taken at the Planck scale.
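The mismatch can be put in numbers with a one-liner. Here 246 GeV is the Higgs vev (a standard stand-in for the EW scale) and the Planck mass value is the usual one:

```python
# The gauge hierarchy in numbers: electroweak scale versus Planck scale.
M_EW = 246.0   # GeV, electroweak (Higgs vev) scale
M_P = 1.22e19  # GeV, Planck mass

ratio = M_EW / M_P
print(f"M_EW / M_P ~ {ratio:.1e}")  # ~ 2e-17: about seventeen orders of magnitude
```

A dimensionless ratio of $\sim 10^{-17}$ is about as far from "order one" as anything in particle physics, which is exactly what makes this a naturalness problem.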

*“THERE* IS AS YET INSUFFICIENT *DATA* FOR A MEANINGFUL *ANSWER*.”

3. **The cosmological constant (hierarchy) problem.** The cosmological constant $\Lambda$, from the so-called Einstein field equations of classical relativistic gravity

$$R_{\mu\nu} - \frac{1}{2}g_{\mu\nu}R + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}T_{\mu\nu}$$

is estimated to correspond to a vacuum energy density of about $\rho_\Lambda \sim 10^{-47}\,\mathrm{GeV}^4$ from the cosmological fitting procedures. The Standard Cosmological Model, with the CMB and other parallel measurements like large scale structures or supernovae data, agrees with such a cosmological constant value. However, in the framework of Quantum Field Theories, it should receive quantum corrections coming from the vacuum energies of the fields. Those contributions are unnaturally big, about $M_P^4 \sim 10^{76}\,\mathrm{GeV}^4$, or about $M_{SUSY}^4 \sim 10^{12}\,\mathrm{GeV}^4$ in the framework of supersymmetric field theories with TeV-scale SUSY symmetry breaking. Then, the problem is:

Why is $\rho_\Lambda^{obs}/\rho_\Lambda^{QFT} \sim 10^{-60}$–$10^{-123}$? Even with TeV or PeV fundamental SUSY (or higher) we have a serious mismatch here! The mismatch is about 60 orders of magnitude even in the best known theory! And it is about 122–123 orders of magnitude if we compare directly the cosmological constant vacuum energy we observe with the one we calculate (naively or not) with our current best theories using QFT or supersymmetric QFT! Then, this problem is a hierarchy problem and a large number problem as well. Again, and sadly, we don’t know why there is such a big gap between mass scales of the same thing! This problem is the biggest problem in theoretical physics and it is one of the worst predictions/failures in the history of Physics. However,
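A minimal numerical sketch of this bookkeeping, where the observed density and the two cutoffs are order-of-magnitude inputs rather than precise numbers:

```python
import math

# Orders-of-magnitude bookkeeping for the cosmological constant problem.
rho_obs = 1e-47          # GeV^4, observed dark energy density (order of magnitude)
M_P = 1.22e19            # GeV, Planck mass
rho_planck = M_P ** 4    # naive QFT vacuum energy with a Planck-scale cutoff
rho_susy = (1e3) ** 4    # vacuum energy with a TeV-scale SUSY cutoff (1e3 GeV)

print(f"Planck-cutoff mismatch: {math.log10(rho_planck / rho_obs):.0f} orders of magnitude")
print(f"TeV-SUSY mismatch: {math.log10(rho_susy / rho_obs):.0f} orders of magnitude")
```

The two printed numbers, roughly 123 and 59-60 orders of magnitude, are exactly the mismatches quoted above.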

*“THERE* IS AS YET INSUFFICIENT *DATA* FOR A MEANINGFUL *ANSWER*.”

4. **The strong CP problem/puzzle.** From neutron electric dipole moment measurements, theoretical physicists can constrain the so-called $\theta$-angle of QCD (Quantum Chromodynamics). The theta angle gives an extra contribution to the QCD lagrangian:

$$\mathcal{L}_\theta = \theta\,\frac{g_s^2}{32\pi^2}\, G^a_{\mu\nu}\tilde{G}^{a\,\mu\nu}$$

The theta angle is not fixed within the SM framework; it is a free parameter. Experimentally,

$$\vert\theta\vert \lesssim 10^{-10}$$

while, from the theoretical side, it could be any number in the interval $[-\pi, \pi]$. Why is $\theta$ so close to the zero/null value? That is the strong CP problem! Once again, we don’t know. Perhaps a new symmetry?
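A back-of-the-envelope version of this bound, assuming the standard rough estimate $d_n \sim 10^{-16}\,\theta\;e\cdot\mathrm{cm}$ for the theta-induced neutron electric dipole moment (the coefficient is an order-of-magnitude chiral estimate, not a precise lattice number):

```python
# Rough strong-CP estimate: the theta term induces a neutron electric dipole
# moment d_n ~ 1e-16 * theta (in e*cm), so the experimental bound on d_n
# translates directly into a bound on theta.
d_n_bound = 3e-26    # e*cm, experimental upper bound (order of magnitude)
coefficient = 1e-16  # e*cm per unit theta (rough chiral estimate)

theta_bound = d_n_bound / coefficient
print(f"theta <~ {theta_bound:.0e}")  # ~ 3e-10: absurdly close to zero
```

Compared with the "natural" range $[-\pi, \pi]$, a value below $10^{-10}$ begs for an explanation, which is why axion-type symmetries were invented.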

*“THERE* IS AS YET INSUFFICIENT *DATA* FOR A MEANINGFUL *ANSWER*.”

5. **The flatness problem/puzzle.** In the Standard Cosmological Model, also known as the $\Lambda\mathrm{CDM}$ model, the curvature of the Universe is related to the critical density and the Hubble "constant":

$$\Omega \equiv \frac{\rho}{\rho_c}, \qquad \rho_c = \frac{3H^2}{8\pi G}$$

There, $\rho$ is the total energy density contained in the whole Universe and $\rho_c$ is the so-called critical density. The flatness problem arises when we deduce from cosmological data that

$$\vert\Omega - 1\vert \lesssim 0.01$$

At the Planck scale era, we can even calculate that

$$\vert\Omega - 1\vert \lesssim 10^{-60}$$

This result means that the Universe is essentially "flat". However, why did the Universe have such a small curvature? Why is the current curvature still so "small"? We don’t know. However, cosmologists working on this problem say that "inflation" and "inflationary" cosmological models can (at least in principle) solve this problem. There are even more radical (and stranger) theories, such as varying-speed-of-light theories, trying to explain this, but they are less popular than inflationary cosmologies/theories. Indeed, inflationary theories are popular because they include scalar fields, similar in nature to the scalar particles that arise in the Higgs mechanism and other beyond-the-Standard-Model (BSM) theories. We don’t know yet if inflation theory is right, so
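As a side note, the critical density itself is easy to compute. A sketch using $H_0 \approx 70$ km/s/Mpc (an illustrative round value for the Hubble constant):

```python
import math

# Critical density of the Universe: rho_c = 3 H^2 / (8 pi G).
G = 6.674e-11    # m^3 kg^-1 s^-2, Newton's constant
Mpc = 3.086e22   # m, one megaparsec
H0 = 70e3 / Mpc  # s^-1, Hubble constant from 70 km/s/Mpc

rho_c = 3 * H0 ** 2 / (8 * math.pi * G)
print(f"rho_c ~ {rho_c:.2e} kg/m^3")  # ~ 9e-27 kg/m^3, a few protons per cubic meter
```

That the actual mean density sits within a percent of this tiny number today, and within one part in $10^{60}$ at the Planck era, is the flatness puzzle.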

*“THERE* IS AS YET INSUFFICIENT *DATA* FOR A MEANINGFUL *ANSWER*.”

6. **The flavour problem/puzzle.** The ratios of successive SM fermion mass eigenvalues (the electron, muon, and tau), as well as the angles appearing in one gadget called the CKM (Cabibbo-Kobayashi-Maskawa) matrix, are roughly of the same order of magnitude. The issue is harder to settle (but it is likely true as well) for constituent quark masses. However, why do they follow this particular pattern/spectrum and structure? Even more, there is a mysterious lepton-quark complementarity. The analogue matrix in the leptonic sector of such a CKM matrix is called the PMNS matrix (Pontecorvo-Maki-Nakagawa-Sakata matrix), and it describes the neutrino oscillation phenomenology. It turns out that the angles of the PMNS matrix are roughly complementary to those in the CKM matrix (remember that two angles are said to be complementary when they add up to 90 sexagesimal degrees). What is the origin of this lepton(neutrino)-quark(constituent) complementarity? In fact, the two questions are related since, roughly speaking, the mixing angles are related to the ratios of masses (quarks and neutrinos). Therefore, this problem, if solved, could shed light on the issue of the particle spectrum, or at least it could help us to understand the relationship between quark masses and neutrino masses. Of course, we don’t know how to solve this puzzle at the current time. And once again:

*“THERE* IS AS YET INSUFFICIENT *DATA* FOR A MEANINGFUL *ANSWER*.”

7. **Cosmic matter-dark energy coincidence.** At the current time, the densities of matter and vacuum energy are roughly of the same order of magnitude, i.e., $\rho_M \sim \rho_\Lambda$ (about $0.3$ and $0.7$ of the critical density, respectively). Why now? We do not know!
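In numbers, using the approximate present-day density fractions from standard cosmological fits:

```python
# The cosmic coincidence in numbers: present-day density fractions of dark
# energy and matter (approximate standard-cosmology values).
omega_lambda = 0.7  # dark energy fraction of the critical density
omega_matter = 0.3  # matter fraction of the critical density

ratio = omega_lambda / omega_matter
print(f"Omega_Lambda / Omega_m ~ {ratio:.1f}")  # order one "now" -- why?
```

Since $\rho_M$ dilutes as $a^{-3}$ while $\rho_\Lambda$ stays constant, this ratio was tiny in the past and will be huge in the future; its being of order one precisely today is the coincidence.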

*“THERE* IS AS YET INSUFFICIENT *DATA* FOR A MEANINGFUL *ANSWER*.”

And my weblog is only just beginning! See you soon in my next post! 🙂