LOG#124. Basic Neutrinology(IX).

In supersymmetric LR models, inflation, baryogenesis (and/or leptogenesis) and neutrino oscillations can be closely related to each other. Baryosynthesis in GUTs is, in general, inconsistent with inflationary scenarios: the exponential expansion during the inflationary phase washes out any baryon asymmetry generated previously at the GUT scale of your theory. One way around this obstacle is the following idea: you can generate the baryon or lepton asymmetry during the reheating process at the end of inflation. This is a quite non-trivial mechanism. In this case, the physics of the “fundamental” scalar field that drives inflation, the so-called inflaton, would have to violate the CP symmetry, just as we know that weak interactions do! The challenge of any baryosynthesis model is to predict the observed asymmetry. It is generally written as a baryon-to-photon (in fact, baryon-to-entropy) ratio. The baryon asymmetry is defined as

\dfrac{n_B}{s}\equiv \dfrac{(n_b-n_{\bar{b}})}{s}

At the present time, there is only matter and, at most, a very tiny amount of antimatter, so that n_{\bar{b}}\sim 0. The entropy density s is completely dominated by the contribution of relativistic particles, so it is proportional to the photon number density. From CMBR measurements, this proportionality turns out to be about s=7.05n_\gamma. Thus,

\dfrac{n_B}{s}\propto \dfrac{n_b}{n_\gamma}

From BBN, we know that

\dfrac{n_B}{n_\gamma}=(5.1\pm 0.3)\cdot 10^{-10}

and

\dfrac{n_B}{s}=(7.2\pm 0.4)\cdot 10^{-11}
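As a quick consistency check of these two numbers (a sketch using only the values quoted above):

```python
# Consistency check: convert the BBN baryon-to-photon ratio into a
# baryon-to-entropy ratio using s ~ 7.05 n_gamma (value quoted above).
n_B_over_n_gamma = 5.1e-10   # central BBN value quoted above
s_over_n_gamma = 7.05        # entropy density in units of photon density

n_B_over_s = n_B_over_n_gamma / s_over_n_gamma
print(f"n_B/s = {n_B_over_s:.2e}")   # -> about 7.2e-11, matching the text
```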

This value allows one to obtain the observed lepton asymmetry ratio by analogous reasoning.

On the other hand, it has been shown that “hybrid inflation” scenarios can be successfully realized in certain SUSY LR models with gauge groups

G_{SUSY}\supset G_{PS}=SU(4)_c\times SU(2)_L\times SU(2)_R

after SUSY symmetry breaking. This group is sometimes called the Pati-Salam group. The inflaton sector of this model is formed by two complex scalar fields H,\theta. At the end of inflation, they oscillate close to the SUSY minimum and decay, respectively, into right-handed sneutrinos and neutrinos \nu_i^c. Moreover, a primordial lepton asymmetry is generated by the decay of the superfield \nu_2^c emerging as a decay product of the inflaton field. The superfield \nu_2^c also decays into electroweak Higgs particles and (anti)lepton superfields. This lepton asymmetry is partially converted into baryon asymmetry by non-perturbative sphalerons!

Remark: (Sphalerons). From the Wikipedia entry we read that a sphaleron (Greek: σφαλερός “weak, dangerous”) is a static (time-independent) solution to the electroweak field equations of the SM of particle physics, and it is involved in processes that violate baryon and lepton number. Such processes cannot be represented by Feynman graphs, and are therefore called non-perturbative effects of the electroweak theory (an untested prediction right now). Geometrically, a sphaleron is simply a saddle point of the electroweak potential energy (in the infinite-dimensional field space), much like the saddle point of the surface z(x,y)=x^2-y^2 in three-dimensional analytic geometry. In the Standard Model, processes violating baryon number convert three baryons to three antileptons, and related processes. This violates conservation of baryon number and lepton number, but the difference B-L is conserved. In fact, a sphaleron may convert baryons to antileptons and antibaryons to leptons; hence a quark may be converted into 2 antiquarks and an antilepton, and an antiquark may be converted into 2 quarks and a lepton. A sphaleron is similar to the midpoint (\tau=0) of the instanton, so it is non-perturbative. This means that under normal conditions sphalerons are unobservably rare. However, they would have been more common at the higher temperatures of the early Universe.
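To make the B and L bookkeeping explicit, here is a minimal sketch checking the quantum numbers of the sphaleron-induced transition “three baryons → three antileptons” at the quark level (nine quarks in, three antileptons out):

```python
# Sphaleron bookkeeping: 3 baryons (9 quarks) -> 3 antileptons.
# Each quark carries B = 1/3, L = 0; each antilepton carries B = 0, L = -1.
from fractions import Fraction

initial = {"B": 9 * Fraction(1, 3), "L": 0}   # nine quarks
final   = {"B": 0, "L": -3}                   # three antileptons

dB = final["B"] - initial["B"]   # -3: baryon number is violated
dL = final["L"] - initial["L"]   # -3: lepton number is violated
dBmL = (final["B"] - final["L"]) - (initial["B"] - initial["L"])

print(dB, dL, dBmL)  # -> -3 -3 0 : B and L change, but B - L is conserved
```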

The resulting lepton asymmetry can be written as a function of a number of parameters, among them the neutrino masses and the mixing angles, and finally this result can be compared with the observational constraints on the baryon asymmetry quoted above. However, this topic is highly non-trivial: it is not obvious that solutions satisfying those constraints and other physical requirements can be found with natural values of the model parameters. In particular, it turns out that the neutrino masses and mixing angles which predict sensible values for the baryon or lepton asymmetry are also consistent with the values required to solve the solar neutrino problem we have mentioned in this thread.


LOG#120. Basic Neutrinology(V).

Supersymmetry (SUSY) is one of the most discussed ideas in theoretical physics. I will not discuss its details here (yet, in this blog). However, in this thread, some of its general features are worth mentioning. SUSY models generally include a symmetry called R-parity, and its breaking provides an interesting example of how we can generate neutrino masses WITHOUT using a right-handed neutrino at all. The price is simple: we have to add new particles and thus enlarge the Higgs sector. Of course, from a purely phenomenological point of view, the issue is to discover SUSY! On the theoretical side, we can discuss any idea that experiments do not exclude. Today, after the last LHC run at 8 TeV, we have not found SUSY particles, so the lower bounds on supersymmetric particle masses have been pushed up. Which path will Nature follow? SUSY, LR models -via GUTs or some preonic substructure-, or something we can not even imagine right now? Only experiment will decide in the end…

In fact, in a generic SUSY model, the Higgs and lepton doublet superfields carry the same SU(3)_c\times SU(2)_L\times U(1)_Y quantum numbers. We also have, in the so-called “superpotential”, bilinear or trilinear terms in the superfields that violate the (global) baryon and lepton numbers explicitly. Thus, they lead to mass terms for the neutrino, but also to proton decay with unacceptably high rates (i.e., proton lifetimes below the current experimental lower limit, about 10^{33} years!). To protect the proton lifetime, we have to introduce BY HAND a new symmetry forbidding the terms that give that “too high” proton decay rate. In SUSY models, this role is generally played by the R-symmetry I mentioned above, and it is introduced in most of the simplest models including SUSY, like the Minimal Supersymmetric Standard Model (MSSM). The dangerous part of a general SUSY superpotential can be written in this framework as

(1) \mathcal{W}'=\lambda_{ijk}L_iL_jE_k^c+\lambda'_{ijk}L_iQ_jD_k^c+\lambda''_{ijk}D_i^cD_j^cU_k^c+\epsilon_iL_iH_2

A less radical solution is to allow for the existence in the superpotential of a single bilinear term with structure \epsilon_3L_3H_2. This is the simplest way to realize the idea of generating neutrino masses without spoiling the current limits on the proton lifetime. The bilinear violation of R-parity implied by the \epsilon_3 term leads, through the minimization conditions, to a non-zero vacuum expectation value (vev) v_3. In such a model, the \tau neutrino acquires a mass due to the mixing between neutrinos and neutralinos. The \nu_e, \nu_\mu neutrinos remain massless in this toy model, and it is supposed that they get masses from scalar loop corrections. The model is phenomenologically equivalent to a “3 Higgs doublet” model where one of these doublets (the sneutrino) carries a lepton number which is broken spontaneously. The mass matrix of the neutralino-neutrino sector, in a “5×5” matrix display, is:

(2) \mathbb{M}=\begin{pmatrix}G_{2\times 2} & Q_{ab}^1 & Q_{ab}^2 & Q_{ab}^3\\ Q_{ab}^{1T} & 0 & -\mu & 0\\ Q_{ab}^{2T} & -\mu & 0 & \epsilon_3\\ Q_{ab}^{3T} & 0 & \epsilon_3 & 0\end{pmatrix}

and where the matrix G_{2\times 2}=\mbox{diag}(M_1, M_2) corresponds to the two “gauginos”. The matrix Q_{ab} is a 2×3 matrix and it contains the vevs of the two Higgs doublets H_1,H_2 plus the sneutrino, i.e., v_d, v_u, v_3 respectively. The remaining rows and columns correspond to the two Higgsinos and the tau neutrino. It is necessary to remember that gauginos and Higgsinos are the supersymmetric fermionic partners of the gauge fields and the Higgs fields, respectively.

I should explain a little more of the supersymmetric terminology. The neutralino is a hypothetical particle predicted by supersymmetry. There are several neutralinos: electrically neutral fermions, the lightest of which is typically stable. They can be seen as mixtures of the bino and the neutral wino (the superpartners of the B and W^3 gauge bosons) together with the neutral Higgsinos, and they are generally Majorana particles. Because these particles only interact with the weak vector bosons, they are not directly produced at hadron colliders in copious numbers. They primarily appear as particles in cascade decays of heavier particles (decays that happen in multiple steps), usually originating from colored supersymmetric particles such as squarks or gluinos. In R-parity conserving models, the lightest neutralino is stable and all supersymmetric cascade decays end up decaying into this particle, which leaves the detector unseen, so its existence can only be inferred by looking for unbalanced momentum (missing transverse energy) in a detector. As a heavy, stable particle, the lightest neutralino is an excellent candidate for the Universe’s cold dark matter. In many models the lightest neutralino can be produced thermally in the hot early Universe and leave approximately the right relic abundance to account for the observed dark matter. A lightest neutralino of roughly 10-10^4 GeV is the leading weakly interacting massive particle (WIMP) dark matter candidate.

Neutralino dark matter could be observed experimentally either indirectly or directly. In the former case, gamma ray and neutrino telescopes look for evidence of neutralino annihilation in regions of high dark matter density, such as the galactic or solar centre. In the latter case, special purpose experiments such as the (now running) Cryogenic Dark Matter Search (CDMS) seek to detect the rare impacts of WIMPs in terrestrial detectors. These experiments have begun to probe interesting supersymmetric parameter space, excluding some models for neutralino dark matter, and upgraded experiments with greater sensitivity are under development.

If we return to the matrix (2) above, we observe that when we diagonalize it, a “seesaw”-like mechanism is again at work. There, the roles of M_D and M_R can be easily recognized. The \nu_\tau mass is given by

m_{\nu_\tau}\propto \dfrac{(v_3')^2}{M}

where v_3'\equiv \epsilon_3v_d+\mu v_3 and M is the largest gaugino mass. However, an arbitrary SUSY model still produces (unless M is “large” enough) too large a tau neutrino mass! To get a realistic and small tau neutrino mass (far below even the tau lepton mass of 1777 MeV), we have to assume some kind of “universality” between the “soft SUSY breaking” terms at the GUT scale. This solution is not “natural”, but it does the work. In this case, the tau neutrino mass is predicted to be tiny due to cancellations between the two terms, which make the vev v_3' negligible. Thus, (2) can also be written as follows:

(3) \begin{pmatrix}M_1 & 0 & -\frac{1}{2}g'v_d & \frac{1}{2}g'v_u & -\frac{1}{2}g'v_3\\ 0 & M_2 & \frac{1}{2}gv_d & -\frac{1}{2}gv_u & \frac{1}{2}gv_3\\ -\frac{1}{2}g'v_d & \frac{1}{2}gv_d & 0 & -\mu & 0\\ \frac{1}{2}g'v_u& -\frac{1}{2}gv_u& -\mu & 0 & \epsilon_3\\ -\frac{1}{2}g'v_3 & \frac{1}{2}gv_3 & 0 & \epsilon_3 & 0\end{pmatrix}
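To see this cancellation at work, here is a minimal numerical sketch that diagonalizes matrix (3). All parameter values are hypothetical, chosen only to expose the pattern (one eigenvalue suppressed as (v_3')^2, the other four at the SUSY scale), not to reproduce any realistic spectrum:

```python
import numpy as np

# A numerical look at matrix (3). All parameter values are hypothetical.
M1, M2 = 250.0, 500.0     # gaugino masses (GeV)
mu, eps3 = 300.0, 1e-4    # Higgsino mass and bilinear RPV term (GeV)
g, gp = 0.65, 0.36        # SU(2)_L and U(1)_Y gauge couplings
vu, vd = 174.0, 24.0      # Higgs vevs (GeV)

def neutralino_neutrino_matrix(v3):
    """Build the 5x5 mass matrix (3) for a given sneutrino vev v3."""
    return np.array([
        [M1, 0.0, -0.5*gp*vd,  0.5*gp*vu, -0.5*gp*v3],
        [0.0, M2,  0.5*g*vd,  -0.5*g*vu,   0.5*g*v3],
        [-0.5*gp*vd,  0.5*g*vd, 0.0, -mu, 0.0],
        [ 0.5*gp*vu, -0.5*g*vu, -mu, 0.0, eps3],
        [-0.5*gp*v3,  0.5*g*v3, 0.0, eps3, 0.0],
    ])

for v3 in (-eps3*vd/mu,         # aligned: v3' = eps3*vd + mu*v3 = 0
           -eps3*vd/mu * 1.1):  # slightly misaligned: v3' != 0
    m_light = min(abs(np.linalg.eigvalsh(neutralino_neutrino_matrix(v3))))
    print(f"v3' = {eps3*vd + mu*v3:+.1e} GeV -> m_nu ~ {m_light:.1e} GeV")
# The lightest eigenvalue vanishes when v3' does, and grows as (v3')^2,
# while the other four states stay at the SUSY scale.
```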

We can now study the elementary properties of neutrinos in some elementary superstring-inspired models. In some of these models, the effective theory implies a supersymmetric (exceptional group) E_6 GUT with matter fields belonging to the 27-dimensional representation of the exceptional group E_6, plus additional singlet fields. The model contains additional neutral leptons in each generation: the neutral E_6 singlets, the gauginos and the Higgsinos. As in the previous model, but now with a larger number of fields, every neutral particle can “mix”, making the understanding of the neutrino masses quite hard if no additional simplifications or assumptions are introduced into the theory. In fact, several such mechanisms have been proposed in the literature to understand the neutrino masses. For instance, a huge neutral mixing mass matrix is reduced drastically down to a “3×3” neutrino mass matrix if we mix \nu and \nu^c with an additional neutral field T whose nature depends on the particular “model building” and “mechanism” we use. In the basis (\nu, \nu^c,T), the mass matrix can be rewritten as

(4) M=\begin{pmatrix}0 & m_D & 0\\ m_D & 0 & \lambda_2v_R\\ 0 & \lambda_2v_R & \mu\end{pmatrix}

and where the \mu energy scale is (likely) close to zero. We distinguish two important cases:

1st. R-parity violation.

2nd. R-parity conservation and a “mixing” with the singlet.

In both cases, the sneutrinos, superpartners of \nu^c, are assumed to acquire a v.e.v. of energy size v_R. In the first case, the T field corresponds to a gaugino with a Majorana mass \mu that can be generated at two loops! Usually \mu\approx 100GeV, and if we assume \lambda_2 v_R\approx 1 TeV, then the additional dangerous mixing with the Higgsinos can be “neglected” and we are led to a neutrino mass about m_\nu\sim 0.1eV, in agreement with current bounds. The important conclusion here is that we have obtained the smallness of the neutrino mass without any fine tuning of the parameters! Of course, this is quite subjective, but there is no doubt that this class of arguments is compelling to some SUSY defenders!
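A back-of-the-envelope check of that m_\nu\sim 0.1 eV claim: only \mu\approx 100 GeV and \lambda_2 v_R\approx 1 TeV come from the text, while the Dirac mass m_D=1 MeV is a hypothetical illustrative choice:

```python
import numpy as np

# Sketch of matrix (4) with mu ~ 100 GeV and lambda_2*v_R ~ 1 TeV (from the
# text); the Dirac mass m_D = 1 MeV is a hypothetical illustrative choice.
m_D, LvR, mu = 1e-3, 1e3, 100.0   # all in GeV

M = np.array([[0.0, m_D, 0.0],
              [m_D, 0.0, LvR],
              [0.0, LvR, mu]])

m_light = min(abs(np.linalg.eigvalsh(M)))
print(f"m_nu ~ {m_light*1e9:.2f} eV")  # ~ m_D**2 * mu / LvR**2 ~ 0.1 eV
```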

In the second case, the field T corresponds to one of the E_6 singlets. We have to rely on the symmetries that may arise in superstring theory on specific Calabi-Yau spaces to restrict the Yukawa couplings to “reasonable” values. If we have \mu=0 in the matrix (4) above, we deduce that a massless neutrino and a massive Dirac neutrino can be generated from this structure. If we include a possible Majorana mass term for the sfermion at a scale \mu\approx 100GeV, we get values of the neutrino mass similar to those of the previous case.

Final remark: mass matrices such as those studied here have also been proposed without embedding them in a supersymmetric or any other deeper theoretical framework. In that case, small tree-level neutrino masses can be obtained without the use of large scales. That is, the structure of the neutrino mass matrix is quite “model independent” (like the CKM quark mixing matrix) if we “measure” it. Models reducing to the neutrino or quark mass mixing matrices can be obtained with the use of large energy scales OR by adding new (likely “dark”) particle species to the SM (not necessarily at very high energy scales!).


LOG#057. Naturalness problems.


In this short blog post, I am going to list some of the greatest “naturalness” problems in Physics. It has nothing to do with some delicious natural dishes I like, but there is a natural beauty and sweetness related to naturalness problems in Physics. In fact, they include some hierarchy problems and additional problems related to puzzling values of free parameters in our theories.

Naturalness problems arise when the “naturally expected” property of some free parameters or fundamental “constants”, namely that they appear as quantities of order one, is violated, and those parameters or constants turn out to be very large or very small quantities. That is, naturalness problems are fine-tuning problems of “scales” of length, energy, field strength, … A value of 0.99 or 1.1, or even 0.7 and 2.3, is “more natural” than, e.g., 100000, 10^{-4},10^{122}, 10^{23},\ldots Equivalently, imagine that the value of every fundamental and measurable physical quantity X lies in the real interval \left[ 0,\infty\right). Then, 1 (or values very close to it) are “natural” values of the parameters, while the two extrema 0 or \infty are “unnatural”. As we know, in Physics, zero values are usually explained by some “fundamental symmetry”, while extremely large parameters or even \infty can be shown to be “unphysical” or “unnatural”. In fact, renormalization in QFT was invented to avoid quantities that are “infinite” at first sight, and regularization provides some prescriptions to assign finite values to quantities that are formally ill-defined or infinite. However, naturalness goes beyond these last comments, and it arises in very different scenarios and physical theories. It is quite remarkable that naturalness can be framed in terms of 3 of the most important “numbers” in Mathematics:

(0, 1, \infty)

REMEMBER: Naturalness of X means, thus, being 1 or close to it, while values approaching 0 or \infty are unnatural. Therefore, if some day you hear a physicist talking/speaking/lecturing about “naturalness”, remember the triple (0,1,\infty) and then assign “some magnitude/constant/parameter” a quantity close to one of those numbers. If it approaches 1, the parameter is natural, and it is unnatural if it approaches either of the other two numbers, zero or infinity!

I have never seen a systematic classification of naturalness problems into types. I am going to do it here today. We could classify naturalness problems into:

1st. Hierarchy problems. They are naturalness problems related to the mass/energy spectrum or the energy scales of interactions and fundamental particles.

2nd. Nullity/Smallness problems. These are naturalness problems related to free parameters which are, surprisingly, close to the zero/null value, even when we know of no deep reason to understand why that happens.

3rd. Large number problems (or hypotheses). This class of problems can be equivalently thought of as the reciprocal of nullity problems, but they arise naturally in cosmological contexts, when we consider a large number of particles (e.g., in “statistical physics”), or when we face two theories in very different “parameter spaces”. Dirac pioneered this class of hypotheses when he noticed some large-number coincidences relating quantities appearing in particle physics and cosmology. His large number hypothesis is also an old example of this kind of naturalness problem.

4th. Coincidence problems. This 4th type of problem is related to why certain a priori unrelated parameters turn out to be similar in order of magnitude.

The following list of concrete naturalness problems is not complete, but it can serve as a guide to what theoretical physicists are trying to understand better:

1. The little hierarchy problem. From the phenomenon called neutrino oscillations (NO) and neutrino oscillation experiments (NOSEX), we can know the differences between the squared masses of neutrinos. Furthermore, cosmological measurements allow us to put tight bounds on the total mass (energy) of light neutrinos in the Universe. The most conservative estimations give m_\nu \leq 10 eV, and even m_\nu \sim 1eV as an upper bound is quite likely to be true. On the other hand, NOSEX seems to say that there are two mass differences, \Delta m^2_1\sim 10^{-3}eV^2 and \Delta m^2_2\sim 10^{-5}eV^2. However, we don’t know yet what kind of spectrum neutrinos have (normal, inverted or quasi-degenerate). Taking a neutrino mass of about 1 meV as a reference, the little hierarchy problem is the question of why neutrino masses are so light when compared with the remaining leptons, quarks and gauge bosons (excepting, of course, the gluon and photon, massless due to gauge invariance).
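For orientation, here is a quick numerical sketch of the gap, using only the \Delta m^2 values quoted above and the electron mass:

```python
import math

# Neutrino mass scales implied by the quoted squared-mass differences,
# compared with the electron mass.
dm2_atm, dm2_sol = 1e-3, 1e-5   # eV^2, as quoted above
m_e = 0.511e6                    # electron mass in eV

print(math.sqrt(dm2_atm))        # ~ 0.03 eV
print(math.sqrt(dm2_sol))        # ~ 0.003 eV
print(m_e / math.sqrt(dm2_atm))  # ~ 1.6e7: about seven orders of magnitude
                                 # between m_e and the heaviest m_nu scale
```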

Why is m_\nu << m_e,m_\mu, m_\tau, m_Z,M_W, m_{proton}?

We don’t know! Let me quote a wonderful sentence from a very famous short story by Asimov to describe this result and problem:

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

2. The gauge hierarchy problem. The electroweak (EW) scale can be generally represented by the Z or W boson mass scale. Interestingly, from this summer’s results, the Higgs boson mass seems to be, more or less, of the same order of magnitude as the gauge boson masses. Then, the electroweak scale is about M_Z\sim M_W \sim \mathcal{O} (100GeV), and likely it is also of the order of the Higgs mass. On the other hand, the Planck scale, where we expect (naively or not, that is another question!) quantum effects of gravity to naturally arise, is provided by the Planck mass scale:

M_P=\sqrt{\dfrac{\hbar c}{8\pi G}}=2.4\cdot 10^{18}GeV=2.4\cdot 10^{15}TeV

or more generally, dropping the 8\pi factor

M_P =\sqrt{\dfrac{\hbar c}{G}}=1.22\cdot 10^{19}GeV=1.22\cdot 10^{16}TeV
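Both numbers are easy to reproduce from the SI values of the fundamental constants; a minimal sketch:

```python
import math

# Planck mass from hbar, c, G (SI values), converted to GeV.
hbar = 1.054571817e-34        # J s
c    = 2.99792458e8           # m/s
G    = 6.67430e-11            # m^3 kg^-1 s^-2
J_per_GeV = 1.602176634e-10   # 1 GeV in joules

M_P = math.sqrt(hbar * c / G)   # Planck mass in kg
E_P = M_P * c**2 / J_per_GeV    # equivalent energy in GeV
print(f"Planck mass:  {E_P:.3e} GeV")                       # ~ 1.22e19 GeV
print(f"Reduced mass: {E_P/math.sqrt(8*math.pi):.3e} GeV")  # ~ 2.4e18 GeV
```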

Why is the EW mass (energy) scale so small compared to the Planck mass, i.e., why are the masses M_{EW}<<M_P so different? The problem is hard, since we do know that EW masses, e.g., for scalar particles like the Higgs (not protected by any SM gauge symmetry), should receive quantum contributions of order \mathcal{O}(M_P^2).

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

3. The cosmological constant (hierarchy) problem. The cosmological constant \Lambda, from the so-called Einstein’s field equations of classical relativistic gravity

\mathcal{R}_{\mu\nu}-\dfrac{1}{2}g_{\mu\nu}\mathcal{R}=8\pi G\mathcal{T}_{\mu\nu}+\Lambda g_{\mu\nu}

is estimated to be about \mathcal{O} (10^{-47})GeV^4 from cosmological fitting procedures. The Standard Cosmological Model, with the CMB and other parallel measurements like large scale structure or supernovae data, agrees with such a cosmological constant value. However, in the framework of Quantum Field Theory, it should receive quantum corrections coming from the vacuum energies of the fields. Those contributions are unnaturally big, about \mathcal{O}(M_P^4), or, in the framework of supersymmetric field theories, \mathcal{O}(M^4_{SUSY}) after SUSY symmetry breaking. Then, the problem is:

Why is \rho_\Lambda^{obs}<<\rho_\Lambda^{th}? Even with TeV or PeV fundamental SUSY (or higher) we have a serious mismatch here! The mismatch is about 60 orders of magnitude even in the best known theory! And it is about 122-123 orders of magnitude if we directly compare the vacuum energy we observe with the cosmological constant we calculate (naively or not) with our current best theories using QFT or supersymmetric QFT! Thus, this problem is a hierarchy problem and a large number problem as well. Again, and sadly, we don’t know why there is such a big gap between the mass scales of the same thing! This is the biggest problem in theoretical physics, and one of the worst predictions/failures in the history of Physics. However,

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”
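Incidentally, the orders of magnitude quoted above follow from a one-line estimate (the observed density is the value quoted above; the TeV SUSY scale is purely illustrative):

```python
import math

# Orders of magnitude in the cosmological constant problem.
rho_obs = 1e-47   # observed vacuum energy density, GeV^4 (quoted above)
M_P     = 1.22e19 # Planck mass, GeV
M_SUSY  = 1e3     # a TeV-scale SUSY breaking scale (illustrative)

print(math.log10(M_P**4 / rho_obs))     # ~ 123 orders (naive QFT estimate)
print(math.log10(M_SUSY**4 / rho_obs))  # ~ 59 orders even with TeV SUSY
```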

4. The strong CP problem/puzzle. From neutron electric dipole moment measurements, theoretical physicists can constrain the so-called \theta-angle of QCD (Quantum Chromodynamics). The theta angle gives an extra contribution to the QCD lagrangian:

\mathcal{L}_{\mathcal{QCD}}\supset \dfrac{1}{4g_s^2}G_{\mu\nu}G^{\mu\nu}+\dfrac{\theta}{16\pi^2}G^{\mu\nu}\tilde{G}_{\mu\nu}

The theta angle is not fixed within the SM framework; it is a free parameter. Experimentally,

\theta <10^{-12}

while, from the theoretical side, it could be any number in the interval \left[-\pi,\pi\right]. Why is \theta so close to the zero/null value? That is the strong CP problem! Once again, we don’t know. Perhaps a new symmetry?

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

5. The flatness problem/puzzle. In the Standard Cosmological Model, also known as the \Lambda CDM model, the curvature of the Universe is related to the critical density and the Hubble “constant”:

\dfrac{1}{R^2}=H^2\left(\dfrac{\rho}{\rho_c}-1\right)

There, \rho is the total energy density contained in the whole Universe and \rho_c=\dfrac{3H^2}{8\pi G} is the so-called critical density. The flatness problem arises when we deduce from cosmological data that:

\left(\dfrac{1}{R^2}\right)_{data}\sim 0.01

At the Planck era, we can even calculate that

\left(\dfrac{1}{R^2}\right)_{Planck\;\; era}\sim\mathcal{O}(10^{-61})
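A crude way to see where that tiny number comes from: in a radiation-dominated universe |\Omega-1|\sim 1/\dot{a}^2 grows linearly with time, so extrapolating today's curvature back to the Planck time (pretending radiation domination all the way, which is only a rough simplification) gives:

```python
# Crude flatness estimate: in a radiation-dominated universe a ~ t**0.5,
# so |Omega - 1| ~ 1/(a*H)**2 = 1/adot**2 grows linearly with time t.
# Pretending radiation domination holds from the Planck era until today
# is a rough simplification (the late universe is matter/Lambda dominated).
t_planck = 5.4e-44    # Planck time, s
t_now    = 4.3e17     # age of the Universe, s
curvature_now = 0.01  # (1/R^2)_data, as quoted above

curvature_planck = curvature_now * (t_planck / t_now)
print(f"{curvature_planck:.1e}")  # ~ 1e-63, within a couple of orders of
                                  # the quoted 1e-61: the required initial
                                  # flatness is absurdly fine-tuned
```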

This result means that the Universe is “flat”. However, why did the Universe have such a small curvature? Why is the current curvature still so “small”? We don’t know. However, cosmologists working on this problem say that “inflation” and “inflationary” cosmological models can (at least in principle) solve this problem. There are even more radical (and stranger) theories, such as varying-speed-of-light theories, trying to explain this, but they are less popular than inflationary cosmologies/theories. Indeed, inflationary theories are popular because they include scalar fields, similar in Nature to the scalar particles that arise in the Higgs mechanism and other beyond the Standard Model (BSM) theories. We don’t know if inflation theory is right yet, so

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

6. The flavour problem/puzzle. The ratios of successive SM fermion mass eigenvalues (for the electron, muon, and tau), as well as the angles appearing in one gadget called the CKM (Cabibbo-Kobayashi-Maskawa) matrix, are roughly of the same order of magnitude. The issue is harder to settle (but it is likely to hold as well) for constituent quark masses. However, why do they follow this particular pattern/spectrum and structure? Even more, there is a mysterious lepton-quark complementarity. The analogue of the CKM matrix in the leptonic sector is called the PMNS matrix (Pontecorvo-Maki-Nakagawa-Sakata matrix), and it describes the neutrino oscillation phenomenology. It turns out that the angles of the PMNS matrix are roughly complementary to those in the CKM matrix (remember that two angles are said to be complementary when they add up to 90 sexagesimal degrees). What is the origin of this lepton(neutrino)-quark(constituent) complementarity? (A quick numerical check of this relation is sketched after the quote below.) In fact, the two questions are related since, roughly speaking, the mixing angles are related to the ratios of masses (of quarks and neutrinos). Therefore, this problem, if solved, could shed light on the issue of the particle spectrum, or at least it could help to understand the relationship between quark masses and neutrino masses. Of course, we don’t know how to solve this puzzle at the current time. And once again:

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”
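As an aside on the complementarity just mentioned: in the quark-lepton complementarity literature the relation is usually stated for the 1-2 mixing angles, which add up to roughly 45 degrees (\pi/4). A quick check with approximate experimental values (quoted for illustration only):

```python
# Quark-lepton complementarity check in the 1-2 sector: the Cabibbo angle
# (CKM) and the solar angle (PMNS) add up to roughly 45 degrees.
theta12_ckm  = 13.0   # Cabibbo angle in degrees (approximate)
theta12_pmns = 33.5   # solar mixing angle in degrees (approximate)

print(theta12_ckm + theta12_pmns)  # -> 46.5, close to 45 degrees
```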

7. Cosmic matter-dark energy coincidence. At the current time, the densities of matter and vacuum energy are roughly of the same order of magnitude, i.e., \rho_M\sim\rho_\Lambda=\rho_{DE}. Why now? We do not know!

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

And my weblog is only just beginning! See you soon in my next post! 🙂