# LOG#121. Basic Neutrinology(VI).

Models where the space-time is not 3+1 dimensional but higher dimensional (generally D=d+1=4+n dimensional, where n is the number of spacelike extra dimensions) have been popular since the beginning of the 20th century.

The fundamental scale of gravity need not be the 4D “effective” Planck scale $M_P$ but a new scale $M_f$ (sometimes called $M_D$), and it could be as low as $M_f\sim 1-10\;TeV$. The observed Planck scale $M_P$ (related to the Newton constant $G_N$) is then related to $M_f$ in $D=4+n$ dimensions by a relationship like the following equation:

$\eta^2=\left(\dfrac{M_f}{M_P}\right)^2\sim\dfrac{1}{(M_fR)^n}$

Here, $R$ is the radius, i.e., the typical length, of the extra dimensions. We can consider a hypertorus $T^n=(S^1)^n=\underbrace{S^1\times\cdots\times S^1}_{n\;\text{times}}$ for simplicity (but other topologies are also studied in the literature). In fact, the coupling is $M_f/M_P\sim 10^{-16}$ if we choose $M_f\sim 1\;TeV$. When we take more than one extra dimension, e.g., $n=2$, the radius $R$ of the extra dimension(s) can be as “large” as 1 millimeter! This fact can be understood as a hint that there could be “large” extra dimensions hidden from us. They could only be detected by many extremely precise measurements at present or future experiments. However, this also provides a new test of new physics (perhaps science fiction for many physicists) and, especially, we could explore the idea of hidden space dimensions and understand how or why gravity is so feeble with respect to the other fundamental interactions.
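As a quick sanity check (not part of the original argument), the relation $M_P^2\sim M_f^{n+2}R^n$ can be inverted numerically for $R$. The snippet below is a minimal sketch; the reduced Planck mass $2.4\cdot 10^{18}\;GeV$ and $M_f=1\;TeV$ are illustrative input choices.

```python
# Back-of-the-envelope: size R of n extra dimensions from M_P^2 ~ M_f^(n+2) R^n.
# Input values (reduced Planck mass, M_f = 1 TeV) are illustrative assumptions.
M_P = 2.4e18               # reduced Planck mass in GeV
M_f = 1.0e3                # fundamental scale in GeV (1 TeV)
GEV_INV_TO_M = 1.9733e-16  # hbar*c: 1 GeV^-1 expressed in meters

def radius_m(n):
    """Radius R (in meters) of each of n toroidal extra dimensions."""
    R_gev_inv = (M_P**2 / M_f**(n + 2)) ** (1.0 / n)
    return R_gev_inv * GEV_INV_TO_M

for n in (1, 2, 3):
    print(f"n = {n}: R ~ {radius_m(n):.2e} m")
```

For $n=2$ this gives a sub-millimeter radius, in line with the “as large as 1 millimeter” claim above, while $n=1$ gives an astronomically large (already excluded) radius.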

According to the SM and the standard gravity framework (General Relativity), every particle charged under the SM gauge group is localized on a 3-dimensional hypersurface that we could call a “brane” (or SM brane). This brane is embedded in “the bulk” of the higher dimensional Universe (with $n$ extra space-like dimensions). All the particles can be separated into two categories: 1) those that live on the (SM) 3-brane, and 2) those that live “everywhere”, i.e., in “all the bulk” (including both the extra dimensions and our 3-brane, to which the SM fields are confined). The “bulk modes” are (generally speaking) quite model dependent, but any coupling between the brane where the SM lives and the bulk modes should be “suppressed” somehow. One alternative is provided by the geometrical factors of the “extra dimensions” (like the one written above). Another option is to modify the metric in which the fields propagate. This last recipe is the essence of the non-factorizable models built by Randall, Sundrum, Shaposhnikov, Rubakov, Pavŝiĉ and many others, as early as the 80’s of the past century. The graviton and its “propagating degrees of freedom”, or possible additional neutral states, belong to the second category. Indeed, the observed weakness of gravity on the 3-brane can be understood as a result of the “new space dimensions” in which gravity can live. However, there is no clear signal of extra dimensions until now (circa 2013, July).

The small coupling constant derived from the Planck mass above can also be used to explain the smallness of the neutrino masses! The left-handed neutrino $\nu_L$, having weak isospin and hypercharge, is thought to reside on the SM brane in this picture. It can get a “naturally small” Dirac mass through the mixing with some “bulk fermion” (e.g., the right-handed neutrino or any other fermion neutral under the SM gauge group), which can be interpreted as a right-handed neutrino $\nu_R$:

$\mathcal{L}(m,Dirac)\sim h\eta H\bar{\nu}_L\nu_R$

Here, $H$ and $h$ are the Higgs doublet field and the Yukawa coupling, respectively. After spontaneous symmetry breaking, this interaction will generate the Dirac mass term

$m_D=hv\eta\sim 10^{-5}eV$
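The quoted $10^{-5}\;eV$ scale can be reproduced numerically. A minimal check, assuming an order-one Yukawa $h=1$, the Higgs vev $v\simeq 246\;GeV$, and $\eta=M_f/M_P$ with $M_f\sim 1\;TeV$ (illustrative inputs, not unique choices):

```python
# Check of m_D = h * v * eta with illustrative values: Yukawa h = 1,
# Higgs vev v = 246 GeV, and eta = M_f / M_P for M_f ~ 1 TeV.
h = 1.0
v = 246.0                 # GeV
eta = 1.0e3 / 1.22e19     # M_f / M_P, dimensionless suppression factor

m_D_eV = h * v * eta * 1e9   # convert GeV -> eV
print(f"m_D ~ {m_D_eV:.1e} eV")
```

The result lands at the $10^{-5}\;eV$ scale quoted in the text without any tuning of the Yukawa coupling.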

The right-handed neutrino $\nu_R$ has a whole tower of Kaluza-Klein relatives $\nu_{i,R}$. The masses of these states are given by

$M_{i,R}=\dfrac{i}{R}$ $i=0,\pm 1,\pm 2,\ldots, \pm \infty$

and $\nu_L$ couples to all the KK states with the same “mixing” mass. Thus, we can write the mass Lagrangian as

$\mathcal{L}=\bar{\nu}_LM\nu_R$

with

$\nu_L=(\nu_L,\tilde{\nu}_{1L},\tilde{\nu}_{2L},\ldots)$

$\nu_R=(\nu_{0R},\tilde{\nu}_{1R},\tilde{\nu}_{2R},\ldots)$

Are you afraid of “infinite” neutrino flavors? The resulting neutrino mass matrix $\mathbb{M}$ is “an infinite array” with structure:

$\mathbb{M}=\begin{pmatrix}m_D &\sqrt{2}m_D &\sqrt{2}m_D &\ldots &\sqrt{2}m_D &\ldots \\ 0 &1/R &0 &\ldots &0 & \ldots\\ 0 & 0 &2/R & \ldots & 0 &\ldots \\ \ldots & \ldots & \ldots & \ldots & k/R & \ldots\\ \ldots & \ldots & \ldots & \ldots & \ldots & \ldots\end{pmatrix}$

The eigenvalues of the matrix $MM^\dagger$ are given by a transcendental equation. In the limit $m_DR\to 0$, or $m_D\to 0$, the eigenvalues are $\lambda\sim k/R$, where $k\in \mathbb{Z}$ and $\lambda=0$ is a double eigenvalue (i.e., it is doubly degenerate). There are other examples with LR symmetry. For instance, $SU(2)_R$ right-handed neutrinos living on the SM brane would be additional neutrino species. In these models, it has been shown that the left-handed neutrino is exactly massless, whereas the assumed bulk, “sterile” neutrino has a mass related to the size of the extra dimensions. These models produce masses that can be fitted to the values $\sim 10^{-3}\;eV$ expected from the neutrino oscillation data but, generally, this implies that there should be at least one extra dimension with size in the micrometer range or less!
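The transcendental eigenvalue equation is hard to write in closed form, but a truncated version of the infinite matrix above can be diagonalized numerically. A sketch, assuming NumPy is available and working in units where $R=1$ (the truncation level $N=50$ is an arbitrary illustrative choice):

```python
import numpy as np

# Truncated version of the infinite KK neutrino mass matrix written above,
# in units where R = 1. For m_D * R << 1 the lightest singular value of M
# (i.e. the lightest mass eigenvalue) stays very close to m_D.
def kk_mass_matrix(m_D, N, R=1.0):
    M = np.zeros((N + 1, N + 1))
    M[0, 0] = m_D
    for k in range(1, N + 1):
        M[0, k] = np.sqrt(2.0) * m_D   # nu_L mixes equally with every KK level
        M[k, k] = k / R                # KK tower masses k/R on the diagonal
    return M

m_D = 1e-5                             # Dirac mixing mass, in units of 1/R
s = np.linalg.svd(kk_mass_matrix(m_D, N=50), compute_uv=False)
lightest = s.min()
print(f"lightest mass eigenvalue ~ {lightest:.3e} (input m_D = {m_D:.0e})")
```

Numerically, the lightest state tracks $m_D$ to high accuracy in the $m_DR\ll 1$ regime, while the heavier eigenvalues sit near the unperturbed KK levels $k/R$, illustrating the limit discussed above.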

The main issue that extra dimension models of neutrino masses have is related to the question of the renormalizability of their interactions. With an infinite number of KK states and/or large extra dimensions, extreme care has to be taken in order not to spoil the renormalizability of the SM and, at some point, this implies that the KK tower must be truncated at some level. There is no general principle or symmetry that explains this cut-off, to my knowledge.

May the neutrinos and the extra dimensions be with you!

See you in my next neutrinological post!

# LOG#057. Naturalness problems.

In this short blog post, I am going to list some of the greatest “naturalness” problems in Physics. It has nothing to do with delicious natural dishes I like, but there is a natural beauty and sweetness related to naturalness problems in Physics. In fact, they include some hierarchy problems and additional problems related to puzzling values of the free parameters in our theories.

Naturalness problems arise when the “naturally expected” property of some free parameters or fundamental “constants” to appear as quantities of order one is violated, and those parameters or constants instead appear to be very large or very small quantities. That is, naturalness problems are problems of the tuning of “scales” of length, energy, field strength, … A value of 0.99 or 1.1, or even 0.7 and 2.3, is “more natural” than, e.g., $100000, 10^{-4},10^{122}, 10^{23},\ldots$ Equivalently, imagine that the value of every fundamental and measurable physical quantity $X$ lies in the real interval $\left[ 0,\infty\right)$. Then, 1 (or values very close to it) is a “natural” value of the parameters, while the two extrema $0$ and $\infty$ are “unnatural”. As we know, in Physics, zero values are usually explained by some “fundamental symmetry”, while extremely large parameters or even $\infty$ can be shown to be “unphysical” or “unnatural”. In fact, renormalization in QFT was invented to avoid quantities that are “infinite” at first sight, and regularization provides some prescriptions to assign finite values to quantities that are formally ill-defined or infinite. However, naturalness goes beyond these comments, and it arises in very different scenarios and physical theories. It is quite remarkable that naturalness can be explained in terms of numbers/constants/parameters around 3 of the most important “numbers” in Mathematics:

$(0, 1, \infty)$

REMEMBER: Naturalness of X is, thus, being 1 or close to it, while values approaching 0 or $\infty$ are unnatural. Therefore, if some day you hear a physicist talking/speaking/lecturing about “naturalness”, remember the triple $(0,1,\infty)$ and then assign “some magnitude/constant/parameter” a quantity close to one of those numbers. If it approaches 1, the parameter is natural, and it is unnatural if it approaches either of the other two numbers, zero or infinity!

I have never seen a systematic classification of naturalness problems into types. I am going to do it here today. We could classify naturalness problems into:

1st. Hierarchy problems. They are naturalness problems related to the mass/energy spectrum or energy scale of interactions and fundamental particles.

2nd. Nullity/Smallness problems. These are naturalness problems related to free parameters which are, surprisingly, close to zero/null value, even when we have no knowledge of a deep reason to understand why it happens.

3rd. Large number problems (or hypotheses). This class of problems can be equivalently thought of as the reciprocal of nullity problems, but they arise naturally themselves in cosmological contexts, when we consider a large amount of particles (e.g., in “statistical physics”), or when we face two theories in very different “parameter spaces”. Dirac pioneered this class of hypotheses when he noticed some large number coincidences relating quantities appearing in particle physics and cosmology. This Dirac large number hypothesis is also an old example of this kind of naturalness problem.

4th. Coincidence problems. This 4th type of problem is related to why some a priori unrelated parameters of the same magnitude happen to be similar in order of magnitude.

The following list of concrete naturalness problems is not going to be complete, but it can serve as a guide of what theoretical physicists are trying to understand better:

1. The little hierarchy problem. From the phenomenon called neutrino oscillations (NO) and neutrino oscillation experiments (NOSEX), we can know the difference between the squared masses of neutrinos. Furthermore, cosmological measurements allow us to put tight bounds on the total mass (energy) of light neutrinos in the Universe. The most conservative estimations give $m_\nu \leq 10\;eV$, and even $m_\nu \sim 1\;eV$ as an upper bound is quite likely to be true. On the other hand, NOSEX seems to say that there are two mass differences, $\Delta m^2_1\sim 10^{-3}\;eV^2$ and $\Delta m^2_2\sim 10^{-5}\;eV^2$. However, we don’t know yet what kind of spectrum neutrinos have (normal, inverted or quasi-degenerate). Taking a neutrino mass of about 1 meV as a reference, the little hierarchy problem is the question of why neutrino masses are so light when compared with the remaining leptons, quarks and gauge bosons (excepting, of course, the gluon and photon, massless due to gauge invariance).
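To make the “little hierarchy” quantitative, one can extract the mass scales implied by the two splittings and compare them with the electron mass. A short sketch; the central values $2.5\cdot 10^{-3}\;eV^2$ and $7.5\cdot 10^{-5}\;eV^2$ are illustrative numbers of the right order, not precise fit results:

```python
import math

# Mass scales implied by the two oscillation mass-squared differences
# (rough illustrative central values in eV^2, of the orders quoted above).
dm2_atm = 2.5e-3   # "atmospheric" splitting, eV^2
dm2_sol = 7.5e-5   # "solar" splitting, eV^2

m_atm = math.sqrt(dm2_atm)   # heaviest splitting scale, in eV
m_sol = math.sqrt(dm2_sol)   # lightest splitting scale, in eV
m_e = 0.511e6                # electron mass, in eV

print(f"sqrt(dm2_atm) ~ {m_atm:.3f} eV, sqrt(dm2_sol) ~ {m_sol:.4f} eV")
print(f"hierarchy: m_e / sqrt(dm2_atm) ~ {m_e / m_atm:.1e}")
```

Even against the lightest charged lepton, the gap is about seven orders of magnitude, which is the puzzle stated below.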

Why is $m_\nu \ll m_e,m_\mu, m_\tau, m_Z,M_W, m_{proton}$?

We don’t know! Let me quote a wonderful sentence of a very famous short story by Asimov to describe this result and problem:

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

2. The gauge hierarchy problem. The electroweak (EW) scale can be generally represented by the Z or W boson mass scale. Interestingly, from this summer’s results, the Higgs boson mass seems to be, more or less, of the same order of magnitude as the gauge boson masses. Then, the electroweak scale is about $M_Z\sim M_W \sim \mathcal{O} (100\;GeV)$. Likely, it is also of the order of the Higgs mass. On the other hand, the Planck scale, where we expect (naively or not, that is another question!) quantum effects of gravity to naturally arise, is provided by the Planck mass scale:

$M_P=\sqrt{\dfrac{\hbar c}{8\pi G}}=2.4\cdot 10^{18}GeV=2.4\cdot 10^{15}TeV$

or more generally, dropping the $8\pi$ factor

$M_P =\sqrt{\dfrac{\hbar c}{G}}=1.22\cdot 10^{19}GeV=1.22\cdot 10^{16}TeV$
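Both numbers can be checked directly from the SI values of $\hbar$, $c$ and $G$. A minimal verification script (CODATA-style constant values assumed):

```python
import math

# Planck mass from SI constants: M_P c^2 = sqrt(hbar c^5 / G), then converted
# to GeV. Dropping/keeping the 8*pi factor gives the two values quoted above.
hbar = 1.054571817e-34     # J s
c = 2.99792458e8           # m/s
G = 6.67430e-11            # m^3 kg^-1 s^-2
J_PER_GEV = 1.602176634e-10

M_P_GeV = math.sqrt(hbar * c**5 / G) / J_PER_GEV
M_P_reduced = M_P_GeV / math.sqrt(8 * math.pi)

print(f"M_P ~ {M_P_GeV:.3e} GeV, reduced M_P ~ {M_P_reduced:.3e} GeV")
```

The output reproduces $1.22\cdot 10^{19}\;GeV$ and the reduced value $2.4\cdot 10^{18}\;GeV$ quoted in the two formulas above.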

Why is the EW mass (energy) scale so small compared to the Planck mass, i.e., why are the masses $M_{EW}\ll M_P$ so different? The problem is hard, since we do know that EW masses, e.g., for scalar particles like the Higgs particle (not protected by any SM gauge symmetry), should receive quantum contributions of order $\mathcal{O}(M_P^2)$.

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

3. The cosmological constant (hierarchy) problem. The cosmological constant $\Lambda$, from the so-called Einstein’s field equations of classical relativistic gravity

$\mathcal{R}_{\mu\nu}-\dfrac{1}{2}g_{\mu\nu}\mathcal{R}+\Lambda g_{\mu\nu}=8\pi G\mathcal{T}_{\mu\nu}$

is estimated to be about $\mathcal{O} (10^{-47}\;GeV^4)$ from cosmological fitting procedures. The Standard Cosmological Model, with the CMB and other parallel measurements like large scale structure or supernovae data, agrees with such a cosmological constant value. However, in the framework of Quantum Field Theory, it should receive quantum corrections coming from the vacuum energies of the fields. Those contributions are unnaturally big, about $\mathcal{O}(M_P^4)$, or, in the framework of supersymmetric field theories, $\mathcal{O}(M^4_{SUSY})$ after SUSY breaking. Then, the problem is:

Why is $\rho_\Lambda^{obs}\ll\rho_\Lambda^{th}$? Even with TeV or PeV fundamental SUSY (or higher) we have a serious mismatch here! The mismatch is about 60 orders of magnitude even in the best known theory! And it is about 122-123 orders of magnitude if we compare directly the cosmological constant vacuum energy we observe with the one we calculate (naively or not) with our current best theories using QFT or supersymmetric QFT! Thus, this problem is a hierarchy problem and a large number problem as well. Again, and sadly, we don’t know why there is such a big gap between mass scales of the same thing! This problem is the biggest problem in theoretical physics and one of the worst predictions/failures in the history of Physics. However,
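The famous “122-123 orders of magnitude” figure is just a logarithm of the ratio between the naive $M_P^4$ estimate and the observed $\sim 10^{-47}\;GeV^4$. A one-line check (both input densities are the rough values quoted above):

```python
import math

# Orders of magnitude between the naive QFT vacuum energy ~ M_P^4 and the
# observed cosmological constant energy density ~ 1e-47 GeV^4.
M_P = 1.22e19            # Planck mass, GeV
rho_obs = 1e-47          # observed vacuum energy density, GeV^4
rho_th = M_P**4          # naive QFT estimate, GeV^4

mismatch = math.log10(rho_th / rho_obs)
print(f"mismatch ~ {mismatch:.0f} orders of magnitude")
```

Swapping $M_P$ for a TeV-PeV SUSY scale in `rho_th` reproduces the softer, but still disastrous, ~60 orders of magnitude mentioned above.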

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

4. The strong CP problem/puzzle. From neutron electric dipole moment measurements, theoretical physicists can constrain the so-called $\theta$-angle of QCD (Quantum Chromodynamics). The theta angle gives an extra contribution to the QCD Lagrangian:

$\mathcal{L}_{\mathcal{QCD}}\supset \dfrac{1}{4g_s^2}G_{\mu\nu}G^{\mu\nu}+\dfrac{\theta}{16\pi^2}G^{\mu\nu}\tilde{G}_{\mu\nu}$

The theta angle is not provided by the SM framework and it is a free parameter. Experimentally,

$\theta <10^{-12}$

while, from the theoretical side, it could be any number in the interval $\left[-\pi,\pi\right]$. Why is $\theta$ so close to zero? That is the strong CP problem! Once again, we don’t know. Perhaps a new symmetry?

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

5. The flatness problem/puzzle. In the Standard Cosmological Model, also known as the $\Lambda CDM$ model, the curvature of the Universe is related to the critical density and the Hubble “constant”:

$\dfrac{1}{R^2}=H^2\left(\dfrac{\rho}{\rho_c}-1\right)$

Here, $\rho$ is the total energy density contained in the whole Universe and $\rho_c=\dfrac{3H^2}{8\pi G}$ is the so-called critical density. The flatness problem arises when we deduce from cosmological data that:

$\left(\dfrac{1}{R^2}\right)_{data}\sim 0.01$

At the Planck scale era, we can even calculate that

$\left(\dfrac{1}{R^2}\right)_{Planck\;\; era}\sim\mathcal{O}(10^{-61})$

This result means that the Universe is “flat”. However, why did the Universe have such a small curvature? Why is the current curvature still so “small”? We don’t know. However, cosmologists working on this problem say that “inflation” and “inflationary” cosmological models can (at least in principle) solve this problem. There are even more radical (and stranger) theories, such as varying speed of light theories, trying to explain this, but they are less popular than inflationary cosmologies/theories. Indeed, inflationary theories are popular because they include scalar fields, similar in nature to the scalar particles that arise in the Higgs mechanism and other theories beyond the Standard Model (BSM). We don’t know yet if inflation theory is right, so
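The critical density $\rho_c=3H^2/(8\pi G)$ that anchors this whole discussion is itself easy to evaluate. A minimal sketch, assuming an illustrative present-day Hubble value $H_0\sim 68\;km/s/Mpc$:

```python
import math

# Critical density rho_c = 3 H^2 / (8 pi G), evaluated today in SI units
# for an illustrative Hubble constant H_0 ~ 68 km/s/Mpc.
G = 6.67430e-11          # m^3 kg^-1 s^-2
MPC_M = 3.0857e22        # meters per megaparsec
H0 = 68e3 / MPC_M        # Hubble constant converted to s^-1

rho_c = 3 * H0**2 / (8 * math.pi * G)
print(f"rho_c ~ {rho_c:.2e} kg/m^3")
```

The result, of order $10^{-26}\;kg/m^3$ (a few hydrogen atoms per cubic meter), is the density against which the observed $\rho$ must be fine-tuned for the Universe to be this flat.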

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

6. The flavour problem/puzzle. The ratios of successive SM charged lepton mass eigenvalues (the electron, muon, and tau), as well as the angles appearing in a gadget called the CKM (Cabibbo-Kobayashi-Maskawa) matrix, are roughly of the same order of magnitude. The issue is harder to settle (but it is likely to hold as well) for constituent quark masses. However, why do they follow this particular pattern/spectrum and structure? Even more, there is a mysterious lepton-quark complementarity. The analogue in the leptonic sector of the CKM matrix is called the PMNS matrix (Pontecorvo-Maki-Nakagawa-Sakata matrix), and it describes the neutrino oscillation phenomenology. It turns out that the angles of the PMNS matrix are roughly complementary to those in the CKM matrix (remember that two angles are said to be complementary when they add up to 90 sexagesimal degrees). What is the origin of this lepton(neutrino)-quark(constituent) complementarity? In fact, the two questions are related since, roughly speaking, the mixing angles are related to the ratios of masses (of quarks and neutrinos). Therefore, this problem, if solved, could shed light on the issue of the particle spectrum, or at least it could help to understand the relationship between quark masses and neutrino masses. Of course, we don’t know how to solve this puzzle at the current time. And once again:

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

7. Cosmic matter-dark energy coincidence. At the current time, the densities of matter and vacuum energy are roughly of the same order of magnitude, i.e., $\rho_M\sim\rho_\Lambda=\rho_{DE}$. Why now? We do not know!

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

And my weblog is only just beginning! See you soon in my next post! 🙂