LOG#127. Basic Neutrinology(XII).


When neutrinos propagate through matter rather than through the vacuum, a subtle and potentially important effect occurs. This is called the MSW effect (Mikheyev-Smirnov-Wolfenstein effect). It is quite similar to the refraction of light in a medium, but now the propagating particles (waves) are not electromagnetic waves (photons) but neutrinos! In fact, the MSW effect consists of two different effects:

1st. A “resonance” enhancement of the neutrino oscillation pattern.

2nd. An adiabatic (i.e. slow) or partially adiabatic neutrino conversion (mixing).

In the presence of matter, the neutrino experiences scattering and absorption. The latter is negligible in almost all cases. At very low energies, coherent elastic forward scattering is the most important process. As in optics, the net effect is the appearance of a phase difference, a refractive index or, equivalently, a neutrino effective mass.

The neutrino effective mass can cause an important change in the neutrino oscillation pattern, depending on the densities and composition of the medium. It also depends on the nature of the neutrino (its energy, its type and its oscillation length). In the neutrino case, the medium is “flavor-dispersive”: the matter is usually non-symmetric with respect to the lepton numbers! Then, the effective neutrino mass is different for the different weak eigenstates!

I will try to explain it as simply as possible here. For instance, take the solar electron plasma. The electrons in the solar medium have charged current interactions with \nu_e but not with \nu_\mu, \nu_\tau. Thus, the resulting interaction energy is given by the interaction hamiltonian

(1) H_{int}=\sqrt{2}G_FN_e

where the numerical prefactor is conventional, G_F is the Fermi constant and N_e is the electron density. The corresponding neutral current interactions are identical for all the neutrino species and, therefore, they have no net effect on their propagation. Hypothetical sterile neutrinos would have no interaction at all. The effective global hamiltonian in flavor space is now the sum of two terms, the vacuum hamiltonian and the interaction part. We can write them together:

(2) H_w^{eff}=H_w^{eff,vac}+H_{int}\begin{pmatrix} 1 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\end{pmatrix}

The consequence of this new effective hamiltonian is that the oscillation probabilities of the neutrino in matter can be greatly enhanced due to a resonance with matter. For the simplest case, with 2 flavors, we can define an effective oscillation/mixing angle in matter as

(3) \boxed{\sin 2\theta_M=\dfrac{\sin 2\theta/L_{osc}}{\left[\left(\cos 2\theta/L_{osc}-G_FN_e/\sqrt{2}\right)^2+\left(\sin 2\theta/L_{osc}\right)^2\right]^{1/2}}}

The presence of the term proportional to the electron density can produce “a resonance” minimizing the denominator. There is a critical density N_c^{osc} such that

(4) \boxed{N_c^{osc}=\dfrac{\Delta m^2\cos 2\theta}{2\sqrt{2}EG_F}}

for which the matter mixing angle \theta_M becomes maximal and \sin 2\theta_M\longrightarrow 1, irrespective of the value of the mixing angle in vacuum \theta. The probability that \nu_e oscillates or mixes into a \nu_\mu weak eigenstate after traveling a distance L in this medium is given by the vacuum oscillation formula modified as follows:

1st. \sin 2\theta\longrightarrow \sin 2\theta_M

2nd. The kinematical factor differs by the replacement of \Delta m^2 with \Delta m^2\sin 2\theta. Hence, it follows that, at the critical density, the oscillation probability in matter (2 flavors) is:

(5) \boxed{P_m (\nu_e\longrightarrow \nu_\mu;L)_{N_e=N_c^{osc}}=\sin^2\left(\sin 2\theta \dfrac{L}{L_{osc}}\right)}

This equation tells us that we can get a full conversion of electron neutrino weak eigenstates into muon weak eigenstates, provided that the length and energy of the neutrino satisfy the condition

\sin 2\theta \dfrac{L}{L_{osc}}=(2n+1)\dfrac{\pi}{2}\;\;\forall n=0,1,2,3,\ldots
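A one-liner check of this full-conversion condition, taking sin 2θ = 0.1 as a purely illustrative value (the formula is from the text; the numbers are my assumption). Full conversion needs the phase to be an odd multiple of π/2:

```python
from math import sin, pi

# Oscillation probability in matter at the critical density (two flavors):
# P = sin^2( sin(2 theta) * L / L_osc ). Full conversion occurs when the
# argument is an odd multiple of pi/2, no matter how small theta is.
def P_matter(sin2theta, L_over_Losc):
    return sin(sin2theta * L_over_Losc) ** 2

sin2theta = 0.1                      # an illustrative small vacuum mixing
L_res = pi / (2 * sin2theta)         # first full-conversion length, in units of L_osc
print(P_matter(sin2theta, L_res))    # -> 1.0 (complete nu_e -> nu_mu conversion)
```

Note how a tiny vacuum mixing still gives P = 1 in matter, at the price of a longer baseline L_res ∝ 1/sin 2θ.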

There is a second interesting limit that is often mentioned. This limit happens whenever the electron density N_e is so large that \sin 2\theta_M\longrightarrow 0, or equivalently, \theta_M\longrightarrow \pi/2. In this (dense matter) limit, there are NO oscillations in matter (they are “density suppressed”) because \sin 2\theta_M vanishes and we have

P_m (\nu_e\longrightarrow \nu_\mu;L)_{\left(N_e>>\dfrac{\Delta m^2}{2\sqrt{2}EG_F}\right)}\longrightarrow 0

Therefore, the lesson here is that a big density can spoil the phenomenon of neutrino oscillations!
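As a quick numerical sketch of the critical density formula above: plugging in illustrative solar values (Δm² ≈ 7.5·10⁻⁵ eV², E = 1 MeV, cos 2θ ≈ 0.4; these inputs are my assumptions, not fixed by the text) gives a density comparable to the solar core electron density:

```python
from math import sqrt

# Critical electron density for the MSW resonance, N_c = dm^2 cos(2t) / (2 sqrt(2) E G_F).
# Inputs below are ILLUSTRATIVE solar-neutrino values (assumed here).
G_F = 1.166e-5          # Fermi constant [GeV^-2]
dm2 = 7.5e-5 * 1e-18    # Delta m^2: eV^2 -> GeV^2
E = 1e-3                # neutrino energy: 1 MeV, in GeV
cos2theta = 0.4         # roughly the solar mixing value

N_c = dm2 * cos2theta / (2 * sqrt(2) * E * G_F)   # [GeV^3], natural units

# Convert GeV^3 -> cm^-3 using hbar*c = 1.9733e-14 GeV cm
hbar_c = 1.9733e-14
N_c_cm3 = N_c / hbar_c**3
print(f"N_c ~ {N_c_cm3:.2e} electrons/cm^3")
```

The result is of order 10²⁶ cm⁻³, the same order as the solar core electron density (~6·10²⁵ cm⁻³), which is why the resonance can actually be crossed by solar neutrinos of MeV energies.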

In summary, we have learned here that:

1st. There are neutrino oscillations “triggered” by matter. Matter can enhance or enlarge neutrino mixing by “resonance”.

2nd. A high enough matter density can spoil the neutrino mixing (the complementary effect to the previous one).

The MSW effect is particularly important in the field of geoneutrinos and when neutrinos pass through the Earth core or mantle, as well as inside stars or in collapsing stars that will become supernovae. The flavor of neutrino states follows the changes in the matter density!

See you in my next neutrinological post!


LOG#123. Basic Neutrinology(VIII).

There are some indirect constraints/bounds on neutrino masses provided by Cosmology. The most important one comes from the demand that the energy density of the neutrinos should not be too high; otherwise the Universe would have collapsed, and apparently it has not…

Firstly, stable neutrinos with low masses (about m_\nu\leq 1 MeV) make a contribution to the total energy density of the Universe given by:

\rho_\nu=m_{tot}n_\nu

and where the total mass is defined to be the quantity

\displaystyle{m_{tot}=\sum_\nu \dfrac{g_\nu}{2}m_\nu}

Here, the number of degrees of freedom is g_\nu=4(2) for Dirac (Majorana) neutrinos in the framework of the Standard Model. The number density of the neutrino sea is related to the photon number density by entropy conservation (entropy conservation is the key to this important cosmological result!) in the adiabatic expansion of the Universe:

n_\nu=\dfrac{3}{11}n_\gamma

From this, we can derive the relationship between the cosmic relic neutrino background (relic neutrinos from the Big Bang, left over after they lost thermal equilibrium with the photons!), or C\nu B, and the cosmic microwave background (CMB):

T_{C\nu B}=\left(\dfrac{4}{11}\right)^{1/3}T_{CMB}

From the CMB radiation measurements we can obtain the value

n_\gamma=411\; \mbox{photons}\; cm^{-3}

for a perfect Planck blackbody spectrum with temperature

T_{CMB}=2.725\pm0.001 K\approx 2.35\cdot 10^{-4}eV

This CMB temperature implies that the C\nu B temperature should be about

T_{C\nu B}^{theo}=1.95K\approx 0.17meV
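The two numbers above can be checked in a couple of lines (the Boltzmann constant in eV/K is the only external input):

```python
# Relic neutrino temperature from the CMB temperature: T_nu = (4/11)^(1/3) * T_CMB.
k_B = 8.617e-5          # Boltzmann constant [eV/K]
T_CMB = 2.725           # [K]

T_nu = (4 / 11) ** (1 / 3) * T_CMB
print(f"T_CnuB = {T_nu:.3f} K = {T_nu * k_B * 1e3:.3f} meV")
# -> about 1.95 K, i.e. ~0.17 meV, as quoted above
```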

Remark: if you do change the number of neutrino degrees of freedom you also change the temperature of the C\nu B and the quantity of neutrino “hot dark matter” present in the Universe!

Moreover, the neutrino density parameter \Omega_\nu is related to the neutrino energy density and the critical density as follows:

\Omega_\nu=\dfrac{\rho_\nu}{\rho_c}

and where the critical density is

\rho_c=\dfrac{3H_0^2}{8\pi G_N}

When neutrinos “decouple” from the primordial plasma, they lose thermal equilibrium. For neutrinos that are nonrelativistic today, m_\nu>>T, we then get

\Omega_\nu h^2=\dfrac{m_{tot}}{94\; eV}\approx 10^{-2}\left(\dfrac{m_{tot}}{eV}\right)

with h the reduced Hubble constant. Recent analyses provide h\approx 0.67-0.71 (PLANCK/WMAP).
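This relation turns any bound on Ω_ν h² directly into a bound on the total neutrino mass. A minimal sketch, using the standard relic-abundance coefficient 1/(94 eV) (the ~10⁻² factor above) and the Ω_ν h² ≤ 0.1 bound derived further below:

```python
# Total neutrino mass bound from Omega_nu h^2 = m_tot / (94 eV).
omega_nu_h2_max = 0.1              # the cosmological bound used later in the text
m_tot_max = 94 * omega_nu_h2_max   # [eV]
print(f"m_tot <~ {m_tot_max:.1f} eV")
```

The result, about 9 eV, is exactly the 8-10 eV figure quoted below for the total HDM neutrino mass.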

There is another useful requirement for the neutrino density in Cosmology. It comes from the requirements of BBN (Big Bang Nucleosynthesis). I talked about this in my Cosmology thread. Galactic structure and large scale observations also provide increasing evidence that the matter density is:

\Omega_Mh^2\approx 0.05-0.2

These values are obtained through the use of the luminosity-density relations, galactic rotation curves and the observation of large scale flows. Here, \Omega_M is the total mass density of the Universe as a fraction of the critical density \rho_c. This \Omega_M includes radiation (photons), baryons and non-baryonic “cold dark matter” (CDM) and “hot dark matter” (HDM). The first two components in the decomposition of \Omega_M

\Omega_M=\Omega_r+\Omega_b+\Omega_{HDM}+\Omega_{CDM}

are rather well known. The photon density is

\Omega_rh^2=\Omega_\gamma h^2=2.471\cdot 10^{-5}

The deuterium abundance can be extracted from the BBN predictions and compared with the deuterium abundances in the interstellar medium. It shows that:

0.017\leq\Omega_Bh^2\leq 0.021

The HDM component is formed by relativistic long-lived particles with masses less than about 1 keV. In the SM framework, the only HDM components are the neutrinos!

The simulations of structure formation made with (super)computers fit the observations ONLY when one has about 20% of HDM plus 80% of CDM. A stunning surprise, certainly! Some of the best fits correspond to neutrinos with a total mass of about 4.7 eV, well above the current neutrino mass bounds. We can evade this apparent contradiction if we suppose that there are some sterile neutrinos out there. However, the latest cosmological data by PLANCK have decreased the enthusiasm for this alternative. The apparent conflict between theoretical and observational cosmology can be caused either by imprecise measurements or by our misunderstanding of fundamental particle physics. Anyway, observations of distant objects (with high redshift) favor a large cosmological constant instead of the Hot Dark Matter hypothesis. Therefore, we are forced to conclude that the HDM part of \Omega_M does not exceed 0.2. Requiring that \Omega_\nu <\Omega_M, we get \Omega_\nu h^2\leq 0.1. Using the relationship with the total mass density, we can deduce that the total neutrino mass (or HDM in the SM) is about

m_{tot}\leq 8-10\; eV or less!

Mass limits, in this case lower limits, for heavy or superheavy neutrinos (M_N\sim 1 GeV or higher) can also be obtained along the same lines of reasoning. The puzzle becomes very different if the neutrinos were “unstable” particles. One then gets joint bounds on mass and lifetime, and from them, we deduce limits that can supersede the limits seen above.

There is another interesting limit to the density of neutrinos (or weakly interacting dark matter in general) that comes from the amount of accumulated “density” in the halos of astronomical objects. This is called the Tremaine-Gunn limit. Up to numerical prefactors, and with the simplest case where the halo is a singular isothermal sphere with \rho\propto r^{-2}, the reader can easily check that

\rho=\dfrac{\sigma^2}{2\pi G_Nr^2}

Imposing the phase space bound at radius r then gives the lower bound

m_\nu>(2\pi)^{-5/8}\left(\dfrac{h_P^3}{G_N\sigma r^2}\right)^{1/4}

This bound yields m_\nu\geq 33 eV. This is the Tremaine-Gunn bound. It is based on the idea that neutrinos form an important part of galactic halos and it uses the phase-space restriction from the Fermi-Dirac distribution to get a lower limit on the neutrino mass. I urge you to consult the literature or google to gather more information about this tool and its reliability.

Remark: The singular isothermal sphere is probably a good model where the rotation curve produced by the dark matter halo is flat, but it certainly breaks down at small radius. Because the neutrino mass bound is stronger for smaller \sigma r^2, the uncertainty in the halo core radius (interior to which the mass density saturates) limits the reliability of this neutrino mass bound. However, some authors take it seriously! As Feynman used to say, everything depends on the prejudices you have!
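The reader can plug numbers into the Tremaine-Gunn formula directly. The halo parameters below (σ = 150 km/s, core radius r = 5 kpc) are illustrative choices of mine, picked only to show that reasonable galactic values land in the tens-of-eV range:

```python
from math import pi

# Tremaine-Gunn bound: m_nu > (2 pi)^(-5/8) * (h^3 / (G sigma r^2))^(1/4),
# for a singular isothermal halo. sigma and r are ILLUSTRATIVE values (assumed).
G = 6.674e-11           # Newton constant [m^3 kg^-1 s^-2]
h_P = 6.626e-34         # Planck constant [J s]
c = 2.998e8             # speed of light [m/s]
sigma = 1.5e5           # velocity dispersion: 150 km/s (assumed)
r = 5 * 3.086e19        # halo core radius: 5 kpc (assumed)

m_kg = (2 * pi) ** (-5 / 8) * (h_P ** 3 / (G * sigma * r ** 2)) ** 0.25
m_eV = m_kg * c ** 2 / 1.602e-19   # convert kg -> eV via E = m c^2
print(f"m_nu > ~{m_eV:.0f} eV")
```

For these parameters the bound comes out around 33 eV; the sensitivity goes only as (σr²)^(-1/4), so order-of-magnitude changes in the halo parameters move the bound rather mildly.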

The abundance of additional weakly interacting light particles, such as a light sterile neutrino \nu_s or additional relativistic degrees of freedom uncharged under the Standard Model, can change the number of relativistic degrees of freedom g_\nu. Sometimes you will hear about the number N_{eff}. The recently released Planck data have decreased the hopes that we would find some additional relativistic degree of freedom that could mimic neutrinos. It is also constrained by BBN and the deuterium abundances we measure from astrophysical objects. Any sterile neutrino or extra relativistic degree of freedom would enter into equilibrium with the active neutrinos via neutrino oscillations! A limit on the mass difference and mixing angle with an active neutrino of the type

\Delta m^2\sin^2 2\theta\leq 3\cdot 10^{-6}eV^2

should in principle be satisfied. From here, it can be deduced that the effective number of neutrino species allowed by neutrino oscillations is in fact a little higher than the 3 light neutrinos we know from the Z-width bound:

N_\nu (eff)<3.5-4.5

PLANCK data indeed suggest that N_\nu (eff)< 3.3. However, systematic uncertainties in the derivation of the BBN bound make it less reliable, so it should be taken with care.


LOG#107. Basic Cosmology (II).


Evolution of the Universe: the scale factor

The Universe expands, and its expansion rate is given by the Hubble parameter (not constant in general!)

\boxed{H(t)\equiv \dfrac{\dot{a}(t)}{a(t)}}

Remark (I): The Hubble “parameter” is only “constant” when evaluated at a given time/cosmological age, e.g., its present value H_0=H(t_0).

Remark (II): The Hubble time defines a Hubble length L_H=H^{-1} (in natural units), and it sets the time scale of the Universe and its expansion “rate”.

The critical density of matter is a vital quantity as well (with \kappa^2=8\pi G_N):

\boxed{\rho_c=\dfrac{3H^2}{\kappa^2}\vert_{t_0}}

We can also define the density parameters

\Omega_i=\dfrac{\rho_i}{\rho_c}\vert_{t_0}

This quantity represents the fractional contribution of each particle species. The total composition of the Universe is the total density, or equivalently, the sum over all particle species of the density parameters, that is:

\boxed{\displaystyle{\Omega=\sum_i\Omega_i=\dfrac{\displaystyle{\sum_i\rho_i}}{\rho_c}}}

There is a nice correspondence between the sign of the curvature k and that of \Omega-1. Using the Friedmann’s equation

\displaystyle{\dfrac{\dot{a}^2}{a^2}+\dfrac{k}{a^2}=\dfrac{\kappa^2}{3}\sum_i\rho_i}

then we have

\dfrac{k}{H^2a^2}=\dfrac{\displaystyle{\sum_i\rho_i}}{\rho_c}-1=\Omega-1

Thus, we observe that

1st. \Omega>1 if and only if (iff) k=+1, i.e., iff the Universe is spatially closed (spherical/elliptical geometry).

2nd. \Omega=1 if and only if (iff) k=0, i.e., iff the Universe is spatially “flat” (euclidean geometry).

3rd. \Omega<1 if and only if (iff) k=-1, i.e., iff the Universe is spatially “open” (hyperbolic geometry).

In the early Universe, the curvature term is negligible (as far as we know). The reason is as follows:

k/a^2\propto a^{-2}<<\dfrac{\kappa^2\rho}{3}\propto a^{-3}\;(MD),\; a^{-4}\;(RD) as a goes to zero. MD means matter dominated Universe, and RD means radiation dominated Universe. Then, the Friedmann equation at early times is given by

\boxed{H^2=\dfrac{\kappa^2}{3}\rho}

Furthermore, the evolution of the curvature term

\Omega_k\equiv \Omega-1

is given by

\Omega-1=\dfrac{k}{H^2a^2}\propto \dfrac{1}{\rho a^2}\propto a\;(MD),\; a^2\;(RD)

and thus

\vert \Omega-1\vert=\begin{cases}(1+z)^{-1}, \mbox{if MD}\\ 10^4(1+z)^{-2}, \mbox{if RD}\end{cases}

The spatial curvature will be given by

\boxed{R_{(3)}=\dfrac{6k}{a^2}=6H^2(\Omega-1)}

and the curvature radius will be

\boxed{R=a\vert k\vert ^{-1/2}=H^{-1}\vert \Omega-1\vert ^{-1/2}}

We have arrived at the interesting result that the early Universe was nearly “critical”: a Universe close to the critical density is very flat!

On the other hand, supposing that a_0=1, we can integrate the Friedmann equation easily:

\boxed{\displaystyle{\left(\dfrac{\dot{a}}{a}\right)^2+\dfrac{k}{a^2}=\dfrac{\kappa^2}{3}\sum_i\rho_i=\dfrac{\kappa^2}{3}\sum_i\rho_i(0)a^{-3(1+\omega_i)}}}

Then, we obtain

\dot{a}^2=H_0^2\left[-\Omega_k+\sum_i\Omega_ia^{-1-3\omega_i}\right]

We can make an analogy of this equation to certain simple equation from “newtonian Mechanics”:

\dfrac{\dot{a}^2}{2}+V(a)=0

Therefore, if we identify terms, we get that the density parameters work as “potential”, with

\displaystyle{V(a)=\dfrac{1}{2}H_0^2\left[\Omega_k-\sum_i\Omega_ia^{-1-3\omega_i}\right]}

and the total energy is equal to zero (a “machian” behaviour indeed!). In addition to this equation, we also get

\boxed{\displaystyle{H_0t=\int_0^a\left[-\Omega_k+\sum_i\Omega_i\chi^{-1-3\omega_i}\right]^{-1/2}d\chi}}

The age of the Universe can be easily calculated (symbolically and algebraically):

\boxed{t_0=H_0^{-1}f(\Omega_i)}

with

f(\Omega_i)=\int_0^1\left[-\Omega_k+\sum_i\Omega_i\chi^{-1-3\omega_i}\right]^{-1/2}d\chi
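This integral is easy to evaluate numerically. A sketch for a flat ΛCDM toy model, with Ω_m = 0.3, Ω_Λ = 0.7, Ω_k = 0 and H₀ = 70 km/s/Mpc as illustrative (assumed) inputs:

```python
from math import sqrt

# t_0 = H_0^-1 * f(Omega_i), with f the integral above. For a flat model Omega_k = 0,
# matter has w = 0 (term Om/x) and the cosmological constant has w = -1 (term OL*x^2).
Om, OL = 0.3, 0.7

def integrand(x):
    # [sum_i Omega_i x^(-1-3w_i)]^(-1/2)
    return 1.0 / sqrt(Om / x + OL * x * x)

# Midpoint rule on (0, 1]; the integrand vanishes smoothly as x -> 0.
N = 200_000
f = sum(integrand((i + 0.5) / N) for i in range(N)) / N

H0_inv_Gyr = 13.97      # 1/H_0 in Gyr for H_0 = 70 km/s/Mpc (assumed value)
print(f"f = {f:.4f}, t_0 = {f * H0_inv_Gyr:.2f} Gyr")
```

The result, f ≈ 0.96 and t₀ ≈ 13.5 Gyr, agrees with the measured age of the Universe.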

This equation can be evaluated for some general and special cases. If we write p=\omega \rho for a single component, then

a\propto t^{\frac{2}{3(1+\omega)}} if \omega\neq -1

Moreover, 3 common cases arise:

1) Matter dominated Universe (MD): a\propto t^{2/3}

2) Radiation dominated Universe (RD): a\propto t^{1/2}

3) Vacuum dominated Universe (VD): a\propto e^{H_0t} (\omega=-1 for the cosmological constant, vacuum energy or dark energy).

THE MATTER CONTENT OF THE UNIVERSE

We can find out how much energy is contributed by the different components of the Universe, i.e., by the different density parameters.

Case 1. Photons.

The CMB temperature gives us “photons” with T_\gamma=2\mbox{.}725\pm 0\mbox{.}002K

The associated energy density is given by the Planck law of the blackbody, that is

\rho_\gamma=\dfrac{\pi^2}{15}T^4 and \mu/T<9\cdot 10^{-5}

or equivalently

\Omega_\gamma=\Omega_r=\dfrac{2\mbox{.}47\cdot 10^{-5}}{h^2a^4}
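The numerical coefficient 2.47·10⁻⁵ can be recovered from the CMB temperature with SI constants; a minimal sketch (using the radiation constant a_rad, and h = 1 so that the output is Ω_γh²):

```python
from math import pi

# Omega_gamma h^2 from the blackbody energy density versus the critical density.
G = 6.674e-11                   # [m^3 kg^-1 s^-2]
c = 2.998e8                     # [m/s]
a_rad = 7.5657e-16              # radiation constant [J m^-3 K^-4]
T = 2.725                       # CMB temperature [K]
H0 = 1e5 / 3.0857e22            # 100 km/s/Mpc in s^-1, i.e. h = 1

rho_gamma = a_rad * T ** 4 / c ** 2      # photon mass density [kg/m^3]
rho_c = 3 * H0 ** 2 / (8 * pi * G)       # critical density [kg/m^3]
print(f"Omega_gamma h^2 = {rho_gamma / rho_c:.3e}")   # -> ~2.47e-5
```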

Case 2. Baryons.

There are four established ways of measuring the baryon density:

i) Baryons in galaxies: \Omega_b\sim 0\mbox{.}02

ii) Baryons through the spectra of distant quasars: \Omega_b h^{1\mbox{.}5}\approx 0\mbox{.}02

iii) CMB anisotropies: \Omega_bh^2=0\mbox{.}024^{+0\mbox{.}004}_{-0\mbox{.}003}

iv) Big Bang Nucleosynthesis: \Omega_bh^2=0\mbox{.}0205\pm 0\mbox{.}0018

Note that these results are “globally” compatible!

Case 3. (Dark) Matter/Dust.

Galactic rotation curves become “flat” beyond some cut-off radius, and the inferred mass-to-light (M/L) ratio provides a value of about \Omega_m=0\mbox{.}3. The same reasoning works for clusters and other bigger structures. Moreover, the galaxy power spectrum is sensitive to \Omega_m h. It also gives \Omega_m\sim 0\mbox{.}2. On the other hand, the cosmic velocity field of galaxies allows us to derive \Omega_m\approx 0\mbox{.}3 as well. Finally, the CMB anisotropies give us the puzzling values:

\Omega_m\sim 0\mbox{.}25

\Omega_b\sim 0\mbox{.}05

We are forced to accept that either our cosmological and gravitational theory is flawed, or the main component of “matter” is not of baryonic nature: it does not radiate electromagnetic radiation AND the Standard Model of Particle Physics has no particle candidate (matter field) to fit that non-baryonic dark matter. It could be partially formed by neutrinos, but we already know that it can NOT be fully formed by neutrinos (hot dark matter). What is dark matter? We don’t know. Some candidates from beyond the Standard Model: the axion, new (likely massive or sterile) neutrinos, supersymmetric particles (the lightest supersymmetric particle, the LSP, is known to be stable: the gravitino, the zino, the neutralino,…), ELKO particles, continuous spin particles, unparticles, preons, new massive gauge bosons, or something even stranger that we have not thought of yet! Of course, you could modify gravity at large scales to erase the need for dark matter, but it seems it is not easy at all to build a working Modified Gravitational theory or Modified Newtonian (Einsteinian) dynamics that avoids the need for dark matter. MOND, MOG and similar ideas are interesting, but they are not thought to be the “optimal” solution at the current time. Maybe gravitons and quantum gravity are in the air of these dark issues? We don’t know…

Case 4. Neutrinos.

They are NOT directly observed, but we understand their physics, at least in the Standard Model and the electroweak sector. We also know they suffer flavor “oscillations” (as kaons do). The (cosmic) neutrino temperature can be determined and related to the CMB temperature. The idea is simple: after neutrino decoupling in the early Universe, electron-positron annihilation dumped (density) entropy into the photons, but not into the neutrinos. It causes a difference between the neutrino and photon temperatures “today”. Please, note that we are talking about “relic” neutrinos and photons from the Big Bang! The (density) entropy before annihilation was:

s(a_1)=\dfrac{2\pi^2}{45}T_1^3\left[2+\dfrac{7}{8}(2\cdot 2+3\cdot 2)\right]=\dfrac{43}{90}\pi^2 T_1^3

After the annihilation, we get

s(a_2)=\dfrac{2\pi^2}{45}\left[2T_\gamma^3+\dfrac{7}{8}(3\cdot 2)T_\nu^3\right]

Therefore, equating

s(a_1)a_1^3=s(a_2)a_2^3 and a_1T_1=a_2T_\nu (a_2)

\dfrac{43}{90}\pi^2(a_1T_1)^3=\dfrac{2\pi^2}{45}\left[2\left(\dfrac{T_\gamma}{T_\nu}\right)^3+\dfrac{42}{8}\right](a_2T_\nu (a_2))^3

\dfrac{43}{2}\pi^2(a_1T_1)^3=2\pi^2\left[2\left(\dfrac{T_\gamma}{T_\nu}\right)^3+\dfrac{42}{8}\right](a_2T_\nu (a_2))^3

and then

\boxed{\left(\dfrac{T_\nu}{T_\gamma}\right)=\left(\dfrac{4}{11}\right)^{1/3}}

or equivalently

\boxed{T_\nu=\sqrt[3]{\dfrac{4}{11}}T_\gamma\approx 1\mbox{.}9K}
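The algebra of the entropy-matching step above reduces to solving 43/4 = 2(T_γ/T_ν)³ + 21/4, which takes two lines:

```python
# Solve the entropy-conservation relation 43/4 = 2 (T_gamma/T_nu)^3 + 21/4
# for the photon-to-neutrino temperature ratio after e+ e- annihilation.
x_cubed = (43 / 4 - 21 / 4) / 2      # (T_gamma/T_nu)^3 = 11/4
ratio = x_cubed ** (-1 / 3)          # T_nu / T_gamma = (4/11)^(1/3)

print(f"T_nu/T_gamma = {ratio:.4f}")          # -> 0.7138
print(f"T_nu = {ratio * 2.725:.3f} K")        # -> ~1.945 K
```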

In fact, the neutrino energy density can be given in two different ways, depending on whether the neutrinos are “massless” (i.e. relativistic) or “massive”:

1) Massless neutrinos: \Omega_\nu=\dfrac{1\mbox{.}68\cdot 10^{-5}}{h^2}

2) Massive neutrinos: \Omega_\nu= \dfrac{\sum m_\nu}{94h^2 \; eV}

Case 5. The dark energy/Cosmological constant/Vacuum energy.

The budget of the Universe provides (from cosmological and astrophysical measurements) the shocking result

\Omega\approx 1 with \Omega_M\approx 0\mbox{.}3

Then, there is some missing smooth, unclustered energy-matter “form”/“species”. It is the “dark energy”/vacuum energy/cosmological constant! It can be understood as a “special” pressure term in the Einstein’s equations, but one with NEGATIVE pressure! Evidence for this observation comes from luminosity-distance-redshift measurements of SNe, clusters, and the CMB spectrum! The cosmological constant/vacuum energy/dark energy dominates the Universe today, since, it seems, we live in a (positively!) accelerated Universe! What can dark energy be? It can not be a “normal” matter field. As with dark matter, we believe that (excepting perhaps the scalar Higgs field/s) the SM has no candidate to explain dark energy. What field could dark energy be? Perhaps a scalar field, or something totally new and “unknown” yet.

In short, we live in a DARK, darkly Universe! Darkness is NOT coming; darkness has arrived and, if nothing changes, it will turn our local Universe even darker and darker!

See you in the next cosmological post!


LOG#057. Naturalness problems.


In this short blog post, I am going to list some of the greatest “naturalness” problems in Physics. It has nothing to do with some delicious natural dishes I like, but there is a natural beauty and sweetness related to naturalness problems in Physics. In fact, they include some hierarchy problems and additional problems related to stunning free values of parameters in our theories.

Naturalness problems arise when the “naturally expected” property of some free parameters or fundamental “constants”, namely that they appear as quantities of order one, is violated, and those parameters or constants instead appear as very large or very small quantities. That is, naturalness problems are problems of unexplained tunings of “scales” of length, energy, field strength, … A value of 0.99 or 1.1, or even 0.7 and 2.3, is “more natural” than, e.g., 100000, 10^{-4},10^{122}, 10^{23},\ldots Equivalently, imagine that the value of every fundamental and measurable physical quantity X lies in the real interval \left[ 0,\infty\right). Then, 1 (or values very close to it) is a “natural” value of the parameter, while the two extrema 0 and \infty are “unnatural”. As we do know, in Physics, zero values are usually explained by some “fundamental symmetry”, while extremely large parameters or even \infty can be shown to be “unphysical” or “unnatural”. In fact, renormalization in QFT was invented to avoid quantities that are “infinite” at first sight, and regularization provides prescriptions to assign finite values to quantities that are formally ill-defined or infinite. However, naturalness goes beyond these comments, and it arises in very different scenarios and physical theories. It is quite remarkable that naturalness can be framed around 3 of the most important “numbers” in Mathematics:

(0, 1, \infty)

REMEMBER: Naturalness of X is, thus, being 1 or close to it, while values approaching 0 or \infty are unnatural. Therefore, if some day you hear a physicist talking/speaking/lecturing about “naturalness”, remember the triple (0,1,\infty) and then assign “some magnitude/constant/parameter” a quantity close to one of those numbers. If it approaches 1, the parameter is natural, and unnatural if it approaches either of the other two numbers, zero or infinity!

I have never seen a systematic classification of naturalness problems into types. I am going to do it here today. We could classify naturalness problems into:

1st. Hierarchy problems. They are naturalness problems related to the mass/energy spectrum or the energy scales of interactions and fundamental particles.

2nd. Nullity/Smallness problems. These are naturalness problems related to free parameters which are, surprisingly, close to zero/null value, even when we have no knowledge of a deep reason to understand why it happens.

3rd. Large number problems (or hypotheses). This class of problems can equivalently be thought of as the reciprocals of nullity problems, but they arise naturally by themselves in cosmological contexts, when we consider a large number of particles (e.g., in “statistical physics”), or when we compare two theories in very different “parameter spaces”. Dirac pioneered this class of hypotheses when he noticed some large number coincidences relating quantities appearing in particle physics and cosmology. His Dirac large number hypothesis is also an old example of this kind of naturalness problem.

4th. Coincidence problems. This 4th type of problems is related to why some different parameters of the same magnitude are similar in order of magnitude.

The following list of concrete naturalness problems is not going to be complete, but it can serve as a guide of what theoretical physicists are trying to understand better:

1. The little hierarchy problem. From the phenomenon called neutrino oscillations (NO) and neutrino oscillation experiments (NOSEX), we can know the difference between the squared masses of neutrinos. Furthermore, cosmological measurements allow us to put tight bounds on the total mass (energy) of light neutrinos in the Universe. The most conservative estimations give m_\nu \leq 10 eV, and even m_\nu \sim 1 eV as an upper bound is quite likely to be true. On the other hand, NOSEX seems to say that there are two mass differences, \Delta m^2_1\sim 10^{-3}\;eV^2 and \Delta m^2_2\sim 10^{-5}\;eV^2. However, we don’t yet know what kind of spectrum neutrinos have (normal, inverted or quasi-degenerate). Taking a neutrino mass of about 1 meV as a reference, the little hierarchy problem is the question of why neutrino masses are so light when compared with the remaining leptons, quarks and gauge bosons (excepting, of course, the gluon and photon, massless due to gauge invariance).

Why is m_\nu << m_e,m_\mu, m_\tau, m_Z,M_W, m_{proton}?

We don’t know! Let me quote a wonderful sentence of a very famous short story by Asimov to describe this result and problem:

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

2. The gauge hierarchy problem. The electroweak (EW) scale can be generally represented by the Z or W boson mass scale. Interestingly, from this summer’s results, the Higgs boson mass seems to be, more or less, of the same order of magnitude as the gauge boson masses. Then, the electroweak scale is about M_Z\sim M_W \sim \mathcal{O} (100GeV), and likely also of the order of the Higgs mass. On the other hand, the Planck scale, where we expect (naively or not, that is another question!) quantum effects of gravity to naturally arise, is provided by the Planck mass scale:

M_P=\sqrt{\dfrac{\hbar c}{8\pi G}}=2.4\cdot 10^{18}GeV=2.4\cdot 10^{15}TeV

or more generally, dropping the 8\pi factor

M_P =\sqrt{\dfrac{\hbar c}{G}}=1.22\cdot 10^{19}GeV=1.22\cdot 10^{16}TeV
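Both values can be reproduced directly from the SI constants (this is only a unit-conversion sketch, converting kg to GeV through E = mc²):

```python
from math import sqrt, pi

# Planck mass (and its "reduced" variant) from SI constants, expressed in GeV.
hbar = 1.054571817e-34   # [J s]
c = 2.99792458e8         # [m/s]
G = 6.67430e-11          # [m^3 kg^-1 s^-2]
J_per_GeV = 1.602176634e-10

M_P = sqrt(hbar * c / G) * c ** 2 / J_per_GeV   # ~1.22e19 GeV
M_P_reduced = M_P / sqrt(8 * pi)                # ~2.4e18 GeV (8 pi dropped into the sqrt)
print(f"M_P = {M_P:.3e} GeV, reduced M_P = {M_P_reduced:.3e} GeV")
```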

Why is the EW mass (energy) scale so small compared to the Planck mass, i.e., why are the masses M_{EW}<<M_P so different? The problem is hard, since we do know that EW masses, e.g., for scalar particles like the Higgs (not protected by any SM gauge symmetry), should receive quantum contributions of order \mathcal{O}(M_P^2).

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

3. The cosmological constant (hierarchy) problem. The cosmological constant \Lambda, from the so-called Einstein’s field equations of classical relativistic gravity

\mathcal{R}_{\mu\nu}-\dfrac{1}{2}g_{\mu\nu}\mathcal{R}=8\pi G\mathcal{T}_{\mu\nu}+\Lambda g_{\mu\nu}

is estimated to be about \mathcal{O} (10^{-47})GeV^4 from the cosmological fitting procedures. The Standard Cosmological Model, with the CMB and other parallel measurements like large scale structures or supernovae data, agree with such a cosmological constant value. However, in the framework of Quantum Field Theories, it should receive quantum corrections coming from vacuum energies of the fields. Those contributions are unnaturally big, about \mathcal{O}(M_P^4) or in the framework of supersymmetric field theories, \mathcal{O}(M^4_{SUSY}) after SUSY symmetry breaking. Then, the problem is:

Why is \rho_\Lambda^{obs}<<\rho_\Lambda^{th}? Even with TeV or PeV fundamental SUSY (or higher) we have a serious mismatch here! The mismatch is about 60 orders of magnitude even in the best known theory! And it is about 122-123 orders of magnitude if we directly compare the vacuum energy we observe with the cosmological constant we calculate (naively or not) with our current best theories using QFT or supersymmetric QFT! Then, this problem is a hierarchy problem and a large number problem as well. Again, and sadly, we don’t know why there is such a big gap between mass scales of the same thing! This is the biggest problem in theoretical physics and one of the worst predictions/failures in the history of Physics. However,

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”
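The famous "~123 orders of magnitude" figure follows from comparing the naive Planck-scale estimate with the observed value quoted above (a back-of-the-envelope sketch; the ρ_obs value is the rough 10⁻⁴⁷ GeV⁴ from the text):

```python
from math import log10

# Order-of-magnitude mismatch of the cosmological constant problem:
# naive QFT estimate ~ M_P^4 versus the observed ~1e-47 GeV^4.
M_P = 1.22e19                 # Planck mass [GeV]
rho_obs = 1e-47               # observed vacuum energy density [GeV^4]

rho_th = M_P ** 4             # naive Planck-scale estimate [GeV^4]
print(f"mismatch: ~{log10(rho_th / rho_obs):.0f} orders of magnitude")  # -> ~123
```

Using the reduced Planck mass instead shaves off about three orders of magnitude, which is why both "~120" and "~123" appear in the literature.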

4. The strong CP problem/puzzle. From neutron electric dipole moment measurements, theoretical physicists can constrain the so-called \theta-angle of QCD (Quantum Chromodynamics). The theta angle gives an extra contribution to the QCD lagrangian:

\mathcal{L}_{\mathcal{QCD}}\supset -\dfrac{1}{4g_s^2}G_{\mu\nu}G^{\mu\nu}+\dfrac{\theta}{32\pi^2}G^{\mu\nu}\tilde{G}_{\mu\nu}

The theta angle is not provided by the SM framework and it is a free parameter. Experimentally,

\theta <10^{-10}

while, from the theoretical side, it could be any number in the interval \left[-\pi,\pi\right]. Why is \theta so close to the zero/null value? That is the strong CP problem! Once again, we don’t know. Perhaps a new symmetry?

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

5. The flatness problem/puzzle. In the Standard Cosmological Model, also known as the \Lambda CDM model, the curvature of the Universe is related to the critical density and the Hubble “constant”:

\dfrac{1}{R^2}=H^2\left(\dfrac{\rho}{\rho_c}-1\right)

There, \rho is the total energy density contained in the whole Universe and \rho_c=\dfrac{3H^2}{8\pi G} is the so-called critical density. The flatness problem arises when we deduce from cosmological data that:

\left(\dfrac{1}{R^2}\right)_{data}\sim 0.01

At the Planck scale era, we can even calculate that

\left(\dfrac{1}{R^2}\right)_{Planck\;\; era}\sim\mathcal{O}(10^{-61})

This result means that the Universe is “flat”. However, why did the Universe have such a small curvature? Why is the current curvature still “small”? We don’t know. However, cosmologists working on this problem say that “inflation” and “inflationary” cosmological models can (at least in principle) solve this problem. There are even more radical (and stranger) theories, such as varying speed of light theories, trying to explain this, but they are less popular than inflationary cosmologies/theories. Indeed, inflationary theories are popular because they include scalar fields, similar in Nature to the scalar particles that arise in the Higgs mechanism and other beyond the Standard Model (BSM) theories. We don’t know if inflation theory is right yet, so

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

6. The flavour problem/puzzle. The ratios of successive SM fermion mass eigenvalues (the electron, muon, and tau), as well as the angles appearing in the gadget called the CKM (Cabibbo-Kobayashi-Maskawa) matrix, are roughly of the same order of magnitude. The pattern is harder to establish (but it is likely to hold as well) for constituent quark masses. However, why do they follow this particular pattern/spectrum and structure? Even more, there is a mysterious lepton-quark complementarity. The analogue in the leptonic sector of the CKM matrix is called the PMNS matrix (Pontecorvo-Maki-Nakagawa-Sakata matrix) and it describes the neutrino oscillation phenomenology. It turns out that the angles of the PMNS matrix are roughly complementary to those in the CKM matrix (remember that two angles are said to be complementary when they add up to 90 sexagesimal degrees). What is the origin of this lepton(neutrino)-quark(constituent) complementarity? In fact, the two questions are related since, roughly speaking, the mixing angles are related to the ratios of masses (quarks and neutrinos). Therefore, this problem, if solved, could shed light on the issue of the particle spectrum, or at least it could help us understand the relationship between quark masses and neutrino masses. Of course, we don’t know how to solve this puzzle at the current time. And once again:

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

7. Cosmic matter-dark energy coincidence. At the current time, the densities of matter and vacuum energy are roughly of the same order of magnitude, i.e., \rho_M\sim\rho_\Lambda=\rho_{DE}. Why now? We do not know!

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

And my weblog is only just beginning! See you soon in my next post! 🙂