LOG#126. Basic Neutrinology (XI).
Posted: 2013/07/22 Filed under: Basic Neutrinology, Physmatics, The Standard Model: Basics | Tags: IceCube, LBE, long baseline experiments, neutrino beam experiments, neutrino masses and lepton asymmetry, neutrino mixing, neutrino oscillation experiments, neutrino oscillations, neutrino oscillations in matter, neutrino oscillations in vacuum, neutrino telescopes, neutrinology, NOCILLA, NOSEX, reactor experiments, right-handed neutrinos, SBE, short baseline experiments, sterile neutrinos

Why is the case of massive neutrinos so relevant in contemporary physics? The full answer to this question would be very long. In fact, I am writing this long thread about neutrinology precisely so that you can understand it a little better. If neutrinos do have nonzero masses, then, due to the basic postulates of quantum theory, there will be a "linear combination" or "mixing" among all the possible "states". It also happens with quarks! This mixing will be observable even at macroscopic distances from the production point or source, and it has very important practical consequences ONLY if the differences of the squared neutrino masses are very small. Mathematically speaking, $\Delta m_{ij}^2=m_i^2-m_j^2$. Typically, $\Delta m^2\lesssim 1\,\mbox{eV}^2$, but some "subtle details" can increase this upper bound up to the keV scale (in the case of sterile or right-handed neutrinos, undetected till now).
In the presence of neutrino masses, the so-called "weak eigenstates" are different from the "mass eigenstates". There is a "transformation" or "mixing"/"oscillation" between them. This phenomenon is described by some unitary matrix $U$. The idea is:

$$\mbox{mass eigenstates}\;\longleftrightarrow\;\mbox{weak eigenstates}$$

If neutrinos can only be created and detected as a result of weak processes, at origin (or any arbitrary point) we have a weak eigenstate as a "rotation" of a mass eigenstate through the mixing matrix $U$:

$$\vert \nu_\alpha\rangle=\sum_i U_{\alpha i}\,\vert \nu_i\rangle$$
In this post, I am only going to introduce the elementary theory of neutrino oscillations (NO or NOCILLA)/neutrino mixing (NOMIX) from a purely heuristic viewpoint. I will be using natural units with $\hbar=c=1$.
If we ignore the effects of the neutrino spin, after some time the system will evolve into the following state (recall that we use elementary Hamiltonian evolution from quantum mechanics here):

$$\vert \nu_\alpha (t)\rangle=e^{-iHt}\,\vert \nu_\alpha (0)\rangle=\sum_i U_{\alpha i}\,e^{-iE_i t}\,\vert \nu_i\rangle$$

and where $H$ is the free Hamiltonian of the system, i.e., in vacuum. It will be characterized by certain eigenvalues

$$H\vert \nu_i\rangle=E_i\vert \nu_i\rangle$$

and here, using special relativity, we write

$$E_i=\sqrt{p^2+m_i^2}$$

In most of the interesting cases (when $m_i\ll E_i$ and $p\gg m_i$), this relativistic dispersion relationship can be approximated by the next expression (it is the celebrated "ultra-relativistic" approximation):

$$E_i\approx p+\dfrac{m_i^2}{2p}$$

The effective neutrino Hamiltonian can be written as

$$H_{eff}\approx p+\dfrac{M^2}{2p}$$

and

$$\vert \nu_\alpha (t)\rangle\approx e^{-ipt}\sum_i U_{\alpha i}\,e^{-i\frac{m_i^2}{2p}t}\,\vert \nu_i\rangle$$

In this last equation, we write

$$t\approx L$$

with $L$ the distance traveled by the (ultra-relativistic) neutrino between the source and the detection point.
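To make the formulas above concrete, here is a minimal numerical sketch (in Python) of this evolution in the two-flavor case. All the concrete numbers (mixing angle, mass-squared splitting, energy, baseline) are illustrative assumptions, and the meter-to-$\mbox{eV}^{-1}$ conversion uses $\hbar c\approx 197.33\,\mbox{MeV}\cdot\mbox{fm}$:

```python
import numpy as np

# Natural units, hbar = c = 1. All concrete numbers are illustrative.
theta = 0.5875                  # two-flavor mixing angle (radians), assumed
m2 = np.array([0.0, 7.6e-5])    # squared mass eigenvalues m_i^2, in eV^2
E = 1.0e6                       # neutrino energy ~ momentum p, in eV (1 MeV)

# Mixing matrix: |nu_alpha> = sum_i U[alpha, i] |nu_i>
U = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

def transition_probability(alpha, beta, L_m):
    """P(nu_alpha -> nu_beta) after traveling L_m meters (t ~ L)."""
    L = L_m * 5.0677e6                       # meters -> 1/eV
    # The common phase exp(-i p t) cancels inside |...|^2, so only the
    # relative phases exp(-i m_i^2 L / 2E) from H_eff matter.
    phases = np.exp(-1j * m2 * L / (2.0 * E))
    amp = np.sum(U[beta, :] * U[alpha, :] * phases)
    return np.abs(amp) ** 2

# Cross-check against the closed two-flavor formula derived below:
L_m = 20.0e3                                 # 20 km baseline, assumed
dm2 = m2[1] - m2[0]
analytic = np.sin(2 * theta)**2 * np.sin(dm2 * (L_m * 5.0677e6) / (4 * E))**2
print(transition_probability(0, 1, L_m), analytic)  # both values coincide
```

The numerical evolution and the closed formula of the next paragraphs give the same number, as they should.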
We can perform this derivation within a more rigorous mathematical framework, but I am not going to do it here today. The resulting theory of neutrino mixing and neutrino oscillations (NO) has a beautiful correspondence with Neutrino OScillation EXperiments (NOSEX). These experiments are usually analyzed under the simplest assumption of two-flavor mixing or, equivalently, from the perspective of neutrino oscillations with only 2 neutrino species, where we can understand the process better. In such a case, the neutrino mixing matrix $U$ becomes a simple 2-dimensional orthogonal rotation matrix depending on a single parameter $\theta$, the oscillation angle. If we repeat all the computations above in this simple case, we find that the probability that a weak interaction eigenstate neutrino $\vert \nu_\alpha\rangle$ has oscillated into another weak interaction eigenstate, say $\vert \nu_\beta\rangle$, when the neutrino travels some distance $L$ (remember we are supposing the neutrinos are "almost" massless, so they move very close to the speed of light) is, taking $p\approx E$ and $t\approx L$,

(1) $$P(\nu_\alpha\rightarrow\nu_\beta)=\sin^2(2\theta)\sin^2\left(\dfrac{\Delta m^2 L}{4E}\right)=\sin^2(2\theta)\sin^2\left(\dfrac{\pi L}{L_{osc}}\right)$$

This important formula describes the probability of NO in the 2-flavor case. It is a very important and useful result! There, we have defined the oscillation length as

$$L_{osc}=\dfrac{4\pi E}{\Delta m^2}$$

with $\Delta m^2=m_2^2-m_1^2$. In practical units, we have

(2) $$L_{osc}=\dfrac{4\pi E}{\Delta m^2}\approx 2.48\,\mbox{m}\;\dfrac{E(\mbox{MeV})}{\Delta m^2(\mbox{eV}^2)}$$
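As a quick numerical reading of formulas (1) and (2), here is a tiny sketch; the sample numbers (a reactor-like 4 MeV neutrino and a solar-scale mass splitting) are illustrative assumptions:

```python
import math

def L_osc_m(E_MeV, dm2_eV2):
    """Oscillation length in meters, formula (2): ~2.48 m x E(MeV)/dm^2(eV^2)."""
    return 2.48 * E_MeV / dm2_eV2

def P_transition(theta, E_MeV, dm2_eV2, L_m):
    """Formula (1): P = sin^2(2 theta) sin^2(pi L / L_osc)."""
    return math.sin(2 * theta)**2 * math.sin(math.pi * L_m / L_osc_m(E_MeV, dm2_eV2))**2

print(L_osc_m(4.0, 7.6e-5))                     # ~1.3e5 m ~ 130 km
print(P_transition(0.59, 4.0, 7.6e-5, 60.0e3))  # probability at L = 60 km
```

Note how, for these reactor-like numbers, the oscillation length already reaches the hundred-kilometer range.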
As you can observe, the probabilities depend on two factors: the mixing (oscillation) angle and the kinematical factor, which is a function of the traveled distance, the momentum (energy) of the neutrinos and the mass difference between the two species. If this mass difference were proven to be non-existent, the phenomenon of neutrino oscillation would not be possible (it would have 0 probability!). To observe neutrino oscillations, we have to study (observe) neutrinos for which some of these parameters are "big", so that the probability is significant. Interestingly, we can have different kinds of neutrino oscillation experiments according to how large these parameters are. Namely:
–Long baseline experiments (LBE). This class of NOSEX happens whenever you have an oscillation length of the order of hundreds of kilometers or bigger. Even the neutrino oscillations of solar neutrinos (neutrinos emitted by the Sun) and of other astrophysical sources can be understood as examples of this class. Neutrino beam experiments belong to this category as well.
-Short baseline experiments (SBE). This class of NOSEX happens whenever the distances the neutrinos travel are less than hundreds of kilometers, perhaps only a few. Of course, the distinction is conventional. Reactor experiments like KamLAND in Japan (or Daya Bay in China, or RENO in South Korea) are experiments of this type.

Moreover, beyond reactor experiments, you also have neutrino beam experiments (T2K, OPERA, …). Neutrino telescopes or detectors like IceCube are the next generation of neutrino "observers" after SuperKamiokande (SuperKamiokande will become HyperKamiokande in the near future, stay tuned!).
In summary, the phenomenon of neutrino mixing/neutrino oscillations/changing neutrino flavor transforms the neutrino into a very special particle under quantum and relativistic theories. Neutrinos are one of the best tools or probes to study matter, since they only interact through the weak interaction and gravity! Therefore, neutrinos are a powerful "laboratory" in which we can test or search for new physics (the fact that neutrinos are massive is, that said, a proof of new physics beyond the SM, since the SM neutrinos are massless!). Indeed, the phenomenon is purely quantum and (special) relativistic, since the neutrinos are tiny particles and "very fast". We have seen the main ideas behind this phenomenon and the main classes of neutrino experiments (long baseline and short baseline experiments). Moreover, we also have "passive" neutrino detectors like SuperKamiokande, IceCube and many others I will not quote here. They study the neutrino oscillations by detecting atmospheric neutrinos (the result of cosmic rays hitting the atmosphere), solar neutrinos and other astrophysical sources of neutrinos (like supernovae!). I have told you about cosmic relic neutrinos in the previous post too. Aren't you convinced that neutrinos are cool? They are "metamorphic", they have flavor, they are everywhere!
See you in my next neutrinological post!
LOG#057. Naturalness problems.
Posted: 2012/12/02 Filed under: Physmatics, Quantum Gravity, The Standard Model: Basics | Tags: CKM matrix, cosmic coincidence, cosmological constant, cosmological constant problem, critical energy density, curvature, dark energy, dark energy density, Dirac large number hypothesis, electroweak scale, energy, energy density, flatness problem, flavour problem, gauge hierarchy problem, Higgs boson, Higgs mechanism, Hubble constant, inflation, inflationary cosmologies, little hierarchy problem, mass, matter density, naturalness, naturalness problem, neutrino mass hierarchy, neutrino masses, neutrino oscillations, NO, NOSEX, parameter, parameter space, Planck era, Planck scale, PMNS matrix, QCD, QFT, quark-lepton complementarity, SM, Standard Cosmological Model, Standard Model, strong CP problem, theta term, types of naturalness, vacuum, vacuum energy, W boson, Z boson

In this short blog post, I am going to list some of the greatest "naturalness" problems in Physics. It has nothing to do with some delicious natural dishes I like, but there is a natural beauty and sweetness related to naturalness problems in Physics. In fact, they include some hierarchy problems and additional problems related to stunning free values of parameters in our theories.
Naturalness problems arise when the "naturally expected" property of some free parameters or fundamental "constants" to appear as quantities of order one is violated, and thus those parameters or constants appear to be very large or very small quantities. That is, naturalness problems are problems of untuned "scales" of length, energy, field strength, and so on. A value of 0.99 or 1.1, or even 0.7 or 2.3, is "more natural" than, e.g., $10^{-120}$ or $10^{60}$. Equivalently, imagine that the value of every fundamental and measurable physical quantity $X$ lies in the real interval

$$X\in [0,\infty)$$

Then, $X\approx 1$ (or values very close to it) are "natural" values of the parameters, while the two extrema $X\rightarrow 0$ or $X\rightarrow\infty$ are "unnatural". As we do know, in Physics, zero values are usually explained by some "fundamental symmetry", while extremely large parameters or even $\infty$ can be shown to be "unphysical" or "unnatural". In fact, renormalization in QFT was invented to avoid quantities that are "infinite" at first sight, and regularization provides some prescriptions to assign "natural numbers" to quantities that are formally ill-defined or infinite. However, naturalness goes beyond those last comments, and it arises in very different scenarios and physical theories. It is quite remarkable that naturalness can be explained as numbers/constants/parameters around 3 of the most important "numbers" in Mathematics:

$$0,\quad 1,\quad \infty$$
REMEMBER: Naturalness of X is, thus, being 1 or close to it, while values approaching 0 or $\infty$ are unnatural. Therefore, if some day you hear a physicist talking/speaking/lecturing about "naturalness", remember the triple

$$(0,\, 1,\, \infty)$$

and then assign to "some magnitude/constant/parameter" some quantity close to one of those numbers. If it approaches 1, the parameter itself is natural, and unnatural if it approaches any of the other two numbers, zero or infinity!
I have never seen a systematic classification of naturalness problems into types. I am going to do it here today. We could classify naturalness problems into:
1st. Hierarchy problems. They are naturalness problems related to the mass/energy spectrum or the energy scales of interactions and fundamental particles.
2nd. Nullity/Smallness problems. These are naturalness problems related to free parameters which are, surprisingly, close to zero/null value, even when we have no knowledge of a deep reason to understand why it happens.
3rd. Large number problems (or hypotheses). This class of problems can equivalently be thought of as the reciprocal of nullity problems, but they arise naturally by themselves in cosmological contexts, or when we consider a large amount of particles, e.g., in "statistical physics", or when we compare two theories living in very different "parameter spaces". Dirac pioneered this class of hypotheses when he noticed some large-number coincidences relating quantities appearing in particle physics and cosmology. This Dirac large number hypothesis is also an old example of this kind of naturalness problem.
4th. Coincidence problems. This 4th type of problem is related to why some different parameters with the same units/dimensions turn out to be similar in order of magnitude.
The following list of concrete naturalness problems is not going to be complete, but it can serve as a guide of what theoretical physicists are trying to understand better:
1. The little hierarchy problem. From the phenomenon called neutrino oscillations (NO) and neutrino oscillation experiments (NOSEX), we can know the difference between the squared masses of neutrinos. Furthermore, cosmological measurements allow us to put tight bounds on the total mass (energy) of light neutrinos in the Universe. The most conservative estimations give $\sum m_\nu\leq 1\,\mbox{eV}$, and even $\sum m_\nu\leq 0.3\,\mbox{eV}$ as an upper bound is quite likely to be true. On the other hand, NOSEX seems to say that there are two mass differences, $\Delta m_{sol}^2\approx 7.6\times 10^{-5}\,\mbox{eV}^2$ and $\Delta m_{atm}^2\approx 2.4\times 10^{-3}\,\mbox{eV}^2$. However, we don't know yet what kind of spectrum neutrinos have (normal, inverted or quasidegenerate). Taking a neutrino mass of about 1 meV as a reference, the little hierarchy problem is the question of why neutrino masses are so light when compared with those of the remaining leptons, quarks and gauge bosons (excepting, of course, the gluon and photon, massless due to gauge invariance).

Why is

$$\dfrac{m_\nu}{m_e}\sim\dfrac{10^{-3}\,\mbox{eV}}{0.511\,\mbox{MeV}}\sim 10^{-9}\,?$$
We don't know! Let me quote a wonderful sentence from a very famous short story by Asimov to describe this result and problem:
“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”
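Just to put numbers behind the puzzle, a few lines of Python with the figures quoted above (the 1 meV reference mass is the assumption made in the text):

```python
import math

dm2_sol = 7.6e-5   # eV^2, "solar" mass-squared difference quoted above
dm2_atm = 2.4e-3   # eV^2, "atmospheric" mass-squared difference

print(math.sqrt(dm2_sol))  # ~8.7e-3 eV: some neutrino is heavier than ~9 meV
print(math.sqrt(dm2_atm))  # ~4.9e-2 eV: some neutrino is heavier than ~49 meV

m_nu = 1.0e-3   # eV, the 1 meV reference neutrino mass (assumed)
m_e = 0.511e6   # eV, the electron mass
print(m_nu / m_e)  # ~2e-9: the unnaturally small ratio of the problem
```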
2. The gauge hierarchy problem. The electroweak (EW) scale can be generally represented by the Z or W boson mass scale. Interestingly, from this summer's results, the Higgs boson mass seems to be of the same order of magnitude, more or less, as the gauge bosons. Then, the electroweak scale is about $M_{EW}\sim 100\,\mbox{GeV}$. Likely, it is also of the order of the Higgs mass. On the other hand, the Planck scale, where we expect (naively or not, that is another question!) quantum effects of gravity to naturally arise, is provided by the Planck mass scale:

$$\overline{M}_P=\sqrt{\dfrac{\hbar c}{8\pi G_N}}\approx 2.4\times 10^{18}\,\mbox{GeV}$$

or more generally, dropping the $8\pi$ factor,

$$M_P=\sqrt{\dfrac{\hbar c}{G_N}}\approx 1.22\times 10^{19}\,\mbox{GeV}$$

Why is the EW mass (energy) scale so small compared to the Planck mass, i.e., why are these two mass scales so different? The problem is hard, since we do know that EW masses, e.g., for scalar particles like the Higgs particle (not protected by any SM gauge symmetry), should receive quantum contributions of order

$$\delta M_H^2\sim\mathcal{O}(M_P^2)$$
“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”
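Some quick arithmetic with the scales quoted above (taking the EW scale to be about 100 GeV):

```python
M_EW = 1.0e2     # GeV, electroweak scale (Z/W/Higgs mass order)
M_P = 1.22e19    # GeV, Planck mass

print(M_EW / M_P)         # ~8e-18: a hierarchy of about 17 orders of magnitude
print((M_P / M_EW) ** 2)  # ~1.5e34: the cancellation needed in M_H^2
                          # if quantum corrections are of order M_P^2
```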
3. The cosmological constant (hierarchy) problem. The cosmological constant $\Lambda$, from the so-called Einstein's field equations of classical relativistic gravity

$$R_{\mu\nu}-\dfrac{1}{2}g_{\mu\nu}R+\Lambda g_{\mu\nu}=\dfrac{8\pi G}{c^4}T_{\mu\nu}$$

is estimated to be about $\rho_\Lambda^{obs}\sim 10^{-47}\,\mbox{GeV}^4$ from the cosmological fitting procedures. The Standard Cosmological Model, with the CMB and other parallel measurements like large scale structures or supernovae data, agrees with such a cosmological constant value. However, in the framework of Quantum Field Theories, it should receive quantum corrections coming from the vacuum energies of the fields. Those contributions are unnaturally big, about $\rho_\Lambda^{QFT}\sim M_P^4\sim 10^{76}\,\mbox{GeV}^4$, or in the framework of supersymmetric field theories, $\rho_\Lambda^{SUSY}\sim M_{SUSY}^4\sim 10^{12}\,\mbox{GeV}^4$ after SUSY symmetry breaking. Then, the problem is:

Why is $\rho_\Lambda^{obs}\ll\rho_\Lambda^{theory}$? Even with TeV or PeV fundamental SUSY (or higher) we have a serious mismatch here! The mismatch is about 60 orders of magnitude even in the best known theory! And it is about 122-123 orders of magnitude if we compare directly the cosmological constant vacuum energy we observe with the cosmological constant we calculate (naively or not) with our current best theories using QFT or supersymmetric QFT! Then, this problem is a hierarchy problem and a large number problem as well. Again, and sadly, we don't know why there is such a big gap between the mass scales of the same thing! This problem is the biggest problem in theoretical physics, and it is one of the worst predictions/failures in the history of Physics. However,
“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”
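The orders of magnitude quoted above can be checked in a couple of lines; the three densities below are just the rough figures from the text:

```python
import math

rho_obs = 1.0e-47    # GeV^4, observed (fitted) vacuum energy density
rho_qft = 1.0e76     # GeV^4, naive QFT estimate, ~ M_P^4
rho_susy = 1.0e12    # GeV^4, TeV-scale SUSY estimate, ~ M_SUSY^4

print(math.log10(rho_qft / rho_obs))   # ~123 orders of magnitude
print(math.log10(rho_susy / rho_obs))  # ~59-60 orders, even with SUSY
```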
4. The strong CP problem/puzzle. From neutron electric dipole moment measurements, theoretical physicists can constrain the so-called $\theta$-angle of QCD (Quantum Chromodynamics). The theta angle gives an extra contribution to the QCD Lagrangian:

$$\mathcal{L}_\theta=\theta\,\dfrac{g_s^2}{32\pi^2}\,G_{\mu\nu}^a\tilde{G}^{a\,\mu\nu}$$

The theta angle is not provided by the SM framework; it is a free parameter. Experimentally,

$$\vert\theta\vert<10^{-10}$$

while, from the theoretical side, it could be any number in the interval $[0,2\pi)$. Why is $\theta$ so close to the zero/null value? That is the strong CP problem! Once again, we don't know. Perhaps a new symmetry?
“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”
5. The flatness problem/puzzle. In the Standard Cosmological Model, also known as the $\Lambda\mbox{CDM}$ model, the curvature of the Universe is related to the critical density and the Hubble "constant":

$$\Omega-1=\dfrac{\rho}{\rho_c}-1=\dfrac{kc^2}{a^2H^2}$$

There, $\rho$ is the total energy density contained in the whole Universe and

$$\rho_c=\dfrac{3H^2}{8\pi G}$$

is the so-called critical density. The flatness problem arises when we deduce from cosmological data that:

$$\vert\Omega-1\vert_{today}\leq 0.01$$

At the Planck-scale era, we can even calculate that

$$\vert\Omega-1\vert_{Planck}\leq 10^{-60}$$

This result means that the Universe is "flat". However, why did the Universe have such a small curvature? Why is the current curvature still so "small"? We don't know. However, cosmologists working on this problem say that "inflation" and "inflationary" cosmological models can (at least in principle) solve this problem. There are even more radical (and stranger) theories, such as varying-speed-of-light theories, trying to explain this, but they are less popular than inflationary cosmologies/theories. Indeed, inflationary theories are popular because they include scalar fields, similar in nature to the scalar particles that arise in the Higgs mechanism and other theories beyond the Standard Model (BSM). We don't know if inflation theory is right yet, so
“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”
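A deliberately crude two-era estimate (a back-of-envelope assumption of matter domination followed backwards by radiation domination, nothing more refined) already reproduces the order of magnitude of the Planck-era bound:

```python
t_planck = 5.4e-44    # s, Planck time
t_eq = 1.6e12         # s, matter-radiation equality (~50,000 yr), rough
t_today = 4.3e17      # s, age of the Universe (~13.8 Gyr)
bound_today = 1.0e-2  # |Omega - 1| today, as quoted above

# |Omega - 1| ~ 1/(aH)^2 shrinks going back in time: ~ t^(2/3) during the
# matter era and ~ t during the radiation era.
bound_eq = bound_today * (t_eq / t_today) ** (2.0 / 3.0)
bound_planck = bound_eq * (t_planck / t_eq)
print(bound_planck)   # ~1e-61, the order of the Planck-era bound above
```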
6. The flavour problem/puzzle. The ratios of successive SM fermion mass eigenvalues (the electron, muon, and tau), as well as the angles appearing in one gadget called the CKM (Cabibbo-Kobayashi-Maskawa) matrix, are roughly of the same order of magnitude. The issue is harder to settle (but it is likely to hold as well) for constituent quark masses. However, why do they follow this particular pattern/spectrum and structure? Even more, there is a mysterious lepton-quark complementarity. The analogue matrix in the leptonic sector of such a CKM matrix is called the PMNS matrix (Pontecorvo-Maki-Nakagawa-Sakata matrix), and it describes the neutrino oscillation phenomenology. It shows that the angles of the PMNS matrix are roughly complementary to those in the CKM matrix (remember that two angles are said to be complementary when they add up to 90 sexagesimal degrees). What is the origin of this lepton (neutrino)-quark (constituent) complementarity? In fact, the two questions are related since, roughly speaking, the mixing angles are related to the ratios of masses (quarks and neutrinos). Therefore, this problem, if solved, could shed light on the issue of the particle spectrum, or at least it could help us to understand the relationship between quark masses and neutrino masses. Of course, we don't know how to solve this puzzle at the current time. And once again:
“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”
7. Cosmic matter-dark energy coincidence. At the current time, the densities of matter and vacuum energy are roughly of the same order of magnitude, i.e., $\rho_M\sim\rho_\Lambda$. Why now? We do not know!
“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”
And my weblog is only just beginning! See you soon in my next post! 🙂