LOG#116. Basic Neutrinology(I).


This new post ignites a new thread.

Subject: the Science of Neutrinos. Something I usually call Neutrinology.  

I am sure you will enjoy it, since I will keep it elementary (even if I discuss some more advanced topics at certain points). Personally, I believe that neutrinos are the coolest particles in the Standard Model, and their applications in Science (Physics and related areas) or even in Technology (I will share my thoughts on this issue in a forthcoming post) will be even greater in the future than those we have at the current time.

Let me begin…

The existence of the phantasmagoric neutrinos (light, electrically neutral and feebly -very weakly- interacting fermions) was first proposed by W. Pauli in 1930 to save the principle of energy conservation in the theory of nuclear beta decay. The idea was promptly adopted by the physics community, but the detection of that particle remained elusive: how could we detect a particle that is electrically neutral and that interacts very, very weakly with normal matter? In 1933, E. Fermi took up the neutrino hypothesis, gave the neutrino its name (meaning "little neutron", since it was realized that neutrinos were not Chadwick's neutrons) and built his theory of beta decay and weak interactions. With respect to its mass, Pauli initially expected the mass of the neutrino to be small, but not necessarily zero. Pauli originally believed that the neutrino should not be much more massive than the electron itself. In 1934, F. Perrin showed that its mass had to be less than that of the electron.

On the other hand, one early proposal for detecting neutrinos involved exploding nuclear bombs! However, it was only in 1956 that C. Cowan and F. Reines (in what is today known as the Reines-Cowan experiment) were able to detect and discover the neutrino (or, more precisely, the antineutrino). In 1962, Leon M. Lederman, M. Schwartz, J. Steinberger and Danby et al. showed that more than one type of neutrino species (\nu_e,\nu_\mu) should exist by first detecting interactions of the muon neutrino. They won the Nobel Prize in 1988.

When the third lepton, the tau particle (or tauon), was discovered in 1975 at the Stanford Linear Accelerator Center, it too was expected to have an associated neutrino. The first evidence for this third neutrino "flavor" came from the observation of missing energy and momentum in tau decays, analogous to the missing energy in beta decays that had led to the neutrino hypothesis in the first place.

In 1989, the study of the Z boson lifetime allowed physicists to show with great experimental confidence that only 3 light neutrino species (or flavors) exist. In 2000, the first detection of tau neutrino interactions (\nu_\tau, in addition to \nu_e,\nu_\mu) was announced by the DONUT collaboration at Fermilab, making it the last particle of the Standard Model to have been discovered until the recent Higgs particle discovery (circa 2012, about one year ago).

In 1998, research results at the Super-Kamiokande neutrino detector in Japan (and later, independently, from SNO, Canada) determined for the first time that neutrinos do indeed experience "neutrino oscillations" (I usually call this phenomenon NOCILLA, or NO for short), i.e., neutrinos "oscillate" in flavor and change their flavor when they travel "short/long" distances. Super-Kamiokande saw this with atmospheric neutrinos, and SNO tested and confirmed the hypothesis using "solar neutrinos". This (quantum) phenomenon implies that:

1st. Neutrinos do have mass. If they were massless, they could not oscillate. Thus, the old debate of massless vs. massive neutrinos was finally settled.


2nd. The solar neutrino problem is solved. The flux of detected solar neutrinos was smaller than expected (generally speaking, by a factor of 2). Some solar neutrinos escaped detection in Super-Kamiokande and SNO, since those detectors could not detect all the neutrino species equally well. The neutrino oscillation hypothesis solved this old issue of the "solar neutrinos", since it implies that some of the neutrinos were "transformed" along the way into a type we could not (fully) detect.


3rd. New physics does exist. There is new physics at some energy scale beyond the electroweak scale (the electroweak symmetry breaking scale, about 100 GeV). The SM is not complete. The SM does (indeed) "predict" that neutrinos are massless; or, at least, it is simplest if neutrinos are massless particles described by Weyl spinors. The discovery of neutrino oscillations shows that this is not the case: neutrinos are massive particles. However, they could be Dirac spinors (like all the other known fermions in the Standard Model, SM), or they could be Majorana particles, neutral fermions described by "Majorana" spinors, which makes them their own antiparticles! Dirac particles are different from their antiparticles; Majorana particles ARE the same as their own antiparticles.


In the period 2001-2005, neutrino oscillation (NO)/neutrino mixing (NEMIX) phenomena were observed for the first time at a reactor experiment called KamLAND. It gave a good estimate (for the first time) of the difference in the squares of the neutrino masses. In May 2010, it was reported that physicists from CERN and the Italian National Institute for Nuclear Physics, at the Gran Sasso National Laboratory, had observed for the first time a transformation between neutrino flavors during an accelerator experiment (also called a neutrino beam experiment, a class of neutrino experiments belonging to the "long baseline" experiments with neutrino particles). It was new solid evidence that at least one neutrino species or flavor has mass. In 2012, the Daya Bay Reactor experiment in China, and later RENO in South Korea, measured the so-called \theta_{13} mixing angle, the last mixing angle of the neutrino mass matrix that remained to be measured. It turned out to be larger than expected, consistent with earlier, but less significant, results by the experiments T2K (another neutrino beam experiment), MINOS (yet another neutrino beam experiment) and Double Chooz (a reactor neutrino experiment).

With the known value of \theta_{13}, there is some chance that the NO\nu A experiment in the USA can find the neutrino mass hierarchy. In fact, beyond determining the spinorial character (Dirac or Majorana) of the neutrinos and measuring their masses (yes, we have not yet been able to "weigh" the neutrinos, but we are close to it: they are the only particles in the SM with no "precise" value of mass), the remaining problems with neutrinos are to determine what kind of mass spectrum they have and to measure the so-called CP violating processes. There are generally 3 types of neutrino spectra usually discussed in the literature:

A) Normal Hierarchy (NH): m_1<<m_2<<m_3. This spectrum follows the same pattern as the observed charged leptons, i.e., m(e)<<m(\mu)<<m(\tau). The electron mass is about 0.511 MeV, the muon is about 106 MeV and the tau particle is about 1777 MeV.

B) Inverted Hierarchy (IH): m_3<<m_1\sim m_2. This spectrum follows a pattern similar to the electron shells in atoms, where every "new" shell is closer in energy ("mass") to the previous "level".

C) Quasidegenerate (or degenerate) hierarchy/spectrum (QD): m_1\sim m_2\sim m_3.
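As a quick numerical sketch of what these spectra mean, we can reconstruct the heavier masses in the Normal Hierarchy from the measured mass-squared splittings, assuming (this is an assumption, not data) that the lightest state is essentially massless:

```python
import math

# Mass-squared splittings as quoted later in this post (eV^2)
dm2_21 = 7.9e-5   # "solar" splitting (KamLAND)
dm2_32 = 2.7e-3   # "atmospheric" splitting (MINOS)

# Normal Hierarchy, with the illustrative assumption m1 ~ 0
m1 = 0.0
m2 = math.sqrt(m1**2 + dm2_21)            # ~9e-3 eV
m3 = math.sqrt(m1**2 + dm2_21 + dm2_32)   # ~5e-2 eV

print(f"m2 ~ {m2:.1e} eV, m3 ~ {m3:.1e} eV, sum ~ {m1 + m2 + m3:.2e} eV")
```

Note that the resulting sum of masses, a few times 0.01 eV, sits comfortably below the cosmological bound of about 0.3 eV discussed below.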


While the above experiments show that neutrinos do have mass, the absolute neutrino mass scale is still not known. There are reasons to believe that this mass scale lies in the range from some millielectron-volts (meV) up to the electron-volt (eV) scale, if some extra neutrino degrees of freedom (sterile neutrinos) appear. In fact, Neutrino OScillation EXperiments (NOSEX) are sensitive only to the differences of the squared neutrino masses. The strongest upper limits on the masses of neutrinos come from Cosmology:

1) The Big Bang model states that there is a fixed ratio between the number of neutrinos and the number of photons in the cosmic microwave background (CMB). If the total energy of all the neutrino species exceeded an upper bound of about

m_\nu\leq 50eV

per neutrino, then there would be so much mass in the Universe that it would collapse. This does not (apparently) happen.

2) Cosmological data, such as the cosmic microwave background radiation, galaxy surveys, or the technique of the Lyman-alpha forest, indicate that the sum of the neutrino masses should be less than 0.3 eV (if we don't include sterile neutrinos, new neutrino species uncharged under the SM gauge group, which could increase that upper bound a little bit).

3) Some early measurements coming from the lensing data of a galaxy cluster, analyzed in 2009, suggest that the neutrino mass upper bound is about 1.5 eV. This result is compatible with all the above results.
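To see why cosmology constrains the summed neutrino mass, one can use the standard textbook relation between the neutrino energy-density fraction and the mass sum, \Omega_\nu h^2=\sum m_\nu/93.14\,eV (here h is the reduced Hubble constant; the 0.3 eV figure is the bound quoted above):

```python
# Standard cosmological relation: Omega_nu * h^2 = sum(m_nu) / 93.14 eV,
# where h is the reduced Hubble constant. A larger mass sum means neutrinos
# contribute a larger fraction of the cosmic energy budget.
sum_m_nu_eV = 0.3                      # the upper bound quoted above (eV)
omega_nu_h2 = sum_m_nu_eV / 93.14

print(f"Omega_nu h^2 <= {omega_nu_h2:.2e}")   # ~3.2e-3
```

Even a 0.3 eV sum makes neutrinos a small but non-negligible fraction of the energy density of the Universe, which is why large scale structure data are sensitive to it.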

Today, measurements in controlled experiments have given us some data about the squared mass differences (from solar neutrinos, atmospheric neutrinos produced by cosmic rays, and accelerator/reactor experiments):

1) From KamLAND (2005), we get

\Delta m_{21}^2=7.9\times 10^{-5}\;eV^2

2) From MINOS (2006), we get

\Delta m_{32}^2=2.7\times 10^{-3}\;eV^2
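These splittings are exactly what oscillation experiments measure. In the two-flavor approximation, the appearance probability is P=\sin^2(2\theta)\sin^2(1.27\,\Delta m^2[eV^2]\,L[km]/E[GeV]). A minimal sketch, using the MINOS-like splitting above; the baseline, energy and maximal mixing are illustrative choices, not experimental data:

```python
import math

def osc_prob(sin2_2theta, dm2_eV2, L_km, E_GeV):
    """Two-flavor appearance probability:
    P = sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV])."""
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Atmospheric-sector example with the splitting quoted above; the 735 km
# baseline, 3 GeV energy and maximal mixing are illustrative assumptions.
p = osc_prob(sin2_2theta=1.0, dm2_eV2=2.7e-3, L_km=735.0, E_GeV=3.0)
print(f"P(nu_mu -> nu_x) ~ {p:.2f}")
```

Note the dependence on L/E: by tuning the baseline and energy, an experiment can be made sensitive to either the "solar" or the "atmospheric" splitting.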

There are some increasing efforts to directly determine the absolute neutrino mass scale in different laboratory experiments (LEX), mainly:

1) Nuclear beta decay (KATRIN, MARE,…).

2) Neutrinoless double beta decay (e.g., GERDA, CUORE, Cuoricino, NEMO3,…). If the neutrino is a Majorana particle, a new kind of beta decay becomes possible: double beta decay without neutrinos (i.e., two electrons emitted and no neutrinos after the decay).

Neutrinos have a unique place among all the SM elementary particles. Their role in the cosmic evolution and in the fundamental asymmetries of the SM (like CP violating reactions, or the single C, T, and P violations) makes them the most fascinating and interesting particles that we know today (well, maybe, today, the Higgs particle is as mysterious as the neutrino itself). We believe that neutrinos play an important role in Beyond Standard Model (BSM) Physics. Especially, I would like to highlight two aspects:

1) Baryogenesis from leptogenesis. Neutrinos can allow us to understand how the Universe could end up in a state that contains (essentially) baryons and no antibaryons (i.e., the apparent matter-antimatter asymmetry of the Universe can be "explained", with some problems we have not completely understood yet, if massive neutrinos are present).

2) Asymmetric mass generation mechanisms, or the seesaw. Neutrinos allow us to build an asymmetric mass mechanism known as the "seesaw" that makes some neutrino species/states very light while other states become "superheavy". This mechanism is unique and, from a fairly objective viewpoint, "simple".
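The arithmetic of the (type-I, one-generation) seesaw is simple enough to sketch. The mass matrix in the (\nu_L, N_R) basis is \begin{pmatrix}0 & m_D\\ m_D & M_R\end{pmatrix}, whose light eigenvalue is approximately m_D^2/M_R. The scales below are illustrative assumptions, not measured values:

```python
import math

# Type-I seesaw (one generation): mass matrix [[0, mD], [mD, MR]].
# Exact eigenvalues: (MR +- sqrt(MR^2 + 4 mD^2))/2. The light one, written
# in a numerically stable form, is 2 mD^2 / (MR + sqrt(MR^2 + 4 mD^2)).
mD = 100.0     # GeV: a Dirac mass around the electroweak scale (assumption)
MR = 1.0e14    # GeV: a superheavy Majorana scale (assumption)

m_light = 2 * mD**2 / (MR + math.sqrt(MR**2 + 4 * mD**2))  # ~ mD^2 / MR
m_heavy = (MR + math.sqrt(MR**2 + 4 * mD**2)) / 2          # ~ MR

print(f"m_light ~ {m_light * 1e9:.2f} eV")   # ~0.10 eV
print(f"m_heavy ~ {m_heavy:.2e} GeV")
```

This is the "seesaw" in action: pushing M_R up pushes the light mass down, so an electroweak-scale Dirac mass and a grand-unification-scale Majorana mass naturally yield a sub-eV neutrino.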

After nearly a century, the question of the neutrino mass and its origin is still an open question and a hot topic in high energy physics, particle physics, astrophysics, cosmology and theoretical physics in general.

If we want to understand the fermion masses, a detailed determination of the neutrino mass is necessary. The question of why the neutrino masses are so much smaller than those of their charged partners could be important! The little hierarchy problem is the problem of why the neutrino mass scale is smaller than the other fermionic masses and the electroweak scale. Moreover, neutrinos are a powerful probe of new physics at scales larger than the electroweak scale. Why? It is simple: (massive) neutrinos only interact via the weak interactions and gravity! At least from the SM perspective, neutrinos are uncharged under electromagnetism and the color group, so they can only interact via the intermediate weak bosons AND gravity (via the undiscovered gravitons!).

If neutrinos are massive particles, as the neutrino oscillation phenomena show them to be, the superposition postulates of quantum theory state that neutrinos, being electrically neutral particles with otherwise identical quantum numbers, can oscillate in flavor space. If the absolute differences of masses among them are small, these oscillations or neutrino (flavor) mixings can have important phenomenological consequences in Astrophysics and Cosmology. Furthermore, neutrinos are basic ingredients of these two fields. There may be a hot dark matter (HDM) component in the Universe: simulations of structure formation fit the observations only when some significant quantity of HDM is included. If so, neutrinos would be there, at least by weight, and they would be one of the most important ingredients in the composition of the Universe.


Regardless of the issue of mass and neutrino oscillations/mixing, astrophysical interest in neutrino interactions and properties arises from the fact that neutrinos are produced in high temperature/high density environments, such as collapsing stars and/or supernovae and related physical processes. Neutrino physics dominates the physics of those astrophysical objects. Indeed, the neutrino interactions with matter are so weak that neutrinos pass generally unnoticed and travel freely through any ordinary matter existing in the Universe. Thus, neutrinos can travel millions of light years before they interact (in general) with some piece of matter! Neutrinos are very efficient carriers of energy away from optically thick objects, and they can serve as very good probes for studying the interior of such objects. Neutrino astronomy has just been born in recent years. IceCube and future neutrino "telescopes" will be able to see the Universe in a range of wavelengths and frequencies we have never seen till now. Electromagnetic radiation becomes "opaque" at some very high energies that neutrinos are likely able to explore! Isn't it wonderful? Neutrinos are high energy "telescopes"!

On the other hand, the solar neutrino flux is, together with helioseismology and the field of geoneutrinos (neutrinos coming from the inner shells of the Earth), among the few known probes of the solar core and the Earth's core. A similar statement applies to objects like type-II supernovae. Indeed, the most interesting questions about supernovae and the explosion dynamics itself, with the shock revival (and the synthesis of the heaviest elements by the so-called r-processes), could be positively affected by changes in the observed neutrino fluxes (via some processes called resonant conversion and active-sterile conversions).

Finally, ultra high energy neutrinos are likely to be useful probes of diverse distant astrophysical objects. Active Galactic Nuclei (AGN) should be copious emitters of neutrinos, providing detectable point sources and an observable "diffuse" background which is, in fact, larger than the atmospheric neutrino background in the very high energy range. Relic cosmic neutrinos, the thermal background known as the cosmic neutrino background at about 1.9 K, and its detection are among the most important missing pieces of the Standard Cosmological Model (LCDM).

Do you understand now why neutrinos are my favorite particles? I will devote this basic thread to them, and I will cover some advanced topics in the future. I promise.

May the Neutrinos be with you!

LOG#058. LHC: last 2012 data/bounds.

Today, 12/12/12, the following paper appeared on the arXiv: http://arxiv.org/abs/1212.2339

This interesting paper reviews the latest bounds on Beyond Standard Model particles (both fermions and bosons) for a large class of models, up to the end of this year, 2012. Particle hunters, some theoretical physicists are! The fundamental conclusions of this paper are encoded in a really beautiful table:


There, we have:

1. Extra gauge bosons W', Z'. They are excluded below 1-2 TeV, depending on the channel/decay mode.

2. Heavy neutrinos N. They are excluded with softer lower bounds.

3. Fourth generation quarks t', b' and B, T vector-like quarks are also excluded with \sim 0.5 TeV bounds.

4. Exotic quarks with charge Q= 5/3 are also excluded below 0.6 TeV.

We continue desperately searching for deviations from the Standard Model (SM). SUSY, a 4th family, heavy (likely right-handed) neutrinos, technifermions, techniquarks, new gauge bosons, Kaluza-Klein resonances (KK particles), and much more are not appearing yet, but we keep digging deeper into the core of matter and the deepest structure of the quantum vacuum. We know we have to find "something" beyond the Higgs boson/particle, but what, and where, is not clear to any of us.

Hopefully, further study of the total data in the coming months will clarify the whole landscape, but these data are both "bad news" and "good news" for many reasons. They are bad, since they point to no new physics beyond the Higgs up to 1 TeV (more or less). They are good, since we are collecting lots of data and, hopefully, we will complement the collider data with cosmological searches next year; then, some path toward the Standard Model extension and the upcoming quantum theory of gravity should be enlightened or, at least, some critical models and theories will be ruled out! Of course, I am being globally pessimistic, but some experimental hint beyond the Higgs (beyond collider physics) is necessary in order to approach the true theory of this Universe.

And if it is not low energy SUSY (it could be, if one superparticle were found, but we have not found any superparticle yet), what stabilizes the Higgs potential and provides a M_H\sim 127GeV Higgs mass, i.e., what does that "job"/role? What forbids the Higgs mass from receiving Planck mass quantum corrections? For me, as a theoretical physicist, this question is mandatory! If SUSY fails to be the answer, we really need some good theoretical explanation for the "light" mass the Higgs boson seems to have!

Stay tuned!

LOG#057. Naturalness problems.


In this short blog post, I am going to list some of the greatest "naturalness" problems in Physics. They have nothing to do with the delicious natural dishes I like, but there is a natural beauty and sweetness related to naturalness problems in Physics. In fact, they include some hierarchy problems and additional problems related to stunningly fine-tuned values of free parameters in our theories.

Naturalness problems arise when the "naturally expected" property of some free parameters or fundamental "constants" (namely, to appear as quantities of order one) is violated, so that those parameters or constants turn out to be very large or very small quantities. That is, naturalness problems are fine-tuning problems of "scales" of length, energy, field strength, … A value of 0.99 or 1.1, or even 0.7 or 2.3, is "more natural" than, e.g., 100000, 10^{-4},10^{122}, 10^{23},\ldots Equivalently, imagine that the value of every fundamental and measurable physical quantity X lies in the real interval \left[ 0,\infty\right). Then, 1 (or values very close to it) is a "natural" value of the parameter, while the two extrema 0 and \infty are "unnatural". As we know, in Physics, zero values are usually explained by some "fundamental symmetry", while extremely large parameters or even \infty can be shown to be "unphysical" or "unnatural". In fact, renormalization in QFT was invented to avoid quantities that are "infinite" at first sight, and regularization provides some prescriptions to assign "natural numbers" to quantities that are formally ill-defined or infinite. However, naturalness goes beyond these last comments, and it arises in very different scenarios and physical theories. It is quite remarkable that naturalness can be explained in terms of numbers/constants/parameters around 3 of the most important "numbers" in Mathematics:

(0, 1, \infty)

REMEMBER: Naturalness of X thus means being 1 or close to it, while values approaching 0 or \infty are unnatural. Therefore, if some day you hear a physicist talking/speaking/lecturing about "naturalness", remember the triple (0,1,\infty) and then assign "some magnitude/constant/parameter" a quantity close to one of those numbers. The parameter is natural if it approaches 1, and unnatural if it approaches either of the other two numbers, zero or infinity!

I have never seen a systematic classification of naturalness problems into types, so I am going to attempt one here today. We could classify naturalness problems into:

1st. Hierarchy problems. These are naturalness problems related to the mass spectrum/energy scale of interactions and fundamental particles.

2nd. Nullity/Smallness problems. These are naturalness problems related to free parameters which are, surprisingly, close to the zero/null value, even though we know of no deep reason to understand why that happens.

3rd. Large number problems (or hypotheses). This class of problems can equivalently be thought of as the reciprocal of nullity problems, but they arise naturally themselves in cosmological contexts, when we consider a large amount of particles, e.g., in "statistical physics", or when we face two theories in very different "parameter spaces". Dirac pioneered this class of hypotheses when he noticed some large-number coincidences relating quantities appearing in particle physics and cosmology. His Dirac large number hypothesis is an old example of this kind of naturalness problem.

4th. Coincidence problems. This 4th type of problems is related to why some different parameters of the same magnitude are similar in order of magnitude.

The following list of concrete naturalness problems is not going to be complete, but it can serve as a guide of what theoretical physicists are trying to understand better:

1. The little hierarchy problem. From the phenomenon called neutrino oscillations (NO) and neutrino oscillation experiments (NOSEX), we can measure the differences between the squared masses of neutrinos. Furthermore, cosmological measurements allow us to put tight bounds on the total mass (energy) of light neutrinos in the Universe. The most conservative estimations give m_\nu \leq 10 eV, and even m_\nu \sim 1eV as an upper bound is quite likely to be true. On the other hand, NOSEX seems to say that there are two mass differences, \Delta m^2_1\sim 10^{-3}eV^2 and \Delta m^2_2\sim 10^{-5}eV^2. However, we don't yet know what kind of spectrum neutrinos have (normal, inverted or quasidegenerate). Taking a neutrino mass of about 1 meV as a reference, the little hierarchy problem is the question of why neutrino masses are so light compared with those of the remaining leptons, quarks and gauge bosons (excepting, of course, the gluon and photon, massless due to gauge invariance).

Why is m_\nu << m_e,m_\mu, m_\tau, m_Z,M_W, m_{proton}?

We don’t know! Let me quote a wonderful sentence from a very famous short story by Asimov that describes this result and problem:


2. The gauge hierarchy problem. The electroweak (EW) scale can be generally represented by the Z or W boson mass scale. Interestingly, from this summer's results, the Higgs boson mass seems to be of the same order of magnitude, more or less, as the gauge bosons. Thus the electroweak scale is about M_Z\sim M_W \sim \mathcal{O} (100GeV), and likely also of the order of the Higgs mass. On the other hand, the Planck scale, where we expect (naively or not, that is another question!) quantum effects of gravity to naturally arise, is provided by the Planck mass scale:

M_P=\sqrt{\dfrac{\hbar c}{8\pi G}}=2.4\cdot 10^{18}GeV=2.4\cdot 10^{15}TeV

or more generally, dropping the 8\pi factor

M_P =\sqrt{\dfrac{\hbar c}{G}}=1.22\cdot 10^{19}GeV=1.22\cdot 10^{16}TeV
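Both numbers can be checked with a few lines of arithmetic from the measured constants (CODATA-style values; the conversion to GeV uses E=mc^2):

```python
import math

# Fundamental constants in SI units (CODATA-style values)
hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m / s
G    = 6.67430e-11       # m^3 / (kg s^2)
J_per_GeV = 1.602176634e-10

m_planck = math.sqrt(hbar * c / G)             # Planck mass in kg
E_planck = m_planck * c**2 / J_per_GeV         # as an energy, in GeV
E_reduced = E_planck / math.sqrt(8 * math.pi)  # reduced Planck mass, GeV

print(f"Planck mass         ~ {E_planck:.3e} GeV")   # ~1.22e19 GeV
print(f"Reduced Planck mass ~ {E_reduced:.3e} GeV")  # ~2.4e18 GeV
```

The outputs reproduce the two values quoted above: the 8\pi factor only shifts the scale by a factor of about 5.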

Why is the EW mass (energy) scale so small compared to the Planck mass, i.e., why are the masses M_{EW}<<M_P so different? The problem is hard, since we know that EW masses, e.g., for scalar particles like the Higgs (not protected by any SM gauge symmetry), should receive quantum corrections of order \mathcal{O}(M_P^2).


3. The cosmological constant (hierarchy) problem. The cosmological constant \Lambda, from the so-called Einstein’s field equations of classical relativistic gravity

\mathcal{R}_{\mu\nu}-\dfrac{1}{2}g_{\mu\nu}\mathcal{R}=8\pi G\mathcal{T}_{\mu\nu}+\Lambda g_{\mu\nu}

is estimated to be about \mathcal{O} (10^{-47})GeV^4 from the cosmological fitting procedures. The Standard Cosmological Model, with the CMB and other parallel measurements like large scale structure or supernovae data, agrees with such a cosmological constant value. However, in the framework of Quantum Field Theory, it should receive quantum corrections coming from the vacuum energies of the fields. Those contributions are unnaturally big, about \mathcal{O}(M_P^4), or, in the framework of supersymmetric field theories, \mathcal{O}(M^4_{SUSY}) after SUSY symmetry breaking. Then, the problem is:

Why is \rho_\Lambda^{obs}<<\rho_\Lambda^{th}? Even with TeV or PeV fundamental SUSY (or higher) we have a serious mismatch here! The mismatch is about 60 orders of magnitude even in the best known theory! And it is about 122-123 orders of magnitude if we compare directly the cosmological constant vacuum energy we observe with the cosmological constant we calculate (naively or not) with our current best theories using QFT or supersymmetric QFT! Thus, this problem is a hierarchy problem and a large number problem as well. Again, and sadly, we don't know why there is such a big gap between mass scales of the same thing! This problem is the biggest problem in theoretical physics, and it is one of the worst predictions/failures in the history of Physics.
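The famous "122-123 orders of magnitude" is a one-line computation from the two numbers quoted above (this is an order-of-magnitude sketch, not a precision calculation):

```python
import math

# Observed vacuum energy density vs the naive QFT estimate with a
# Planck-scale cutoff, using the figures quoted in the text.
rho_obs = 1.0e-47          # GeV^4, from cosmological fits
M_P = 1.22e19              # GeV, (non-reduced) Planck mass
rho_th = M_P**4            # GeV^4, naive vacuum-energy estimate

mismatch = math.log10(rho_th / rho_obs)
print(f"rho_th / rho_obs ~ 10^{mismatch:.0f}")   # ~10^123
```

Using the reduced Planck mass instead lowers the result by a couple of orders of magnitude, which is why the mismatch is usually quoted as "about 120-123 orders".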


4. The strong CP problem/puzzle. From neutron electric dipole moment measurements, theoretical physicists can constrain the so-called \theta-angle of QCD (Quantum Chromodynamics). The theta angle gives an extra contribution to the QCD lagrangian:

\mathcal{L}_{\mathcal{QCD}}\supset \dfrac{1}{4g_s^2}G_{\mu\nu}G^{\mu\nu}+\dfrac{\theta}{16\pi^2}G^{\mu\nu}\tilde{G}_{\mu\nu}

The theta angle is not provided by the SM framework and it is a free parameter. Experimentally,

\theta <10^{-12}

while, from the theoretical side, it could be any number in the interval \left[-\pi,\pi\right]. Why is \theta so close to the zero/null value? That is the strong CP problem! Once again, we don't know. Perhaps a new symmetry?


5. The flatness problem/puzzle. In the Standard Cosmological Model, also known as the \Lambda CDM model, the curvature of the Universe is related to the critical density and the Hubble "constant" through

\Omega-1=\dfrac{\rho}{\rho_c}-1=\dfrac{kc^2}{H^2R^2}


There, \rho is the total energy density contained in the whole Universe and \rho_c=\dfrac{3H^2}{8\pi G} is the so-called critical density. The flatness problem arises when we deduce from cosmological data that:

\left(\dfrac{1}{R^2}\right)_{data}\sim 0.01

At the Planck scale era, we can even calculate that

\left(\dfrac{1}{R^2}\right)_{Planck\;\; era}\sim\mathcal{O}(10^{-61})

This result means that the Universe is essentially "flat". However, why did the Universe have such a small curvature? Why is the current curvature still so "small"? We don't know. However, cosmologists working on this problem say that "inflation" and "inflationary" cosmological models can (at least in principle) solve this problem. There are even more radical (and stranger) theories, such as varying speed of light theories, trying to explain this, but they are less popular than inflationary cosmologies/theories. Indeed, inflationary theories are popular because they include scalar fields, similar in Nature to the scalar particles that arise in the Higgs mechanism and other beyond the Standard Model (BSM) theories. We don't know yet whether inflation is right, so the problem remains open.


6. The flavour problem/puzzle. The ratios of successive SM fermion mass eigenvalues (the electron, muon, and tau), as well as the angles appearing in a gadget called the CKM (Cabibbo-Kobayashi-Maskawa) matrix, are roughly of the same order of magnitude. The issue is harder to settle (but it is likely to hold as well) for the constituent quark masses. However, why do they follow this particular pattern/spectrum and structure? Even more, there is a mysterious lepton-quark complementarity. The analogue in the leptonic sector of the CKM matrix is called the PMNS matrix (Pontecorvo-Maki-Nakagawa-Sakata matrix), and it describes the neutrino oscillation phenomenology. It turns out that the angles of the PMNS matrix are roughly complementary to those in the CKM matrix (remember that two angles are said to be complementary when they add up to 90 sexagesimal degrees). What is the origin of this lepton(neutrino)-quark(constituent) complementarity? In fact, the two questions are related since, roughly speaking, the mixing angles are related to the ratios of masses (quarks and neutrinos). Therefore, this problem, if solved, could shed light on the issue of the particle spectrum, or at least it could help us to understand the relationship between quark masses and neutrino masses. Of course, we don't know how to solve this puzzle at the current time. And once again, we don't know why!


7. The cosmic matter-dark energy coincidence. At the current time, the densities of matter and vacuum energy are roughly of the same order of magnitude, i.e., \rho_M\sim\rho_\Lambda=\rho_{DE}. Why now? We do not know!


And my weblog is only just beginning! See you soon in my next post! 🙂

LOG#047. The Askaryan effect.

I discussed and reviewed the important Cherenkov effect and radiation in the previous post, here:


Today we are going to study a relatively new effect (new experimentally speaking, because it was first detected when I was an undergraduate student, in 2000), though it is not so new from the theoretical side (it was predicted in 1962). This effect is closely related to the Cherenkov effect. It is named the Askaryan effect or Askaryan radiation; see below, after the brief recapitulation of the Cherenkov effect from the last post that we are going to do in the next lines.

We know that charged particles moving through a dielectric medium faster than the phase velocity of light in that medium emit Cherenkov radiation. How can a particle move faster than light? The speed of light in a medium is smaller than the speed of light in vacuum, and the speed of a charged particle can exceed it. That is all. About some speculations on the so-called tachyonic gamma ray emissions, let me say that the existence of superluminal energy transfer has not been established so far, and one may ask why. There are two options:

1) The simplest solution is that superluminal quanta just do not exist, the vacuum speed of light being the definitive upper bound.

2) The second solution is that the interaction of superluminal radiation with matter is very small, the quotient of tachyonic and electric fine-structure constants being q_{tach}^2/e^2<10^{-11}. Therefore superluminal quanta and their substratum are hard to detect.

A related and very interesting question can be asked now, related to the Cherenkov radiation we have studied here. What about neutral particles? Is there some analogue of Cherenkov radiation valid for chargeless or neutral particles? Because neutrinos are electrically neutral, conventional Cherenkov radiation from superluminal neutrinos does not arise, or it is otherwise weakened. However, neutrinos do carry electroweak charge and may induce a certain Cherenkov-like radiation via weak interactions when traveling at superluminal speeds (i.e., faster than light in the medium). The Askaryan effect/radiation is this Cherenkov-like effect for (showers induced by) neutrinos, and we are going to enlighten your knowledge of it with this entry.

We are being bombarded by cosmic rays and, even more, we are being bombarded by neutrinos. Indeed, we expect that ultra-high energy (UHE) neutrinos or extreme ultra-high energy (EHE) neutrinos will hit us too. When neutrinos interact with matter, they create showers, specifically in dense media. Thus, we expect that the electrons and positrons in the shower which travel faster than the speed of light in these media (or even in the air) should emit (coherent) Cherenkov-like radiation.

Who was Gurgen Askaryan?

Let me quote what Wikipedia says about him: Gurgen Askaryan (December 14, 1928 - 1997) was a prominent Soviet (Armenian) physicist, famous for his discovery of the self-focusing of light, pioneering studies of light-matter interactions, and the discovery and investigation of the interaction of high-energy particles with condensed matter. He published more than 200 papers on different topics in high-energy physics.

Other interesting ideas by Askaryan: the bubble chamber (he conceived the idea independently of Glaser, but he did not publish it, so he did not win the Nobel Prize), laser self-focusing (one of his main contributions to non-linear optics), and the acoustic UHECR detection proposal. Askaryan was the first to note that the outer few metres of the Moon’s surface, known as the regolith, would be a sufficiently transparent medium for detecting microwaves from the charge excess in particle showers. The radio transparency of the regolith has since been confirmed by the Apollo missions.

If you want to learn more about Askaryan’s ideas and his biography, you can read about them here: http://en.wikipedia.org/wiki/Gurgen_Askaryan

What is the Askaryan effect?

The next figure shows the Askaryan radiation detected by the ANITA experiment:

The Askaryan effect is the phenomenon whereby a particle traveling faster than the phase velocity of light in a dense dielectric medium (such as salt, ice or the lunar regolith) produces a shower of secondary charged particles which contains a charge anisotropy, and thus emits a cone of coherent radiation in the radio or microwave part of the electromagnetic spectrum. It is similar to, or more precisely based on, the Cherenkov effect.

High energy processes such as Compton, Bhabha and Møller scattering, along with positron annihilation, rapidly lead to about a 20%-30% negative charge asymmetry in the electron-photon part of a cascade. Such cascades can be initiated, for instance, by UHE neutrinos (with energies higher than, e.g., 100 PeV).

In 1962, Askaryan first hypothesized this effect and suggested that it should lead to strong coherent radio and microwave Cherenkov emission for showers propagating within the dielectric. Since the dimensions of the clump of charged particles are small compared to the wavelength of the radio waves, the shower radiates coherent radio Cherenkov radiation whose power is proportional to the square of the net charge in the shower. The net charge in the shower is proportional to the primary energy, so the radiated power scales quadratically with the shower energy, P_{RF}\propto E^2.
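
Restating that argument as a one-line chain (nothing new here, just the logic above put into symbols): the net charge Q_{net} tracks the excess electron number N, which tracks the primary energy E, and coherent emission squares the charge:

P_{RF}\propto Q_{net}^2\propto N^2\propto E^2

This quadratic gain is precisely why coherent radio detection pays off at the highest energies.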

Indeed, this coherent radio emission originates in the Cherenkov effect. We do know that:

\dfrac{dP_{CR}}{d\nu}\propto \nu

for a charged particle undergoing Cherenkov radiation (CR) in a dense (refractive) medium. Every charge emits a field E\propto \exp (i\mathbf{k}\cdot\mathbf{r}). Then, the total power is proportional to \vert\sum E\vert^2. In a dense medium, the Molière radius (the transverse size of the shower) is about

R_{M}\sim 10\;cm

We have two different experimental and interesting cases:

A) The optical case, with \lambda \ll R_M. Then, we expect random phases and P\propto N.

B) The microwave case, with \lambda \gg R_M. In this situation, we expect coherent radiation/waves with P\propto N^2.
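
These two regimes can be checked with a minimal numerical sketch. The only facts taken from the text are the P\propto N and P\propto N^2 scalings; the number of charges below is a toy value of mine. Summing N unit-amplitude fields with random phases gives a power of order N, while summing them in phase gives exactly N^2:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000  # number of radiating charges in the shower (toy value)

# Case A (optical, lambda << R_M): random phases -> incoherent addition.
phases = rng.uniform(0.0, 2.0 * np.pi, N)
field_incoherent = np.exp(1j * phases).sum()
P_incoherent = abs(field_incoherent) ** 2  # fluctuates around N

# Case B (microwave, lambda >> R_M): all charges radiate in phase.
field_coherent = np.exp(1j * np.zeros(N)).sum()
P_coherent = abs(field_coherent) ** 2  # exactly N**2

print(P_incoherent / N)   # O(1): incoherent power scales like N
print(P_coherent / N**2)  # 1.0: coherent power scales like N**2
```

With N = 10^4 charges the coherent power beats the incoherent one by roughly four orders of magnitude, which is the whole point of looking for the shower in the radio band.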

We can exploit this effect in large natural volumes transparent to radio (dry media): pure ice, salt formations, the lunar regolith,… The peak of this coherent radiation for sand is produced at a frequency around 5 GHz, while the peak for ice is obtained around 2 GHz.
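
As a rough consistency check on those peak frequencies (the frequencies and R_M\sim 10\;cm come from the text; the refractive indices below are standard ballpark values I am assuming, not quoted from it), the in-medium wavelength at the coherence peak is indeed comparable to the shower size:

```python
c = 299_792_458.0  # vacuum speed of light, m/s

def in_medium_wavelength(freq_hz: float, n: float) -> float:
    """Wavelength inside a medium of refractive index n, in metres."""
    return c / (n * freq_hz)

# Assumed refractive indices (ballpark values, not from the text).
lam_ice  = in_medium_wavelength(2e9, 1.78)  # ice peak ~2 GHz
lam_sand = in_medium_wavelength(5e9, 1.5)   # sand peak ~5 GHz

print(f"ice : {lam_ice*100:.1f} cm")   # ~8.4 cm, comparable to R_M ~ 10 cm
print(f"sand: {lam_sand*100:.1f} cm")  # ~4.0 cm
```

Wavelengths of a few centimetres sit right at the edge of the \lambda\gtrsim R_M coherent regime, which is why the emission peaks there and dies off at higher frequencies.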

The first experimental confirmations of the Askaryan effect were the next two experiments:

1) 2000, Saltzberg et al., SLAC. They used silica sand as target. The paper is this one: http://arxiv.org/abs/hep-ex/0011001

2) 2002, Gorham et al., SLAC. They used a synthetic salt target. The paper appeared in this place: http://arxiv.org/abs/hep-ex/0108027

Indeed, in 1965, Askaryan himself proposed ice and salt as possible target media. The reasons are easy to understand:
1st. They provide high densities, and that means a higher probability for neutrino interaction.
2nd. They have a high refractive index. Therefore, the Cherenkov emission becomes important.
3rd. Salt and ice are radio transparent and, of course, they are available in large volumes throughout the world.

The advantages of radio detection of UHE neutrinos provided by the Askaryan effect are very interesting:

1) Low attenuation: clear signals from large detection volumes.
2) We can observe distant and inclined events.
3) It has a high duty cycle: good statistics in less time.
4) It has a relatively low cost: large areas covered.
5) It is available for neutrinos and/or any other chargeless/neutral particle!

There are problems with this Askaryan effect detection, though: radio interference, the correlation with shower parameters (still unclear), and the fact that it is limited to particles with very large energies, about E>10^{17}eV.

In summary:

Askaryan effect = coherent Cherenkov radiation from a charge excess induced by neutral/chargeless particles, like (especially highly energetic) neutrinos, passing through a dense medium.

Why does the Askaryan effect matter?

It matters since it allows for the detection of UHE neutrinos, and it is “universal” for chargeless/neutral particles like neutrinos, in the same way that the Cherenkov effect is universal for charged particles. Tracking UHE neutrinos is important because they point back towards their sources, and it is suspected they can help us to solve the riddle of the origin and composition of cosmic rays, the acceleration mechanism of cosmic radiation, the nuclear interactions of astrophysical objects, and the highest energy emissions of the Universe we can observe at the current time.

Is it real? Has it been detected? Yes, after 38 years, it has been detected. The effect was first demonstrated in sand (2000), rock salt (2004) and ice (2006), all in laboratory tests at SLAC, and it has later been checked in several independent experiments around the world. Indeed, I remember hearing about this effect during my darker years as an undergraduate student. Fortunately or not, I forgot about it till now. In spite of its beauty!

Moreover, it has extra applications to neutrino detection using the Moon as target: GLUE (the detectors are the Goldstone RTs), NuMoon (Westerbork array; LOFAR), RESUN (EVLA), or the LUNASKA project. Using ice as target, there have been other experiments checking the reality of this effect: FORTE (a satellite observing the Greenland ice sheet), RICE (co-deployed on AMANDA strings, viewing Antarctic ice), and the celebrated ANITA (balloon-borne over Antarctica, viewing Antarctic ice) experiment.

Furthermore, some experiments have even used the Moon (and it is likely some others will be built in the near future) as a neutrino detector via the Askaryan radiation (the analogue for neutral particles of the Cherenkov effect, don’t forget this point!).

Askaryan effect and the mysterious cosmic rays.

Askaryan radiation is important because it is one of the portals for observing the UHE neutrinos coming from cosmic rays. The mysteries of cosmic rays continue today. We have indeed detected extremely energetic cosmic rays beyond the 10^{20}eV scale. Their origin is yet unsolved. We hope that by tracking neutrinos we will discover the sources of those rays and their nature/composition. We don’t understand or know any mechanism able to accelerate particles up to those incredible energies. At the current time, IceCube has not detected UHE neutrinos, and that is a serious issue for current theories and models. It is a challenge if we don’t observe as many UHE neutrinos as the Standard Model would predict. Would it mean that cosmic rays are exclusively composed of heavy nuclei or protons? Are we modelling badly the spectrum of the sources and the nuclear models of stars, as happened before neutrino oscillations were detected at Super-Kamiokande and Kamiokande (e.g. SN1987A)? Is there some kind of new Physics living at those scales and avoiding the GZK limit we would naively expect from our current theories?