# LOG#070. Natural Units.

Happy New Year 2013 to everyone and everywhere!

Let me apologize, first of all, for my absence… I have been busy, trying to find my path and way in my field, and I am still busy, but finally I could not resist the urge to give the blog a new boost… After all, you should know that I have enough material to write many new things.

So, what’s next? I will dedicate some blog posts to a nice topic I began before, when I talked about a classic paper on the subject here:

https://thespectrumofriemannium.wordpress.com/2012/11/18/log054-barrow-units/

The topic is going to be pretty simple: natural units in Physics.

First of all, let me point out that the choice of any system of units is, a priori, totally conventional. You are free to choose any kind of units for physical magnitudes. Of course, that is not very clever if you have to report data, since everyone should be able to understand what you measure and report. Scientists have some definitions and popular systems of units that make the process much simpler than in daily life. Then, we need some general conventions about “units”. Indeed, the traditional wisdom is to use the International System of Units, abbreviated SI from the French: Le Système international d’unités. There, you can find seven fundamental magnitudes and seven fundamental (or “natural”) units:

1) Space: $\left[ L\right]=\mbox{meter}=m$

2) Time: $\left[ T\right]=\mbox{second}=s$

3) Mass: $\left[ M\right]=\mbox{kilogram}=kg$

4) Temperature: $\left[ t\right]=\mbox{kelvin}= K$

5) Electric current (intensity): $\left[ I\right]=\mbox{ampere}=A$

6) Luminous intensity: $\left[ I_L\right]=\mbox{candela}=cd$

7) Amount of substance: $\left[ n\right]=\mbox{mole}=mol(e)$

The interdependence between these 7 great units, and even their definitions, can be found at http://en.wikipedia.org/wiki/International_System_of_Units and references therein; that Wikipedia article also includes a beautiful graph showing the “interdependence” of the 7 wonderful units.

In Physics, when you build a radically new theory, it generally has the power to introduce a relevant scale or system of units. In particular, the Special Theory of Relativity and Quantum Mechanics are such theories. General Relativity and Statistical Physics (Statistical Mechanics) also have intrinsic “universal constants”, or, to be more precise, they allow the introduction of some “more convenient” system of units than those you have ever heard of (metric system, SI, MKS, cgs, …). When I spoke about Barrow units in this blog (see the link above), we realized that dimensionality (both mathematical and “physical”) and fundamental theories are bound to the choice of some “simpler” units. Those “simpler” units are what we usually call “natural units”. I am not a big fan of such terminology; it is a little bit confusing. Maybe it would be more interesting and appropriate to call them “adapted X units” or “scaled X units”, where X denotes “relativistic, quantum, …”. Anyway, the name “natural” is popular, and it is likely impossible to change the habit.

In fact, we have to distinguish several “kinds” of natural units. First of all, let me list “fundamental and universal” constants in different theories accepted at current time:

1. Boltzmann constant: $k_B$.

Essential in Statistical Mechanics, both classical and quantum. It measures “entropy”/”information”. The fundamental equation is:

$\boxed{S=k_B\ln \Omega}$

It provides a link between the microphysics and the macrophysics (that is the content encoded in the equation above). It can be understood somehow as a measure of the “energetic content” of an individual particle or state at a given temperature. Common values for this constant are:

$k_B=1.3806488(13)\times 10^{-23}J/K = 8.6173324(78)\times 10^{-5}eV/K$

$k_B=1.3806488(13)\times 10^{-16}erg/K$

Statistical Physics states that there is a minimum unit of entropy, or equivalently a minimal value of energy, at any given temperature. The physical dimensions of this constant are thus those of entropy or, since $E=TS$, $\left[ k_B\right] =E/t=J/K$, where t denotes here the dimension of temperature.
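As a quick numerical illustration (a Python sketch using the $k_B$ values quoted above; the variable names are mine), the thermal energy scale $k_BT$ at room temperature is the famous “1/40 eV”:

```python
# Thermal energy k_B*T at room temperature, using the k_B values quoted above.
k_B_J = 1.3806488e-23      # Boltzmann constant in J/K
k_B_eV = 8.6173324e-5      # Boltzmann constant in eV/K

T_room = 300.0             # room temperature in kelvin
E_J = k_B_J * T_room       # thermal energy in joules
E_eV = k_B_eV * T_room     # the same energy in electron-volts

print(E_J)   # ~4.14e-21 J
print(E_eV)  # ~0.0259 eV, roughly the famous "1/40 eV"
```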

2. Speed of light.  $c$.

From classical electromagnetism:

$\boxed{c=\dfrac{1}{\sqrt{\varepsilon_0\mu_0}}}$

The speed of light, according to the postulates of special relativity, is a universal constant. It is frame INDEPENDENT. This fact is at the root of many of the surprising results of special relativity, and it took time to be understood. Moreover, it also connects space and time in a powerful unified formalism, so space and time merge into spacetime, as we know and have studied long ago in this blog. The spacetime interval between two arbitrary events in D=3+1 dimensions reads:

$\Delta s^2=\Delta x^2+\Delta y^2+\Delta z^2-c^2\Delta t^2$

In fact, you can observe that “c” is the conversion factor between time-like and space-like coordinates. How big is the speed of light? Well, it is a relatively large number for our common, everyday perception. It is exactly:

$\boxed{c=299,792,458m/s}$

although you often take it as $c\approx 3\cdot 10^{8}m/s=3\cdot 10^{10}cm/s$. Moreover, it is the speed of electromagnetic waves in vacuum, no matter where you are in this Universe/Polyverse. At least, experiments are consistent with such a statement. It also shows that $c$ is the conversion factor between energy and momentum, since

$\mathbf{P}^2c^2-E^2=-m^2c^4$

and $c^2$ is the conversion factor between rest mass and pure energy because, as everybody knows, $E=mc^2$! According to the special theory of relativity, normal matter can never exceed the speed of light. Therefore, the speed of light is the maximum velocity in Nature, at least if special relativity holds. The physical dimensions of c are $\left[c\right]=LT^{-1}$, where L denotes the length dimension and T denotes the time dimension (please don’t confuse the latter with temperature despite the same capital letter being used for both).

3. Planck’s constant, $h$, or its rationalized version $\hbar=h/2\pi$.

Planck’s constant (or its rationalized version), is the fundamental universal constant in Quantum Physics (Quantum Mechanics, Quantum Field Theory). It gives

$\boxed{E=h\nu=\hbar \omega}$

Indeed, quanta are the minimal units of energy. That is, you can not divide further a quantum of light, since it is indivisible by definition! Furthermore, the de Broglie relationship relates momentum and wavelength for any particle, and it emerges from the combination of special relativity and the quantum hypothesis:

$\lambda=\dfrac{h}{p}\leftrightarrow \bar{\lambda}=\dfrac{\hbar}{p}$

In the case of massive particles, it yields

$\lambda=\dfrac{h}{Mv}\leftrightarrow \bar{\lambda}=\dfrac{\hbar}{Mv}$

In the case of massless particles (photons, gluons, gravitons,…)

$\lambda=\dfrac{hc}{E}$ or $\bar{\lambda}=\dfrac{\hbar c}{E}$

Planck’s constant also appears to be essential to the uncertainty principle of Heisenberg:

$\boxed{\Delta x \Delta p\geq \hbar/2}$

$\boxed{\Delta E \Delta t\geq \hbar/2}$

$\boxed{\Delta A \Delta B\geq \dfrac{1}{2}\left|\langle \left[ A,B\right]\rangle\right|}$

where the last line is the generalized uncertainty relation for any two observables A and B, in which the Planck constant enters through the commutator, e.g. $\left[ x,p\right]=i\hbar$.

Some particularly important values of this constant are:

$h=6.62606957(29)\times 10^{-34} J\cdot s$
$h=4.135667516(91)\times 10^{-15}eV\cdot s$
$h=6.62606957(29)\times 10^{-27} erg\cdot s$
$\hbar =1.054571726(47)\times 10^{-34} J\cdot s$
$\hbar =6.58211928(15)\times 10^{-16} eV\cdot s$
$\hbar= 1.054571726(47)\times 10^{-27}erg\cdot s$

It is also useful to know that
$hc=1.98644568\times 10^{-25}J\cdot m$
$hc=1.23984193 eV\cdot \mu m$

or

$\hbar c=0.1591549hc$ or $\hbar c=197.327\ eV\cdot nm=197.327\ MeV\cdot fm$

The Planck constant has dimensions of $\mbox{Energy}\times \mbox{Time}=\mbox{position}\times \mbox{momentum}=ML^2T^{-1}$. The physical dimensions of this constant also coincide with those of angular momentum (spin), i.e., with $L=mvr$.
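For instance, the handy $hc$ value above gives photon wavelengths directly from energies (a minimal Python sketch; the helper name is my own):

```python
# Photon wavelength from energy, lambda = h*c/E, with hc = 1.23984193 eV*um
# (the value quoted above).
hc_eV_um = 1.23984193            # h*c in eV * micrometers

def photon_wavelength_um(E_eV):
    """Wavelength (in micrometers) of a photon with energy E_eV (in eV)."""
    return hc_eV_um / E_eV

lam_ir = photon_wavelength_um(1.0)      # a 1 eV photon: ~1.24 um (near infrared)
lam_green = photon_wavelength_um(2.5)   # a 2.5 eV photon: ~0.50 um (visible light)
```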

4. Gravitational constant. $G_N$.

Apparently, it is not like the others, but it can also define a particular scale when combined with Special Relativity. Without entering into further details (since I have not discussed General Relativity yet in this blog), we can ask when the escape velocity from a massive body reaches the speed of light:

$\dfrac{1}{2}mv^2-G_N\dfrac{Mm}{R}=0$ with $v=c$ implies a new length scale where relativistic gravitational effects appear, the so-called Schwarzschild radius $R_S$:

$\boxed{R_S=\dfrac{2G_NM}{c^2}=\dfrac{2G_NM_{\odot}}{c^2}\left(\dfrac{M}{M_{\odot}}\right)\approx 2.95\left(\dfrac{M}{M_{\odot}}\right)km}$
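A minimal numerical check of the boxed formula (a Python sketch; the rounded solar and terrestrial masses are values I am assuming):

```python
# Schwarzschild radius R_S = 2*G*M/c^2.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 299792458.0        # speed of light, m/s

def schwarzschild_radius(M):
    """Schwarzschild radius in meters for a mass M in kg."""
    return 2.0 * G * M / c**2

R_sun = schwarzschild_radius(1.989e30)    # ~2.95 km, as quoted above
R_earth = schwarzschild_radius(5.972e24)  # ~8.9 millimeters!
```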

5. Electric fundamental charge. $e$.

It is generally chosen as fundamental charge the electric charge of the positron (positive charged “electron”). Its value is:

$e=1.602176565(35)\times 10^{-19}C$

where C denotes Coulomb. Of course, since quarks carry a fraction of this charge, you could ask why we prefer this one. Really, it is only a question of the history of Science: electrons (and positrons) were discovered first. Quarks, with one third or two thirds of this amount of elementary charge, were discovered later, but you could equally define the fundamental unit of charge as a multiple or fraction of this charge. Moreover, as far as we know, electrons are “elementary”/“fundamental” entities, so we can use this charge as the unit and define quark charges in terms of it too. Note that electric charge is not a fundamental unit in the SI system of units; charge flow, or electric current, is.

An amazing property of the above 5 constants is that they are “universal”. And, for instance, energy is related to other magnitudes in theories where the above constants are present in a really wonderful and unified manner:

$\boxed{E=N\dfrac{k_BT}{2}=Mc^2=TS=Pc=N\dfrac{h\nu}{2}=N\dfrac{\hbar \omega}{2}=\dfrac{R_Sc^4}{2G_N}=\hbar c k=\dfrac{hc}{\lambda}}$

Caution: k is not the Boltzmann constant but the wave number.

There is a sixth “fundamental” constant related to electromagnetism, but it is also related to the speed of light, the electric charge and the Planck constant in a very subtle way. Let me introduce it too…

6. Coulomb constant. $k_C$.

This is a second constant related to classical electromagnetism, like the speed of light in vacuum. Coulomb’s constant, the electric force constant, or the electrostatic constant (denoted $k_C$) is a proportionality factor relating the electric force between point charges to their magnitudes and separation, and indirectly it also appears (depending on your system of units) in expressions for the electric fields of charge distributions. Coulomb’s law reads

$F_C=k_C\dfrac{Qq}{r^2}$

Its experimental value is

$k_C=\dfrac{1}{4\pi \varepsilon_0}=\dfrac{c^2\mu_0}{4\pi}=c^2\cdot 10^{-7}H\cdot m^{-1}= 8.9875517873681764\cdot 10^9 Nm^2/C^2$

Generally, the Coulomb constant is dropped, and it is usually preferred to express everything using the electric permittivity of vacuum $\varepsilon_0$ and/or numerical factors depending on the number $\pi$ if you choose the Gaussian system of units (read this Wikipedia article: http://en.wikipedia.org/wiki/Gaussian_system_of_units ), the CGS system, or some hybrid units based on them.

## H.E.P. units

High Energy Physicists usually employ units in which velocity is measured in fractions of the speed of light in vacuum, and action/angular momentum in multiples of the rationalized Planck constant. These conditions are equivalent to setting

$\boxed{c=1_c=1}$ $\boxed{\hbar=1_\hbar=1}$

Complementarily, or not, depending on your tastes and preferences, you can also set the Boltzmann’s constant to the unit as well

$k_B=1_{k_B}=1$

and thus the complete HEP system is defined if you set

$\boxed{c=\hbar=k_B=1}$

This “natural” system of units still lacks a scale of energy. The electron-volt $eV$ is then generally added as an auxiliary quantity defining the reference energy scale, despite the fact that it is not a “natural unit” in the proper sense, since it is defined by a natural property, the electric charge, together with the anthropogenic unit of electric potential, the volt. The SI prefixed multiples of the eV are used as well: keV, MeV, GeV, etc. With the above choice of “elementary/natural units”, and the eV (or any other auxiliary unit of energy) as the reference energy quantity, any quantity can be expressed. For example, a distance of 1 m can be expressed in terms of eV, in natural units, as

$1m=\dfrac{1m}{\hbar c}\approx 5.07\cdot 10^{6}\ eV^{-1}$

This system of units has remarkable conversion factors:

A) $1 eV^{-1}$ of length is equal to $1.97\cdot 10^{-7}m =(1\text{eV}^{-1})\hbar c$

B) $1 eV$ of mass is equal to $1.78\cdot 10^{-36}kg=1\times \dfrac{eV}{c^2}$

C) $1 eV^{-1}$ of time is equal to $6.58\cdot 10^{-16}s=(1\text{eV}^{-1})\hbar$

D) $1 eV$ of temperature is equal to $1.16\cdot 10^4K=1eV/k_B$

E) $1 unit$ of electric charge in the Lorentz-Heaviside system of units is equal to $5.29\cdot 10^{-19}C=e/\sqrt{4\pi\alpha}$

F) $1 unit$ of electric charge in the Gaussian system of units is equal to $1.88\cdot 10^{-18}C=e/\sqrt{\alpha}$
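The factors A)-D) above can be reproduced directly from the SI values of $\hbar$, $c$, $k_B$ and the eV (a Python sketch; the variable names are mine):

```python
# HEP natural-unit conversion factors from SI constants.
hbar = 1.054571726e-34    # J*s
c = 299792458.0           # m/s
k_B = 1.3806488e-23       # J/K
eV = 1.602176565e-19      # J

length_per_inv_eV = hbar * c / eV   # ~1.97e-7 m  : 1 eV^-1 of length
time_per_inv_eV = hbar / eV         # ~6.58e-16 s : 1 eV^-1 of time
mass_per_eV = eV / c**2             # ~1.78e-36 kg: 1 eV of mass
temp_per_eV = eV / k_B              # ~1.16e4 K   : 1 eV of temperature

# Example: 1 meter expressed in natural units, 1 m = 1/(hbar*c) eV^-1.
one_meter_in_inv_eV = 1.0 / length_per_inv_eV   # ~5.07e6 eV^-1
```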

This system of units, therefore, leaves free only the energy scale (generally the electron-volt is chosen) and the electric measure of fundamental charge. Every other unit can be related to energy/charge. It is truly remarkable that doing this (turning the above three constants invisible) you can “unify” different magnitudes, since these conventions make them equivalent. For instance, with natural units:

1) Length=Time=1/Energy=1/Mass.

It is due to the equations $x=ct$, $E=Mc^2$ and $E=hc/\lambda$. Setting $c$ and $h$ (or $\hbar$) to one provides

$x=t$, $E=M$ and $E=1/\lambda$.

Note that natural units turn invisible the units we set to the unit! That is the key of the procedure. It simplifies equations and expressions. Of course, you must be careful when you reintroduce constants!

2) Energy=Mass=Momentum=Temperature.

It is due to $E=k_BT$, $E=Pc$ and $E=Mc^2$ again.

One extra bonus for theoretical physicists is that natural units allow one to build and write proper lagrangians and hamiltonians (certain mathematical operators containing the dynamics of the system encoded in them), or equivalently the action functional, with only the energy or “mass” dimension as “free parameter”. Let me show how it works.

Natural units in HEP identify length and time dimensions. Thus $\left[L\right]=\left[T\right]$. Planck’s constant allows us to identify those 2 dimensions with 1/Energy (reciprocals of energy) physical dimensions. Therefore, in HEP units, we have

$\boxed{\left[L\right]=\left[T\right]=\left[E\right]^{-1}}$

The speed of light identifies energy and mass, and thus we often hear about the “mass dimension” of a lagrangian in the following sense. HEP units can be thought of as defining “everything” in terms of energy, on purely dimensional grounds. That is, every physical dimension is (in HEP units) a power of energy:

$\boxed{\left[E\right]^n}$

Thus, we can refer to any magnitude simply by stating the power of such physical dimension (or you can think logarithmically to understand it more easily if you wish). With this convention, and recalling that energy dimension is mass dimension, we have that

$\left[L\right]=\left[T\right]=-1$ and $\left[E\right]=\left[M\right]=1$

Using these arguments, the action functional is a pure dimensionless quantity, and thus, in D=4 spacetime dimensions, lagrangian densities must have dimension 4 (or dimension D in a general D-dimensional spacetime):

$\displaystyle{S=\int d^4x \mathcal{L}\rightarrow \left[\mathcal{L}\right]=4}$

$\displaystyle{S=\int d^Dx \mathcal{L}\rightarrow \left[\mathcal{L}\right]=D}$

In D=4 spacetime dimensions, it can be easily shown that

$\left[\partial_\mu\right]=\left[\Phi\right]=\left[A^\mu\right]=1$

$\left[\Psi_D\right]=\left[\Psi_M\right]=\left[\chi\right]=\left[\eta\right]=\dfrac{3}{2}$

where $\Phi$ is a scalar field, $A^\mu$ is a vector field (like the electromagnetic or non-abelian vector gauge fields), $\Psi_D, \Psi_M$ are a Dirac spinor and a Majorana spinor, and $\chi, \eta$ are Weyl spinors (of different chiralities). Supersymmetry (or SUSY) allows for anticommuting c-numbers (Grassmann numbers), and it forces us to introduce auxiliary parameters with mass dimension $-1/2$: the so-called SUSY transformation parameters $\zeta_{SUSY}=\epsilon$. There are some speculative spinors called ELKO fields that could be non-standard spinor fields with mass dimension one! But that is an advanced topic I am not going to discuss here today. In general D spacetime dimensions, a scalar (or vector) field has mass dimension $(D-2)/2$, and a spinor/fermionic field generally has mass dimension $(D-1)/2$ (excepting the auxiliary SUSY Grassmannian parameters and the exotic idea of ELKO fields). This dimensional analysis is very useful when theoretical physicists build up interacting lagrangians, since we can constrain every possible operator entering into the action/lagrangian density by purely dimensional arguments! In summary, therefore, for any D:

$\boxed{\left[\Phi\right]=\left[A_\mu\right]=\dfrac{D-2}{2}\equiv E^{\frac{D-2}{2}}=M^{\frac{D-2}{2}}}$

$\boxed{\left[\Psi\right]=\dfrac{D-1}{2}\equiv E^{\frac{D-1}{2}}=M^{\frac{D-1}{2}}}$

Remark (for QFT experts only): Don’t confuse mass dimension with the final transverse polarization degrees or “degrees of freedom” of a particular field, i.e., “components” minus “gauge constraints”. E.g.: a gauge vector field has $D-2$ degrees of freedom in D dimensions. They are different concepts (although both closely related to the spacetime dimension where the field “lives”).
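The counting rules above can be condensed into a tiny helper (a Python sketch of the rules stated in the text; the function name is my own):

```python
# Mass dimension of common objects in HEP units, for spacetime dimension D.
def mass_dim(obj, D=4):
    """Mass (energy) dimension of a field/object in D spacetime dimensions."""
    dims = {
        "lagrangian": D,           # [L] = D, so the action is dimensionless
        "derivative": 1,           # [partial_mu] = 1
        "scalar": (D - 2) / 2,     # Phi
        "vector": (D - 2) / 2,     # A_mu
        "fermion": (D - 1) / 2,    # Dirac/Majorana/Weyl spinors
    }
    return dims[obj]

# In D=4: scalars and vectors have dimension 1, fermions 3/2.
```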

In summary:

i) HEP units are based on QM (Quantum Mechanics), SR (Special Relativity) and Statistical Mechanics (Entropy and Thermodynamics).

ii) HEP units need to introduce a free energy scale, and it generally drives us to use the eV or electron-volt as auxiliary energy scale.

iii) HEP units are useful to dimensional analysis of lagrangians (and hamiltonians) up to “mass dimension”.

## Stoney Units

In Physics, the Stoney units form an alternative set of natural units named after the Irish physicist George Johnstone Stoney, who first introduced them as we know them today in 1881. However, he had presented the idea before that date, in 1874, in a lecture entitled “On the Physical Units of Nature” delivered to the British Association. They are the first historical example of natural units and, somehow, of a “unification scale”. Stoney units are rarely used for calculations in modern physics, but they are of historical interest, and some people like Wilczek have written about them (see, e.g., http://arxiv.org/abs/0708.4361). These units of measurement were designed so that certain fundamental physical constants are taken as the reference basis without the Planck scale being explicit, quite a remarkable fact! The set of constants that Stoney used as base units is the following:

A) Electric charge, $e=1_e$.

B) Speed of light in vacuum, $c=1_c$.

C) Gravitational constant, $G_N=1_{G_N}$.

D) The reciprocal of the Coulomb constant, $1/k_C=4\pi \varepsilon_0=1_{k_C^{-1}}=1_{4\pi \varepsilon_0}$.

Stoney units are built when you set these four constants to unity; i.e., equivalently, the Stoney System of Units (S) is determined by the assignments:

$\boxed{e=c=G_N=4\pi\varepsilon_0=1}$

Interestingly, in this system of units the Planck constant is not equal to one, and thus it is not “fundamental” (Wilczek has remarked this fact in the paper cited above), but:

$\hbar=\dfrac{1}{\alpha}\approx 137.035999679$

Today, Planck units are more popular than Stoney units in modern physics, and there are even many physicists who don’t know about the Stoney units! In fact, Stoney was one of the first scientists to understand that electric charge was quantized; from this quantization he deduced the units that are now named after him.

The Stoney length and the Stoney energy are collectively called the Stoney scale, and they are not far from the Planck length and the Planck energy, the Planck scale. The Stoney scale and the Planck scale are the length and energy scales at which quantum processes and gravity occur together, so a unified theory of physics is likely required there. The only notable attempt to construct such a theory from the Stoney scale was that of H. Weyl, who associated a gravitational unit of charge with the Stoney length and who appears to have inspired Dirac’s fascination with the large number hypothesis. Since then, the Stoney scale has been largely neglected in the development of modern physics, although it is occasionally discussed to this day. Wilczek likes to point out that, in Stoney units, QM would be an emergent phenomenon/theory, since the Planck constant would not be present directly but only as a combination of different constants. On the other hand, the Planck scale is valid for all known interactions and does not give prominence to the electromagnetic interaction, as the Stoney scale does. That is, in Stoney units both gravitation and electromagnetism are on an equal footing, unlike in Planck units, where the electric charge plays no special role. Be aware that sometimes, though rarely, Planck units are referred to as Planck-Stoney units.

What are the most interesting Stoney system values? Here are the most remarkable results:

1) Stoney Length, $L_S$.

$\boxed{L_S=\sqrt{\dfrac{G_Ne^2}{(4\pi\varepsilon_0)c^4}}\approx 1.38\cdot 10^{-36}m}$

2) Stoney Mass, $M_S$.

$\boxed{M_S=\sqrt{\dfrac{e^2}{G_N(4\pi\varepsilon_0)}}\approx 1.86\cdot 10^{-9}kg}$

3) Stoney Energy, $E_S$.

$\boxed{E_S=M_Sc^2=\sqrt{\dfrac{e^2c^4}{G_N(4\pi\varepsilon_0)}}\approx 1.67\cdot 10^8 J=1.04\cdot 10^{18}GeV}$

4) Stoney Time, $t_S$.

$\boxed{t_S=\sqrt{\dfrac{G_Ne^2}{c^6(4\pi\varepsilon_0)}}\approx 4.61\cdot 10^{-45}s}$

5) Stoney Charge, $Q_S$.

$\boxed{Q_S=e\approx 1.60\cdot 10^{-19}C}$

6) Stoney Temperature, $T_S$.

$\boxed{T_S=E_S/k_B=\sqrt{\dfrac{e^2c^4}{G_Nk_B^2(4\pi\varepsilon_0)}}\approx 1.21\cdot 10^{31}K}$
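All the boxed Stoney values can be checked numerically from the SI constants (a Python sketch; constant values as quoted earlier in the post):

```python
import math

# Stoney scale from e = c = G_N = 4*pi*eps0 = 1.
G = 6.674e-11              # gravitational constant
c = 299792458.0            # speed of light
e = 1.602176565e-19        # elementary charge
eps0 = 8.854187817e-12     # vacuum permittivity
k_C = 1.0 / (4.0 * math.pi * eps0)       # Coulomb constant

L_S = math.sqrt(G * k_C * e**2 / c**4)   # Stoney length, ~1.38e-36 m
M_S = math.sqrt(k_C * e**2 / G)          # Stoney mass,   ~1.86e-9 kg
t_S = L_S / c                            # Stoney time,   ~4.61e-45 s
E_S = M_S * c**2                         # Stoney energy, ~1.67e8 J
```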

## Planck Units

The reference constants of this natural system of units (generally denoted by P) are the following 4 constants:

1) Gravitational constant. $G_N$

2) Speed of light. $c$.

3) Planck constant or rationalized Planck constant. $\hbar$.

4) Boltzmann constant. $k_B$.

The Planck units are obtained when you set these 4 constants to unity, i.e.,

$\boxed{G_N=c=\hbar=k_B=1}$

It is often said that Planck units are a system of natural units that is not defined in terms of properties of any prototype, physical object, or even any fundamental particle. They only refer to the basic structure of the laws of physics: c and G are part of the structure of classical spacetime in the relativistic theory of gravitation, also known as general relativity, and $\hbar$ captures the relationship between energy and frequency which is at the foundation of elementary quantum mechanics. This is the reason why Planck units are particularly useful and common in theories of quantum gravity, including string theory or loop quantum gravity.

This system defines some limit magnitudes, as follows:

1) Planck Length, $L_P$.

$\boxed{L_P=\sqrt{\dfrac{G_N\hbar}{c^3}}\approx 1.616\cdot 10^{-35}m}$

2) Planck Time, $t_P$.

$\boxed{t_P=L_P/c=\sqrt{\dfrac{G_N\hbar}{c^5}}\approx 5.391\cdot 10^{-44}s}$

3) Planck Mass, $M_P$.

$\boxed{M_P=\sqrt{\dfrac{\hbar c}{G_N}}\approx 2.176\cdot 10^{-8}kg}$

4) Planck Energy, $E_P$.

$\boxed{E_P=M_Pc^2=\sqrt{\dfrac{\hbar c^5}{G_N}}\approx 1.96\cdot 10^9J=1.22\cdot 10^{19}GeV}$

5) Planck charge, $Q_P$.

In Lorentz-Heaviside electromagnetic units

$\boxed{Q_P=\sqrt{\hbar c \varepsilon_0}=\dfrac{e}{\sqrt{4\pi\alpha}}\approx 5.291\cdot 10^{-19}C}$

In Gaussian electromagnetic units

$\boxed{Q_P=\sqrt{\hbar c (4\pi\varepsilon_0)}=\dfrac{e}{\sqrt{\alpha}}\approx 1.876\cdot 10^{-18}C}$

6) Planck temperature, $T_P$.

$\boxed{T_P=E_P/k_B=\sqrt{\dfrac{\hbar c^5}{G_Nk_B^2}}\approx 1.417\cdot 10^{32}K}$
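Again, the boxed Planck values follow at once from the SI constants (a Python sketch; same constant values as quoted earlier):

```python
import math

# Planck scale from G_N = c = hbar = k_B = 1.
G = 6.674e-11
c = 299792458.0
hbar = 1.054571726e-34
k_B = 1.3806488e-23

L_P = math.sqrt(hbar * G / c**3)   # Planck length,      ~1.616e-35 m
t_P = L_P / c                      # Planck time,        ~5.391e-44 s
M_P = math.sqrt(hbar * c / G)      # Planck mass,        ~2.176e-8 kg
E_P = M_P * c**2                   # Planck energy,      ~1.96e9 J
T_P = E_P / k_B                    # Planck temperature, ~1.417e32 K
```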

From these “fundamental” magnitudes we can build many derived quantities in the Planck System:

1) Planck area.

$A_P=L_P^2=\dfrac{\hbar G_N}{c^3}\approx 2.612\cdot 10^{-70}m^2$

2) Planck volume.

$V_P=L_P^3=\left(\dfrac{\hbar G_N}{c^3}\right)^{3/2}\approx 4.22\cdot 10^{-105}m^3$

3) Planck momentum.

$P_P=M_Pc=\sqrt{\dfrac{\hbar c^3}{G_N}}\approx 6.52485 kgm/s$

A relatively “small” momentum!

4) Planck force.

$F_P=E_P/L_P=\dfrac{c^4}{G_N }\approx 1.21\cdot 10^{44}N$

It is independent of the Planck constant! Moreover, the Planck acceleration is

$a_P=F_P/M_P=\sqrt{\dfrac{c^7}{G_N\hbar}}\approx 5.561\cdot 10^{51}m/s^2$

5) Planck Power.

$\mathcal{P}_P=\dfrac{c^5}{G_N}\approx 3.628\cdot 10^{52}W$

6) Planck density.

$\rho_P=\dfrac{c^5}{\hbar G_N^2}\approx 5.155\cdot 10^{96}kg/m^3$

Planck density energy would be equal to

$\rho_P c^2=\dfrac{c^7}{\hbar G_N^2}\approx 4.6331\cdot 10^{113}J/m^3$

7) Planck angular frequency.

$\omega_P=\sqrt{\dfrac{c^5}{\hbar G_N}}\approx 1.85487\cdot 10^{43}Hz$

8) Planck pressure.

$p_P=\dfrac{F_P}{A_P}=\dfrac{c^7}{G_N^2\hbar}=\rho_P c^2\approx 4.6331\cdot 10^{113}Pa$

Note that Planck pressure IS the Planck density energy!

9) Planck current.

$I_P=Q_P/t_P=\sqrt{\dfrac{4\pi\varepsilon_0 c^6}{G_N}}\approx 3.4789\cdot 10^{25}A$

10) Planck voltage.

$v_P=E_P/Q_P=\sqrt{\dfrac{c^4}{4\pi\varepsilon_0 G_N}}\approx 1.04295\cdot 10^{27}V$

11) Planck impedance.

$Z_P=v_P/I_P=\dfrac{\hbar}{Q_P^2}=\dfrac{1}{4\pi \varepsilon_0 c}\approx 29.979\Omega$

A relatively small impedance!

12) Planck capacitance.

$C_P=Q_P/v_P=4\pi\varepsilon_0\sqrt{\dfrac{\hbar G_N}{ c^3}} \approx 1.798\cdot 10^{-45}F$

Interestingly, it depends on the gravitational constant!

Some Planck units are suitable for measuring quantities that are familiar from daily experience. In particular:

1 Planck mass is about 22 micrograms.

1 Planck momentum is about 6.5 kg m/s

1 Planck energy is about 500kWh.

1 Planck charge is about 11 elementary (electronic) charges.

1 Planck impedance is almost 30 ohms.

Moreover:

i) A speed of 1 Planck length per Planck time is the speed of light, the maximum possible speed in special relativity.

ii) To understand the Planck Era and “before” (if that makes sense), supposing QM still holds there, we would need a quantum theory of gravity. There is no such complete theory right now, though. Therefore, we will have to wait to see if these ideas are right or not.

iii) It is believed that at the Planck temperature the symmetry of the Universe was “perfect”, in the sense that the four fundamental forces were “unified” somehow. We have only some vague notions about how that theory of everything (TOE) would look.

The physical dimensions of the known Universe in terms of Planck units are “dramatic”:

i) Age of the Universe is about $t_U=8.0\cdot 10^{60} t_P$.

ii) Diameter of the observable Universe is about $d_U=5.4\cdot 10^{61}L_P$

iii) Current temperature of the Universe is about $1.9 \cdot 10^{-32}T_P$

iv) The observed cosmological constant is about $5.6\cdot 10^{-122}t_P^{-2}$

v) The mass of the observable Universe is about $10^{60}M_P$.

vi) The Hubble constant is $71km/s/Mpc\approx 1.23\cdot 10^{-61}t_P^{-1}$
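For instance, item i) is easy to reproduce (a Python sketch; the rounded 13.8 Gyr age is a value I am assuming):

```python
import math

# Age of the Universe measured in Planck times.
G, c, hbar = 6.674e-11, 299792458.0, 1.054571726e-34
t_P = math.sqrt(hbar * G / c**5)          # Planck time, ~5.39e-44 s

seconds_per_year = 3.156e7
t_universe = 13.8e9 * seconds_per_year    # ~4.36e17 s
age_in_planck_times = t_universe / t_P    # ~8e60, matching the figure above
```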

## Schrödinger Units

The Schrödinger units do not explicitly contain c, the speed of light in vacuum. However, the speed of light hides inside the permittivity of free space (i.e., the electric constant or vacuum permittivity), since the vacuum permittivity is the reciprocal of the speed of light squared times the magnetic constant, $\varepsilon_0=1/(\mu_0 c^2)$. So, even though the speed of light is not apparent in the Schrödinger units, it is buried within their terms and therefore influences the numerical values. The essence of the Schrödinger units is the following set of constants:

A) Gravitational constant $G_N$.

B) Planck constant $\hbar$.

C) Boltzmann constant $k_B$.

D) Coulomb constant, or equivalently the electric permittivity of free space/vacuum, $k_C=1/4\pi\varepsilon_0$.

E) The electric charge of the positron $e$.

In this system (denoted by $\psi$) we set

$\boxed{e=G_N=\hbar =k_B =k_C =1}$

1) Schrödinger Length $L_{Sch}$.

$L_\psi=\sqrt{\dfrac{\hbar^4 G_N(4\pi\varepsilon_0)^3}{e^6}}\approx 2.593\cdot 10^{-32}m$

2) Schrödinger time $t_{Sch}$.

$t_\psi=\sqrt{\dfrac{\hbar^6 G_N(4\pi\varepsilon_0)^5}{e^{10}}}\approx 1.185\cdot 10^{-38}s$

3) Schrödinger mass $M_{Sch}$.

$M_\psi=\sqrt{\dfrac{e^2}{G_N(4\pi\varepsilon_0)}}\approx 1.859\cdot 10^{-9}kg$

4) Schrödinger energy $E_{Sch}$.

$E_\psi=\sqrt{\dfrac{e^{10}}{\hbar^4(4\pi\varepsilon_0)^5G_N}}\approx 8890 J=5.55\cdot 10^{13}GeV$

5) Schrödinger charge $Q_{Sch}$.

$Q_\psi =e=1.602\cdot 10^{-19}C$

6) Schrödinger temperature $T_{Sch}$.

$T_\psi=E_\psi/k_B=\sqrt{\dfrac{e^{10}}{\hbar^4(4\pi\varepsilon_0)^5G_Nk_B^2}}\approx 6.445\cdot 10^{26}K$
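Numerically, the Schrödinger values above can be checked as well; note that the Schrödinger length is just the Stoney length divided by $\alpha^2$ (a Python sketch with the SI constants quoted earlier):

```python
import math

# Schrodinger scale from e = G_N = hbar = k_B = k_C = 1.
G = 6.674e-11
c = 299792458.0
hbar = 1.054571726e-34
e = 1.602176565e-19
fpe0 = 4.0 * math.pi * 8.854187817e-12   # 4*pi*eps0

L_psi = math.sqrt(hbar**4 * G * fpe0**3 / e**6)    # ~2.59e-32 m
t_psi = math.sqrt(hbar**6 * G * fpe0**5 / e**10)   # ~1.19e-38 s
M_psi = math.sqrt(e**2 / (G * fpe0))               # ~1.86e-9 kg (the Stoney mass)

alpha = e**2 / (fpe0 * hbar * c)                   # fine-structure constant
L_stoney = math.sqrt(G * e**2 / (fpe0 * c**4))
ratio = L_psi / (L_stoney / alpha**2)              # L_psi = L_Stoney/alpha^2, so ~1.0
```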

## Atomic Units

There are two alternative systems of atomic units, closely related:

1) Hartree atomic units:

$\boxed{e=m_e=\hbar=k_B=1}$ and $\boxed{c=\alpha^{-1}}$

2) Rydberg atomic units:

$\boxed{\dfrac{e}{\sqrt{2}}=2m_e=\hbar=k_B=1}$ and $\boxed{c=2\alpha^{-1}}$

There, $m_e$ is the electron mass and $\alpha$ is the electromagnetic fine structure constant. These units are designed to simplify atomic and molecular physics and chemistry, especially the quantities related to the hydrogen atom, and they are widely used in these fields. The Hartree units were first proposed by Douglas Hartree, and they are more common than the Rydberg units.

The units are adapted to characterize the behavior of an electron in the ground state of a hydrogen atom. For example, using the Hartree convention, in the Bohr model of the hydrogen atom an electron in the ground state has orbital velocity = 1, orbital radius = 1, angular momentum = 1, ionization energy equal to 1/2, and so on.

Some quantities in the Hartree system of units are:

1) Atomic Length (also called the Bohr radius):

$L_A=a_0=\dfrac{\hbar^2 (4\pi\varepsilon_0)}{m_ee^2}\approx 5.292\cdot 10^{-11}m=0.5292\AA$

2) Atomic Time:

$t_A=\dfrac{\hbar^3(4\pi\varepsilon_0)^2}{m_ee^4}\approx 2.419\cdot 10^{-17}s$

3) Atomic Mass:

$M_A=m_e\approx 9.109\cdot 10^{-31}kg$

4) Atomic Energy:

$E_A=\alpha^2 m_ec^2=\dfrac{m_ee^4}{\hbar^2(4\pi\varepsilon_0)^2} \approx 4.36\cdot 10^{ -18}J=27.2eV=2\times(13.6)eV=2Ry$

5) Atomic electric Charge:

$Q_A=q_e=e\approx 1.602\cdot 10^{-19}C$

6) Atomic temperature:

$T_A=E_A/k_B=\dfrac{m_ee^4}{\hbar^2(4\pi\varepsilon_0)^2k_B}\approx 3.158\cdot 10^5K$

The fundamental unit of energy is called the Hartree energy in the Hartree system and the Rydberg energy in the Rydberg system. They differ by a factor of 2. The speed of light is relatively large in atomic units (137 in Hartree or 274 in Rydberg), which comes from the fact that an electron in hydrogen tends to move much more slowly than the speed of light. The gravitational constant is extremely small in atomic units (about $10^{-45}$), which comes from the fact that the gravitational force between two electrons is far weaker than the Coulomb force. The unit of length, $L_A$, is the well-known Bohr radius, $a_0$.
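A quick check of the Bohr radius and the Hartree energy from the SI constants (a Python sketch; note $E_A=\alpha^2 m_ec^2\approx 27.2$ eV):

```python
import math

# Bohr radius and Hartree energy in SI units.
hbar = 1.054571726e-34
m_e = 9.10938291e-31
e = 1.602176565e-19
fpe0 = 4.0 * math.pi * 8.854187817e-12   # 4*pi*eps0

a_0 = hbar**2 * fpe0 / (m_e * e**2)      # Bohr radius, ~5.29e-11 m
E_h = m_e * e**4 / (hbar**2 * fpe0**2)   # Hartree energy, ~4.36e-18 J
E_h_eV = E_h / e                         # ~27.2 eV = 2 Rydberg
```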

The values of c and e shown above imply that $e=\sqrt{\alpha \hbar c}$, as in Gaussian units, not Lorentz-Heaviside units. However, hybrids of the Gaussian and Lorentz–Heaviside units are sometimes used, leading to inconsistent conventions for magnetism-related units. Be aware of these issues!

## QCD Units

In the framework of Quantum Chromodynamics, the quantum field theory (QFT) we know as QCD, we can define the QCD system of units based on:

1) QCD Length $L_{QCD}$.

$L_{QCD}=\dfrac{\hbar}{m_pc}\approx 2.103\cdot 10^{-16}m$

and where $m_p$ is the proton mass (please, don’t confuse it with the Planck mass $M_P$).

2) QCD Time $t_{QCD}$.

$t_{QCD}=\dfrac{\hbar}{m_pc^2}\approx 7.015\cdot 10^{-25}s$

3) QCD Mass $M_{QCD}$.

$M_{QCD}=m_p\approx 1.673\cdot 10^{-27}kg$

4) QCD Energy $E_{QCD}$.

$E_{QCD}=M_{QCD}c^2=m_pc^2\approx 1.503\cdot 10^{-10}J=938.3MeV=0.9383GeV$

Thus, QCD energy is about 1 GeV!

5) QCD Temperature $T_{QCD}$.

$T_{QCD}=E_{QCD}/k_B=\dfrac{m_pc^2}{k_B}\approx 1.089\cdot 10^{13}K$

6) QCD Charge $Q_{QCD}$.

In Heaviside-Lorentz units:

$Q_{QCD}=\dfrac{1}{\sqrt{4\pi\alpha}}e\approx 5.292\cdot 10^{-19}C$

In Gaussian units:

$Q_{QCD}=\dfrac{1}{\sqrt{\alpha}}e\approx 1.876\cdot 10^{-18}C$
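The QCD units above follow directly from the proton mass. A minimal sketch, with approximate hard-coded constants:

```python
# Derive the QCD length, time, energy and temperature from the proton mass.
hbar = 1.054571817e-34  # reduced Planck constant [J s]
c = 2.99792458e8        # speed of light [m/s]
m_p = 1.67262192e-27    # proton mass [kg]
k_B = 1.380649e-23      # Boltzmann constant [J/K]
e = 1.602176634e-19     # elementary charge [C], to convert J -> eV

L_qcd = hbar / (m_p * c)     # reduced proton Compton wavelength
t_qcd = hbar / (m_p * c**2)  # light-crossing time of that length
E_qcd = m_p * c**2           # proton rest energy
T_qcd = E_qcd / k_B          # associated temperature

print(L_qcd)            # ~2.10e-16 m
print(t_qcd)            # ~7.02e-25 s
print(E_qcd / e / 1e9)  # ~0.938 GeV
print(T_qcd)            # ~1.09e13 K
```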

## Geometrized Units

The geometrized unit system, used in general relativity, is not a completely defined system. In this system, the base physical units are chosen so that the speed of light and the gravitational constant are set equal to unity. Other units may be treated however desired. By normalizing appropriate other units, geometrized units become identical to Planck units. That is, we set:

$\boxed{G_N=c=1}$

and the remaining constants are set to unity according to your needs and tastes.

## Conversion Factors

This table from wikipedia is very useful:

where:

i) $\alpha$ is the fine-structure constant, approximately 0.007297.

ii) $\alpha_G=\dfrac{m_e^2}{M_P^2}\approx 1.752\cdot 10^{-45}$ is the gravitational fine-structure constant.

Some conversion factors for geometrized units are also available:

Conversion from kg, s, C, K into m:

$G_N/c^2$  [m/kg]

$c$ [m/s]

$\sqrt{G_N/(4\pi\varepsilon_0)}/c^2$ [m/C]

$G_Nk_B/c^4$ [m/K]

Conversion from m, s, C, K into kg:

$c^2/G_N$ [kg/m]

$c^3/G_N$ [kg/s]

$1/\sqrt{G_N4\pi\varepsilon_0}$ [kg/C]

$k_B/c^2$[kg/K]

Conversion from m, kg, C, K into s:

$1/c$ [s/m]

$G_N/c^3$[s/kg]

$\sqrt{\dfrac{G_N}{4\pi\varepsilon_0}}/c^3$ [s/C]

$G_Nk_B/c^5$ [s/K]

Conversion from m, kg, s, K into C:

$c^2/\sqrt{\dfrac{G_N}{4\pi\varepsilon_0}}$[C/m]

$(G_N4\pi\varepsilon_0)^{1/2}$ [C/kg]

$c^3/(G_N/(4\pi\varepsilon_0))^{1/2}$[C/s]

$k_B\sqrt{G_N4\pi\varepsilon_0}/c^2$   [C/K]

Conversion from m, kg, s, C into K:

$c^4/(G_Nk_B)$[K/m]

$c^2/k_B$ [K/kg]

$c^5/(G_Nk_B)$ [K/s]

$c^2/(k_B\sqrt{G_N4\pi\varepsilon_0})$ [K/C]
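A couple of these geometrized conversion factors evaluated numerically, as a minimal sketch (constant values are approximate; the solar mass is an illustrative input):

```python
# Geometrized units: multiply a mass in kg by G/c^2 to express it in meters,
# or a time in seconds by c.
G = 6.674e-11       # Newton constant [m^3 kg^-1 s^-2]
c = 2.99792458e8    # speed of light [m/s]

kg_to_m = G / c**2  # [m/kg]
s_to_m = c          # [m/s]

# Example: the Sun (~1.989e30 kg) expressed as a length.
# This is GM/c^2, i.e. half the Sun's Schwarzschild radius.
M_sun = 1.989e30
print(M_sun * kg_to_m)  # ~1.48e3 m
```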

Or you can read off factors from these two tables as well:

Natural units have some advantages (“Pro”):

1) Equations and mathematical expressions are simpler in Natural Units.

2) Natural units allow us to relate and compare apparently different physical magnitudes.

3) Some natural units are independent from “prototypes” or “external patterns” beyond some clever and trivial conventions.

4) They can help to unify different physical concepts.

However, natural units have also some disadvantages (“Cons”):

1) They generally provide less precise measurements or quantities.

2) They can be ill-defined or redundant and carry some ambiguity. This is partly caused by the fact that some natural units differ by numerical factors of $\pi$ and/or pure numbers, so they cannot help us understand the origin of some pure numbers (adimensional prefactors) in general.

Moreover, you must not forget that natural units are “human” in the sense that you can adapt them to your own needs, and indeed, you can create your own particular system of natural units! That said, you can understand the main key point: fundamental theories are what finally hint at which “numbers”/“magnitudes” determine a system of “natural units”.

Remark: the smart designer of a system of natural units must choose a few of these constants to normalize (set equal to 1). It is not possible to normalize just any set of constants. For example, the mass of a proton and the mass of an electron cannot both be normalized: if the mass of an electron is defined to be 1, then the mass of a proton has to be $\approx 6\pi^5\approx 1836$. In a less trivial example, the fine-structure constant, $\alpha\approx 1/137$, cannot be set to 1, because it is a dimensionless number. The fine-structure constant is related to other fundamental constants through a well-known equation:

$\alpha=\dfrac{k_Ce^2}{\hbar c}$

where $k_C$ is the Coulomb constant, e is the positron electric charge (elementary charge), ℏ is the reduced Planck constant, and c is again the speed of light in vacuum. Consequently, it is not possible to simultaneously normalize all four of the constants c, ℏ, e, and $k_C$.
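The relation above can be checked numerically in SI units, where $k_C=1/(4\pi\varepsilon_0)$. A minimal sketch with approximate hard-coded constants:

```python
import math

# Check alpha = k_C * e^2 / (hbar * c) in SI units.
hbar = 1.054571817e-34   # [J s]
c = 2.99792458e8         # [m/s]
e = 1.602176634e-19      # [C]
eps0 = 8.8541878128e-12  # [F/m]
k_C = 1 / (4 * math.pi * eps0)  # Coulomb constant [N m^2 / C^2]

alpha = k_C * e**2 / (hbar * c)
print(alpha)      # ~7.297e-3
print(1 / alpha)  # ~137.036
```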

## Fritzsch-Xing  plot

Fritzsch and Xing have developed a very beautiful plot of the fundamental constants in Nature (those coming from gravitation and the Standard Model). I cannot resist including it here in the two versions I have seen. The first one is “serious”, with 29 “fundamental constants”:

However, I prefer the “fun version” of this plot. This second version is very cool and it includes 28 “fundamental constants”:

## The Okun Cube

Long ago, L.B. Okun provided a very interesting way to think about the Planck units and their meaning, at least from our current knowledge of physics! He imagined a cube in 3d with 3 different axes. Planck units are defined, as we have seen above, by the 3 constants $c, \hbar, G_N$ plus the Boltzmann constant. Imagine we arrange one axis for c-units, one axis for $\hbar$-units and one more for $G_N$-units. The result is a wonderful cube:

Or equivalently, it is sometimes drawn as the following sketch (note that the Planck constant is NOT rationalized in the next cube, but it does not matter for this graphical representation):

Classical physics (CP) corresponds to the vanishing of the 3 constants, i.e., to the origin $(0,0,0)$.

Newtonian mechanics (NM), or more precisely Newtonian gravity plus classical mechanics, corresponds to the “point” $(0,0,G_N)$.

Special relativity (SR) corresponds to the point $(0,1/c,0)$, i.e., to “points” where relativistic effects are important due to velocities close to the speed of light.

Quantum mechanics (QM) corresponds to the point $(h,0,0)$, i.e., to “points” where the action/angular momentum fundamental unit is important, like the photoelectric effect or the blackbody radiation.

Quantum Field Theory (QFT) corresponds to the point $(h,1/c,0)$, i.e., to “points” where both SR and QM are important, that is, to situations where you can create/annihilate pairs, the “particle” number is not conserved (but the particle-antiparticle number IS), and subatomic particles manifest themselves simultaneously with quantum and relativistic features.

Quantum Gravity (QG) would correspond to the point $(h,0,G_N)$ where gravity is quantum itself. We have no theory of quantum gravity yet, but some speculative trials are effective versions of (super)-string theory/M-theory, loop quantum gravity (LQG) and some others.

Finally, the Theory Of Everything (TOE) would be the theory in the last free corner, arising at the vertex $(h,1/c,G_N)$. Superstring theories/M-theory are the only serious candidates for a TOE so far. LQG does not generally introduce matter fields (some recent trials are pushing in that direction, though), so it is not a TOE candidate right now.
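The cube corners can be written down as a simple lookup table. The flags say whether each of $(\hbar, 1/c, G_N)$ is “switched on”; the eighth corner, $(0,1/c,G_N)$, is general relativity (GR), standard in Okun's cube even though it is not listed explicitly above:

```python
# The Okun cube as a dictionary keyed by (hbar, 1/c, G_N) on/off flags.
okun_cube = {
    (0, 0, 0): "Classical physics (CP)",
    (0, 0, 1): "Newtonian gravity (NM)",
    (0, 1, 0): "Special relativity (SR)",
    (1, 0, 0): "Quantum mechanics (QM)",
    (1, 1, 0): "Quantum field theory (QFT)",
    (0, 1, 1): "General relativity (GR)",
    (1, 0, 1): "Quantum gravity (QG)",
    (1, 1, 1): "Theory of everything (TOE)",
}

print(okun_cube[(1, 1, 1)])  # Theory of everything (TOE)
```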

## Some final remarks and questions

1) Are fundamental “constants” really constant? Do they vary with energy or time?

2) How many fundamental constants are there? This question has generated lots of discussion. One of the most famous debates was this one:

http://arxiv.org/abs/physics/0110060

The trialogue (or dialogue if you are precise with words) above discussed the opinions of 3 eminent physicists about the number of fundamental constants: Michael Duff suggested zero, Gabriele Veneziano argued that there are only 2 fundamental constants, while L.B. Okun defended that there are 3 fundamental constants.

3) Should the cosmological constant be included as a new fundamental constant? The cosmological constant behaves as a constant from current cosmological measurements and cosmological data fits, but is it truly constant? It seems to be…But we are not sure. Quintessence models (some of them related to inflationary Universes) suggest that it could vary on cosmological scales very slowly. However, the data strongly suggest that

$p_\Lambda=-\rho_\Lambda c^2$

It is a simple equation of state, but the ultimate nature of such a “fluid” is not understood, because we don’t know what kind of “stuff” (either particles or fields) can make the cosmological constant so tiny and yet so abundant (about 72% of the Universe is “dark energy”/cosmological constant) as it seems to be. We do know it cannot be made of “known particles”. Dark energy behaves as a repulsive force, some kind of pressure/antigravitation on cosmological scales. We suspect it could be some kind of scalar field, but there are many other alternatives that “mimic” a cosmological constant. If we identify the cosmological constant with the vacuum energy, we obtain about 122 orders of magnitude of mismatch between theory and observations. A really bad “prediction”, one of the worst predictions in the history of physics!

Be natural and stay tuned!

# LOG#057. Naturalness problems.

In this short blog post, I am going to list some of the greatest “naturalness” problems in Physics. It has nothing to do with some delicious natural dishes I like, but there is a natural beauty and sweetness related to naturalness problems in Physics. In fact, they include some hierarchy problems and additional problems related to stunning free values of parameters in our theories.

Naturalness problems arise when the “naturally expected” property of some free parameters or fundamental “constants” to appear as quantities of order one is violated, and thus those parameters or constants appear to be very large or very small quantities. That is, naturalness problems are problems of the fine-tuning of “scales” of length, energy, field strength, and so on. A value of 0.99 or 1.1, or even 0.7 and 2.3, is “more natural” than, e.g., $100000, 10^{-4},10^{122}, 10^{23},\ldots$ Equivalently, imagine that the value of every fundamental and measurable physical quantity $X$ lies in the real interval $\left[ 0,\infty\right)$. Then, 1 (or a value very close to it) is a “natural” value of the parameters, while the two extrema $0$ or $\infty$ are “unnatural”. As we do know, in Physics, zero values are usually explained by some “fundamental symmetry”, while extremely large parameters or even $\infty$ can be shown to be “unphysical” or “unnatural”. In fact, renormalization in QFT was invented to avoid quantities that are “infinite” at first sight, and regularization provides some prescriptions to assign “natural numbers” to quantities that are formally ill-defined or infinite. However, naturalness goes beyond these last comments, and it arises in very different scenarios and physical theories. It is quite remarkable that naturalness can be explained in terms of numbers/constants/parameters around 3 of the most important “numbers” in Mathematics:

$(0, 1, \infty)$

REMEMBER: Naturalness of X is, thus, being 1 or close to it, while values approaching 0 or $\infty$ are unnatural. Therefore, if some day you hear a physicist talking/speaking/lecturing about “naturalness”, remember the triple $(0,1,\infty)$ and then assign “some magnitude/constant/parameter” some quantity close to one of those numbers. If it approaches 1, the parameter is natural, and unnatural if it approaches either of the other two numbers, zero or infinity!
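The $(0,1,\infty)$ criterion above can be turned into a toy heuristic: a quantity is “natural” when it sits within a few orders of magnitude of 1. The threshold below is an arbitrary illustrative choice, not a standard definition:

```python
import math

def is_natural(x, max_decades=2.0):
    """Toy naturalness test: True if x is within max_decades
    orders of magnitude of 1. Zero or negative values are
    treated as unnatural here."""
    if x <= 0:
        return False
    return abs(math.log10(x)) <= max_decades

print(is_natural(0.99))   # True
print(is_natural(2.3))    # True
print(is_natural(1e-4))   # False
print(is_natural(1e122))  # False
```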

I have never seen a systematic classification of naturalness problems into types. I am going to do it here today. We could classify naturalness problems into:

1st. Hierarchy problems. They are naturalness problems related to the mass spectrum/energy scale of interactions and fundamental particles.

2nd. Nullity/Smallness problems. These are naturalness problems related to free parameters which are, surprisingly, close to zero/null value, even when we have no knowledge of a deep reason to understand why it happens.

3rd. Large number problems (or hypotheses). This class of problems can be equivalently thought of as the reciprocal of nullity problems, but they arise naturally in cosmological contexts, when we consider a large amount of particles, e.g., in “statistical physics”, or when we face two theories in very different “parameter spaces”. Dirac pioneered this class of hypotheses when he noticed some large number coincidences relating quantities appearing in particle physics and cosmology. This Dirac large number hypothesis is also an old example of this kind of naturalness problem.

4th. Coincidence problems. This 4th type of problems is related to why some different parameters of the same magnitude are similar in order of magnitude.

The following list of concrete naturalness problems is not going to be complete, but it can serve as a guide of what theoretical physicists are trying to understand better:

1. The little hierarchy problem. From the phenomenon called neutrino oscillations (NO) and neutrino oscillation experiments (NOSEX), we can know the difference between the squared masses of neutrinos. Furthermore, cosmological measurements allow us to put tight bounds on the total mass (energy) of light neutrinos in the Universe. The most conservative estimations give $m_\nu \leq 10 eV$, and even $m_\nu \sim 1eV$ as an upper bound is quite likely to be true. On the other hand, NOSEX seems to say that there are two mass differences, $\Delta m^2_1\sim 10^{-3}eV^2$ and $\Delta m^2_2\sim 10^{-5}eV^2$. However, we don’t know yet what kind of spectrum neutrinos have (normal, inverted or quasi-degenerate). Taking a neutrino mass of about 1 meV as a reference, the little hierarchy problem is the question of why neutrino masses are so light when compared with those of the remaining leptons, quarks and gauge bosons (excepting, of course, the gluon and photon, massless due to gauge invariance).

Why is $m_\nu << m_e,m_\mu, m_\tau, m_Z,M_W, m_{proton}?$

We don’t know! Let me quote a wonderful sentence of a very famous short story by Asimov to describe this result and problem:

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

2. The gauge hierarchy problem. The electroweak (EW) scale can be generally represented by the Z or W boson mass scale. Interestingly, from this summer’s results, the Higgs boson mass seems to be of the same order of magnitude, more or less, as the gauge bosons. Then, the electroweak scale is about $M_Z\sim M_W \sim \mathcal{O} (100GeV)$. Likely, it is also of the order of the Higgs mass. On the other hand, the Planck scale, where we expect (naively or not, that is another question!) quantum effects of gravity to naturally arise, is provided by the Planck mass scale:

$M_P=\sqrt{\dfrac{\hbar c}{8\pi G}}=2.4\cdot 10^{18}GeV=2.4\cdot 10^{15}TeV$

or more generally, dropping the $8\pi$ factor

$M_P =\sqrt{\dfrac{\hbar c}{G}}=1.22\cdot 10^{19}GeV=1.22\cdot 10^{16}TeV$
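As a quick numerical check, both Planck-mass formulas above evaluate as follows (constant values are approximate, hard-coded for illustration):

```python
import math

# Evaluate the Planck mass (reduced and non-reduced) in GeV.
hbar = 1.054571817e-34  # [J s]
c = 2.99792458e8        # [m/s]
G = 6.674e-11           # [m^3 kg^-1 s^-2]
e = 1.602176634e-19     # [C], to convert J -> eV

M_P_kg = math.sqrt(hbar * c / G)   # non-reduced Planck mass [kg]
M_P_GeV = M_P_kg * c**2 / e / 1e9  # rest energy in GeV
print(M_P_GeV)  # ~1.22e19 GeV

# Reduced Planck mass: divide the argument by 8*pi.
M_Pr_GeV = math.sqrt(hbar * c / (8 * math.pi * G)) * c**2 / e / 1e9
print(M_Pr_GeV)  # ~2.4e18 GeV
```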

Why is the EW mass (energy) scale so small compared to the Planck mass, i.e., why are the masses $M_{EW}<<M_P$ so different? The problem is hard, since we do know that EW masses, e.g., for scalar particles like Higgs particles (not protected by any SM gauge symmetry), should receive quantum contributions of order $\mathcal{O}(M_P^2)$.

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

3. The cosmological constant (hierarchy) problem. The cosmological constant $\Lambda$, from the so-called Einstein’s field equations of classical relativistic gravity

$\mathcal{R}_{\mu\nu}-\dfrac{1}{2}g_{\mu\nu}\mathcal{R}+\Lambda g_{\mu\nu}=8\pi G\mathcal{T}_{\mu\nu}$

is estimated to be about $\mathcal{O} (10^{-47})GeV^4$ from the cosmological fitting procedures. The Standard Cosmological Model, with the CMB and other parallel measurements like large scale structures or supernovae data, agree with such a cosmological constant value. However, in the framework of Quantum Field Theories, it should receive quantum corrections coming from vacuum energies of the fields. Those contributions are unnaturally big, about $\mathcal{O}(M_P^4)$ or in the framework of supersymmetric field theories, $\mathcal{O}(M^4_{SUSY})$ after SUSY symmetry breaking. Then, the problem is:

Why is $\rho_\Lambda^{obs}<<\rho_\Lambda^{th}$? Even with TeV or PeV fundamental SUSY (or higher) we have a serious mismatch here! The mismatch is about 60 orders of magnitude even in the best known theory! And it is about 122-123 orders of magnitude if we compare directly the cosmological constant vacuum energy we observe with the cosmological constant we calculate (naively or not) with our current best theories using QFT or supersymmetric QFT! Then, this problem is a hierarchy problem and a large number problem as well. Again, and sadly, we don’t know why there is such a big gap between mass scales of the same thing! This problem is the biggest problem in theoretical physics and it is one of the worst predictions/failures in the history of Physics. However,

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”
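The orders-of-magnitude mismatch quoted above can be reproduced with a two-line estimate, using the rough numbers from the text:

```python
import math

# Cosmological-constant mismatch in orders of magnitude.
rho_obs = 1e-47      # observed vacuum energy density [GeV^4]
M_P = 1.22e19        # (non-reduced) Planck mass [GeV]
rho_planck = M_P**4  # naive QFT cutoff estimate [GeV^4]

mismatch = math.log10(rho_planck / rho_obs)
print(round(mismatch))  # ~123 orders of magnitude

# With a TeV-scale SUSY cutoff instead:
M_susy = 1e3  # [GeV]
print(round(math.log10(M_susy**4 / rho_obs)))  # ~59 orders of magnitude
```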

4. The strong CP problem/puzzle. From neutron electric dipole moment measurements, theoretical physicists can constrain the so-called $\theta$-angle of QCD (Quantum Chromodynamics). The theta angle gives an extra contribution to the QCD lagrangian:

$\mathcal{L}_{\mathcal{QCD}}\supset \dfrac{1}{4g_s^2}G_{\mu\nu}G^{\mu\nu}+\dfrac{\theta}{16\pi^2}G^{\mu\nu}\tilde{G}_{\mu\nu}$

The theta angle is not provided by the SM framework and it is a free parameter. Experimentally,

$\theta <10^{-12}$

while, from the theoretical side, it could be any number in the interval $\left[-\pi,\pi\right]$. Why is $\theta$ so close to the zero/null value? That is the strong CP problem! Once again, we don’t know. Perhaps a new symmetry?

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

5. The flatness problem/puzzle. In the Standard Cosmological Model, also known as the $\Lambda CDM$ model, the curvature of the Universe is related to the critical density and the Hubble “constant”:

$\dfrac{1}{R^2}=H^2\left(\dfrac{\rho}{\rho_c}-1\right)$

There, $\rho$ is the total energy density contained in the whole Universe and $\rho_c=\dfrac{3H^2}{8\pi G}$ is the so-called critical density. The flatness problem arises when we deduce from cosmological data that:

$\left(\dfrac{1}{R^2}\right)_{data}\sim 0.01$

At the Planck scale era, we can even calculate that

$\left(\dfrac{1}{R^2}\right)_{Planck\;\; era}\sim\mathcal{O}(10^{-61})$

This result means that the Universe is “flat”. However, why did the Universe have such a small curvature? Why is the current curvature still so “small”? We don’t know. However, cosmologists working on this problem say that “inflation” and “inflationary” cosmological models can (at least in principle) solve this problem. There are even more radical (and stranger) theories, such as varying-speed-of-light theories, trying to explain this, but they are less popular than inflationary cosmologies/theories. Indeed, inflationary theories are popular because they include scalar fields, similar in nature to the scalar particles that arise in the Higgs mechanism and other beyond the Standard Model (BSM) theories. We don’t know if inflation theory is right yet, so

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

6. The flavour problem/puzzle. The ratios of successive SM fermion mass eigenvalues (the electron, muon, and tau), as well as the angles appearing in one gadget called the CKM (Cabibbo-Kobayashi-Maskawa) matrix, are roughly of the same order of magnitude. The issue is harder to assess for constituent quark masses (but it is likely to hold as well). However, why do they follow this particular pattern/spectrum and structure? Even more, there is a mysterious lepton-quark complementarity. The analogue matrix in the leptonic sector of such a CKM matrix is called the PMNS matrix (Pontecorvo-Maki-Nakagawa-Sakata matrix) and it describes the neutrino oscillation phenomenology. It shows that the angles of the PMNS matrix are roughly complementary to those in the CKM matrix (remember that two angles are said to be complementary when they add up to 90 sexagesimal degrees). What is the origin of this lepton(neutrino)-quark(constituent) complementarity? In fact, the two questions are related since, roughly speaking, the mixing angles are related to the ratios of masses (quarks and neutrinos). Therefore, this problem, if solved, could shed light on the issue of the particle spectrum, or at least it could help us to understand the relationship between quark masses and neutrino masses. Of course, we don’t know how to solve this puzzle at the current time. And once again:

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

7. Cosmic matter-dark energy coincidence. At the current time, the densities of matter and vacuum energy are roughly of the same order of magnitude, i.e., $\rho_M\sim\rho_\Lambda=\rho_{DE}$. Why now? We do not know!

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

And my weblog is only just beginning! See you soon in my next post! 🙂

# LOG#046. The Cherenkov effect.

The Cherenkov effect/Cherenkov radiation, sometimes also called Vavilov-Cherenkov radiation, is our topic here in this post.

In 1934, P.A. Cherenkov was a postgraduate student of S.I. Vavilov. He was investigating the luminescence of uranyl salts under the incidence of gamma rays from radium and he discovered a new type of luminescence which could not be explained by the ordinary theory of fluorescence. It is well known that fluorescence arises as the result of transitions between excited states of atoms or molecules. The average duration of fluorescent emission is about $\tau>10^{-9}s$, and the transition probability is altered by the addition of “quenching agents”, by some purification process of the material, by a change in the ambient temperature, etc. However, none of these methods was able to quench the new radiation totally. A subsequent investigation of the new radiation (named Cherenkov radiation by other scientists after Cherenkov’s discovery) revealed some interesting features of its characteristics:

1st. The polarization of the luminescence changes sharply when we apply a magnetic field. Cherenkov luminescence is then caused by charged particles rather than by photons, the $\gamma$-ray quanta! Cherenkov’s experiment showed that these particles could be electrons produced by the interaction of $\gamma$-photons with the medium due to the photoelectric effect or the Compton effect itself.

2nd. The intensity of the Cherenkov’s radiation is independent of the charge Z of the medium. Therefore, it can not be of radiative origin.

3rd. The radiation is observed at a certain angle (specifically, forming a cone) with respect to the direction of motion of the charged particles.

The Cherenkov radiation was explained in 1937 by Frank and Tamm based on the foundations of classical electrodynamics. For the discovery and explanation of Cherenkov effect, Cherenkov, Frank and Tamm were awarded the Nobel Prize in 1958. We will discuss the Frank-Tamm formula later, but let me first explain how the classical electrodynamics handle the Vavilov-Cherenkov radiation.

The main conclusion that Frank and Tamm obtained comes from the following observation. They observed that the statement of classical electrodynamics concerning the impossibility of energy loss by radiation for a charged particle moving uniformly along a straight line in vacuum is no longer valid if we go over from the vacuum to a medium with a certain refractive index $n>1$. They went further with the aid of an easy argument based on the laws of conservation of momentum and energy, a principle that rests at the core of Physics, as everybody knows. Imagine a charged particle moving uniformly in a straight line, and suppose it can lose energy and momentum through radiation. In that case, the next equation holds:

$\left(\dfrac{dE}{dp}\right)_{particle}=\left(\dfrac{dE}{dp}\right)_{radiation}$

This equation can not be satisfied in vacuum, but it MAY be valid in a medium with a refractive index greater than one, $n>1$. We will simplify our discussion if we consider that the refractive index is constant (but similar conclusions would be obtained if the refractive index is some function of the frequency).

On the other hand, the total energy E of a particle having a non-null mass $m\neq 0$ and moving freely in vacuum with some momentum p and velocity v will be:

$E=\sqrt{p^2c^2+m^2c^4}$

and then

$\left(\dfrac{dE}{dp}\right)_{particle}=\dfrac{pc^2}{E}=\beta c=v$

Moreover, the electromagnetic radiation in vacuum is given by the relativistic relationship

$E_{rad}=pc$

From this equation, we easily get that

$\left(\dfrac{dE}{dp}\right)_{radiation}=c$

Since the particle velocity satisfies $v<c$, we obtain that

$\left(\dfrac{dE}{dp}\right)_{particle}<\left(\dfrac{dE}{dp}\right)_{radiation}$

In conclusion: the laws of conservation of energy and momentum prevent that a charged particle moving with a rectilinear and uniform motion in vacuum from giving away its energy and momentum in the form of electromagnetic radiation! The electromagnetic radiation can not accept the entire momentum given away by the charged particle.

Anyway, we realize that this restriction is removed when the particle moves in a medium with a refractive index $n>1$. In this case, the velocity of light in the medium would be

$c'=c/n$

and the velocity v of the particle may not only become equal to the velocity of light $c'$ in the medium, but even exceed it when the following phenomenological condition is satisfied:

$\boxed{v\geq c'=c/n}$

It is obvious that, when $v=c'$ the condition

$\left(\dfrac{dE}{dp}\right)_{particle}=\left(\dfrac{dE}{dp}\right)_{radiation}$

will be satisfied for electromagnetic radiation emitted strictly in the direction of motion of the particle, i.e., in the direction of the angle $\theta=0\textdegree$. If $v>c'$, this equation is satisfied for the direction $\theta$ along which $v'=c'$, where

$v'=v\cos\theta$

is the projection of the particle velocity v on the observation direction. Then, in a medium with $n>1$, the conservation laws of energy and momentum allow a charged particle in rectilinear and uniform motion, with $v\geq c'=c/n$, to lose fractions of energy and momentum, $dE$ and $dp$, whenever the lost energy and momentum are carried away by electromagnetic radiation propagating in the medium at an angle/cone given by:

$\boxed{\theta=\arccos\left(\dfrac{1}{n\beta}\right)=\cos^{-1}\left(\dfrac{1}{n\beta}\right)}$

with respect to the observation direction of the particle motion.
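The cone-angle formula above is easy to evaluate. A minimal sketch, using water ($n=1.33$) as an illustrative medium:

```python
import math

def cherenkov_angle_deg(beta, n):
    """Cherenkov angle theta = arccos(1/(n*beta)) in degrees.
    Requires beta >= 1/n (otherwise there is no radiation)."""
    if beta * n < 1:
        raise ValueError("below threshold: no Cherenkov radiation")
    return math.degrees(math.acos(1.0 / (n * beta)))

# Ultra-relativistic particle in water: the maximum angle.
print(cherenkov_angle_deg(1.0, 1.33))   # ~41.2 degrees
# Just above threshold, the cone closes up to a small angle.
print(cherenkov_angle_deg(0.76, 1.33))
```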

These arguments, based on the conservation laws of momenergy, do not provide any idea about the real mechanism by which the energy and momentum are lost during the Cherenkov radiation. However, this mechanism must be associated with processes happening in the medium, since the losses can not occur (apparently) in vacuum under normal circumstances (we will also discuss later the vacuum Cherenkov effect, and what it means in terms of Physics and symmetry breaking).

We have learned that Cherenkov radiation is of the same nature as certain other processes we do know and observe, for instance, in various media when bodies move in these media at a velocity exceeding that of wave propagation. This is a remarkable result! Have you ever seen a V-shaped wave in the wake of a ship? Have you ever seen a conical wave caused by the supersonic boom of a plane or missile? In these examples, the wave field of the superfast object is found to be strongly perturbed in comparison with the field of a “slow” object (in terms of the “velocity of sound” of the medium). It begins to decelerate the object!

Question: What, then, is the mechanism by which the superfast motion of a charged particle in a medium with a refractive index $n>1$ produces the Cherenkov effect/radiation?

Answer: The mechanism behind the Cherenkov effect/radiation is the coherent emission by the dipoles formed due to the polarization of the medium’s atoms by the moving charged particle!

The idea is as follows. Dipoles are formed under the action of the electric field of the particle, which displaces the electrons of the surrounding atoms relative to their nuclei. The return of the dipoles to the normal state (after the particle has left the given region) is accompanied by the emission of an electromagnetic signal or beam. If a particle moves slowly, the resulting polarization will be distributed symmetrically with respect to the particle position, since the electric field of the particle manages to polarize all the atoms in the near neighbourhood, including those lying ahead in its path. In that case, the resultant field of all the dipoles away from the particle is equal to zero and their radiations cancel one another.

Then, if the particle moves in a medium with a velocity exceeding the velocity of propagation of the electromagnetic field in that medium, i.e., whenever $v>c'=c/n$, a delayed polarization of the medium is observed, and consequently the resulting dipoles will be preferentially oriented along the direction of motion of the particle. See the next figure:

It is evident that, if this occurs, there must be a direction along which the radiation from the dipoles emerges coherently, since the waves emitted by the dipoles at different points along the path of the particle may turn out to be in phase. This direction can be easily found experimentally and it can be easily obtained theoretically too. Let us imagine that a charged particle moves from left to right with some velocity $v$ in a medium with refractive index $n>1$, so that $c'=c/n$. We can apply the Huygens principle to build the wave front of the emitted radiation. If, at instant $t$, the particle is at the point $x=vt$, the wave front is the surface enveloping the spherical waves emitted by the particle along its path from the origin at $x=0$ to the point $x$. The radius of the wave emitted at the point $x=0$ is, at that instant t, equal to $R_0=c't$. At the same moment, the radius of the wave emitted at the point $x$ is $R_x=c'(t-(x/v))=0$. At any intermediate point $x'$, the wave radius at instant t is $R_{x'}=c'(t-(x'/v))$; the radius decreases linearly with increasing $x'$. Thus, the enveloping surface is a cone with angle $2\varphi$, where the angle satisfies

$\sin\varphi=\dfrac{R_0}{x}=\dfrac{c't}{vt}=\dfrac{c'}{v}=\dfrac{c}{vn}=\dfrac{1}{\beta n}$

The normal to the enveloping surface fixes the direction of propagation of the Cherenkov radiation. The angle $\theta$ between the normal and the $x$-axis is equal to $\pi/2-\varphi$, and it is defined by the condition

$\boxed{\cos\theta=\dfrac{1}{\beta n}}$

or equivalently

$\boxed{\tan\theta=\sqrt{\beta^2n^2-1}}$

This is the result we anticipated before. Indeed, it is completely general, and Quantum Mechanics introduces only a slight and subtle correction to this classical result. From this last equation, we observe that the Cherenkov radiation propagates along the generators of a cone whose axis coincides with the direction of motion of the particle and whose cone angle is equal to $2\theta$. This radiation can be registered on a colour film placed perpendicular to the direction of motion of the particle. Radiation flowing from a radiator of this type leaves a blue ring on the photographic film. These blue rings are the archetypal fingerprints of Vavilov-Cherenkov radiation!

The sharp directivity of the Cherenkov radiation makes it possible to determine the particle velocity $\beta$ from the value of the Cherenkov’s angle $\theta$. From the Cherenkov’s formula above, it follows that the range of measurement of $\beta$ is equal to

$1/n\leq\beta<1$

For $\beta=1/n$, the radiation is observed at an angle $\theta=0\textdegree$, while for the extreme with $\beta=1$, the angle $\theta$ reaches a maximum value

$\theta_{max}=\cos^{-1}\left(\dfrac{1}{n}\right)=\arccos\left(\dfrac{1}{n}\right)$

For instance, in the case of water, $n=1.33$ and $\beta_{min}=1/1.33=0.75$. Therefore, the Cherenkov radiation is observed in water whenever $\beta\geq 0.75$. For electrons being the charged particles passing through the water, this condition is satisfied if

$T_e=m_ec^2\left(\dfrac{1}{\sqrt{1-\beta^2}}-1\right)=0.511\left( \dfrac{1}{\sqrt{1-0.75^2}}-1\right)MeV\approx 0.26MeV$
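The threshold kinetic energy above follows from evaluating $T=m_ec^2(\gamma-1)$ at $\beta=1/n$. A minimal numerical sketch:

```python
import math

# Electron kinetic-energy threshold for Cherenkov light in water.
m_e_c2 = 0.511  # electron rest energy [MeV]
n = 1.33        # refractive index of water
beta = 1.0 / n  # threshold velocity, ~0.75

gamma = 1.0 / math.sqrt(1.0 - beta**2)
T = m_e_c2 * (gamma - 1.0)
print(T)  # ~0.26 MeV
```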

As a consequence of this, the Cherenkov effect should be observed in water even for low-energy electrons (for instance, in the case of electrons produced by beta decay, Compton electrons, or photoelectrons resulting from the interaction between water and gamma rays from radioactive products, the above energy can easily be reached and surpassed!). The maximum angle at which the Cherenkov effect can be observed in water can be calculated from the condition previously seen:

$\cos\theta_{max}=1/n=0.75$

This angle (for water) turns out to be about $\theta\approx 41.5\textdegree=41\textdegree 30'$. In agreement with the so-called Frank-Tamm formula (please, see below what that formula is and means), the number of photons in the frequency interval $\nu$ and $\nu+d\nu$ emitted by some particle with charge Z moving with a velocity $\beta$ in a medium with a refractive index n is provided by the next equation:

$\boxed{N(\nu) d\nu=4\pi^2\dfrac{(Zq)^2}{hc^2}\left(1-\dfrac{1}{n^2\beta^2}\right) d\nu}$

This formula has some striking features:

1st. The spectrum is identical for particles with $Z=constant$, i.e., the spectrum is exactly the same irrespective of the nature of the particle. For instance, it could be produced both by protons, electrons, pions, muons or their antiparticles!

2nd. As Z increases, the number of emitted photons increases as $Z^2$.

3rd. $N(\nu)$ increases with $\beta$, the particle velocity, from zero (at $\beta=1/n$) to

$N=4\pi^2\left(\dfrac{q^2Z^2}{hc^2}\right)\left(1-\dfrac{1}{n^2}\right)$

with $\beta\approx 1$.

4th. $N(\nu)$ is approximately independent of $\nu$. We observe that $dN(\nu)\propto d\nu$.

5th. As the spectrum is uniform in frequency, and $E=h\nu$, this means that the main energy of radiation is concentrated in the extreme short-wave region of the spectrum, i.e.,

$\boxed{dE_{Cherenkov}\propto \nu d\nu}$

And then, this feature explains the bluish-violet-like colour of the Cherenkov radiation!

Indeed, this feature also indicates the necessity of choosing materials for practical applications that are “transparent” up to the highest frequencies (even the ultraviolet region). As a rule, it is known that $n<1$ in the X-ray region and hence the Cherenkov condition can not be satisfied! However, it was also shown by clever experimentalists that in some narrow regions of the X-ray spectrum the refractive index is $n>1$ (the refractive index depends on the frequency in any real material; practical Cherenkov materials are, thus, dispersive!) and the Cherenkov radiation is effectively observed in apparently forbidden regions.
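The blue dominance follows directly from the flat frequency spectrum: since $dE\propto\nu d\nu$, the energy in a band $[\nu_1,\nu_2]$ scales as $(\nu_2^2-\nu_1^2)/2$. A tiny sketch (the band edges are rough visible-light values, chosen only for illustration):

```python
# Why Cherenkov light looks blue: with a flat photon spectrum in frequency,
# dE ∝ ν dν, so the energy in a band [ν1, ν2] scales as (ν2² - ν1²)/2.
def band_energy(nu1, nu2):
    """Relative radiated energy in the band [nu1, nu2] (arbitrary units)."""
    return 0.5 * (nu2**2 - nu1**2)

red  = band_energy(400e12, 480e12)   # ~red part of the visible spectrum, Hz
blue = band_energy(600e12, 680e12)   # ~blue part, same 80 THz bandwidth

print(f"blue/red energy ratio = {blue / red:.2f}")
```

Equal bandwidths, yet the blue band carries about 45% more energy.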

The Cherenkov effect is currently widely used in diverse applications. For instance, it is useful to determine the velocity of fast charged particles (e.g., neutrino detectors can not obviously detect neutrinos, but they can detect muons and other secondary particles produced in the interaction with some polarizable medium, even when they are produced by (electro)weak interactions like those happening in the presence of chargeless neutrinos). The selection of the medium for generating the Cherenkov radiation depends on the range of velocities $\beta$ over which measurements have to be produced with the aid of such a “Cherenkov counter”. Cherenkov detectors/counters are filled with liquids and gases and they are found, e.g., in Kamiokande, Superkamiokande and many other neutrino detectors and “telescopes”. It is worth mentioning that velocities of ultrarelativistic particles are measured with Cherenkov detectors whenever they are filled with some special gaseous medium with a refractive index just slightly higher than unity. This value of the refractive index can be changed by regulating the gas pressure in the counter! So, Cherenkov detectors and counters are very flexible tools for particle physicists!

Remark: As I mentioned before, it is important to remember that most of the practical Cherenkov radiators/materials ARE dispersive. It means that if $\omega$ is the photon frequency and $k=2\pi/\lambda$ is the wavenumber, then the photons propagate with some group velocity $v_g=d\omega/dk$, i.e.,

$\boxed{v_g=\dfrac{d\omega}{dk}=\dfrac{c}{\left[n(\omega)+\omega \frac{dn}{d\omega}\right]}}$

Note that if the medium is non-dispersive, this formula simplifies to the well-known formula $v_g=c/n$ and, in vacuum ($n=1$), to $v_g=c$, as it should.
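The group-velocity formula above is easy to play with numerically. A minimal sketch, using a purely illustrative toy value for $dn/d\omega$ (not a measured dispersion curve):

```python
# Group velocity in a dispersive medium: v_g = c / (n(ω) + ω·dn/dω).
c = 299792458.0  # m/s

def group_velocity(n, dn_domega, omega):
    return c / (n + omega * dn_domega)

n0 = 1.33
# Non-dispersive check: dn/dω = 0 recovers v_g = c/n
vg_nodisp = group_velocity(n0, 0.0, omega=3e15)
# Toy normal dispersion (dn/dω > 0, value chosen for illustration only)
vg_disp = group_velocity(n0, 1e-17, omega=3e15)

print(vg_nodisp, vg_disp)
```

With normal dispersion ($dn/d\omega>0$) the group velocity drops below $c/n$, as expected.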

Accordingly, following the PDG, Tamm showed in a classical paper that for dispersive media the Cherenkov radiation is concentrated in a thin conical shell region whose vertex is at the moving charge and whose opening half-angle $\eta$ is given by the expression

$\boxed{\cot\eta=\left[\dfrac{d}{d\omega}\left(\omega\tan\theta_c\right)\right]_{\omega_0}=\left[\tan\theta_c+\beta^2\omega n(\omega) \dfrac{dn}{d\omega}\cot\theta_c\right]_{\omega_0}}$

where $\theta_c$ is the critical Cherenkov angle seen before and $\omega_0$ is the central value of the small frequency range under consideration under the Cherenkov condition. Unless the medium is non-dispersive (i.e. $dn/d\omega=0$, $n=constant$), we get $\theta_c+\eta\neq 90\textdegree$. Typical Cherenkov radiation imaging produces blue rings.

THE CHERENKOV EFFECT: QUANTUM FORMULAE

When we consider the Cherenkov effect in the framework of QM, in particular the quantum theory of radiation, we can deduce the following formula for the Cherenkov effect that includes the quantum corrections due to the backreaction of the particle to the radiation:

$\boxed{\cos\theta=\dfrac{1}{\beta n}+\dfrac{\Lambda}{2\lambda}\left(1-\dfrac{1}{n^2}\right)}$

where, like before, $\beta=v/c$, n is the refractive index, $\Lambda=\dfrac{h}{p}=\dfrac{h}{mv}$ is the De Broglie wavelength of the moving particle and $\lambda$ is the wavelength of the emitted radiation.

Cherenkov radiation is observed whenever $\beta n>1$ (i.e. if $v>c/n$), and the limit of the emission is on the short-wave bands (explaining the typical blue radiation of this effect). Moreover, $\lambda_{min}$ corresponds to $\cos\theta\approx 1$.
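It is instructive to estimate how small the quantum correction $\dfrac{\Lambda}{2\lambda}\left(1-\dfrac{1}{n^2}\right)$ actually is. A sketch for an electron with $pc\approx 1$ MeV radiating optical light in water (the momentum and the 400 nm wavelength are assumed illustrative inputs):

```python
# Size of the quantum correction to the Cherenkov angle,
# Δ(cosθ) = (Λ/2λ)(1 - 1/n²), for an electron with pc ≈ 1 MeV in water.
hc_eV_nm = 1239.84            # h·c in eV·nm
pc_eV = 1.0e6                 # electron momentum × c, in eV (assumed)
Lambda_nm = hc_eV_nm / pc_eV  # De Broglie wavelength, nm
lam_nm = 400.0                # optical photon wavelength, nm (assumed)
n = 1.33

delta_cos = (Lambda_nm / (2.0 * lam_nm)) * (1.0 - 1.0 / n**2)
print(f"Λ = {Lambda_nm:.2e} nm, quantum correction Δcosθ = {delta_cos:.1e}")
```

The correction is of order $10^{-7}$, confirming that the classical formula is an excellent approximation for optical Cherenkov light.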

On the other hand, the radiated energy per particle per unit of time is equal to:

$\boxed{-\dfrac{dE}{dt}=\dfrac{e^2v}{c^2}\int_0^{\omega_{max}}\omega\left[1-\dfrac{1}{n^2\beta^2}-\dfrac{\Lambda}{n\beta\lambda}\left(1-\dfrac{1}{n^2}\right)-\dfrac{\Lambda^2}{4\lambda^2}\left(1-\dfrac{1}{n^2}\right)\right]d\omega}$

where $\omega=2\pi c/n\lambda$ is the angular frequency of the radiation, with a maximum value of $\omega_{max}=2\pi c/n\lambda_{min}$.

Remark: In the non-relativistic case, $v\ll c$, and the condition $\beta n>1$ implies that $n\gg 1$. Therefore, neglecting the quantum corrections (the charged particle self-interaction/backreaction to radiation), we can take the limit $\Lambda/\lambda\rightarrow 0$ and the above equations simplify into:

$\boxed{\cos\theta=\dfrac{1}{n\beta}=\dfrac{c}{nv}}$

$\boxed{-\dfrac{dE}{dt}=\dfrac{e^2 v}{c^2}\int_0^{\omega_{max}}\omega\left(1-\dfrac{c^2}{n^2v^2}\right)d\omega}$

Remember: $\omega_{max}$ is determined with the condition $\beta n(\omega_{max})=1$, where $n(\omega_{max})$ represents the dispersive effect of the material/medium through the refraction index.

THE FRANK-TAMM FORMULA

The number of photons produced per unit path length and per unit of energy of a charged particle (charge equals to $Zq$) is given by the celebrated Frank-Tamm formula:

$\boxed{\dfrac{d^2N}{dEdx}=\dfrac{\alpha Z^2}{\hbar c}\sin^2\theta_c=\dfrac{\alpha^2 Z^2}{r_em_ec^2}\left(1-\dfrac{1}{\beta^2n^2(E)}\right)}$

In terms of common values of fundamental constants, it takes the value:

$\boxed{\dfrac{d^2N}{dEdx}\approx 370Z^2\sin^2\theta_c(E)eV^{-1}\cdot cm^{-1}}$

or equivalently it can be written as follows

$\boxed{\dfrac{d^2N}{dEdx}=\dfrac{2\pi \alpha Z^2}{\lambda^2}\left(1-\dfrac{1}{\beta^2n^2(\lambda)}\right)}$

The refractive index is a function of the photon energy $E=\hbar \omega$, and so is the sensitivity of the transducer used to detect the Cherenkov light! Therefore, for practical uses, the Frank-Tamm formula must be multiplied by the transducer response function and integrated over the region for which we have $\beta n(\omega)>1$.
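A back-of-the-envelope yield from the Frank-Tamm formula: in its numeric form $d^2N/dEdx\approx 370\,Z^2\sin^2\theta_c$ photons per eV per cm, integrated over an assumed visible window $\Delta E\approx 1.33$ eV (roughly 400-700 nm; the window is an illustrative choice, not a measured response function):

```python
# Rough visible photon yield from the Frank–Tamm formula:
# d²N/dEdx ≈ 370 · Z² · sin²θ_c photons per eV per cm.
n, beta, Z = 1.33, 1.0, 1
sin2_theta = 1.0 - 1.0 / (beta**2 * n**2)   # sin²θ_c for β ≈ 1
dE_visible = 1.33                           # assumed visible window, eV

photons_per_cm = 370.0 * Z**2 * sin2_theta * dE_visible
print(f"≈ {photons_per_cm:.0f} visible Cherenkov photons per cm in water")
```

A couple of hundred photons per centimeter: a weak source indeed, which is why light collection must be as efficient as possible (see below).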

Remark: When two particles are close together (to be close here means to be separated by a distance $d<1$ wavelength), the electromagnetic fields from the particles may add coherently and affect the Cherenkov radiation. The Cherenkov radiation for an electron-positron pair at close separation is suppressed compared to that of two independent leptons!

Remark (II): Coherent radio Cherenkov radiation from electromagnetic showers is significant and it has been applied to the study of cosmic ray air showers. In addition, it has been used to search for showers induced by cosmic-ray electron neutrinos.

CHERENKOV DETECTOR: MAIN FORMULA AND USES

The applications of Cherenkov detectors for particle identification (generally labelled as PID Cherenkov detectors) go well beyond the domain of high-energy Physics. Their uses include: A) Fast particle counters. B) Hadronic particle identification. C) Tracking detectors performing complete event reconstruction. The PDG gives some examples of each category: a) the polarization detector of SLD, b) the hadronic PID detectors at B factories like BABAR or the aerogel threshold Cherenkov in Belle, c) large water Cherenkov counters like those in Superkamiokande and other neutrino detector facilities.

Cherenkov detectors contain two main elements: 1) a radiator/material through which the particle passes, and 2) a photodetector. As Cherenkov radiation is a weak source of photons, light collection and detection must be as efficient as possible. In general, the radiator material is chosen to match the particles one wants to detect.

The number of photoelectrons detected in a given Cherenkov radiation detector device is provided by the following formula (derived from the Frank-Tamm formula simply by taking the efficiency into account in a straightforward manner):

$\boxed{N=L\dfrac{\alpha^2 Z^2}{r_em_ec^2}\int \epsilon (E)\sin^2\theta_c(E)dE}$

where $L$ is the path length of the particle in the radiator/material, $\epsilon (E)$ is the efficiency for collecting the Cherenkov light and transducing it into photoelectrons, and

$\boxed{\dfrac{\alpha^2}{r_em_ec^2}=370eV^{-1}cm^{-1}}$

Remark: The efficiencies and the Cherenkov critical angle are functions of the photon energy, generally speaking. However, since the typical energy-dependent variation of the refractive index is modest, a quantity sometimes called the Cherenkov detector quality factor $N_0$ can be defined as follows

$\boxed{N_0=\dfrac{\alpha^2Z^2}{r_em_ec^2}\int \epsilon dE}$

In this case, we can write

$\boxed{N\approx LN_0<\sin^2\theta_c>}$
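To make the counting formula concrete, here is a sketch of a photoelectron-yield estimate. The flat 25% efficiency over a 2 eV window and the 1 cm path length are hypothetical numbers chosen only to illustrate the formula:

```python
# Photoelectron yield of a Cherenkov counter: N ≈ L · N0 · ⟨sin²θ_c⟩.
Z = 1
eps, dE = 0.25, 2.0            # assumed flat efficiency over a ΔE window (eV)
N0 = 370.0 * Z**2 * eps * dE   # detector quality factor, cm⁻¹
L = 1.0                        # radiator path length, cm (assumed)
mean_sin2 = 1.0 - 1.0 / 1.33**2  # ⟨sin²θ_c⟩ for β ≈ 1 in water

N_pe = L * N0 * mean_sin2
print(f"N0 = {N0:.0f} cm⁻¹, N ≈ {N_pe:.0f} photoelectrons")
```

So even a thin water radiator with a decent photodetector yields tens of photoelectrons per centimeter of track.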

Remark (II): Cherenkov detectors are classified into imaging or threshold types, depending on their ability to make use of the Cherenkov angle information. Imaging counters may be used to track particles as well as to identify them.

Other main uses/applications of the Vavilov-Cherenkov effect are:

1st. Detection of labeled biomolecules. Cherenkov radiation is widely used to facilitate the detection of small amounts and low concentrations of biomolecules. For instance, radioactive atoms such as phosphorus-32 are readily introduced into biomolecules by enzymatic and synthetic means and subsequently may be easily detected in small quantities for the purpose of elucidating biological pathways and in characterizing the interaction of biological molecules such as affinity constants and dissociation rates.

2nd. Nuclear reactors. Cherenkov radiation is used to detect high-energy charged particles. In pool-type nuclear reactors, the intensity of Cherenkov radiation is related to the frequency of the fission events that produce high-energy electrons, and hence is a measure of the intensity of the reaction. Similarly, Cherenkov radiation is used to characterize the remaining radioactivity of spent fuel rods.

3rd. Astrophysical experiments. The Cherenkov radiation from these charged particles is used to determine the source and intensity of the cosmic rays, which is used for example in the different classes of cosmic ray detection experiments. For instance, Ice-Cube, Pierre-Auger, VERITAS, HESS, MAGIC, SNO, and many others. Cherenkov radiation can also be used to determine properties of high-energy astronomical objects that emit gamma rays, such as supernova remnants and blazars. In this last class of experiments we place STACEE, in New Mexico.

4th. High-energy experiments. We have already quoted this, and there are many examples at the LHC, for instance in the ALICE experiment.

Vacuum Cherenkov radiation (VCR) is the conjectured phenomenon of Cherenkov radiation emitted by a charged particle propagating in the physical vacuum. You can ask: why should it be possible? It is quite straightforward to understand the answer.

The classical (non-quantum) theory of relativity (both special and general) clearly forbids any superluminal phenomena/propagating degrees of freedom for material particles, including this one (the vacuum case), because a particle with non-zero rest mass can reach the speed of light only at infinite energy (besides, a nontrivial vacuum itself would create a preferred frame of reference, in violation of one of the relativistic postulates).

However, according to modern views coming from the quantum theory, especially our knowledge of Quantum Field Theory, the physical vacuum IS a nontrivial medium which affects the particles propagating through it, and the magnitude of the effect increases with the energy of the particles!

Then, a natural consequence follows: the actual speed of a photon becomes energy-dependent and thus can be less than the fundamental constant $c=299792458m/s$, the speed of light, such that sufficiently fast particles can overcome it and start emitting Cherenkov radiation. In summary, any charged particle surpassing the speed of light in the physical vacuum should emit (Vacuum) Cherenkov radiation. Note that it is an inevitable consequence of the non-trivial nature of the physical vacuum in Quantum Field Theory. Indeed, some crazy people saying that superluminal particles arise in jets from supernovae, or in colliders like the LHC, fail to explain why those particles don’t emit Cherenkov radiation. It is not true that real particles become superluminal in space or collider rings. It is also wrong in the case of neutrino propagation because, in spite of being chargeless, neutrinos should experience an analogous effect to the Cherenkov radiation called the Askaryan effect. Another (alternative) possibility or scenario arises in some Lorentz-violating theories (or even CPT-violating theories that can be equivalent or not to such Lorentz violations) when the speed of a propagating particle becomes higher than c, which turns this particle into a tachyon. A tachyon with an electric charge would lose energy as Cherenkov radiation just as ordinary charged particles do when they exceed the local speed of light in a medium. A charged tachyon traveling in a vacuum therefore undergoes a constant proper-time acceleration and, by necessity, its worldline would form a hyperbola in space-time. This last type of vacuum Cherenkov effect can arise in theories like the Standard Model Extension, where Lorentz-violating terms do appear.

One of the simplest kinematic frameworks for Lorentz-violating theories is to postulate some modified dispersion relations (MODRE) for particles, while keeping the usual energy-momentum conservation laws. In this way, we can provide and work out an effective field theory for breaking the Lorentz invariance. There are several alternative definitions of MODRE, since there is no general guide yet to discriminate between the different theoretical models. Thus, we could consider a general expansion in integer powers of the momentum, in the next manner (we set units in which $c=1$):

$\boxed{E^2=f(p,m,c_n)=p^2+m^2+\sum_{n=-\infty}^{\infty}c_n p^n}$

However, a softer expansion, depending only on positive powers of the momentum, is generally used in the MODRE. In such a case,

$\boxed{E^2=f(p,m,a_n)=p^2+m^2+\sum_{n=1}^{\infty}a_n p^n}$

and where $p=\vert \mathbf{p}\vert$. If Lorentz violations are associated with the yet undiscovered quantum theory of gravity, deviations from the dispersion relations of the special theory of relativity should appear at the natural scale of quantum gravity, say the Planck mass/energy. In units where $c=1$, we obtain that the Planck mass/energy is:

$\boxed{M_P=\sqrt{\hbar c^5/G_N}=1.22\cdot 10^{19}GeV=1.22\cdot 10^{16}TeV}$
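The Planck energy can be cross-checked directly from the SI values of the fundamental constants:

```python
import math

# Planck energy E_P = sqrt(ħ c^5 / G), expressed in GeV.
hbar = 1.054571817e-34      # J·s
c = 299792458.0             # m/s
G = 6.67430e-11             # m³ kg⁻¹ s⁻²
J_per_GeV = 1.602176634e-10 # joules per GeV

E_P_GeV = math.sqrt(hbar * c**5 / G) / J_per_GeV
print(f"E_P ≈ {E_P_GeV:.3e} GeV")
```

which indeed gives $\approx 1.22\cdot 10^{19}$ GeV.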

Let us write and parametrize the Lorentz violations induced by the fundamental scale of quantum gravity (naively, this Planck mass scale) by:

$\boxed{a_n=\dfrac{\Xi_n}{M_P^{n-2}}}$

Here, $\Xi_n$ is a dimensionless quantity that can differ from one particle (type) to another (type). Considering, for instance, $n=3,4$, since $n<3$ seems to be ruled out by previous terrestrial experiments, at higher energies the lowest non-null term with $n\geq 3$ will dominate the expansion. The MODRE reads:

$E^2=p^2+m^2+\dfrac{\Xi_a p^n}{M_P^{n-2}}$

and where the label $a$ in the term $\Xi_a$ is specific to the particle type. Such corrections might only become important at the Planck scale, but there are two exceptions:

1st. Particles that propagate over cosmological distances can show differences in their propagation speed.
2nd. Energy thresholds for particle reactions can be shifted, or even forbidden processes can become allowed, if the $p^n$-term is comparable to the $m^2$-term in the MODRE. Since reaction thresholds are determined by the particle masses, they can be significantly altered or shifted. A threshold shift should appear at scales where:

$\boxed{p_{dev}\approx\left(\dfrac{m^2M_P^{n-2}}{\Xi}\right)^{1/n}}$

Imposing/postulating that $\Xi\approx 1$, the typical scales of the thresholds for some different kinds of particles can be calculated. Their values for some species are given in the next table:
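Two of those threshold scales are easy to reproduce. A sketch for $n=3$ and $\Xi=1$ (masses are the usual electron and proton rest energies, in GeV):

```python
# Energy scale where the n = 3 Planck-suppressed term rivals the mass term:
# p_dev ≈ (m² · M_P^(n-2) / Ξ)^(1/n), all energies in GeV.
M_P = 1.22e19   # Planck energy, GeV

def p_dev(m_GeV, Xi=1.0, n=3):
    return (m_GeV**2 * M_P**(n - 2) / Xi)**(1.0 / n)

p_e = p_dev(0.511e-3)   # electron
p_p = p_dev(0.938)      # proton
print(f"electron: {p_e:.2e} GeV, proton: {p_p:.2e} GeV")
```

For electrons the deviation scale is only a few tens of TeV, which is why astrophysical electrons give such good bounds; for protons it is around $10^{6}$ GeV.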

We can even study some different sources of modified dispersion relationships:

1. Measurements of time of flight.

2. Thresholds creation for: A) Vacuum Cherenkov effect, B) Photon decay in vacuum.

3. Shift in the so-called GZK cut-off.

4. Modified dispersion relationships induced by non-commutative theories of spacetime. Specially, there are time shifts/delays of photon signals induced by non-commutative spacetime theories.

We will analyse these four cases separately, in a very short and clear fashion. I wish!

Case 1. Time of flight. This is similar to the recently controversial OPERA experiment results. The OPERA experiment, and other similar set-ups, measure the neutrino time of flight. I dedicated a post to it early in this blog

https://thespectrumofriemannium.wordpress.com/2012/06/08/

In fact, we can measure the time of flight of any particle, even photons. A modified dispersion relation, like the one we introduced here above, would lead to an energy-dependent speed of light. The idea of the time of flight (TOF) approach is to detect a shift in the arrival time of photons (or any other massless/ultra-relativistic particles like neutrinos) with different energies, produced simultaneously in a distant object, so that the long distance amplifies the otherwise Planck-suppressed effect. In the following we use the dispersion relation for $n=3$ only, as modifications at higher orders are far below the sensitivity of current or planned experiments. The modified group velocity becomes:

$v=\dfrac{\partial E}{\partial p}$

and then, for photons,

$v\approx 1-\Xi_\gamma\dfrac{p}{M}$

The time difference in the photon shift detection time will be:

$\Delta t=\Xi_\gamma \dfrac{p}{M}D$

where D is the distance multiplied (if it were the case) by the redshift factor $(1+z)$ to correct the energy with the redshift. In recent years, several measurements on different objects in various energy bands have led to constraints up to the order of 100 on $\Xi$. They can be summarized in the next table (note that the best constraint comes from a short flare of the Active Galactic Nucleus (AGN) Mrk 421, detected in the TeV band by the Whipple Imaging Air Cherenkov telescope):
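To get a feeling for the numbers, here is a sketch of the delay $\Delta t=\Xi_\gamma (p/M_P)(D/c)$ for a TeV photon from Mrk 421; the distance $D\approx 130$ Mpc is an assumed round number for illustration (and redshift corrections are ignored):

```python
# Energy-dependent photon delay Δt = Ξ_γ · (p / M_P) · (D / c), for n = 3.
Mpc_m = 3.086e22      # metres per megaparsec
c = 299792458.0       # m/s
M_P_GeV = 1.22e19     # Planck energy, GeV

def delay_s(p_GeV, D_Mpc, Xi=1.0):
    return Xi * (p_GeV / M_P_GeV) * (D_Mpc * Mpc_m / c)

dt = delay_s(p_GeV=1.0e3, D_Mpc=130.0)   # a 1 TeV photon, D ≈ 130 Mpc (assumed)
print(f"Δt ≈ {dt:.2f} s for Ξ_γ = 1")
```

A delay of order one second for a TeV photon over a hundred megaparsecs: tiny, but within reach of fast TeV flares, which is exactly why the Whipple observation of Mrk 421 constrains $\Xi_\gamma$.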

There is still room for improvements with current or planned experiments, although the distance for TeV-observations is limited by absorption of TeV photons in low energy metagalactic radiation fields. Depending on the energy density of the target photon field one gets an energy dependent mean free path length, leading to an energy and redshift dependent cut off energy (the cut off energy is defined as the energy where the optical depth is one).

2. Thresholds creation for: A) Vacuum Cherenkov effect, B) Photon decay in vacuum. On the other hand, the interaction vertex in quantum electrodynamics (QED) couples one photon to two leptons. We assume for photons and leptons the following dispersion relations (for simplicity we adopt units with $M=1$):

$\omega_k^2=k^2+\xi k^n\qquad\qquad E^2_p=p^2+m^2+\Xi p^n$

Let us write the photon tetramomentum as $\mathbb{K}=(\omega_k,\mathbf{k})$ and the lepton tetramomenta as $\mathbb{P}=(E_p,\mathbf{p})$ and $\mathbb{Q}=(E_q,\mathbf{q})$. It can be shown that tetramomentum conservation at the vertex implies

$\xi k^n+\Xi p^n-\Xi q^n=2(E_p\omega_k-\mathbf{p}\cdot\mathbf{k})$

where the r.h.s. is always positive. In the Lorentz invariant case the parameters $\xi, \Xi$ are zero, so that this equation can’t be solved and all single-vertex processes are forbidden. If these parameters are non-zero, a solution can exist and these processes can be allowed. We now consider two of these interactions to derive constraints on the parameters $\Xi, \xi$: the vacuum Cherenkov effect $e^-\rightarrow \gamma e^-$ and the spontaneous photon decay $\gamma\rightarrow e^+e^-$.

A) As we have studied here, the vacuum Cherenkov effect is the spontaneous emission of a photon by a charged particle. This effect occurs if the particle moves faster than the slowest possible radiated photon in vacuum!
In the case of $\Xi>0$, the maximal attainable speed of the particle $c_{max}$ is faster than c. This means that the particle can always be faster than a zero-energy photon with

$\displaystyle{c_{\gamma_0}=c\lim_{k\rightarrow 0}\dfrac{\partial \omega}{\partial k}=c\lim_{k\rightarrow 0}\dfrac{2k+n\xi k^{n-1}}{2\sqrt{k^2+\xi k^n}}=c}$

and it is independent of $\xi$. In the case of $\Xi<0$, i.e., when the particle speed $c_{par}$ decreases with energy, you need a photon with $c_\gamma<c_{par}$. This is only possible if $\xi<\Xi$.

Therefore, due to the radiation of photons, such an electron loses energy. The observation of highly energetic electrons allows one to derive constraints on $\Xi$ and $\xi$. For $\Xi>0$ and $n=3$, we have the bound

$\Xi<\dfrac{m^2}{2p^3_{max}}$

Moreover, from the observation of 50 TeV photons in the Crab Nebula (and its pulsar), one can conclude the existence of 50 TeV electrons due to the inverse Compton scattering of these electrons with those photons. This leads to a constraint on $\Xi$ of about

$\Xi<1.2\times 10^{-2}$

where we have used $\Xi>0$ in this case.
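This bound is easy to reproduce. Restoring the Planck scale in the $M=1$ formula gives $\Xi < m^2 M_P/(2p_{max}^3)$; a sketch with the 50 TeV Crab electrons:

```python
# Vacuum-Cherenkov bound Ξ < m²/(2 p³_max) in units with M_P = 1,
# restored to physical units: Ξ < m² · M_P / (2 p_max³), all in GeV.
M_P = 1.22e19        # Planck energy, GeV
m_e = 0.511e-3       # electron rest energy, GeV
p_max = 50.0e3       # 50 TeV electrons inferred in the Crab Nebula, GeV

Xi_bound = m_e**2 * M_P / (2.0 * p_max**3)
print(f"Ξ < {Xi_bound:.1e}")
```

which reproduces the quoted $\Xi<1.2\times 10^{-2}$.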

B) The decay of photons into positrons and electrons $\gamma\rightarrow e^+e^-$ would be a very rapid spontaneous decay process. Gamma rays from the Crab Nebula are observed on Earth with energies up to $E\sim 50TeV$, so we can conclude that this rapid decay does not occur at energies below 50 TeV. For the constraints on $\Xi$ and $\xi$, this condition means (again we impose n=3):

$\xi<\dfrac{\Xi}{2}+0.08, \mbox{for}\; \xi\geq 0$

$\xi<\Xi+\sqrt{-0.16\Xi}, \mbox{for}\;\Xi<\xi<0$.

3. Shift in the GZK cut-off. As the energy of a proton increases, the pion production reaction can happen with low-energy photons of the Cosmic Microwave Background (CMB).

This leads to an energy-dependent mean free path of the particles, resulting in a cutoff at energies around $E_{GZK}\approx 10^{20}eV$. This is the celebrated Greisen-Zatsepin-Kuzmin (GZK) cut-off. The resonance condition for the GZK pion photoproduction with the CMB background can be read from the next condition (I will derive this condition in a future post):

$\boxed{E_{GZK}\approx\dfrac{m_p m_\pi}{2E_\gamma}=3\times 10^{20}eV\left(\dfrac{2.7K}{E_\gamma}\right)}$
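As a numerical sanity check, the threshold can be estimated taking a typical CMB photon energy $E_\gamma\approx k_B T$ at $T=2.7$ K:

```python
# GZK threshold estimate: E_GZK ≈ m_p · m_π / (2 E_γ), all energies in eV.
k_B_eV_per_K = 8.617333e-5     # Boltzmann constant, eV/K
m_p = 938.272e6                # proton rest energy, eV
m_pi = 134.977e6               # neutral pion rest energy, eV
E_gamma = k_B_eV_per_K * 2.7   # typical CMB photon energy at 2.7 K, eV

E_GZK = m_p * m_pi / (2.0 * E_gamma)
print(f"E_GZK ≈ {E_GZK:.1e} eV")
```

which lands at the quoted $\sim 3\times 10^{20}$ eV scale.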

Thus, in a Lorentz invariant world, the mean free path of a particle with energy $5\times 10^{19}$ eV is about 50 Mpc, i.e., particles above this energy are readily absorbed due to the pion photoproduction reaction. But most of the sources of ultra-high-energy particles lie beyond 50 Mpc, so one expects no trace of particles with energy above $10^{20}eV$ on Earth. From the experimental point of view, AGASA found a few particles with energy higher than the GZK cutoff limit and claimed to disprove the presence of the GZK cutoff (or at least to hint at a different threshold for it), whereas HiRes is consistent with the GZK effect. So, there are two main questions, not yet completely solved:

i) How can one get a definite proof of the non-existence of the GZK cut-off?
ii) If the GZK cutoff doesn’t exist, what is the reason?

The first question could be answered by the observation of a large sample of events at these energies, which is necessary for a final conclusion, since the GZK cutoff is a statistical phenomenon. The AUGER experiment may clarify whether the GZK cutoff exists or not. The existence of the GZK cutoff would also yield new limits on Lorentz or CPT violation. For the second question, one explanation can be derived from Lorentz violation: if we redo the GZK calculation in a Lorentz-violating world, we must use the modified proton dispersion relation as described in our previous MODRE equations.

4. Modified dispersion relationships induced by non-commutative theories of spacetime. As we said above, there are time shifts/delays of photon signals induced by non-commutative spacetime theories. Noncommutative spacetime theories introduce a new source of MODRE: the fuzzy nature of the discreteness of the fundamental quantum spacetime. Then, the general ansatz of these type of theories comes from:

$\boxed{\left[\hat{x}^\mu,\hat{x}^\nu\right]=i\dfrac{\theta^{\mu\nu}}{\Lambda_{NC}^2}}$

where $\theta^{\mu\nu}$ are the components of an antisymmetric Lorentz-like tensor whose components are of order one. The fundamental scale of non-commutativity $\Lambda_{NC}$ is usually supposed to be of the order of the Planck scale. However, there are models with large extra dimensions that induce non-commutative spacetime models with a scale near the TeV scale! This is interesting from the phenomenological side as well, not only from the theoretical viewpoint. Indeed, we can investigate whether astrophysical observations are able to constrain a certain class of models with noncommutative spacetimes broken at the TeV scale or higher. However, due to the antisymmetric character of the noncommutative tensor, we need a magnetic or electric background field in order to study these kinds of models (generally speaking, we need some kind of field inducing/producing antisymmetric field backgrounds); otherwise the dispersion relation for photons remains the same as in a commutative spacetime, with no photon energy dependence. Consequently, time-of-flight experiments are inappropriate, since they rely on an energy-dependent dispersion. Therefore, we suggest the next alternative scenario: suppose there exists a strong magnetic field (for instance, from a star or a cluster of stars) on the path of photons emitted by a light source (e.g. gamma-ray bursts). Then, analogously to gravitational lensing, the photons experience a deflection and/or a change in time-of-arrival, compared to the same path without a magnetic background field. Estimations for several known objects/examples are shown in this final table:

In summary:

1st. Vacuum Cherenkov and related effects modifying the dispersion relations of special relativity are natural in many scenarios beyond the Standard Relativity (BSR) and beyond the Standard Model (BSM).

2nd. Any theory allowing for superluminal propagation has to explain the null-results from the observation of the vacuum Cherenkov effect. Otherwise, they are doomed.

3rd. There are strong bounds coming from astrophysical processes and even neutrino oscillation experiments that severely constrain and kill many models. However, it is true that the current MODRE bounds are far from being the most general bounds. We expect to improve these bounds with the next generation of experiments.

4th. Theories that can not pass these tests (SR obviously does) have to be banned.

5th. Superluminality has observable consequences, both in classical and quantum physics, both in standard theories and theories beyond the standard ones. So, if you build a theory allowing superluminal stuff, you must be very careful with what kind of predictions it can and cannot make. Otherwise, your theory is complete nonsense.

As a final closing, let me include some nice Cherenkov rings from the Superkamiokande and MiniBooNE experiments. True experimental physics in action. And a final challenge…

FINAL CHALLENGE: Are you able to identify the kind of particles producing those beautiful figures? Let me know your guesses (I do know the answer, of course).

Figure 1. Typical SuperKamiokande ring. I dedicate this picture to my admired Japanese scientists there. I really, really admire that country and their people, especially after disasters like the 2011 Earthquake and the Fukushima accident. If you are a Japanese reader/follower, you must know we support you from abroad. You were not, you are not and you shall not be alone.

Figure 2. Typical MiniBooNE ring. History: I used this nice picture on the first page of my Master Thesis, as the cover/title page main picture!