LOG#123. Basic Neutrinology (VIII).

There are some indirect constraints/bounds on neutrino masses provided by Cosmology. The most important one comes from demanding that the energy density in neutrinos not be too high, since otherwise the Universe would already have recollapsed, and that, apparently, has not happened…

Firstly, stable neutrinos with low masses (about m_\nu\leq 1 MeV) make a contribution to the total energy density of the Universe given by:

\rho_\nu=m_{tot}n_\nu

and where the total mass is defined to be the quantity

\displaystyle{m_{tot}=\sum_\nu \dfrac{g_\nu}{2}m_\nu}

Here, the number of degrees of freedom g_\nu=4(2) for Dirac (Majorana) neutrinos in the framework of the Standard Model. The number density of the neutrino sea turns out to be related to the photon number density by entropy conservation (entropy conservation is the key to this important cosmological result!) in the adiabatic expansion of the Universe:

n_\nu=\dfrac{3}{11}n_\gamma

From this, we can derive the relationship between the cosmic relic neutrino background (the neutrinos left over from the Big Bang, after they dropped out of thermal equilibrium with the photons!), or C\nu B, and the cosmic microwave background (CMB):

T_{C\nu B}=\left(\dfrac{4}{11}\right)^{1/3}T_{CMB}

From the CMB radiation measurements we can obtain the value

n_\gamma=411\;(photons)\;cm^{-3}

for a perfect Planck blackbody spectrum with temperature

T_{CMB}=2.725\pm0.001 K\approx 2.35\cdot 10^{-4}eV

This CMB temperature implies that the C\nu B temperature should be about

T_{C\nu B}^{theo}=1.95K\approx 0.17meV
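As a quick numerical cross-check of the two temperatures above, here is a minimal Python sketch (the Boltzmann constant value is inserted by me; everything else comes from the formulas of this post):

# Relic neutrino temperature from the CMB temperature (cross-check of the numbers above)
T_CMB = 2.725                  # K, measured CMB temperature
k_B = 8.617e-5                 # eV/K, Boltzmann constant

T_CnuB = (4.0/11.0)**(1.0/3.0) * T_CMB
print(T_CnuB)                  # ~1.95 K
print(k_B * T_CMB)             # ~2.35e-4 eV
print(k_B * T_CnuB * 1e3)      # ~0.17 meV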

Remark: if you do change the number of neutrino degrees of freedom you also change the temperature of the C\nu B and the quantity of neutrino “hot dark matter” present in the Universe!

Moreover, the neutrino density parameter \Omega_\nu is given by the ratio of the neutrino energy density to the critical density:

\Omega_\nu=\dfrac{\rho_\nu}{\rho_c}

and where the critical density is about

\rho_c=\dfrac{3H_0^2}{8\pi G_N}

When neutrinos “decouple” from the primordial plasma and lose thermal equilibrium, we eventually have m_\nu>>T, and then we get

\Omega_\nu h^2\approx 10^{-2}\left(\dfrac{m_{tot}}{eV}\right)

with h the reduced Hubble constant. Recent analyses provide h\approx 0.67-0.71 (PLANCK/WMAP).
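A minimal sketch of how this relation turns a density bound into a mass bound; the 1/(94 eV) normalization I use below is the standard one (it also appears later in these logs), and 10^{-2} is just its rounded value:

# Omega_nu h^2 ~ m_tot/(94 eV) ~ 1e-2 (m_tot/eV): from total mass to density and back
def omega_nu_h2(m_tot_eV):
    # density parameter (times h^2) of relic massive neutrinos with total mass m_tot [eV]
    return m_tot_eV / 94.0

def m_tot_bound_eV(omega_max):
    # total neutrino mass allowed by a bound on Omega_nu h^2
    return 94.0 * omega_max

print(omega_nu_h2(1.0))      # ~0.011 for m_tot = 1 eV
print(m_tot_bound_eV(0.1))   # ~9.4 eV, i.e. the 8-10 eV bound quoted below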

There is another useful requirement for the neutrino density in Cosmology. It comes from Big Bang Nucleosynthesis (BBN). I talked about this in my Cosmology thread. Galactic structure and large scale observations also provide increasing evidence that the matter density is:

\Omega_Mh^2\approx 0.05-0.2

These values are obtained through the use of luminosity-density relations, galactic rotation curves and the observation of large scale flows. Here, \Omega_M is the total mass density of the Universe as a fraction of the critical density \rho_c. This \Omega_M includes radiation (photons), baryons, and non-baryonic “cold dark matter” (CDM) and “hot dark matter” (HDM). The first two components in the decomposition of \Omega_M

\Omega_M=\Omega_r+\Omega_b+\Omega_{nb}=\Omega_r+\Omega_b+\Omega_{HDM}+\Omega_{CDM}

are rather well known. The photon density is

\Omega_rh^2=\Omega_\gamma h^2=2.471\cdot 10^{-5}

The deuterium abundance can be extracted from the BBN predictions and compared with the deuterium abundances measured in the interstellar medium. It shows that:

0.017\leq\Omega_Bh^2\leq 0.021

The HDM component is formed by relativistic long-lived particles with masses less than about 1keV. In the SM framework, the only HDM component are the neutrinos!

The simulations of structure formation made with (super)computers fit the observations ONLY when one has about 20% of HDM plus 80% of CDM. A stunning surprise, certainly! Some of the best fits correspond to neutrinos with a total mass of about 4.7 eV, well above the current neutrino mass bounds. We can evade this apparent contradiction if we suppose that there are some sterile neutrinos out there. However, the latest cosmological data by PLANCK have decreased the enthusiasm for this alternative. The apparent conflict between theoretical cosmology and observational cosmology can be caused either by imprecise measurements or by our misunderstanding of fundamental particle physics. Anyway, observations of distant objects (with high redshift) favor a large cosmological constant instead of the Hot Dark Matter hypothesis. Therefore, we are forced to conclude that the HDM part of \Omega_M does not exceed 0.2. Requiring that \Omega_\nu <\Omega_M, we get \Omega_\nu h^2\leq 0.1. Using the relationship with the total mass density, we can deduce that the total neutrino mass (or HDM in the SM) is about

m_\nu\leq 8-10 eV or less!

Mass limits, in this case lower limits, for heavy or superheavy neutrinos (M_N\sim 1 GeV or higher) can also be obtained along the same lines of reasoning. The picture changes considerably if the neutrinos were “unstable” particles. One then gets joint bounds on mass and lifetime, and from them we deduce limits that can supersede the ones seen above.

There is another interesting limit on the density of neutrinos (or weakly interacting dark matter in general) that comes from the amount of “density” that can be packed into the halos of astronomical objects. This is called the Tremaine-Gunn limit. Up to numerical prefactors, and in the simplest case where the halo is a singular isothermal sphere with \rho\propto r^{-2}, the reader can easily check that

\rho=\dfrac{\sigma^2}{2\pi G_Nr^2}

Imposing the phase space bound at radius r then gives the lower bound

m_\nu>(2\pi)^{-5/8}\left(\dfrac{h_P^3}{G_N\sigma r^2}\right)^{1/4}

This bound yields m_\nu\geq 33 eV. This is the Tremaine-Gunn bound. It is based on the idea that neutrinos form an important part of galactic halos, and it uses the phase-space restriction from the Fermi-Dirac distribution to get a lower limit on the neutrino mass. I urge you to consult the literature or google to gather more information about this tool and its reliability.
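A rough numerical check of the Tremaine-Gunn estimate. The halo values below (σ ≈ 150 km/s and r ≈ 5 kpc) are illustrative numbers I am assuming, not values taken from any particular galaxy, but they show how a bound of a few tens of eV comes out:

# Tremaine-Gunn-type bound: m_nu > (2*pi)**(-5/8) * (h_P**3/(G*sigma*r**2))**(1/4)
import math

G = 6.674e-11            # m^3 kg^-1 s^-2
h_P = 6.626e-34          # J s, (non-reduced) Planck constant
kg_per_eV = 1.783e-36    # kg per eV/c^2

sigma = 150e3            # m/s, assumed velocity dispersion (illustrative)
r = 5 * 3.086e19         # m, assumed halo radius ~5 kpc (illustrative)

m_min = (2*math.pi)**(-5.0/8.0) * (h_P**3 / (G * sigma * r**2))**0.25
print(m_min / kg_per_eV)   # ~33 eV with these inputs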

Remark: The singular isothermal sphere is probably a good model in the region where the rotation curve produced by the dark matter halo is flat, but it certainly breaks down at small radii. Because the neutrino mass bound is stronger for smaller \sigma r^2, the uncertainty in the halo core radius (interior to which the mass density saturates) limits the reliability of this neutrino mass bound. However, some authors take it seriously! As Feynman used to say, everything depends on the prejudices you have!

The abundance of additional weakly interacting light particles, such as a light sterile neutrino \nu_s or additional relativistic degrees of freedom uncharged under the Standard Model, can change the number of relativistic degrees of freedom g_\nu. Sometimes you will hear about the number N_{eff}. Planck data, recently released, have decreased the hopes that we would find some additional relativistic degree of freedom that could mimic neutrinos. It is also constrained by BBN and by the deuterium abundances we measure from astrophysical objects. Any sterile neutrino or extra relativistic degree of freedom would enter into equilibrium with the active neutrinos via neutrino oscillations! A limit on the mass difference and the mixing angle with an active neutrino of the type

\Delta m^2\sin^2 2\theta\leq 3\cdot 10^{-6}\;eV^2 should then hold in principle. From here, it can be deduced that the effective number of neutrino species allowed by neutrino oscillations is in fact a little higher than the 3 light neutrinos we know from the Z-width bound:

N_\nu (eff)<3.5-4.5

PLANCK data indeed suggest that N_\nu (eff)< 3.3. However, systematic uncertainties in the BBN derivation make this bound too unreliable to be taken too seriously, and it can eventually be evaded with care.


LOG#107. Basic Cosmology (II).

[Figure: WMAP/Planck “cosmic recipe” pie charts of the energy content of the Universe.]

Evolution of the Universe: the scale factor

The Universe expands, and its expansion rate is given by the Hubble parameter (not constant in general!)

\boxed{H(t)\equiv \dfrac{\dot{a}(t)}{a(t)}}

Remark (I): The Hubble “parameter” takes a “constant” value at the present time (or at any given time/cosmological age), i.e., H_0=H(t_0).

Remark (II): The Hubble time defines a Hubble length L_H=H^{-1}, and it sets the time scale of the Universe and its expansion “rate”.

The critical density of matter is a vital quantity as well:

\boxed{\rho_c=\dfrac{3H^2}{\kappa^2}\vert_{t_0}}

We can also define the density parameters

\Omega_i=\dfrac{\rho_i}{\rho_c}\vert_{t_0}

This quantity represents the amount of substance for certain particle species. The total composition of the Universe is the total density, or equivalently, the sum over all particle species of the density parameters, that is:

\boxed{\displaystyle{\Omega=\sum_i\Omega_i=\dfrac{\displaystyle{\sum_i\rho_i}}{\rho_c}}}

There is a nice correspondence between the sign of the curvature k and that of \Omega-1. Using the Friedmann’s equation

\displaystyle{\dfrac{\dot{a}^2}{a^2}+\dfrac{k}{a^2}=\dfrac{\kappa^2}{3}\sum_i\rho_i}

then we have

\dfrac{k}{H^2a^2}=\dfrac{\displaystyle{\sum_i\rho_i}}{\rho_c}-1=\Omega-1

Thus, we observe that

1st. \Omega>1 if and only if (iff) k=+1, i.e., iff the Universe is spatially closed (spherical/elliptical geometry).

2nd. \Omega=1 if and only if (iff) k=0, i.e., iff the Universe is spatially “flat” (euclidean geometry).

3rd. \Omega<1 if and only if (iff) k=-1, i.e., iff the Universe is spatially “open” (hyperbolic geometry).

In the early Universe, the curvature term is negligible (as far as we know). The reason is as follows:

k/a^2\propto a^{-2}<<\dfrac{\kappa^2\rho}{3}\propto a^{-3}\;(MD),\;a^{-4}\;(RD) as a goes to zero. MD means matter dominated Universe, and RD means radiation dominated Universe. Then, the Friedmann equation at early times is given by

\boxed{H^2=\dfrac{\kappa^2}{3}\rho}

Furthermore, the evolution of the curvature term

\Omega_k\equiv \Omega-1

is given by

\Omega-1=\dfrac{k}{H^2a^2}\propto \dfrac{1}{\rho a^2}\propto a(MD),a^2(RD)

and thus

\vert \Omega-1\vert=\begin{cases}(1+z)^{-1}, \mbox{if MD}\\ 10^4(1+z)^{-2}, \mbox{if RD}\end{cases}

The spatial curvature will be given by

\boxed{R_{(3)}=\dfrac{6k}{a^2}=6H^2(\Omega-1)}

and the curvature radius will be

\boxed{R=a\vert k\vert ^{-1/2}=H^{-1}\vert \Omega-1\vert ^{-1/2}}

We have arrived at the interesting result that the early Universe was very nearly “critical”. A Universe close to the critical density is very flat!

On the other hand, supposing that a_0=1, we can integrate the Friedmann equation easily:

\boxed{\displaystyle{\left(\dfrac{\dot{a}}{a}\right)^2+\dfrac{k}{a^2}=\dfrac{\kappa^2}{3}\sum_i\rho_i=\dfrac{\kappa^2}{3}\sum_i\rho_i(0)a^{-3(1+\omega_i)}}}

Then, we obtain

\dot{a}^2=H_0^2\left[-\Omega_k+\sum_i\Omega_ia^{-1-3\omega_i}\right]

We can make an analogy between this equation and a simple equation from “Newtonian Mechanics”:

\dfrac{\dot{a}^2}{2}+V(a)=0

Therefore, if we identify terms, we see that the density parameters act as a “potential”, with

\displaystyle{V(a)=\dfrac{1}{2}H_0^2\left[\Omega_k-\sum_i\Omega_ia^{-1-3\omega_i}\right]}

and the total energy is equal to zero (a “machian” behaviour indeed!). In addition to this equation, we also get

\boxed{\displaystyle{H_0t=\int_0^a\left[-\Omega_k+\sum_i\Omega_i\chi^{-1-3\omega_i}\right]^{-1/2}d\chi}}

The age of the Universe can be easily calculated (symbolically and algebraically):

\boxed{t_0=H_0^{-1}f(\Omega_i)}

with

f(\Omega_i)=\int_0^1\left[-\Omega_k+\sum_i\Omega_i\chi^{-1-3\omega_i}\right]^{-1/2}d\chi
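As a concrete numerical example of this formula, here is a minimal Python sketch for a flat Universe with \Omega_m\approx 0\mbox{.}3 and \Omega_\Lambda\approx 0\mbox{.}7 (values I am assuming for illustration):

# Age of a flat Universe: t0 = f(Omega_i)/H0, with f the integral above
from scipy.integrate import quad

Omega_m, Omega_L, Omega_k = 0.3, 0.7, 0.0    # assumed flat LCDM-like composition
w_m, w_L = 0.0, -1.0                         # equations of state (matter, vacuum)

def integrand(x):
    # [-Omega_k + sum_i Omega_i x^(-1-3 w_i)]^(-1/2)
    return (-Omega_k + Omega_m * x**(-1.0 - 3.0*w_m)
                     + Omega_L * x**(-1.0 - 3.0*w_L))**-0.5

f, _ = quad(integrand, 1e-12, 1.0)   # lower limit nudged off zero to avoid 1/0 at x = 0
hubble_time_Gyr = 9.78 / 0.7         # 1/H0 in Gyr for h = 0.7
print(f)                             # ~0.964
print(f * hubble_time_Gyr)           # ~13.5 Gyr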

This equation can be evaluated for some general and special cases. If we write p=\omega \rho for a single component, then

a\propto t^{\frac{2}{3(1+\omega)}} if \omega\neq -1

Moreover, 3 common cases arise:

1) Matter dominated Universe (MD): a\propto t^{2/3}

2) Radiation dominated Universe (RD): a\propto t^{1/2}

3) Vacuum dominated Universe (VD): a\propto e^{H_0t} (\omega=-1 for the cosmological constant, vacuum energy or dark energy).

THE MATTER CONTENT OF THE UNIVERSE

We can find out how much energy is contributed by the different components of the Universe, i.e., by the different density parameters.

Case 1. Photons.

The CMB temperature gives us “photons” with T_\gamma=2\mbox{.}725\pm 0\mbox{.}002K

The associated energy density is given by the Planck law of the blackbody, that is

\rho_\gamma=\dfrac{\pi^2}{15}T^4 and \mu/T<9\cdot 10^{-5}

or equivalently

\Omega_\gamma=\Omega_r=\dfrac{2\mbox{.}47\cdot 10^{-5}}{h^2a^4}
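A minimal numerical check of that number from the blackbody formula above (SI constants assumed, and a=1 today):

# Omega_gamma h^2 from T_CMB: blackbody energy density vs critical density
import math

k_B, hbar, c, G = 1.381e-23, 1.055e-34, 2.998e8, 6.674e-11
T = 2.725                                            # K

rho_gamma = (math.pi**2 / 15.0) * (k_B*T)**4 / (hbar*c)**3   # J/m^3
H_100 = 100e3 / 3.086e22                             # s^-1, Hubble rate for h = 1
rho_crit_h2 = 3.0 * H_100**2 * c**2 / (8.0*math.pi*G)        # J/m^3, critical density (times h^2)
print(rho_gamma / rho_crit_h2)                       # ~2.47e-5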

Case 2. Baryons.

There are four established ways of measuring the baryon density:

i) Baryons in galaxies: \Omega_b\sim 0\mbox{.}02

ii) Baryons through the spectra of distant quasars: \Omega_b h^{1\mbox{.}5}\approx 0\mbox{.}02

iii) CMB anisotropies: \Omega_bh^2=0\mbox{.}024^{+0\mbox{.}004}_{-0\mbox{.}003}

iv) Big Bang Nucleosynthesis: \Omega_bh^2=0\mbox{.}0205\pm 0\mbox{.}0018

Note that these results are “globally” compatible!

Case 3. (Dark) Matter/Dust.

Galactic rotation curves are “flat” beyond some cut-off radius, and the same happens for clusters and other bigger structures. The mass-to-light (M/L) ratios inferred from them provide a value of about \Omega_m=0\mbox{.}3. Moreover, the galaxy power spectrum is sensitive to \Omega_m h. It also gives \Omega_m\sim 0\mbox{.}2. On the other hand, the cosmic velocity field of galaxies allows us to derive \Omega_m\approx 0\mbox{.}3 as well. Finally, the CMB anisotropies give us the puzzling values:

\Omega_m\sim 0\mbox{.}25

\Omega_b\sim 0\mbox{.}05

We are forced to accept that either our cosmological and gravitational theory is flawed, or the main component of “matter” is not of baryonic nature and does not radiate electromagnetically, AND the Standard Model of Particle Physics has no particle candidate (matter field) to fit that non-baryonic dark matter. It could be partially formed by neutrinos, but we already know that it can NOT be fully formed by neutrinos (hot dark matter). What is dark matter? We don’t know. Some candidates from beyond the Standard Model: the axion, new (likely massive or sterile) neutrinos, supersymmetric particles (the lightest supersymmetric particle, LSP, is known to be stable: the gravitino, the zino, the neutralino,…), ELKO particles, continuous spin particles, unparticles, preons, new massive gauge bosons, or something even stranger that we have not thought of yet! Of course, you could modify gravity at large scales to erase the need for dark matter, but it does not seem easy at all to build a working Modified Gravity theory or Modified Newtonian (Einsteinian) dynamics that avoids the need for dark matter. MOND’s, MOG’s and similar ideas are interesting, but they are not thought to be the “optimal” solution at the current time. Maybe gravitons and quantum gravity are in the air of these dark issues? We don’t know…

Case 4. Neutrinos.

They are NOT (directly) observed, but we understand their physics, at least in the Standard Model and the electroweak sector. We also know they suffer “oscillations”/flavor oscillations (as kaons do). The (cosmic) neutrino temperature can be determined and related to the CMB temperature. The idea is simple: neutrino decoupling in the early Universe was followed by electron-positron annihilation! Thus, the entropy (density) was dumped into the photons, but not into the neutrinos. This causes a difference between the neutrino and photon temperatures “today”. Please, note that we are talking about “relic” neutrinos and photons from the Big Bang! The entropy density before annihilation was:

s(a_1)=\dfrac{2\pi^2}{45}T_1^3\left[2+\dfrac{7}{8}(2\cdot 2+3\cdot 2)\right]=\dfrac{43}{90}\pi^2 T_1^3

After the annihilation, we get

s(a_2)=\dfrac{2\pi^2}{45}\left[2T_\gamma^3+\dfrac{7}{8}(3\cdot 2)T_\nu^3\right]

Therefore, equating

s(a_1)a_1^3=s(a_2)a_2^3 and a_1T_1=a_2T_\nu (a_2)

\dfrac{43}{90}\pi^2(a_1T_1)^3=\dfrac{2\pi^2}{45}\left[2\left(\dfrac{T_\gamma}{T_\nu}\right)^3+\dfrac{42}{8}\right](a_2T_\nu (a_2))^3

\dfrac{43}{2}\pi^2(a_1T_1)^3=2\pi^2\left[2\left(\dfrac{T_\gamma}{T_\nu}\right)^3+\dfrac{42}{8}\right](a_2T_\nu (a_2))^3

and then

\boxed{\left(\dfrac{T_\nu}{T_\gamma}\right)=\left(\dfrac{4}{11}\right)^{1/3}}

or equivalently

\boxed{T_\nu=\sqrt[3]{\dfrac{4}{11}}T_\gamma\approx 1\mbox{.}9K}
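The little algebra above can be checked with exact fractions; a minimal sketch:

# Entropy bookkeeping: 43/4 = 2*(T_gamma/T_nu)^3 + 21/4  =>  (T_nu/T_gamma)^3 = 4/11
from fractions import Fraction

g_before = Fraction(2) + Fraction(7, 8) * (2*2 + 3*2)   # photons + e+/e- + 3 nu species = 43/4
g_after_photons = Fraction(2)                           # photons after e+e- annihilation
g_after_neutrinos = Fraction(7, 8) * (3*2)              # 3 neutrino species = 21/4

x = (g_before - g_after_neutrinos) / g_after_photons    # x = (T_gamma/T_nu)^3
print(g_before, x, Fraction(1) / x)                     # 43/4, 11/4, 4/11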

In fact, the neutrino energy density can be given in two different ways, depending on whether the neutrinos are “massless” or “massive” (a numerical cross-check follows the two cases below). For massless neutrinos (or equivalently “relativistic” matter particles):

1) Massless neutrinos: \Omega_\nu=\dfrac{1\mbox{.}68\cdot 10^{-5}}{h^2}

2) Massive neutrinos: \Omega_\nu= \dfrac{m_\nu}{94h^2 \; eV}
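As announced above, a quick cross-check of these two expressions against the photon density of Case 1 (3 neutrino species assumed):

# Massless case: Omega_nu h^2 = 3*(7/8)*(4/11)^(4/3) * Omega_gamma h^2
Omega_gamma_h2 = 2.47e-5
Omega_nu_h2 = 3.0 * (7.0/8.0) * (4.0/11.0)**(4.0/3.0) * Omega_gamma_h2
print(Omega_nu_h2)     # ~1.68e-5

# Massive case: Omega_nu h^2 = m_nu/(94 eV), i.e. ~1.1e-2 per eV of total mass
print(1.0 / 94.0)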

Case 5. The dark energy/Cosmological constant/Vacuum energy.

The budget of the Universe provides (from cosmological and astrophysical measurements) the shocking result

\Omega\approx 1 with \Omega_M\approx 0\mbox{.}3

Then, there is some missing smooth, unclustered energy-matter “form”/“species”. It is the “dark energy”/vacuum energy/cosmological constant! It can be understood as a “special” pressure term in the Einstein field equations, but one with NEGATIVE pressure! Evidence for this comes from luminosity-distance-redshift measurements of SNae, clusters, and the CMB spectrum! The cosmological constant/vacuum energy/dark energy dominates the Universe today since, it seems, we live in a (positively!) accelerated Universe!!!!! What can dark energy be? It can not be a “normal” matter field. As with dark matter, we believe that (excepting perhaps the scalar Higgs field/s) the SM has no candidate to explain dark energy. What field could dark energy be? Perhaps a scalar field, or something totally new and “unknown” yet.

In short, we live IN a DARK, darkly UNIVERSE! Darkness is NOT coming, darkness has arrived and, if nothing changes, it will turn our local Universe even darker and darker!

See you in the next cosmological post!


LOG#105. Einstein’s equations.

[Figure: Einstein’s field equations.]

In 1905, one of Einstein’s achievements was to establish the theory of Special Relativity from two postulates and to correctly deduce their physical consequences (some of them only verified much later). The essence of Special Relativity, as we have seen, is that all inertial observers must agree on the speed of light “in vacuum”, and that the physical laws (those of Mechanics and Electromagnetism) are the same for all of them. Different observers will measure (and hence see) different wavelengths and frequencies, but the product of wavelength and frequency is the same. The wavelength and frequency are thus Lorentz covariant, meaning that they change for different observers according to some fixed mathematical prescription depending on their tensorial character (scalar, vector, tensor,…) with respect to Lorentz transformations. The speed of light is Lorentz invariant.

On the other hand, Newton’s law of gravity describes the motion of planets and terrestrial bodies. It is all that we need in contemporary rocket ships, unless those devices also carry atomic clocks or other tools of exceptional accuracy. Here is Newton’s law in potential form:

4\pi G\rho = \nabla ^2 \phi

In the special relativity framework, this equation has a terrible problem: if there is a change in the mass density \rho, then it must propagate everywhere instantaneously. If you believe in the Special Relativity rules and in the invariance of the speed of light, this is impossible. Therefore, “Houston, we have a problem”.

Einstein was aware of it and he tried to solve this inconsistency. The final solution took him ten years.

The apparently silly and easy problem is to describe all physics in the same way irrespective of whether one is accelerating or not. However, it is not easy or silly at all. It requires deep physical insight and a high-end mathematical language. Indeed, the most difficult part is the details of Riemannian geometry and tensor calculus on manifolds. Einstein got private aid from a friend, Marcel Grossmann. In fact, Einstein knew that SR was not compatible with Newton’s law of gravity. He (re)discovered the equivalence principle, stated by Galileo himself long before him, but he interpreted it more deeply and sought the proper language to incorporate that principle in such a way that it would be compatible (at least locally) with special relativity! His “journey” from 1907 to 1915 was a hard job and a continuous struggle with tensorial methods…

Today, we are going to derive the Einstein field equations for gravity, a set of equations for the “metric field” g_{\mu \nu}(x). Hilbert in fact arrived at Einstein’s field equations with the use of the variational method we are going to use here, but Einstein’s methods were more physical and based on physical intuitions. They are in fact “complementary” approaches. I urge you to read “The meaning of Relativity” by A. Einstein for a summary of his discoveries.

We now proceed to derive Einstein’s Field Equations (EFE) for General Relativity (more properly, a relativistic theory of gravity):

Step 1. Let us begin with the so-called Einstein-Hilbert action (an ansatz).

S = \int d^4x \sqrt{-g} \left( \dfrac{c^4}{16 \pi G} R + \mathcal{L}_{\mathcal{M}} \right)

Be aware of  the square root of the determinant of the metric as part of the volume element.  It is important since the volume element has to be invariant in curved spacetime (i.e.,in the presence of a metric).  It also plays a critical role in the derivation.

Step 2. We perform the variation with respect to the metric field g^{\mu \nu}:

\delta S = \int d^4 x \left( \dfrac{c^4}{16 \pi G} \dfrac{\delta (\sqrt{-g}\,R)}{\delta g^{\mu \nu}} + \dfrac{\delta (\sqrt{-g}\mathcal{L}_{\mathcal{M}})}{\delta g^{\mu \nu}} \right) \delta g^{\mu \nu}

Step 3. Extract the square root of the determinant of the metric as a common factor and use the product rule on the term with the Ricci scalar R:

\delta S = \int d^4 x \sqrt{-g} \left( \dfrac{c^4}{16 \pi G} \left ( \dfrac{\delta R}{\delta g^{\mu \nu}} +\dfrac{R}{\sqrt{-g}}\dfrac{\delta \sqrt{-g}}{\delta g^{\mu \nu}} \right) +\dfrac{1}{\sqrt{-g}}\dfrac{\delta ( \sqrt{-g}\mathcal{L}_{\mathcal{M}})}{\delta g^{\mu\nu}}\right) \delta g^{\mu \nu}

Step 4.  Use the definition of a Ricci scalar as a contraction of the Ricci tensor to calculate the first term:

\dfrac{\delta R}{\delta g^{\mu \nu}} = \dfrac{\delta (g^{\mu \nu}R_{\mu \nu})}{\delta g^{\mu \nu} }= R_{\mu\nu} + g^{\mu \nu}\dfrac{\delta R_{\mu \nu}}{\delta g^{\mu \nu}} = R_{\mu \nu} + \mbox{total derivative}

A total derivative does not contribute to the variation of the action, so it can be neglected when finding the extremal point. Indeed, this is the Stokes theorem in action. To show that the variation of the Ricci tensor is a total derivative, in case you don’t believe this fact, we can proceed as follows:

Check 1. Write  the Riemann curvature tensor:

R^{\rho}_{\, \sigma \mu \nu} = \partial _{\mu} \Gamma ^{\rho}_{\, \sigma \nu} - \partial_{\nu} \Gamma^{\rho}_{\, \sigma \mu}+ \Gamma^{\rho}_{\, \lambda \mu} \Gamma^{\lambda}_{\, \sigma \nu} - \Gamma^{\rho}_{\, \lambda \nu} \Gamma^{\lambda}_{\, \sigma \mu}

Note the striking resemblance with the non-abelian YM field strength curvature two-form

F=dA+A \wedge A = \partial _{\mu} A_{\nu} - \partial _{\nu} A_{\mu} + k \left[ A_\mu , A_{\nu} \right].

There are many terms with indices in the Riemann tensor calculation, but we can simplify stuff.

Check 2. We have to calculate the variation of the Riemann curvature tensor with respect to the metric tensor:

\delta R^{\rho}_{\, \sigma \mu \nu} = \partial _{\mu} \delta \Gamma^{\rho}_{\, \sigma \nu} - \partial_\nu \delta \Gamma^{\rho}_{\, \sigma \mu} + \delta \Gamma ^{\rho}_{\, \lambda \mu} \Gamma^{\lambda}_{\, \sigma \nu} - \delta \Gamma^{\rho}_{\lambda \nu}\Gamma^{\lambda}_{\, \sigma \mu} + \Gamma^{\rho}_{\, \lambda \mu}\delta \Gamma^{\lambda}_{\sigma \nu} - \Gamma^{\rho}_{\lambda \nu} \delta \Gamma^{\lambda}_{\, \sigma \mu}

One cannot calculate the covariant derivative of a connection since it does not transform like a tensor.  However, the difference of two connections does transform like a tensor.

Check 3. Calculate the covariant derivative of the variation of the connection:

\nabla_{\mu} ( \delta \Gamma^{\rho}_{\sigma \nu}) = \partial _{\mu} (\delta \Gamma^{\rho}_{\, \sigma \nu}) + \Gamma^{\rho}_{\, \lambda \mu} \delta \Gamma^{\lambda}_{\, \sigma \nu} - \delta \Gamma^{\rho}_{\, \lambda \sigma}\Gamma^{\lambda}_{\mu \nu} - \delta \Gamma^{\rho}_{\, \lambda \nu}\Gamma^{\lambda}_{\, \sigma \mu}

\nabla_{\nu} ( \delta \Gamma^{\rho}_{\sigma \mu}) = \partial _\nu (\delta \Gamma^{\rho}_{\, \sigma \mu}) + \Gamma^{\rho}_{\, \lambda \nu} \delta \Gamma^{\lambda}_{\, \sigma \mu} - \delta \Gamma^{\rho}_{\, \lambda \sigma}\Gamma^{\lambda}_{\mu \nu} - \delta \Gamma^{\rho}_{\, \lambda \mu}\Gamma^{\lambda}_{\, \sigma \nu}

Check 4. Rewrite the variation of the Riemann curvature tensor as the difference of two covariant derivatives of the variation of the connection written in Check 3, that is, subtract the two previous expressions of Check 3.

\delta R^{\rho}_{\, \sigma \mu \nu} = \nabla_{\mu} \left( \delta \Gamma^{\rho}_{\, \sigma \nu}\right) - \nabla _{\nu} \left(\delta \Gamma^{\rho}_{\, \sigma \mu}\right)

Check 5. Contract the result of Check 4.

\delta R^{\rho}_{\, \mu \rho \nu} = \delta R_{\mu \nu} = \nabla_{\rho} \left( \delta \Gamma^{\rho}_{\, \mu \nu}\right) - \nabla _{\nu} \left(\delta \Gamma^{\rho}_{\, \rho \mu}\right)

Check 6. Contract the result of Check 5:

g^{\mu \nu}\delta R_{\mu \nu} = \nabla_\rho (g^{\mu \nu} \delta \Gamma^{\rho}_{\mu\nu})-\nabla_\nu (g^{\mu \nu}\delta \Gamma^{\rho}_{\rho \mu}) = \nabla _\sigma (g^{\mu \nu}\delta \Gamma^{\sigma}_{\mu \nu}) - \nabla_\sigma (g^{\mu \sigma}\delta \Gamma ^{\rho}_{\rho \mu})

Therefore, we have

g^{\mu \nu}\delta R_{\mu \nu} = \nabla_\sigma (g^{\mu \nu}\delta \Gamma^{\sigma}_{\mu\nu}- g^{\mu \sigma}\delta \Gamma^{\rho}_{\rho\mu})=\nabla_\sigma K^\sigma

Q.E.D.

Step 5. The variation of the second term in the action is the next step.  Transform the coordinate system to one where the metric is diagonal and use the product rule:

\dfrac{R}{\sqrt{-g}} \dfrac{\delta \sqrt{-g}}{\delta g^{\mu \nu}}=\dfrac{R}{\sqrt{-g}} \dfrac{-1}{2 \sqrt{-g}}(-1) g g_{\mu \nu}\dfrac{\delta g^{\mu \nu}}{\delta g^{\mu \nu}} =- \dfrac{1}{2}g_{\mu \nu} R

The reason of the last equalities is that g^{\alpha\mu}g_{\mu \beta}=\delta^{\alpha}_{\; \beta}, and then its variation is

\delta (g^{\alpha\mu}g_{\mu \nu}) = (\delta g^{\alpha\mu}) g_{\mu \nu} + g^{\alpha\mu}(\delta g_{\mu \nu}) = 0

Thus, multiplication by the inverse metric g^{\beta \nu} produces

\delta g^{\alpha \beta} = - g^{\alpha \mu}g^{\beta \nu}\delta g_{\mu \nu}

that is,

\dfrac{\delta g^{\alpha \beta}}{\delta g_{\mu \nu}}= -g^{\alpha \mu} g^{\beta \nu}

On the other hand, using Jacobi’s formula for the derivative of a determinant, we get that:

\delta g = \delta g_{\mu \nu} g g^{\mu \nu}

since

\dfrac{\delta g}{\delta g_{\alpha \beta}}= g\, g^{\alpha \beta}

because of the classical identity

g^{\alpha \beta}=(g_{\alpha \beta})^{-1}=\left( \det g \right)^{-1} Cof (g)

Indeed

\mbox{Cof}^{\alpha\beta} (g) = \dfrac{\delta g}{\delta g_{\alpha \beta}}

and moreover

\delta \sqrt{-g}=-\dfrac{\delta g}{2 \sqrt{-g}}= -g\dfrac{ \delta g_{\mu \nu} g^{\mu \nu}}{2 \sqrt{-g}}

so

\delta \sqrt{-g}=\dfrac{1}{2}\sqrt{-g}\,g^{\mu \nu}\delta g_{\mu \nu}=-\dfrac{1}{2}\sqrt{-g}\,g_{\mu \nu}\delta g^{\mu \nu}

Q.E.D.
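If you do not want to trace the indices, the determinant identity used above (Jacobi’s formula, and hence \delta\sqrt{-g}=\frac{1}{2}\sqrt{-g}\,g^{\mu\nu}\delta g_{\mu\nu}) can be checked symbolically on a generic symmetric matrix; a minimal sympy sketch (the 2x2 case is enough, since the identity does not depend on the dimension):

# Check Jacobi's formula d(det g)/dt = det(g) * g^{ab} d g_{ab}/dt on a symmetric 2x2 "metric"
import sympy as sp

t = sp.symbols('t')
a, b, c = sp.Function('a')(t), sp.Function('b')(t), sp.Function('c')(t)

g = sp.Matrix([[a, b], [b, c]])                       # generic symmetric matrix depending on t
lhs = sp.diff(g.det(), t)                             # variation of the determinant
rhs = g.det() * (g.inv() * sp.diff(g, t)).trace()     # det(g) * trace(g^{-1} dg/dt)
print(sp.simplify(lhs - rhs))                         # 0, as claimed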

Step 6. Define the stress energy-momentum tensor from the matter term in the action (the one coming from the matter lagrangian):

T_{\mu \nu} = - \dfrac{2}{\sqrt{-g}}\dfrac{\delta(\sqrt{-g} \mathcal{L}_{\mathcal{M}})}{\delta g^{\mu \nu}}

or equivalently

-\dfrac{1}{2}T_{\mu \nu} = \dfrac{1}{\sqrt{-g}}\dfrac{\delta(\sqrt{-g} \mathcal{L}_{\mathcal{M}})}{\delta g^{\mu \nu}}

Step 7. The extremal principle. The variation of the Hilbert action is an extremum when the integrand multiplying \delta g^{\mu\nu} is equal to zero:

\dfrac{c^4}{16\pi G}\left( R_{\mu \nu} - \dfrac{1}{2} g_{\mu \nu}R\right) - \dfrac{1}{2} T_{\mu \nu} = 0

i.e.,

\boxed{R_{\mu \nu} - \dfrac{1}{2}g_{\mu \nu} R = \dfrac{8\pi G}{c^4}T_{\mu\nu}}

Usually this is recast and simplified using the Einstein tensor

G_{\mu \nu}= R_{\mu \nu} - \dfrac{1}{2}g_{\mu \nu} R

as

\boxed{G_{\mu\nu}=\dfrac{8\pi G}{c^4}T_{\mu\nu}}

This deduction has been mathematical, but there is a deep physical picture behind it. Moreover, there are a huge number of physics issues one could go into. For instance, in this form the theory naturally couples to fields of integer spin, which is good for bosons, but there are matter fermions that also couple to gravity. Gravity is universal. To include those fermion fields, one can consider the metric and the connection to be independent of each other. That is the so-called Palatini approach.

Final remark: you can add to the EFE above a “constant” times the metric tensor, since its “covariant derivative” vanishes. This constant is the cosmological constant (a.k.a. dark energy in contemporary physics). Then, the most general form of the EFE is:

\boxed{G_{\mu\nu}+\Lambda g_{\mu\nu}=\dfrac{8\pi G}{c^4}T_{\mu\nu}}

Einstein’s additional term was added in order to make the Universe “static”. After Hubble’s discovery of the expansion of the Universe, Einstein blamed himself for introducing such a term, since it prevented him from predicting the expanding Universe. However, perhaps ironically, in 1998 we discovered that the Universe is accelerating instead of decelerating due to gravity, and the simplest way to understand that phenomenon is with a positive cosmological constant dominating the current era of the Universe. Fascinating, and more and more so with the WMAP/Planck data. The cosmological constant/dark energy and the dark matter we seem to “observe” can not be explained with the fields of the Standard Model, and therefore… they hint at new physics. The character of this new physics is challenging, and much work is being done in order to find some particle or model into which dark matter and dark energy fit. However, it is not easy at all!

May the Einstein’s Field Equations be with you!


LOG#057. Naturalness problems.

[Image: yogurt with berries.]

In this short blog post, I am going to list some of the greatest “naturalness” problems in Physics. It has nothing to do with some delicious natural dishes I like, but there is a natural beauty and sweetness related to naturalness problems in Physics. In fact, they include some hierarchy problems and additional problems related to the stunning values of free parameters in our theories.

Naturalness problems arise when the “naturally expected” property of some free parameters or fundamental “constants”, namely to appear as quantities of order one, is violated, and those parameters or constants turn out to be very large or very small quantities. That is, naturalness problems are problems of the tuning of “scales” of length, energy, field strength, … A value of 0.99 or 1.1, or even 0.7 and 2.3, is “more natural” than, e.g., 100000, 10^{-4},10^{122}, 10^{23},\ldots Equivalently, imagine that the value of every fundamental and measurable physical quantity X lies in the real interval \left[ 0,\infty\right). Then, 1 (or values very close to it) are “natural” values of the parameters, while the two extrema 0 or \infty are “unnatural”. As we do know, in Physics, zero values are usually explained by some “fundamental symmetry”, while extremely large parameters or even \infty can be shown to be “unphysical” or “unnatural”. In fact, renormalization in QFT was invented to avoid quantities that are “infinite” at first sight, and regularization provides some prescriptions to assign “natural numbers” to quantities that are formally ill-defined or infinite. However, naturalness goes beyond these last comments, and it arises in very different scenarios and physical theories. It is quite remarkable that naturalness can be phrased in terms of numbers/constants/parameters around 3 of the most important “numbers” in Mathematics:

(0, 1, \infty)

REMEMBER: Naturalness of X is, thus, being 1 or close to it, while values approaching 0 or \infty are unnatural. Therefore, if some day you hear a physicist talking/speaking/lecturing about “naturalness”, remember the triple (0,1,\infty) and then assign “some magnitude/constant/parameter” a quantity close to one of those numbers. If it approaches 1, the parameter is natural, and it is unnatural if it approaches either of the other two numbers, zero or infinity!

I have never seen a systematic classification of naturalness problems into types. I am going to do it here today. We could classify naturalness problems into:

1st. Hierarchy problems. They are naturalness problems related to the mass/energy spectrum or the energy scales of interactions and fundamental particles.

2nd. Nullity/Smallness problems. These are naturalness problems related to free parameters which are, surprisingly, close to zero/null value, even when we have no knowledge of a deep reason to understand why it happens.

3rd. Large number problems (or hypotheses). This class of problems can equivalently be thought of as reciprocal nullity problems, but they arise naturally by themselves in cosmological contexts, or when we consider a large amount of particles, e.g., in “statistical physics”, or when we face two theories in very different “parameter spaces”. Dirac pioneered this class of hypotheses when he noticed some large-number coincidences relating quantities appearing in particle physics and cosmology. This Dirac large number hypothesis is also an old example of this kind of naturalness problem.

4th. Coincidence problems. This 4th type of problems is related to why some different parameters of the same magnitude are similar in order of magnitude.

The following list of concrete naturalness problems is not going to be complete, but it can serve as a guide of what theoretical physicists are trying to understand better:

1. The little hierarchy problem. From the phenomenon called neutrino oscillations (NO) and neutrino oscillation experiments (NOSEX), we can know the differences between the squared masses of neutrinos. Furthermore, cosmological measurements allow us to put tight bounds on the total mass (energy) of light neutrinos in the Universe. The most conservative estimates give m_\nu \leq 10 eV, and even m_\nu \sim 1 eV as an upper bound is quite likely to be true. On the other hand, NOSEX seem to say that there are two mass differences, \Delta m^2_1\sim 10^{-3}\,eV^2 and \Delta m^2_2\sim 10^{-5}\,eV^2. However, we don’t know yet what kind of spectrum neutrinos have (normal, inverted or quasidegenerate). Taking a neutrino mass of about 1 meV as a reference, the little hierarchy problem is the question of why neutrino masses are so light when compared with the remaining leptons, quarks and gauge bosons (excepting, of course, the gluon and photon, massless due to gauge invariance).

Why is m_\nu << m_e,m_\mu, m_\tau, m_Z,M_W, m_{proton}?

We don’t know! Let me quote a wonderful sentence of a very famous short story by Asimov to describe this result and problem:

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

2. The gauge hierarchy problem. The electroweak (EW) scale can be generally represented by the Z or W boson mass scale. Interestingly, from this summer’s results, the Higgs boson mass seems to be of the same order of magnitude, more or less, as the gauge boson masses. Then, the electroweak scale is about M_Z\sim M_W \sim \mathcal{O} (100GeV). Likely, it is also of the order of the Higgs mass. On the other hand, the Planck scale, where we expect (naively or not, that is another question!) quantum effects of gravity to naturally arise, is provided by the Planck mass scale:

M_P=\sqrt{\dfrac{\hbar c}{8\pi G}}=2.4\cdot 10^{18}GeV=2.4\cdot 10^{15}TeV

or more generally, dropping the 8\pi factor

M_P =\sqrt{\dfrac{\hbar c}{G}}=1.22\cdot 10^{19}GeV=1.22\cdot 10^{16}TeV

Why is the EW mass (energy) scale so small compared to the Planck mass, i.e., why are the masses M_{EW}<<M_P so different? The problem is hard, since we do know that EW masses, e.g., for scalar particles like the Higgs particle (not protected by any SM gauge symmetry), should receive quantum contributions of order \mathcal{O}(M_P^2) (see the little numerical sketch below).

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”
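Still, the size of the gap itself is easy to state numerically; a crude sketch with round numbers (assumed orders of magnitude, nothing more):

# Gauge hierarchy: electroweak scale vs Planck scale
M_EW = 100.0            # GeV, ~ M_W, M_Z (and Higgs) order of magnitude
M_P_reduced = 2.4e18    # GeV, reduced Planck mass
M_P = 1.22e19           # GeV, Planck mass

print(M_P_reduced / M_EW)   # ~2.4e16: about sixteen orders of magnitude
print(M_P / M_EW)           # ~1.2e17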

3. The cosmological constant (hierarchy) problem. The cosmological constant \Lambda, from the so-called Einstein’s field equations of classical relativistic gravity

\mathcal{R}_{\mu\nu}-\dfrac{1}{2}g_{\mu\nu}\mathcal{R}+\Lambda g_{\mu\nu}=8\pi G\mathcal{T}_{\mu\nu}

is estimated to be about \mathcal{O} (10^{-47})GeV^4 from the cosmological fitting procedures. The Standard Cosmological Model, with the CMB and other parallel measurements like large scale structures or supernovae data, agree with such a cosmological constant value. However, in the framework of Quantum Field Theories, it should receive quantum corrections coming from vacuum energies of the fields. Those contributions are unnaturally big, about \mathcal{O}(M_P^4) or in the framework of supersymmetric field theories, \mathcal{O}(M^4_{SUSY}) after SUSY symmetry breaking. Then, the problem is:

Why is \rho_\Lambda^{obs}<<\rho_\Lambda^{th}? Even with TeV or PeV fundamental SUSY scales (or higher) we have a serious mismatch here! The mismatch is about 60 orders of magnitude even in the best known theory! And it is about 122-123 orders of magnitude if we compare directly the cosmological constant vacuum energy we observe with the one we calculate (naively or not) with our current best theories using QFT or supersymmetric QFT (see the little numerical sketch below)! Then, this problem is a hierarchy problem and a large number problem as well. Again, and sadly, we don’t know why there is such a big gap between mass scales of the same thing! This problem is the biggest problem in theoretical physics and it is one of the worst predictions/failures in the story of Physics. However,

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”
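The mismatch itself is, however, easy to quantify; a naive sketch assuming the observed value quoted above and a Planck-mass cutoff for the vacuum energy:

# Cosmological constant problem: observed vacuum energy vs naive Planck-scale estimate
import math

rho_obs = 1e-47             # GeV^4, observed vacuum energy density (order of magnitude)
rho_naive = (1.22e19)**4    # GeV^4, naive QFT estimate with a Planck-mass cutoff

print(rho_naive / rho_obs)              # ~2e123
print(math.log10(rho_naive / rho_obs))  # ~123 orders of magnitude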

4. The strong CP problem/puzzle. From neutron electric dipole moment measurements, theoretical physicists can constrain the so-called \theta-angle of QCD (Quantum Chromodynamics). The theta angle gives an extra contribution to the QCD lagrangian:

\mathcal{L}_{\mathcal{QCD}}\supset \dfrac{1}{4g_s^2}G_{\mu\nu}G^{\mu\nu}+\dfrac{\theta}{16\pi^2}G^{\mu\nu}\tilde{G}_{\mu\nu}

The theta angle is not provided by the SM framework and it is a free parameter. Experimentally,

\theta <10^{-12}

while, from the theoretical side, it could be any number in the interval \left[-\pi,\pi\right]. Why is \theta so close to the zero/null value? That is the strong CP problem! Once again, we don’t know. Perhaps a new symmetry?

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

5. The flatness problem/puzzle. In the Standard Cosmological Model, also known as the \Lambda CDM model, the curvature of the Universe is related to the critical density and the Hubble “constant”:

\dfrac{1}{R^2}=H^2\left(\dfrac{\rho}{\rho_c}-1\right)

There, \rho is the total energy density contained in the whole Universe and \rho_c=\dfrac{3H^2}{8\pi G} is the so-called critical density. The flatness problem arises when we deduce from cosmological data that:

\left(\dfrac{1}{R^2}\right)_{data}\sim 0.01

At the Planck scale era, we can even calculate that

\left(\dfrac{1}{R^2}\right)_{Planck\;\; era}\sim\mathcal{O}(10^{-61})

This result means that the Universe is “flat”. However, why did the Universe have such a small curvature? Why is the current curvature still “small”? We don’t know. However, cosmologists working on this problem say that “inflation” and “inflationary” cosmological models can (at least in principle) solve this problem. There are even more radical (and stranger) theories, such as varying-speed-of-light theories, trying to explain this, but they are less popular than inflationary cosmologies/theories. Indeed, inflationary theories are popular because they include scalar fields, similar in Nature to the scalar particles that arise in the Higgs mechanism and other beyond the Standard Model (BSM) theories. We don’t know if inflation theory is right yet, so

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

6. The flavour problem/puzzle. The ratios of successive SM fermion mass eigenvalues (the electron, muon, and tau), as well as the angles appearing in a gadget called the CKM (Cabibbo-Kobayashi-Maskawa) matrix, are roughly of the same order of magnitude. The issue is harder to assess (but it is likely to hold as well) for constituent quark masses. However, why do they follow this particular pattern/spectrum and structure? Even more, there is a mysterious lepton-quark complementarity. The analogue of the CKM matrix in the leptonic sector is called the PMNS matrix (Pontecorvo-Maki-Nakagawa-Sakata matrix) and it describes the neutrino oscillation phenomenology. It turns out that the angles of the PMNS matrix are roughly complementary to those in the CKM matrix (remember that two angles are said to be complementary when they add up to 90 sexagesimal degrees). What is the origin of this lepton(neutrino)-quark(constituent) complementarity? In fact, the two questions are related since, roughly speaking, the mixing angles are related to the ratios of masses (quarks and neutrinos). Therefore, this problem, if solved, could shed light on the issue of the particle spectrum, or at least it could help to understand the relationship between quark masses and neutrino masses. Of course, we don’t know how to solve this puzzle at the current time. And once again:

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

7. Cosmic matter-dark energy coincidence. At the current time, the densities of matter and vacuum energy are roughly of the same order of magnitude, i.e., \rho_M\sim\rho_\Lambda=\rho_{DE}. Why now? We do not know!

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

And my weblog is only just beginning! See you soon in my next post! 🙂


LOG#056. Gravitational alpha(s).

[Image: the Greek letter alpha.]

The topic today is to review a beautiful paper and to discuss its relevance for theoretical physics. The paper is: Comment on the cosmological constant and a gravitational alpha by R.J.Adler. You can read it here: http://arxiv.org/abs/1110.3358

One of the most intriguing and mysterious numbers in Physics is the electromagnetic fine structure constant \alpha_{EM}. Its value is given by

\alpha_{EM}=7.30\cdot 10^{-3}

or equivalently

\alpha_{EM}^{-1}=\dfrac{1}{\alpha_{EM}}=137

Of course, I am assuming that the coupling constant is measured at ordinary energies, since we know that the coupling constants are not really constant but they vary slowly with energy. However, I am not going to talk about the renormalization (semi)group in this post.

Why is the fine structure constant important? Well, we can understand it if we insert the values of the constants that make up the electromagnetic alpha constant:

\alpha_{EM}=\dfrac{e^2}{\hbar c}

with e being the electron elemental charge, \hbar the Planck constant divided by two pi, and c the speed of light, and where we are using units with K_C=\dfrac{1}{4\pi \varepsilon_0}=1. Here K_C is the Coulomb constant, generally with a value 9\cdot 10^9Nm^2/C^2, but we rescale units so that it has a value equal to one. We will discuss more about frequently used systems of units soon.

As the electromagnetic alpha constant depends on the electric charge, the Coulomb electromagnetic constant (rescaled to one in some “clever” units), the Planck constant (rationalized by 2\pi since \hbar=h/2\pi) and the speed of light, it encodes some deep information about the Universe inside it. The electromagnetic alpha \alpha_{EM} is quantum and relativistic itself, and it is also related to elemental charges. Why alpha has the value it has is a complete mystery. Many people have tried to elucidate why it has the value it has today, but there is no known reason why it should have that value. Of course, the same happens with some other constants, but this one is particularly important since it is involved in some important numbers in atomic physics and in the most elemental atom, the hydrogen atom.

In atomic physics, there are two common and “natural” scales of length. The first scale of length is given by the Compton wavelength of the electron. Using the de Broglie equation, we get that the Compton wavelength is the wavelength of a photon whose energy is the same as the rest-mass energy of the particle, or mathematically speaking:

\boxed{\lambda=\dfrac{h}{p}=\dfrac{h}{mc}}

Usually, physicists employ the “reduced” or “rationalized” Compton’s wavelength. Plugging the electron mass, we get the electron reduced Compton’s wavelength:

\boxed{\lambda_C=\dfrac{\lambda}{2\pi}=\dfrac{\hbar}{m_ec}=3.86\cdot 10^{-13}m}

The second natural scale of length in atomic physics is the so-called Böhr radius. It is given by the formula:

\boxed{a_B=\dfrac{\hbar^2}{m_e e^2}=5.29\cdot 10^{-11}m}

Therefore, there is a natural ratio between those two length scales, and it turns out to be precisely the electromagnetic fine structure constant alpha \alpha_{EM}:

\boxed{R_\alpha=\dfrac{\mbox{Reduced Compton's wavelength}}{\mbox{B\"{o}hr radius}}=\dfrac{\lambda_C}{a_B}=\dfrac{\left(\hbar/m_e c\right)}{\left(\hbar^2/m_ee^2\right)}=\dfrac{e^2}{\hbar c}=\alpha_{EM}=7.30\cdot 10^{-3}}

Furthermore, we can show that the electromagnetic alpha is also related to the ratio between the electron energy in the fundamental orbit of the hydrogen atom and the electron rest energy. These two scales of energy are given by:

1) Rydberg’s energy (the electron’s ground-state energy in the fundamental orbit/orbital of the hydrogen atom):

\boxed{E_H=\dfrac{m_ee^4}{2\hbar^2}=13.6eV}

2) Electron rest energy:

\boxed{E_0=m_ec^2}

Then, the ratio of those two “natural” energies in atomic physics reads:

\boxed{R'_E=\dfrac{\mbox{Rydberg's energy}}{\mbox{Electron rest energy}}=\dfrac{m_ee^4/2\hbar^2}{m_ec^2}=\dfrac{1}{2}\left(\dfrac{e^2}{\hbar c}\right)^2=\dfrac{\alpha_{EM}^2}{2}=2.66\cdot 10^{-5}}

or equivalently

\boxed{\dfrac{1}{R'_E}=37600=3.76\cdot 10^4}
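A minimal numerical check of these two atomic ratios (SI constants assumed, so the 4\pi\varepsilon_0 factors appear explicitly, unlike in the Gaussian-unit formulas above):

# Atomic ratios: (reduced Compton wavelength)/(Bohr radius) and (Rydberg energy)/(rest energy)
import math

hbar, c, m_e, e, eps0 = 1.055e-34, 2.998e8, 9.109e-31, 1.602e-19, 8.854e-12

lam_C = hbar / (m_e * c)                                      # ~3.86e-13 m
a_B = 4.0*math.pi*eps0 * hbar**2 / (m_e * e**2)               # ~5.29e-11 m
E_H = m_e * e**4 / (2.0 * (4.0*math.pi*eps0)**2 * hbar**2)    # ~13.6 eV (here in joules)
E_0 = m_e * c**2                                              # electron rest energy

print(lam_C / a_B)   # ~7.30e-3 = alpha_EM
print(E_H / E_0)     # ~2.66e-5 = alpha_EM^2 / 2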

R.J.Adler’s paper remarks that there is a cosmological/microscopic analogue of the above two ratios, and they involve the infamous Einstein’s cosmological constant. In Cosmology, we have two natural (ultimate?) length scales:

1st. The (ultra)microscopic and ultrahigh energy (“ultraviolet” UV regulator) relevant Planck’s length L_P, or equivalently the squared value L_P^2. Its value is given by:

\boxed{L_P^2=\dfrac{G\hbar}{c^3}\leftrightarrow L_P=\sqrt{\dfrac{G\hbar}{c^3}}=1.62\cdot 10^{-35}m}

This natural length can NOT be related to any “classical” theory of gravity since it involves and uses the Planck’s constant \hbar.

2nd. The (ultra)macroscopic and ultra-low-energy (“infrared” IR regulator) relevant cosmological constant/de Sitter radius. They are usually represented/denoted by \Lambda and R_{dS} respectively, and they are related to each other in a simple way. The dimensions of the cosmological constant are given by

\boxed{\left[\Lambda \right]=\left[ L^{-2}\right]=(\mbox{Length})^{-2}}

The de Sitter radius and the cosmological constant are related through a simple equation:

\boxed{R_{dS}=\sqrt{\dfrac{3}{\Lambda}}\leftrightarrow R^2_{dS}=\dfrac{3}{\Lambda}\leftrightarrow \Lambda =\dfrac{3}{R^2_{dS}}}

The de Sitter radius is obtained from cosmological measurements thanks to the so-called Hubble parameter (or Hubble “constant”, although we do know that the Hubble “constant” is not truly a constant; calling it so is a common abuse of language) H. From cosmological data we obtain (we use the paper’s value without loss of generality):

H=\dfrac{73km/s}{Mpc}

This measured value allows us to derive the Hubble length parameter

L_H=\dfrac{c}{H}=1.27\cdot 10^{26}m

Moreover, the data also imply some energy density associated with the cosmological “constant”, generally called Dark Energy. This energy density from data is written as:

\Omega_\Lambda =\Omega^{data}_{\Lambda}

and from this, it can be also proved that

R_{dS}=\dfrac{L_H}{\sqrt{\Omega_\Lambda}}=1.46\cdot 10^{26}m

where we have introduced the experimentally deduced value \Omega_\Lambda\approx 0.76 from the cosmological parameter global fits. In fact, the cosmological constant helps us to define the beautiful and elegant formula that we can call the gravitational alpha/gravitational cosmological fine structure constant \alpha_G:

\boxed{\alpha_G\equiv \dfrac{\mbox{Planck's length}}{\mbox{normalized de Sitter radius}}=\dfrac{L_P}{\dfrac{R_{dS}}{\sqrt{3}}}=\dfrac{\sqrt{\dfrac{G\hbar}{c^3}}}{\sqrt{\dfrac{1}{\Lambda}}}=\sqrt{\dfrac{G\hbar\Lambda}{c^3}}}

or equivalently, defining the cosmological length associated to the cosmological constant as

L^2_\Lambda=\dfrac{1}{\Lambda}=\dfrac{R^2_{dS}}{3}\leftrightarrow L_\Lambda=\sqrt{\dfrac{1}{\Lambda}}=\dfrac{R_{dS}}{\sqrt{3}}

\boxed{\alpha_G\equiv \dfrac{\mbox{Planck's length}}{\mbox{Cosmological length}}=\dfrac{L_P}{L_\Lambda}=\dfrac{\sqrt{\dfrac{G\hbar}{c^3}}}{\sqrt{\dfrac{1}{\Lambda}}}=\sqrt{\dfrac{G\hbar\Lambda}{c^3}}=L_P\sqrt{\Lambda}=\dfrac{\sqrt{3}\,L_P}{R_{dS}}}

If we introduce the values of the constants, we easily obtain the gravitational cosmological alpha value and its inverse:

\boxed{\alpha_G=1.91\cdot 10^{-61}\leftrightarrow \alpha_G^{-1}=\dfrac{1}{\alpha_G}=5.24\cdot 10^{60}}
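A minimal numerical sketch reproducing these numbers from the H and \Omega_\Lambda values quoted above:

# Gravitational (cosmological) fine structure constant: alpha_G = sqrt(G*hbar*Lambda/c^3)
import math

G, hbar, c = 6.674e-11, 1.055e-34, 2.998e8
H = 73e3 / 3.086e22                 # s^-1, from H = 73 km/s/Mpc
Omega_L = 0.76

L_H = c / H                         # Hubble length, ~1.27e26 m
R_dS = L_H / math.sqrt(Omega_L)     # de Sitter radius, ~1.46e26 m
Lam = 3.0 / R_dS**2                 # cosmological constant, m^-2

alpha_G = math.sqrt(G * hbar * Lam / c**3)
print(L_H, R_dS)                    # ~1.27e26 m, ~1.46e26 m
print(alpha_G, 1.0/alpha_G)         # ~1.9e-61, ~5.2e60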

They are really small and large numbers! Following the atomic analogy, we can also create a ratio between two cosmologically relevant energy densities:

1st. The Planck energy density.

Planck’s energy is defined as

\boxed{E_P=\dfrac{\hbar c}{L_P}=\sqrt{\dfrac{\hbar c^5}{G}}=1.22\cdot 10^{19}GeV=1.22\cdot 10^{16}TeV}

The Planck energy density \rho_P is defined as the energy density of the Planck energy inside a Planck cube of side L_P, i.e., it is the energy density of the Planck energy concentrated inside a cube with volume V=L_P^3. Mathematically speaking, it is

\boxed{\rho_P=\dfrac{E_P}{L_P^3}=\dfrac{c^7}{\hbar G^2}=2.89\cdot 10^{123}\dfrac{GeV}{m^3}}

It is a huge energy density!

Remark: Energy density is equivalent to pressure in special relativity hydrodynamics. That is,

\mathcal{P}_P=\rho_P=\tilde{\rho}_P c^2=4.63\cdot 10^{113}Pa

with Pa denoting pascals (1Pa=1N/m^2) and where \tilde{\rho}_P represents here the matter (not energy) density (with units of kg/m^3). Of course, turning matter density into energy density requires a multiplication by c^2. This equivalence between vacuum pressure and energy density is one of the reasons why some astrophysicists, cosmologists and theoretical physicists call the “dark energy/cosmological constant” term a “vacuum pressure” in the study of the cosmic components derived from the total energy density \Omega.

2nd. The cosmological constant energy density.

Using the Einstein field equations, it can be shown that the cosmological constant gives a contribution to the stress-energy-momentum tensor. The component T^{0}_{\;\; 0} is related to the dark energy (a.k.a. the cosmological constant) and allows us to define the energy density

\boxed{\rho_\Lambda =T^{0}_{\;\; 0}=\dfrac{\Lambda c^4}{8\pi G}}

Using the previous equations for G as a function of Planck’s length, the Planck’s constant and the speed of light, and the definitions of Planck’s energy and de Sitter radius, we can rewrite the above energy density as follows:

\boxed{\rho_\Lambda=\dfrac{3}{8\pi}\left(\dfrac{E_P}{L_PR^2_{dS}}\right)=4.21 \dfrac{GeV}{m^3}}

Thus, we can evaluate the ratio between these two energy densities! It provides

\boxed{R_\rho =\dfrac{\mbox{CC energy density}}{\mbox{Planck's energy density}}=\dfrac{\rho_\Lambda}{\rho_P}=\left( \dfrac{3}{8\pi}\right)\left(\dfrac{L_P}{R_{dS}}\right)^2=\left(\dfrac{1}{8\pi}\right)\alpha_G^2=1.45\cdot 10^{-123}}

and the inverse ratio will be

\boxed{\dfrac{1}{R_\rho}=6.90\cdot 10^{122}}
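Both densities and their ratio can be checked numerically in the same spirit (a sketch; the value of \Lambda is the one implied by the de Sitter radius obtained above):

# Planck energy density vs cosmological constant energy density
import math

G, hbar, c = 6.674e-11, 1.055e-34, 2.998e8
GeV = 1.602e-10                          # J
Lam = 1.42e-52                           # m^-2, from R_dS ~ 1.46e26 m (see above)

rho_P = c**7 / (hbar * G**2)             # J/m^3, Planck energy density
rho_Lam = Lam * c**4 / (8.0*math.pi*G)   # J/m^3, vacuum/CC energy density

print(rho_P / GeV, rho_Lam / GeV)        # ~2.9e123 GeV/m^3 and ~4.2 GeV/m^3
print(rho_Lam / rho_P)                   # ~1.4e-123 = alpha_G^2/(8*pi)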

So, we have obtained two additional really tiny and huge values for R_\rho and its inverse, respectively. Note that the powers appearing in the ratios of cosmological lengths and cosmological energy densities follow the same scaling as in the atomic case with the electromagnetic alpha! In the electromagnetic case, we obtained R_\alpha\sim \alpha_{EM} and R'_E\sim \alpha_{EM}^2. The gravitational/cosmological analogue ratios follow the same rule, R\sim \alpha_G and R_\rho\sim \alpha_G^2, but the surprise comes from the values of the gravitational alphas and ratios. Some comments are straightforward:

1) Understanding atomic physics involved the discovery of Planck’s constant and the quantities associated with it at the fundamental quantum level (Böhr radius, the Rydberg constant,…). Understanding the cosmological constant value and the mismatch or stunning ratios between the equivalent relevant quantities likely requires that \Lambda can be viewed as a new “fundamental constant”, or/and that it plays a dynamical role somehow (e.g., varying in some unknown way with energy or local position).

2) Currently, the cosmological parameters and fits suggest that \Lambda is “constant”, but we can not be totally sure it has not varied slowly with time. There is a related idea called quintessence, in which the cosmological “constant” is related to some dynamical field and/or to inflation. However, present data say that the cosmological constant IS truly constant. How can it be so? We are not sure, since our physical theories can hardly explain the cosmological constant, its value, and why its current energy density is radically different from the vacuum energy estimates coming from Quantum Field Theories.

3) The mysterious value

\boxed{\alpha_G=\sqrt{\dfrac{G\hbar\Lambda}{c^3}}=1.91\cdot 10^{-61}}

is an equivalent way to express the biggest issue in theoretical physics: a naturalness problem called the cosmological constant problem.

In the literature, there have been alternative definitions of “gravitational fine structure constants”, unrelated to the above gravitational (cosmological) fine structure constant or gravitational alpha. Let me write down some of these alternative gravitational alphas (a quick numerical sketch of their values follows the three definitions below):

1) Gravitational alpha prime. It is defined as the ratio between the electron rest mass squared and the Planck mass squared:

\boxed{\alpha'_G=\dfrac{Gm_e^2}{\hbar c}=\left(\dfrac{m_e}{m_P}\right)^2=1.75\cdot 10^{-45}}

\boxed{\alpha_G^{'-1}=\dfrac{1}{\alpha_G^{'}}=5.71\cdot 10^{44}}

Note that m_e=0.511MeV. Since m_{proton}=1836m_e, we can also use the proton rest mass instead of the electron mass to get a new gravitational alpha.

2) Gravitational alpha double prime. It is defined as the ratio between the proton rest mass squared and the Planck mass squared:

\boxed{\alpha''_G=\dfrac{Gm_{prot}^2}{\hbar c}=\left(\dfrac{m_{prot}}{m_P}\right)^2=5.90\cdot 10^{-39}}

and the inverse value

\boxed{\alpha_G^{''-1}=\dfrac{1}{\alpha_G^{''}}=1.69\cdot 10^{38}}

Finally, we could guess an intermediate gravitational alpha, mixing the electron and proton mass.

3) Gravitational alpha triple prime. It is defined as the ratio of the product of the electron and proton rest masses to the Planck mass squared:

\boxed{\alpha'''_G=\dfrac{Gm_{prot}m_e}{\hbar c}=\dfrac{m_{prot}m_e}{m_P^2}=3.22\cdot 10^{-42}}

and the inverse value

\boxed{\alpha_G^{'''-1}=\dfrac{1}{\alpha^{'''}_G}=3.11\cdot 10^{41}}
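As announced, a quick numerical sketch of these three mass-ratio alphas (standard mass values assumed):

# Gravitational alphas built from particle masses and the Planck mass
m_e, m_p, M_P = 0.511e-3, 0.938, 1.22e19        # GeV

alpha_G_prime = (m_e / M_P)**2                  # electron version
alpha_G_2prime = (m_p / M_P)**2                 # proton version
alpha_G_3prime = (m_p * m_e) / M_P**2           # mixed version

print(alpha_G_prime, alpha_G_2prime, alpha_G_3prime)          # ~1.75e-45, ~5.9e-39, ~3.2e-42
print(1/alpha_G_prime, 1/alpha_G_2prime, 1/alpha_G_3prime)    # ~5.7e44, ~1.7e38, ~3.1e41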

We can compare the 4 gravitational alphas and their inverse values, and additionally compare them with \alpha_{EM}. We get

\alpha_G <\alpha_G^{'} <\alpha_G^{'''} < \alpha_G^{''}<\alpha_{EM}

\alpha_{EM}^{-1}<\alpha^{''-1}_G <\alpha^{'''-1}_G <\alpha^{'-1}_G < \alpha^{-1}_G

These inequalities mean that the electromagnetic fine structure constant \alpha_{EM} is (at ordinary energies) 42 orders of magnitude bigger than \alpha_G^{'}, 39 orders of magnitude bigger than \alpha_G^{'''}, 36 orders of magnitude bigger than \alpha_G^{''} and, of course, 58 orders of magnitude bigger than \alpha_G. Indeed, we could extend this analysis to include the “fine structure constant” of Quantum Chromodynamics (QCD) as well. It would be given by:

\boxed{\alpha_s=\dfrac{g_s^2}{\hbar c}=1}

since generally we define g_s=1. We note that \alpha_s >\alpha_{EM} by about 2 orders of magnitude (a factor of about 137). However, as strong nuclear forces are short range interactions, they only matter in atomic nuclei, where confinement and color forces dominate over every other fundamental interaction. Interestingly, at high energies, the QCD coupling constant has a property called asymptotic freedom. But that is another story, not to be discussed here! If we take the strong coupling alpha into account, the full hierarchy of alphas is given by:

\alpha_G <\alpha_G^{'} <\alpha_G^{'''} < \alpha_G^{''}<\alpha_{EM}<\alpha_s

\alpha_s^{-1}<\alpha_{EM}^{-1}<\alpha^{''-1}_G <\alpha^{'''-1}_G <\alpha^{'-1}_G < \alpha^{-1}_G

Fascinating! Isn’t it? Stay tuned!!!

ADDENDUM: After I finished this post, I discovered a striking (and interesting in itself) connection between \alpha_{EM} and \alpha_{G}. The relation or coincidence is the following:

\dfrac{1}{\alpha_{EM}}\approx \ln \left( \dfrac {1}{16\alpha_G}\right)

Is this relationship fundamental or accidental? The answer is unknown. However, since the electric charge (via the electromagnetic alpha) is not related a priori to the gravitational constant or the Planck mass (or to the cosmological constant via the above gravitational alpha) in any known way, I find such a coincidence up to 5 significant digits particularly stunning! Anyway, there are many unexplained numerical coincidences that are completely accidental and meaningless, so it is not clear why this numerical result should be relevant for the connection between electromagnetism and gravity/cosmology, but it is interesting at least as a curiosity and “joke” of Nature.
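A two-line numerical check of the coincidence (a sketch only; the value of \alpha_G used below is the one quoted in the boxed formula above):

import math

alpha_EM = 7.2973525693e-3   # fine structure constant
alpha_G  = 1.91e-61          # gravitational (cosmological) alpha quoted above

print(1.0/alpha_EM)                    # ~137.036
print(math.log(1.0/(16.0*alpha_G)))    # ~137.04, remarkably close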

ADDENDUM (II):

Some quotes about the electromagnetic alpha from wikipedia http://en.wikipedia.org/wiki/Fine-structure_constant

“(…)There is a most profound and beautiful question associated with the observed coupling constant, e – the amplitude for a real electron to emit or absorb a real photon. It is a simple number that has been experimentally determined to be close to 0.08542455. (My physicist friends won’t recognize this number, because they like to remember it as the inverse of its square: about 137.03597 with about an uncertainty of about 2 in the last decimal place. It has been a mystery ever since it was discovered more than fifty years ago, and all good theoretical physicists put this number up on their wall and worry about it.) Immediately you would like to know where this number for a coupling comes from: is it related to pi or perhaps to the base of natural logarithms? Nobody knows. It’s one of the greatest damn mysteries of physics: a magic number that comes to us with no understanding by man. You might say the “hand of God” wrote that number, and “we don’t know how He pushed his pencil.” We know what kind of a dance to do experimentally to measure this number very accurately, but we don’t know what kind of dance to do on the computer to make this number come out, without putting it in secretly! (…)”. R.P.Feynman, QED: The Strange Theory of Light and Matter, Princeton University Press, p.129.

“(…) If alpha [the fine-structure constant] were bigger than it really is, we should not be able to distinguish matter from ether [the vacuum, nothingness], and our task to disentangle the natural laws would be hopelessly difficult. The fact however that alpha has just its value 1/137 is certainly no chance but itself a law of nature. It is clear that the explanation of this number must be the central problem of natural philosophy.(…)” Max Born, in A.I. Miller’s book Deciphering the Cosmic Number: The Strange Friendship of Wolfgang Pauli and Carl Jung. p. 253. Publisher W.W. Norton & Co.(2009).

“(…)The mystery about α is actually a double mystery. The first mystery – the origin of its numerical value α ≈ 1/137 has been recognized and discussed for decades. The second mystery – the range of its domain – is generally unrecognized.(…)” Malcolm H. Mac Gregor, M.H. MacGregor (2007). The Power of Alpha.


LOG#050. Why riemannium?

TABLE OF CONTENTS


DEDICATORY

1. THE RIEMANN ZETA FUNCTION ζ(s)

2. THE RIEMANN HYPOTHESIS

3. THE HILBERT-POLYA CONJECTURE

4. RANDOM MATRIX THEORY

5. QUANTUM CHAOS AND RIEMANN DYNAMICS

6. THE SPECTRUM OF RIEMANNIUM

7. ζ(s) AND RENORMALIZATION

8. ζ(s) AND QUANTUM STATISTICS

9. ζ(s) AND GROUP ENTROPIES

10. ζ(s) AND THE PRIMON GAS

11. LOG-OSCILLATORS

12. LOG-POTENTIAL AND CONFINEMENT

13. HARMONIC OSCILLATOR AND TSALLIS GAS

14. TSALLIS ENTROPIES IN A NUTSHELL

15. BEYOND QM/QFT: ADELIC WORLDS

16. STRINGS, FIELDS AND VACUUM

17. SUMMARY AND OUTLOOK

DEDICATORY

This special 50th log-entry is dedicated to 2 special people and scientists who inspired (and guided) me in the hard task of starting and writing this blog.

These two people are

1st. John C. Baez, a mathematical physicist. Author of the old but always fresh This Week's Finds in Mathematical Physics, and now involved in the Azimuth blog. You can visit him here

http://johncarlosbaez.wordpress.com/

and here

http://math.ucr.edu/home/baez/

I was a mere undergraduate in the early years of the internet in my country when I began to read his TWF. If you have never done it, I urge you to do it. Read him. He is a wonderful teacher and an excellent lecturer. John is now worried about global warming and related stuff, but he keeps his mathematical interests and pedagogical gifts untouched. I miss some of the topics he used to discuss often in his new blog, but his insights about virtually everything he is involved in are really impressive. He also manages to share his enthusiastic vision of Mathematics and Science, from pure mathematics to physics. He is a great blogger and scientist!

2nd. Professor Francis Villatoro. I am really grateful to him. He tries to popularize Science in Spain with his excellent blog (written in Spanish)

http://francisthemulenews.wordpress.com/

He is a very active person in the world of Spanish Science (and its popularization). In his blog, he also tries to explain to the general public the latest news on HEP and other topics related to other branches of Physics, Mathematics or general Science. It is not an easy task! Some months ago, after some time reading and following his blog (as I still do now, like with Baez’s stuff), I realized that I could not remain a passive and simple reader or spectator on the web, so I wrote to him and asked him some questions about his experience with blogging and for advice. His comments and remarks were incredibly useful for me, especially during my first logs. I have followed several blogs over the last years (like those by Baez or Villatoro), and I had no idea what kind of style/scheme I should adopt here. I had only some fuzzy ideas about what to do, what to write and, of course, I had no idea if I could explain stuff in a simple way while keeping the physical intuition and the mathematical background I wanted to include. His early criticism was very helpful, so this post is a tribute to him as well. After all, he suggested the topic of this post to me! I encourage you to read him and his blog (as long as you know Spanish or you can use a good translator).

Finally, let me express and show my deepest gratitude to John and Francis. Two great and extraordinary people and professionals in their respective fields who inspired me (and still do) in spirit and insight during my early and difficult steps of writing this blog. I am convinced that Science is made of little, ordinary and small contributions like mine, and not only of the greatest contributions, like those John and Francis are making to the whole world. I wish them to continue making their contributions for many, many years to come.

Now, let me answer the question Francis asked me to explain here with further details. My special post/log-entry number 50…It will be devoted to tell you why this blog is called The Spectrum of Riemannium, and what is behind the greatest unsolved problem in Number Theory, Mathematics and likely Physics/Physmatics as well…Enjoy it!

1. THE RIEMANN ZETA FUNCTION ζ(s)

The Riemann zeta function is a device/object/function related to prime numbers.

In general, it is a function of complex variable s=\sigma+i\tau defined by the next equation:

\boxed{\displaystyle{\zeta (s)=\sum_{n=1}^{\infty}n^{-s}=\sum_{n=1}^{\infty}\dfrac{1}{n^s}=\prod_{p=2}^{\infty}\dfrac{1}{1-p^{-s}}=\prod_{p,\; prime}\dfrac{1}{1-p^{-s}}}}

or

\boxed{\displaystyle{\zeta (s)=\dfrac{1}{1-2^{-s}}\dfrac{1}{1-3^{-s}}\ldots\dfrac{1}{1-137^{-s}}\ldots}}

Generally speaking, the Riemann zeta function extended by analytic continuation to the whole complex plane is “more” than the classical Riemann zeta series that Euler studied much before the work of Riemann in the XIX century. The Riemann zeta function at real positive integer values is a very well-known (and admired) series among mathematicians. \zeta (1)=\infty due to the divergence of the harmonic series. Zeta values at even positive integers are related to the Bernoulli numbers, while an analytic closed-form expression for the zeta values at odd positive integers is still lacking.

The Riemann zeta function over the whole complex plane satisfies the following functional equation:

\boxed{\pi^{-\frac{s}{2}}\Gamma \left(\dfrac{s}{2}\right)\zeta (s)=\pi^{-\frac{(1-s)}{2}}\Gamma \left(\dfrac{1-s}{2}\right)\zeta (1-s)}

Equivalently, it can be also written in a very simple way:

\boxed{\xi (s)=\xi (1-s)}

where we have defined

\xi (s)=\pi^{-\frac{s}{2}}\Gamma \left(\dfrac{s}{2}\right)\zeta (s)

Riemann zeta values are an example of beautiful Mathematics. From \displaystyle{\zeta (s)=\sum_{n=1}^{\infty}n^{-s}}, then we have:

1) \zeta (0)=1+1+\ldots=-\dfrac{1}{2}.

2) \zeta (1)=1+\dfrac{1}{2}+\dfrac{1}{3}+\ldots =\infty. The harmonic series is divergent.

3) \zeta (2)=1+\dfrac{1}{2^2}+\dfrac{1}{3^2}+\ldots =\dfrac{\pi^2}{6}\approx 1.645. The famous Euler result.

4) \zeta (3)=1+\dfrac{1}{2^3}+\dfrac{1}{3^3}+\ldots \approx 1.202. An odd zeta value called Apéry’s constant, for which we do not yet know a closed-form expression in terms of other known constants (like powers of \pi).

5) \zeta (4)=\dfrac{\pi^4}{90}\approx 1.0823.

6) \zeta (-2n)=-\dfrac{\pi^{-n}}{2\Gamma (-n+1)}=0,\;\;\forall n=1,2,\ldots ,\infty. Trivial zeroes of zeta.

7) \zeta (2n)=\dfrac{(-1)^{n+1}(2\pi)^{2n}B_{2n}}{2(2n)!}\;\;\forall n=1,2,\ldots ,\infty, where B_{2n} are the Bernoulli numbers. The first 13 Bernoulli numbers are:

B_0=1, B_1=-\dfrac{1}{2}, B_2=\dfrac{1}{6}, B_3=0, B_4=-\dfrac{1}{30}, B_5=0, B_6=\dfrac{1}{42}

B_7=0, B_8=-\dfrac{1}{30}, B_9=0, B_{10}=\dfrac{5}{66}, B_{11}=0, B_{12}=-\dfrac{691}{2730}, B_{13}=0

8) We note that B_{2n+1}=0,\;\; \forall n\geq 1.

9) \zeta (-2n+1)=-\dfrac{B_{2n}}{2n}, \;\; \forall n=1,2,\ldots ,\infty.

For instance, \zeta (-1)=-\dfrac{1}{12}=1+2+3+\ldots, \zeta (-3)=\dfrac{1}{120}, and \zeta (-5)=-\dfrac{1}{252}. Indeed, \zeta (-1) arises in string theory trying to renormalize the vacuum energy of an infinite number of harmonic oscillators. The result in the bosonic string is \dfrac{2}{2-D}. In order to match with Riemann zeta function regularization of the above series, the bosonic string is asked to live in an ambient spacetime of D=26 dimensions. We also have that

\sum \vert n\vert^3=-\dfrac{1}{60}

10) \zeta (\infty)=1. The Riemann zeta value at the infinity is equal to the unit.

11) The derivative of the zeta function is \displaystyle{\zeta '(s)=-\sum_{n=1}^{\infty}\dfrac{\log n}{n^s}}. Particularly important of this derivative are:

\displaystyle{\zeta '(0)=-\sum_{n=1}^\infty \log n=-\log \prod_{n=1}^\infty n=\zeta (0)\log (2\pi)=-\dfrac{1}{2}\log (2\pi)=-\log \sqrt{2\pi}=\log \dfrac{1}{\sqrt{2\pi}}}

or \zeta '(0)=\log \sqrt{\dfrac{1}{2\pi}}

This allow us to define the factorial of the infinity as

\displaystyle{\infty !=\prod_{n=1}^{\infty}n=1\cdot 2\cdots \infty=e^{-\zeta '(0)}=\sqrt{2\pi}}

and the renormalized infinite dimensional determinant of certain operator A as:

\det _\zeta (A)=a_1\cdot a_2\cdots=\exp \left(-\zeta_A '(0)\right), with \displaystyle{\zeta _A (s)=\sum_{n=1}^\infty \dfrac{1}{a_n^s}}

12) \zeta (1+\varepsilon )=\dfrac{1}{\varepsilon}+\gamma_E +\mathcal{O} (\varepsilon ). This is a result used by theoretical physicists in dimensional renormalization/regularization. \gamma_E\approx 0.577 is the so-called Euler-Mascheroni constant.

The alternating zeta function, called the Dirichlet eta function, provides interesting values as well. The Dirichlet eta function is defined and related to the Riemann zeta function as follows:

\boxed{\displaystyle{\eta (s)=\sum_{n=1}^\infty \dfrac{(-1)^{n+1}}{n^s}=\left(1-2^{1-s}\right)\zeta (s)}}

This can be thought as “bosons made of fermions” or “fermions made of bosons” somehow. Special values of Dirichlet eta function are given by:

\eta (0)=-\zeta (0)=\dfrac{1}{2},\;\;\; \eta (1)=\log 2,\;\;\; \eta (2)=\dfrac{1}{2}\zeta (2)=\dfrac{\pi^2}{12}

\eta (3)=\dfrac{3}{4}\zeta (3)\approx \dfrac{3}{4}(1.202),\;\;\; \eta (4)=\dfrac{7}{8}\zeta (4)=\dfrac{7}{8}\left(\dfrac{\pi^4}{90}\right)
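All these special values of \zeta (s) and \eta (s) are easy to verify numerically. A minimal sketch with mpmath (mpmath.altzeta is the Dirichlet eta function, and the analytic continuation is built into mpmath.zeta):

from mpmath import mp, zeta, altzeta, diff, pi, log, sqrt

mp.dps = 20   # working precision in decimal digits

print(zeta(2), pi**2/6)                   # Euler's result
print(zeta(4), pi**4/90)
print(zeta(-1))                           # -1/12, the regularized 1+2+3+...
print(zeta(0))                            # -1/2
print(diff(zeta, 0), -log(sqrt(2*pi)))    # zeta'(0) = -log(sqrt(2*pi))
print(altzeta(1), log(2))                 # eta(1) = log 2
print(altzeta(2), pi**2/12)               # eta(2)
print(altzeta(3), 3*zeta(3)/4)            # eta(3) = (3/4)*zeta(3)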

Remark(I): \zeta(2) is important in the physics realm, since the spectrum of the hydrogen atom has the following aspect

E(n)=-\dfrac{K}{n^2}

and the Balmer formula is, as every physicist knows

\Delta E(n,m)=K\left(\dfrac{1}{n^2}-\dfrac{1}{m^2}\right)

Remark (II): The fact that \zeta (2) is finite implies both that the energy level separation of the hydrogen atom between consecutive Bohr levels tends to zero AND that the sum of ALL the possible energy levels of the hydrogen atom is finite.

Remark(III): What about an “atom”/system with spectrum E(n)=\kappa n^{-s}? If s=2, we do know that it is the case of the Kepler problem. Moreover, it is easy to observe that s=-1 corresponds to the harmonic oscillator, i.e., E(n)=\hbar \omega n. We also know that s=-2 is the infinite potential well. So the question is: what about an n^{-3} spectrum, and so on?

In summary, does the following spectrum

\boxed{E=\mathbb{K}\dfrac{1}{n^{s}}}

with energy separation/splitting

\boxed{\Delta E(n,m;s)=\mathbb{K}\left(\dfrac{1}{n^{s}}-\dfrac{1}{m^{s}}\right)}

exist in Nature for some physical system beyond the infinite potential well, the harmonic oscillator or the hydrogen atom, where s=-2, s=-1 and s=2 respectively?

It is amazing how the Riemann zeta function gets involved as a common origin of such different systems and spectra as the Kepler problem, the harmonic oscillator and the infinite potential well!

 

2. THE RIEMANN HYPOTHESIS

The Riemann Hypothesis (RH) is the greatest unsolved problem in pure Mathematics and, likely, in Physics too. It is the statement that all the non-trivial zeroes of the Riemann zeta function, i.e. all the zeroes beyond the trivial ones at s=-2n,\;\forall n=1,2,\ldots,\infty, have real part equal to 1/2. In other words, the equation or feynmanity has only the next solutions:

\boxed{\mbox{RH:}\;\;\zeta (s)=0\leftrightarrow \begin{cases} s_n=-2n,\;\forall n=1,\ldots,\infty\;\;\mbox{Trivial zeroes}\\ s_n=\dfrac{1}{2}\pm i\lambda_n, \;\;\forall n=1,\ldots,\infty \;\;\mbox{Non-trivial zeroes}\end{cases}}

I generally prefer the following projective-like version of the RH (PRH):

\boxed{\mbox{PRH:}\;\;\zeta (s)=0\leftrightarrow \begin{cases} s_n=-2n,\;\forall n=1,\ldots,\infty\;\;\mbox{Trivial zeroes}\\ s_n=\dfrac{1\pm i\overline{\lambda}_n}{2}, \;\;\forall n=1,\ldots,\infty \;\;\mbox{Non-trivial zeroes}\end{cases}}
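If you want to see some non-trivial zeroes explicitly, here is a minimal sketch with mpmath (mpmath.zetazero(n) returns the n-th zero on the critical line, so this illustrates the statement rather than proving it):

from mpmath import mp, zetazero, zeta

mp.dps = 20
for n in range(1, 6):
    rho = zetazero(n)                 # n-th non-trivial zero, 1/2 + i*lambda_n
    print(n, rho, abs(zeta(rho)))     # |zeta(rho)| should be numerically tiny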

The Riemann zeta function can be sketched on the whole complex plane, in order to obtain a radiography of the RH and what it means. Mathematicians have studied the critical strip with ingenious tools and frameworks. The now terminated ZetaGrid project verified numerically that billions of zeroes lie ON the critical line. No counterexample of a non-trivial zeta zero outside the critical line has ever been found (and there are some arguments that make it very unlikely). The RH says that the primes “have music/order/pattern” in their interior, but nobody has managed to prove the RH. The next picture shows you what the RH “says” graphically:

If you want to know how the Riemann zeroes sound, M. Watkins has made a nice audio file so you can hear their music.

You can learn how to make “music” from Riemann zeroes here http://empslocal.ex.ac.uk/people/staff/mrwatkin/zeta/munafo-zetasound.htm

And you can listen their sound here

http://empslocal.ex.ac.uk/people/staff/mrwatkin/zeta/zeta.mp3

Riemann zeroes are connected with prime numbers through a complicated formula called “the explicit formula”. The next equation holds  \forall x\geq 2 integer numbers, and non-trivial Riemann zeroes in the complex (upper) half-plane with \tau>0:

\boxed{\displaystyle{\pi (x)+\sum_{n=2}^\infty \dfrac{\pi \left( x^{1/n}\right)}{n}=\text{Li} (x)-\sum_{\lambda =\sigma+i\tau }\left(\text{Li}(x^\lambda)+\text{Li}\left( x^{1-\lambda}\right)\right)+\int_x^\infty\dfrac{du}{u(u^2-1)\ln u}-\ln 2}}

and where \pi (x) is the celebrated Gauss prime-counting function, i.e., \pi (x) counts the prime numbers less than or equal to x. This explicit formula goes back to Riemann and was rigorously proved by von Mangoldt. The explicit formula follows from both product representations of \zeta (s), the Euler product on one side and the Hadamard product on the other side.

The function \text{Li} (x), sometimes written as \text{li} (x), is the logarithmic integral

\displaystyle{\text{Li} (x) =\text{li} (x)= \int_2^x\dfrac{du}{\ln u}}

The explicit formula comes in some cool variants too. For instance, we can write

\pi (x)=\pi_0 (x)+\pi_1 (x)=\pi_{\mbox{smooth}}+\pi_{\mbox{osc-chaotic}}

where

\displaystyle{\pi_0 (x)=\sum_{n=1}^\infty\dfrac{\mu (n)}{n}\left[\mbox{Li}(x^{1/n})-\sum_{k=1}^\infty\mbox{Li}(x^{-2k/n})\right]}

and

\displaystyle{\pi_1 (x)=-2\mbox{Re}\sum_{n=1}^\infty\dfrac{\mu (n)}{n}\sum_{\alpha=1}^\infty\mbox{Li}(x^{(\sigma_\alpha+i\tau_\alpha)/n})}

For large values of x, we have the asymptotics

\pi_0 (x)\approx \mbox{Li} (x)

and

\displaystyle{\pi_1 (x)\approx -\dfrac{2}{\ln x}\sum_{\alpha=1}^\infty\dfrac{x^{\sigma_\alpha}}{\sigma_\alpha^2+\tau_\alpha^2}\left(\sigma_\alpha\cos (\tau_\alpha \ln x)+\tau_\alpha \sin (\tau_\alpha \ln x)\right)}

Remark: Please, don’t confuse the logarithmic integral with the polylogarithm function \text{Li}_s (x).

Gauss also conjectured that

\pi (x)\sim \text{Li} (x)
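A quick numerical illustration of this conjecture (now a theorem, the Prime Number Theorem): compare \pi (x), computed here with sympy, against \text{Li}(x) obtained by direct numerical integration. This is only a sketch for small values of x:

from sympy import primepi
from mpmath import mp, quad, log

mp.dps = 15

def Li(x):
    # logarithmic integral Li(x) = integral_2^x dt/ln(t)
    return quad(lambda t: 1/log(t), [2, x])

for x in [10**3, 10**4, 10**5, 10**6]:
    print(x, int(primepi(x)), float(Li(x)))   # pi(x) and its smooth approximation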

3. THE HILBERT-POLYA CONJECTURE

Date: January 3, 1982. Andrew Odlyzko wrote a letter to George Pólya about the physical grounds of the Riemann Hypothesis and the conjecture associated with Pólya himself and David Hilbert. Pólya answered and told Odlyzko that, while he was in Göttingen around 1912-1914, he was asked by Edmund Landau for a physical reason why the Riemann Hypothesis should be true, and suggested that this would be the case if the imaginary parts, say T, of the non-trivial zeros

\dfrac{1}{2}+iT

of the Riemann zeta function corresponded to eigenvalues of an unbounded and unknown self-adjoint operator \hat{T}. That statement was never published formally, but it was remembered after all, and it was transmitted from one generation to another. At the time of Pólya’s conversation with Landau, there was little basis for such speculation. However, Selberg, in the early 1950s, proved a duality between the length spectrum of a Riemann surface and the eigenvalues of its Laplacian. This so-called Selberg trace formula bore a striking resemblance to the explicit formulae of certain L-functions, which gave credibility to the speculation of Hilbert and Pólya.

 

4. RANDOM MATRIX THEORY

Dialogue(circa 1970). “(…)Dyson: So tell me, Montgomery, what have you been up to? Montgomery: Well, lately I’ve been looking into the distribution of the zeros of the Riemann zeta function.  Dyson: Yes? And?  Montgomery: It seems the two-point correlations go as….(…) Dyson: Extraordinary! Do you realize that’s the pair-correlation function for the eigenvalues of a random Hermitian matrix? It’s also a model of the energy levels in a heavy nucleus, say U-238.(…)”

A step further was taken in the 1970s by the mathematician Hugh Montgomery. He investigated and found that the statistical distribution of the zeros on the critical line has a certain property, now called Montgomery’s pair correlation conjecture. The Riemann zeros tend not to cluster too closely together, but to repel. During a visit to the Institute for Advanced Study (IAS) in 1972, he showed this result to Freeman Dyson, one of the founders of the theory of random matrices. Dyson realized that the statistical distribution found by Montgomery appeared to be the same as the pair correlation distribution for the eigenvalues of a random and “very big/large” Hermitian matrix of size NxN. These distributions are of importance in physics and mathematics. Why? It is simple. The eigenstates of a Hamiltonian, for example the energy levels of an atomic nucleus, satisfy such statistics. Subsequent work has strongly borne out the connection between the distribution of the zeros of the Riemann zeta function and the eigenvalues of a random Hermitian matrix drawn from the theory of the so-called Gaussian unitary ensemble (GUE), and both are now believed to obey the same statistics. Thus the conjecture of Pólya and Hilbert now has a more solid fundamental link to QM, though it has not yet led to a proof of the Riemann hypothesis. The pair-correlation function of the zeros is given by the function:

R_2(x)=1-\left(\dfrac{\sin \pi x}{\pi x}\right)^2
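As a small numerical experiment (a sketch, not a proof of anything), one can draw a GUE random matrix with numpy and compare its level-spacing statistics with the GUE Wigner surmise, a statistic closely related to the pair-correlation function above; all the parameter choices below are mine:

import numpy as np

rng = np.random.default_rng(0)
N = 400

# GUE matrix: H = (A + A^dagger)/2 with independent complex Gaussian entries
A = rng.normal(size=(N, N)) + 1j*rng.normal(size=(N, N))
H = (A + A.conj().T)/2
ev = np.sort(np.linalg.eigvalsh(H))

# Keep the central part of the spectrum, where the eigenvalue density is roughly constant
bulk = ev[N//4:3*N//4]
s = np.diff(bulk)
s = s/s.mean()                        # crude "unfolding": mean spacing normalized to 1

# Compare the empirical spacing histogram with the GUE Wigner surmise
# p(s) = (32/pi^2) s^2 exp(-4 s^2/pi)
hist, edges = np.histogram(s, bins=12, range=(0, 3), density=True)
centers = 0.5*(edges[:-1] + edges[1:])
wigner = (32/np.pi**2)*centers**2*np.exp(-4*centers**2/np.pi)
for c, h, w in zip(centers, hist, wigner):
    print(f"s = {c:.2f}   empirical = {h:.3f}   GUE surmise = {w:.3f}")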

In a later development that has given substantive force to this approach to the Riemann hypothesis through functional analysis and operator theory, the mathematician Alain Connes has formulated a “trace formula”, using his non-commutative geometry framework, that is actually equivalent to a certain generalized Riemann hypothesis. This fact has therefore strengthened the analogy with the Selberg trace formula to the point where it gives precise statements. However, the mysterious operator believed to provide the Riemann zeta zeroes remains hidden. Even worse, we do not even know on which space the Riemann operator acts.

However, some attempts to guess the Riemann operator have been made from a semiclassical physical viewpoint as well. Michael Berry and Jon Keating have speculated that the Hamiltonian/Riemann operator H is actually some kind of quantization of the classical Hamiltonian XP, where P is the canonical momentum associated with the position operator X (the Berry-Keating conjecture). If that conjecture is true, the simplest Hermitian operator corresponding to XP is

H = \dfrac1{2} (xp+px) = - i \left( x \dfrac{\mathrm{d}}{\mathrm{d} x} + \dfrac{1}{2} \right)

At the current time, the proposal is still quite imprecise, as it is not clear on which space this operator should act in order to get the correct dynamics, nor how to regularize it in order to get the expected logarithmic corrections. Berry and Germán Sierra, the latter in collaboration with P.K. Townsend, have conjectured that, since this operator is invariant under dilatations, perhaps the boundary condition f(nx)=f(x) for integer n may help to get the correct asymptotic results valid for big n. That is, in the large n limit we should obtain

s_n=\dfrac{1}{2} + i \dfrac{ 2\pi n}{\log n}

 

5. QUANTUM CHAOS AND RIEMANN DYNAMICS

Indeed, the Berry-Keating conjecture opened another striking line of attack on the RH, a topic that was popular in the 80s and 90s of the 20th century: the weird subject of “quantum chaos”. Quantum chaos is the subject devoted to the study of quantum systems corresponding to classically chaotic systems. The Berry-Keating conjecture shed further light on the Riemann dynamics, sketching some of the properties of the dynamical system behind the Riemann Hypothesis.

In summary, the dynamics of the Riemann operator should provide:

1st. The quantum Hamiltonian operator behind the Riemann zeroes, in addition to its classical counterpart, the classical Hamiltonian H, has a dynamics containing the scaling symmetry. As a consequence, the trajectories are the same at all energy scales.
2nd. The classical system corresponding to the Riemann dynamics is chaotic and unstable.
3rd. The dynamics lacks time-reversal symmetry.
4th. The dynamics is quasi one-dimensional.

A full dictionary translating the whole correspondence between the chaotic system corresponding to the Riemann zeta function and its main features is presented in the next table:

 

6. THE SPECTRUM OF RIEMANNIUM

In 2001, the following paper emerged: http://arxiv.org/abs/nlin/0101014. The Riemannium arXiv paper was published later (here: Reg. Chaot. Dyn. 6 (2001) 205-210). After that, Brian Hayes wrote a really beautiful, wonderful and short paper titled The Spectrum of Riemannium in 2003 (American Scientist, Volume 91, Number 4, July–August 2003, pages 296–300). I remember reading the manuscript and being totally surprised. I was shocked for several weeks. I decided that I would try to understand that stuff better and better and, maybe, make some contribution to it. The Spectrum of Riemannium was an amazing name, an incredible concept. So, I have been studying related stuff during all these years. And I have my own suspicions about what the riemannium and the zeta function are, but this is not a good place to explain all of them!

The riemannium is the mysterious physical system behind the RH. Its spectrum, the spectrum of riemannium, is given by the RH and its generalizations.

Moreover, the following sketch from Hayes’ paper is also very illustrative:

What do you think? Isn’t it suggestive? Isn’t it amazing?

 

7. ζ(s) AND RENORMALIZATION

Riemann zeta function also arises in the renormalization of the Standard Model and the regularization of determinants with “infinite size” (i.e., determinants of differential operators and/or pseudodifferential operators). For instance, the \infty-dimensional regularized determinant is defined through the Riemann zeta function as follows:

\displaystyle{\det _\zeta \mathcal{P}=e^{-\zeta_{\mathcal{P}}^{'}(0)}}

The dimensional renormalization/regularization of the SM makes use of the Riemann zeta function as well. It is ubiquitous in that approach but, as far as I know, nobody has asked why that issue is important, something I have suspected matters for a long time.

 

8. ζ(s) AND QUANTUM STATISTICS

Riemann zeta function is also used in the theory of Quantum Statistics. Quantum Statistics are important in Cosmology and Condensed Matter, so it is really striking that Riemann zeta values are related to phenomena like Bose-Einstein condensation or the Cosmic Microwave Background and also the yet to be found Cosmic Neutrino Background!

Let me begin with the easiest quantum (indeed classical) statistics, the Maxwell-Boltzmann (MB) statistics. In 3 spatial dimensions (3d) the MB distribution arises (we will work with units in which \hbar =1):

f(p)_{MB}=\dfrac{1}{(2\pi)^3}e^{\frac{\mu -E}{k_BT}}

Usually, there are 3 thermodynamical quantities that physicists wish to compute with statistical distributions: 1) the number density of particles n=N/V, 2) the energy density \varepsilon=U/V and 3) the pressure P. In the case of a MB distribution, we have the following definitions:

\displaystyle{n=\dfrac{1}{(2\pi)^3}\int d^3p e^{\frac{\mu -E}{k_BT}}}

\displaystyle{\varepsilon =\dfrac{1}{(2\pi)^3}\int d^3p Ee^{\frac{\mu -E}{k_BT}}}

\displaystyle{P =\dfrac{1}{(2\pi)^3}\int d^3p \dfrac{1}{3}\dfrac{\vert\mathbf{p}\vert^2}{E}e^{\frac{\mu -E}{k_BT}}}

We can introduce the dimensionless variables z=\dfrac{mc^2}{k_BT} and \tau =\dfrac{E}{k_BT}=\dfrac{\sqrt{p^2c^2+m^2c^4}}{k_BT}. In this way,

\vert p\vert=\dfrac{k_BT}{c}\sqrt{\tau^2-z^2}

c^2\vert\mathbf{p}\vert d\vert \mathbf{p}\vert=k_B^2T^2\tau d\tau

c^3\vert\mathbf{p}\vert^2d\vert\mathbf{p}\vert=k_B^3T^3\tau\sqrt{\tau^2-z^2}d\tau

With these definitions, the particle density becomes

\displaystyle{n=\dfrac{4\pi k_B^3T^3}{(2\pi)^3}e^{\frac{\mu}{k_BT}}\int_z^\infty d\tau (\tau^2-z^2)^{1/2}\tau e^{-\tau}}

This integral can be calculated in closed form with the aid of modified Bessel functions of the second kind:

K_n (z)=\dfrac{2^nn!}{(2n)!z^n}\int_z^\infty d\tau (\tau^2-z^2)^{n-1/2}e^{-\tau} or equivalently

K_n (z)=\dfrac{2^{n-1}(n-1)!}{(2n-2)!z^n}\int_z^\infty d\tau (\tau^2-z^2)^{n-3/2}\tau e^{-\tau}

K_{n+1} (z)=\dfrac{2nK_n (z)}{z}+K_{n-1} (z)

\displaystyle{K_2 (z)=\dfrac{1}{z^2}\int_z^\infty (\tau^2-z^2)^{1/2}\tau e^{-\tau}d\tau}

And thus, we have the next results (setting c=1 for simplicity):

\mbox{Particle number density}\equiv n=\dfrac{N}{V}=\dfrac{k_B^3T^3}{2\pi^2}z^2K_2 (z)e^{\frac{\mu}{k_BT}}=\dfrac{k_B^3T^3}{2\pi^2}\left(\dfrac{m}{k_BT}\right)^2K_2\left(\dfrac{m}{k_BT}\right)e^{\frac{\mu}{k_BT}}

\mbox{Energy density}\equiv\varepsilon=\dfrac{k_B^4T^4}{2\pi^2}\left[ 3\left(\dfrac{m}{k_BT}\right)^2K_2\left(\dfrac{m}{k_BT}\right)+\left(\dfrac{m}{k_BT}\right)^3K_1\left(\dfrac{m}{k_BT}\right)\right]e^{\frac{\mu}{k_BT}}

\mbox{Pressure}\equiv P=\dfrac{k_B^4T^4}{2\pi^2}\left(\dfrac{m}{k_BT}\right)^2K_2\left(\dfrac{m}{k_BT}\right)e^{\frac{\mu}{k_BT}}

Even the entropy density is easy to compute:

\mbox{Entropy density}\equiv s=\dfrac{m^3}{2\pi^2}e^{\frac{\mu}{k_BT}}\left[ K_1\left(\dfrac{m}{k_BT}\right)+\dfrac{4k_BT-\mu}{m}K_2\left(\dfrac{m}{k_BT}\right)\right]

These results can be simplified in some limit cases. For instance, in the massless limit z=m/k_BT\rightarrow 0. Moreover, we also know that \displaystyle{\lim_{z\rightarrow 0}z^nK_n (z)=2^{n-1}(n-1)!}. In such a case, we obtain:

n\approx \dfrac{k_B^3T^3}{\pi^2}e^{\frac{\mu}{k_BT}}

\varepsilon \approx \dfrac{3k_B^4T^4}{\pi^2}e^{\frac{\mu}{k_BT}}

P\approx \dfrac{k_B^4T^4}{\pi^2}e^{\frac{\mu}{k_BT}}

We note that \varepsilon=3P in this massless limit.
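A minimal numerical check of these Maxwell-Boltzmann formulas, in units \hbar = c = k_B = 1 and with \mu = 0 (my own illustrative choice), using scipy's modified Bessel functions:

import numpy as np
from scipy.special import kn   # modified Bessel function of the second kind, integer order

def mb_thermo(m, T, mu=0.0):
    """Maxwell-Boltzmann n, energy density and pressure in units hbar = c = k_B = 1."""
    z = m/T
    fug = np.exp(mu/T)
    n   = (T**3/(2*np.pi**2))*z**2*kn(2, z)*fug
    eps = (T**4/(2*np.pi**2))*(3*z**2*kn(2, z) + z**3*kn(1, z))*fug
    P   = (T**4/(2*np.pi**2))*z**2*kn(2, z)*fug
    return n, eps, P

T = 1.0
for m in [10.0, 1.0, 0.1, 0.01]:
    n, eps, P = mb_thermo(m, T)
    print(f"m/T = {m/T:6.2f}   eps/P = {eps/P:.4f}")   # tends to 3 in the massless limit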

Remark (I): In the massless limit, and whenever there is no degeneracy, \varepsilon =3P holds.

Remark (II): If there is a quantum degeneracy in the energy levels, i.e., if g\neq 1, we must include an extra factor of g_j=2j+1 for massive particles of spin j. For massless photons with helicity, there is a g=2 degeneracy.

Remark (III): In the D-dimensional (D=d+1) Bose gas with dispersion relationship \varepsilon_p=cp^{s}, it can be shown that the pressure is related with the energy density in the following way

\mbox{Pressure}\equiv P=\dfrac{s}{d}\dfrac{U}{V}=\dfrac{s}{d}\varepsilon

Remark (IV): Let us define p^s (n) as the number of ways an integer number can be expressed as a sum of the sth powers of integers. For instance,

p^1 (5)=7 because 5=4+1=3+2=3+1+1=2+2+1=2+1+1+1=1+1+1+1+1

p^2 (5)=2 because 5=2^2+1^2=1^2+1^2+1^2+1^2+1^2

If E_n=n^s with n\geq 1 and s>0, then we can define x=e^{-\beta}, and the (fermionic) partition function is

\displaystyle{Z=\prod_{k}\left( 1+e^{\frac{\mu-E_k}{k_BT}}\right)}

We will see later that \displaystyle{\sum_{N=0}^\infty x^N=\begin{cases}1+x,\;\; FD \\ \dfrac{1}{1-x},\;\; BE\end{cases}}

In the bosonic case with \mu =0, the partition function is nothing but the generating function of the partitions p^s (n):

\displaystyle{Z(x=e^{-\beta})=\prod_{n=1}^\infty \dfrac{1}{1-x^{n^s}}=\sum_{n=1}^\infty p^s (n) x^n\approx \int_1^\infty dn\, p^s (n) e^{-\beta n}}

The Hardy-Ramanujan inversion formula reads (for the case s=1 only):

p(n) \approx \dfrac{1}{4\sqrt{3}\,n}e^{\pi\sqrt{2n/3}}
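A quick check of the Hardy-Ramanujan asymptotic against the exact p(n), computed here with a simple dynamic-programming recurrence (just a sketch; exact partition counters also exist in sympy):

import math

def partitions_upto(N):
    """Exact partition numbers p(0..N) via the standard coin-counting recurrence."""
    p = [0]*(N + 1)
    p[0] = 1
    for k in range(1, N + 1):        # allow parts of size k
        for n in range(k, N + 1):
            p[n] += p[n - k]
    return p

p = partitions_upto(200)
for n in [10, 50, 100, 200]:
    hr = math.exp(math.pi*math.sqrt(2*n/3))/(4*math.sqrt(3)*n)
    print(n, p[n], f"{hr:.3e}", f"ratio = {hr/p[n]:.3f}")   # the ratio slowly approaches 1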

Remark (V): There are some useful integrals in quantum statistics. They are the so-called Bose-Einstein/Fermi-Dirac integrals

\displaystyle{\int_0^\infty dx \dfrac{x^{n-1}}{e^x\mp 1}=\begin{cases}\Gamma (n) \zeta (n), \;\; BE\\ \Gamma (n)\eta (n)=\Gamma (n) (1-2^{1-n})\zeta (n),\;\; FD\end{cases}}
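These integrals are easy to verify numerically. A sketch with scipy and mpmath (the upper integration limit is truncated to 100, an arbitrary cutoff which is harmless here because the integrands decay exponentially):

import numpy as np
from scipy.integrate import quad
from mpmath import gamma, zeta

def bose_integral(n):
    val, _ = quad(lambda x: x**(n - 1)/(np.exp(x) - 1), 0, 100)
    return val

def fermi_integral(n):
    val, _ = quad(lambda x: x**(n - 1)/(np.exp(x) + 1), 0, 100)
    return val

for n in [2, 3, 4]:
    print(n, bose_integral(n),  float(gamma(n)*zeta(n)))                    # BE case
    print(n, fermi_integral(n), float(gamma(n)*(1 - 2**(1 - n))*zeta(n)))   # FD case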

The BE-FD quantum distributions in 3d are defined as follows:

\displaystyle{f(p)=\dfrac{1}{(2\pi)^3}\sum_{n=1}^{\infty}(\mp)^{n+1}e^{-n\frac{(E-\mu)}{k_BT}}}

where the minus sign corresponds to FD and the plus sign to BE.

We will firstly study the BE distribution in 3d. We have:

\displaystyle{n=\dfrac{1}{(2\pi)^3}\int d^3p \left(e^{\frac{E-\mu}{k_BT}}-1\right)^{-1}=\dfrac{1}{(2\pi)^3}\int d^3p \sum_{n=1}^{\infty}(+1)^{n+1}e^{\frac{n\mu-nE}{k_BT}}}

Introducing a scaled temperature T'=T/n, we get

\displaystyle{n=\sum_{n=1}^{\infty}\left[\dfrac{1}{(2\pi)^3}\int d^3p e^{\frac{n\mu-nE}{k_BT'}}\right]=\sum_{n=1}^{\infty}\dfrac{k_B^3T^3}{2\pi^2}\dfrac{1}{n^3}\left(\dfrac{nm}{k_BT}\right)^2K_2\left(\dfrac{nm}{k_BT}\right)e^{\frac{n\mu}{k_BT}}}

\displaystyle{\varepsilon=\sum_{n=1}^{\infty}\dfrac{k_B^4T^4}{n^4(2\pi^2)}\left[3\left(\dfrac{nm}{k_BT}\right)^2K_2\left(\dfrac{nm}{k_BT}\right)+\left(\dfrac{nm}{k_BT}\right)^3K_1\left(\dfrac{nm}{k_BT}\right)\right]e^{\frac{n\mu}{k_BT}}}

\displaystyle{P=\sum_{n=1}^{\infty}\dfrac{k_B^4T^4}{n^4(2\pi^2)}\left(\dfrac{nm}{k_BT}\right)^2K_2\left(\dfrac{nm}{k_BT}\right)e^{\frac{n\mu}{k_BT}}}

Again, we can study a particularly simple case: the massless limit m\rightarrow 0 with \mu\rightarrow 0. In this case, we get:

\displaystyle{n=\dfrac{k_B^3T^3}{\pi^2}\sum_{n=1}^\infty \dfrac{1}{n^3}=\dfrac{k_B^3T^3}{\pi^2}\zeta (3)\approx 1.202\dfrac{k_B^3T^3}{\pi^2}}

\displaystyle{\varepsilon=\sum_{n=1}^\infty\dfrac{3(k_BT)^4}{\pi^2}\dfrac{1}{n^4}=\dfrac{3(k_BT)^4\zeta (4)}{\pi^2}=\dfrac{\pi^2}{30}(k_BT)^4}

\displaystyle{P=\sum_{n=1}^\infty\dfrac{(k_BT)^4}{\pi^2}\dfrac{1}{n^4}=\dfrac{(k_BT)^4\zeta (4)}{\pi^2}=\dfrac{\pi^2(k_BT)^4}{90}}

The FD distribution in 3d can be studied in a similar way. Following the same approach as the BE distribution, we deduce that:

\displaystyle{n=\sum_{n=1}^\infty (-1)^{n+1}\dfrac{(k_BT)^3}{2\pi^2n^3}\left(\dfrac{nm}{k_BT}\right)^2K_2\left(\dfrac{nm}{k_BT}\right)e^{\frac{\mu n}{k_BT}}}

\displaystyle{\varepsilon= \sum_{n=1}^\infty (-1)^{n+1}\dfrac{(k_BT)^4}{2\pi^2 n^4}\left[3\left(\dfrac{nm}{k_BT}\right)^2K_2\left(\dfrac{nm}{k_BT}\right)+\left(\dfrac{nm}{k_BT}\right)^3K_1\left(\dfrac{nm}{k_BT}\right)\right]e^{\frac{\mu n}{k_BT}}}

\displaystyle{P=\sum_{n=1}^\infty (-1)^{n+1}\dfrac{(k_BT)^4}{2\pi^2}\dfrac{1}{n^4}\left(\dfrac{nm}{k_BT}\right)^2K_2\left(\dfrac{nm}{k_BT}\right)e^{\frac{n\mu}{k_BT}}}

and again the massless limit m=0 and \mu\rightarrow 0 provide

\displaystyle{n\approx \dfrac{(k_BT)^3}{\pi^2}\sum_{n=1}^\infty (-1)^{n+1}\dfrac{1}{n^3}=\dfrac{(k_BT)^3}{\pi^2}\eta (3)=\dfrac{(k_BT)^3}{\pi^2}\left(\dfrac{3}{4}\right)\zeta (3)}

\displaystyle{\varepsilon\approx \dfrac{3(k_BT)^4}{\pi^2}\sum_{n=1}^\infty (-1)^{n+1}\dfrac{1}{n^4}=\dfrac{3(k_BT)^4}{\pi^2}\eta (4)=\dfrac{3(k_BT)^4}{\pi^2}\dfrac{7}{8}\zeta (4)=\dfrac{\pi^2(k_BT)^4}{30}\left(\dfrac{7}{8}\right)}

\displaystyle{P\approx \dfrac{(k_BT)^4}{\pi^2}\sum_{n=1}^\infty (-1)^{n+1}\dfrac{1}{n^4}=\left(\dfrac{7}{8}\right)\dfrac{\pi^2(k_BT)^4}{90}}

Remark (I): For photons \gamma with degeneracy g=2 we obtain

n_\gamma =\dfrac{2\zeta (3) (k_BT)^3}{\pi^2}

\varepsilon_\gamma= 3P_\gamma =\dfrac{\pi^2 (k_BT)^4}{15}

s_\gamma =P'(T)=\dfrac{4}{3}\left(\dfrac{\pi^2}{15}\right)(k_BT)^3=\dfrac{2\pi^4}{45\zeta (3)}n

Remark (II): In Cosmology, Astrophysics and also in High Energy Physics, the following units are used

1eV=1.602\cdot 10^{-19}J

\hbar=1=6.58\cdot 10^{-22}MeVs=7.64\cdot 10^{-12}Ks

\hbar c=1=0.19733GeV\cdot fm=0.2290 K\cdot cm

1 K=0.1532\cdot 10^{-36}g\cdot c^2

The Cosmic Microwave Background is the relic photon radiation of the Big Bang, and thus it has a blackbody temperature, with the peak of its spectrum in the microwave band of the electromagnetic spectrum. Its value is:

T_\gamma \approx 2.725K

Indeed, it also implies that the relic photon density is about n_\gamma =410\dfrac{1}{cm^3}

It is also speculated that there has to be a Cosmic Neutrino Background relic from the Big Bang. From theoretical Cosmology, it is related to the photon CMB temperature in the following way:

T_\nu =\left(\dfrac{4}{11}\right)^{1/3}2.7K or equivalently

T_\nu\approx 1.9K

This temperature implies a relic neutrino density (per species, i.e., with g_\nu=1) about

n_\nu=54\dfrac{1}{cm^3}
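A short sketch reproducing these relic number densities from the formulas of this section (the numerical inputs, like \hbar c/k_B in K\cdot cm, are taken from the unit list above; results are approximate):

import math

zeta3 = 1.2020569            # zeta(3)
hbarc_over_kB = 0.2290       # hbar*c/k_B in K*cm, from the unit list above

def n_massless_boson(T_K, g=2):
    """n = g*zeta(3)/pi^2 * (k_B T/(hbar c))^3, returned in cm^-3."""
    return g*zeta3/math.pi**2*(T_K/hbarc_over_kB)**3

T_cmb = 2.725
T_nu  = (4/11)**(1/3)*T_cmb

n_gamma = n_massless_boson(T_cmb, g=2)          # CMB photons, ~410 per cm^3
n_nu    = 0.75*n_massless_boson(T_nu, g=1)      # one neutrino degree of freedom (FD factor 3/4)

print(f"T_nu    = {T_nu:.2f} K")
print(f"n_gamma = {n_gamma:.0f} cm^-3")
print(f"n_nu    = {n_nu:.0f} cm^-3")   # ~54-56 cm^-3, depending on the input temperature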

The cosmological density entropy due to these particles is

s_0=\dfrac{S_0}{V}=\dfrac{4\pi^2}{45}\left[1+\dfrac{2\cdot 3}{2}\left(\dfrac{7}{8}\right)\left(\dfrac{4}{11}\right)\right]T_{0\gamma}^3=2810\dfrac{1}{cm^3}\left( \dfrac{T_{0\gamma}}{2.7K}\right)^3

and then we get

s_0\approx 7.03n_{0\gamma}

Remark (III): In Cosmology, for fermions in 3d (note that for massless bosons we also have \varepsilon=3P, and that in the boson case we must drop the factors \left( 7/8\right), \left( 3/4\right), \left( 7/6\right) in the next numerical values) we can compute

n=\begin{cases}\left(\dfrac{g}{2}\right)\left(\dfrac{3}{4}\right)\dfrac{2\zeta (3)}{\pi^2}(k_BT)^3\\ \left(\dfrac{g}{2}\right)\left(\dfrac{3}{4}\right)31.700\left(\dfrac{k_BT}{GeV}\right)^3\dfrac{1}{fm^3}\\ \left(\dfrac{g}{2}\right)\left(\dfrac{3}{4}\right)20.288\left(\dfrac{T}{K}\right)^3\dfrac{1}{cm^3}\end{cases}

\varepsilon=3P=\begin{cases}\left(\dfrac{g}{2}\right)\left(\dfrac{7}{8}\right)\left(\dfrac{\pi^2}{15}\right)(k_BT)^4\\ \left(\dfrac{g}{2}\right)\left(\dfrac{7}{8}\right)(85.633)\left(\dfrac{k_BT}{GeV}\right)^4\dfrac{GeV}{fm^3}\\ \left(\dfrac{g}{2}\right)\left(\dfrac{7}{8}\right)\left(0.841\cdot 10^{-35}\right)\left(\dfrac{T}{K}\right)^4\dfrac{g}{cm^3}\end{cases}

s=\dfrac{S}{V}=\left(\dfrac{g}{2}\right)\left(\dfrac{7}{8}\right)\left(\dfrac{4\pi^2}{45}\right)(k_BT)^3=\dfrac{7}{6}\left[\dfrac{2\pi^4}{45\zeta (3)}\right] n

Remark (IV): An example of the computation of degeneracy factor is the quark-gluon plasma degeneracy g_{QGP}. Firstly we compute the gluon and quark degeneracies

g_g=(\mbox{color})(\mbox{spin})=(N_c^2-1)\cdot 2=8\cdot 2=16

g_q=(p\overline{p})(\mbox{spin})(\mbox{color})(\mbox{flavor})=2\cdot 2\cdot 3\cdot N_f=12N_f

Then, the QG plasma degeneracy factor is

g_{QGP}=g_g+\dfrac{7}{8}g_q=16+\dfrac{7}{8}12N_f=16+\dfrac{21}{2}N_f \leftrightarrow \boxed{g_{QGP}=16+\dfrac{21}{2}N_f}

In general, for charged leptons and nucleons g=2, g=1 for neutrinos (per species, of course), and g=2 for gluons and photons. Remember that massive particles with spin j will have g_j=2j+1.

Remark (V): For the Planck distribution, we also get the known result for the thermal distribution of the blackbody radiation

\displaystyle{I(T)=\int_0^\infty f(\nu ,T)d\nu=\dfrac{8\pi h}{c^3}\int_0^\infty \dfrac{\nu^3d\nu}{e^{\frac{h\nu}{k_BT}}-1}=\dfrac{8\pi^5k_B^4T^4}{15c^3h^3}}

Remark (VI): Sometimes the following nomenclature is used

i) Extremely degenerated gas if \mu>>k_BT

ii) Non-degenerated gas if \mu <<-k_BT

iii) Extremely relativistic gas ( or ultra-relativistic gas) if p>> mc

iv) Non-relativistic gas if p<<mc

 

 

9. ζ(s) AND GROUP ENTROPIES

Let us define the following shift operator \hat{T}:

\hat{T}f(x)=f(x+\sigma)

where \sigma\in \mathbb{R}. Moreover, there is certain isomorphism  between the shift operator space and the space of functions through the map \hat{T}\leftrightarrow x^\sigma.

We define the generalized logarithm as the image under the previous map of \hat{T}. That is:

\displaystyle{\mbox{Log}_G(x)\equiv \dfrac{1}{\sigma}\sum_{n=l}^{m}k_n x^{\sigma n}}

where l,m\in \mathbb{Z}, with l<m, m-l=r and x>0. Furthermore, the next constraints are also imposed for every generalized logarithm:

1st. \displaystyle{\sum_{n=l}^m k_n=0}.

2nd. \displaystyle{\sum_{n=l}^m nk_n=c}, k_m\neq 0, and k_l\neq 0.

3rd. \displaystyle{\sum_{n=l}^m\vert n\vert^l k_n=K_l}, \forall l=2,3,\ldots ,m-l and where K_l \in \mathbb{R}.

With these definitions we also have that

A) \mbox{Log}_G(x)=\ln (x)

B) \mbox{Log}_G(1)=0

Examples of generalized logarithms are:

1) The Tsallis logarithm.

\mbox{Log}_T(x)=\dfrac{x^{1-q}-1}{1-q}

2) The Kaniadakis logarithm.

\mbox{Log}_K(x)=\dfrac{x^\kappa-x^{-\kappa}}{2\kappa}

3) The Abe logarithm.

\mbox{Log}_A(x)=\dfrac{x^{\sigma -1}-x^{\sigma^{-1}-1}}{\sigma-\sigma^{-1}}

4) The biparametric logarithm.

\mbox{Log}_B(x)=\dfrac{x^a-x^b}{a-b}

with a=\sigma-1 and b=\sigma^{-1}-1 in the case of the Abe logarithm.

Group entropies are defined through the use of generalized logarithms. Define some discrete probability distribution \left[ p_i\right]_{i=1,\ldots,W} with normalization \displaystyle{\sum_{i=1}^Wp_i=1}. Therefore, the group entropy is the following functional sum:

\boxed{\displaystyle{S_G=k_B\sum_{i=1}^{W}p_i \mbox{Log}_G \left(\dfrac{1}{p_i}\right)}}

where we have used the previous definition of generalized logarithm and the Boltzmann’s constant k_B is a real number. It is called group entropy due to the fact that S_G is connected to some universal formal group. This formal group will determine some correlations for the class of physical systems under study and its invariant properties. In fact, the Tsallis logarithm itself is related to the Riemann zeta function through a beautiful equation! Under the Tsallis group exponential, the isomorphism x\leftrightarrow e^t is defined to be e_G^t=\dfrac{e^{(1-q)t}-1}{1-q}, and thus we easily get:

\displaystyle{\dfrac{1}{\Gamma (s)}\int_0^\infty\dfrac{t^{s-1}}{\dfrac{e^{(1-q)t}-1}{1-q}}\,dt=\dfrac{\zeta (s)}{(1-q)^{s-1}}}

\forall s such that Re (s)>1 and q<1.
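The last integral identity can be checked numerically with mpmath. A minimal sketch (the values of s and q below are arbitrary choices satisfying Re(s)>1 and q<1):

from mpmath import mp, quad, gamma, zeta, exp, inf

mp.dps = 15

def e_G(t, q):
    """Tsallis group exponential: (exp((1-q)*t) - 1)/(1 - q)."""
    return (exp((1 - q)*t) - 1)/(1 - q)

def lhs(s, q):
    return quad(lambda t: t**(s - 1)/e_G(t, q), [0, inf])/gamma(s)

def rhs(s, q):
    return zeta(s)/(1 - q)**(s - 1)

for s, q in [(2, 0.5), (3, 0.3), (2.5, 0.8)]:
    print(s, q, lhs(s, q), rhs(s, q))   # the two columns should agree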

 

10. ζ(s) AND THE PRIMON GAS

The primon gas/free Riemann gas is a statistical mechanics toy model illustrating in a simple way some correspondences between number theory and concepts in statistical physics, quantum mechanics, quantum field theory and dynamical systems.

The primon gas IS a quantum field theory (QFT) of a set of non-interacting particles, called the “primons”. It is also named a gas or a free model because the particles are non-interacting. There is no potential. The idea of the primon gas was independently discovered by Donald Spector (D. Spector, Supersymmetry and the Möbius Inversion Function, Communications in Mathematical Physics 127 (1990) pp. 239-252) and Bernard Julia (Bernard L. Julia, Statistical theory of numbers, in Number Theory and Physics, eds. J. M. Luck, P. Moussa, and M. Waldschmidt, Springer Proceedings in Physics, Vol. 47, Springer-Verlag, Berlin, 1990, pp. 276-293). There have been later works by Bakas and Bowick (I. Bakas and M.J. Bowick, Curiosities of Arithmetic Gases, J. Math. Phys. 32 (1991) p. 1881) and Spector (D. Spector, Duality, Partial Supersymmetry, and Arithmetic Number Theory, J. Math. Phys. 39 (1998) pp. 1919-1927) in which the connection of such systems to string theory was explored.

This model is based on some simple hypothesis:

1st. Consider a simple quantum Hamiltonian, H, having eigenstates \vert p\rangle labelled by the prime numbers “p”.

2nd. The eigenenergies or spectrum are given by E_p and they have energies proportional to \log p. Mathematically speaking,

H\vert p\rangle = E_p \vert p\rangle with E_p=E_0 \log p

Please, note the natural emergence of a “free” scale of energy E_0. What is this scale of energy? We do not know!

3rd. The second quantization/second-quantized version of this Hamiltonian converts states into particles, the “primons”. Multi-particle states are defined in terms of the numbers k_p of primons in the single-particle states p:

|N\rangle = |k_2, k_3, k_5, k_7, k_{11}, \ldots, k_{137},\ldots, k_p \ldots\rangle

This corresponds to the factorization of N into primes:

N = 2^{k_2} \cdot 3^{k_3} \cdot 5^{k_5} \cdot 7^{k_7} \cdot 11^{k_{11}} \cdots 137^{k_{137}}\cdots p^{k_p} \cdots

The labelling by the integer “N” is unique, since every number has a unique factorization into primes.

The energy of such a multi-particle state is clearly

\displaystyle{E(N) = \sum_p k_p E_p = E_0 \cdot \sum_p k_p \log p = E_0 \log N}

4th. The statistical mechanics partition function Z IS, for the (bosonic) primon gas, the Riemann zeta function!

\displaystyle{Z_B(T) \equiv\sum_{N=1}^\infty \exp \left(-\dfrac{E(N)}{k_B T}\right) = \sum_{N=1}^\infty \exp \left(-\dfrac{E_0 \log N}{k_B T}\right) = \sum_{N=1}^\infty \dfrac{1}{N^s} = \zeta (s)}

with s=E_0/k_BT=\beta E_0, and where k_B is the Boltzmann’s constant and T is the absolute temperature. The divergence of the zeta function at the value s=1 (corresponding to the harmonic sum) is due to the divergence of the partition function at certain temperature, usually called Hagedorn temperature. The Hagedorn temperature is defined by:

T_H=\dfrac{E_0}{k_B}

This temperature represents a limit beyond which the system of (bosonic) primons cannot be heated up. To understand why, we can calculate the energy

E=-\dfrac{\partial}{\partial \beta}\ln Z_B=-E_0\dfrac{\zeta' (\beta E_0)}{\zeta (\beta E_0)}\approx \dfrac{E_0}{s-1}

A similar treatment can be built up for fermions rather than bosons, but here the Pauli exclusion principle has to be taken into account, i.e. two primons cannot occupy the same single-particle state. Therefore the occupation number k_p can be 0 or 1 for every single-particle state. As a consequence, the many-body states are labeled not by the natural numbers, but by the square-free numbers. These numbers are sieved from the natural numbers by the Möbius function. The calculation is a bit more complex, but the partition function for a non-interacting fermion primon gas reduces to the relatively simple form

Z_F(T)=\dfrac{\zeta (s)}{\zeta (2s)}
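Both partition functions can be checked by brute force. A sketch (the sums over N are truncated, so the agreement with \zeta (s) and \zeta (s)/\zeta (2s) is only approximate; square-free integers are detected with sympy's factorint):

from mpmath import zeta
from sympy import factorint

s = 2.0          # s = E_0/(k_B*T), an illustrative value
Nmax = 20000     # truncation of the sums over multi-particle states

# Bosonic primon gas: all integers N contribute, Z_B = zeta(s)
Z_B = sum(N**(-s) for N in range(1, Nmax + 1))
print(Z_B, float(zeta(s)))

# Fermionic primon gas: only square-free N contribute, Z_F = zeta(s)/zeta(2s)
def squarefree(N):
    return all(e == 1 for e in factorint(N).values())

Z_F = sum(N**(-s) for N in range(1, Nmax + 1) if squarefree(N))
print(Z_F, float(zeta(s)/zeta(2*s)))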

The canonical ensemble is of course not the only ensemble used in statistical physics. Julia extended the Riemann gas approach to the grand canonical ensemble by introducing a chemical potential \mu (Julia, B. L., 1994, Physica A 203(3-4), 425), and thus he replaced the primes p with new primes pe^{-\mu}. This generalisation of the Riemann gas is called the Beurling gas, after the Swedish mathematician Beurling, who had generalised the notion of prime numbers. Examining a boson primon gas with fugacity -1, one finds that its partition function becomes

\overline{Z}_B=\dfrac{\zeta (2s)}{\zeta (s)}

Remarkable interpretation: if we pick a system formed by two sub-systems not interacting with each other, the overall partition function is simply the product of the individual partition functions of the subsystems. From the previous equation of the free fermionic Riemann gas we get exactly this structure, and so there are two decoupled systems: firstly, a fermionic “ghost” Riemann gas at zero chemical potential and, secondly, a boson Riemann gas with energy levels given by E(N)=2E_0\ln p_N. Julia also calculated the appropriate Hagedorn temperatures and analysed how the partition functions of two different number-theoretical gases, the Riemann gas and the “log-gas”, behave around the Hagedorn temperature. Although the divergence of the partition function hints at the breakdown of the canonical ensemble, Julia also claims that the continuation across or around this critical temperature can help understand certain phase transitions in string theory or in the study of quark confinement. The Riemann gas, as a mathematically tractable model, has been followed with much attention because the asymptotic density of states grows exponentially, \rho (E)\sim e^E, just as in string theory. Moreover, using arithmetic functions it is not extremely hard to define a transition between bosons and fermions by introducing an extra parameter, called kappa \kappa, which defines an imaginary particle, the non-interacting parafermion of order \kappa. This order parameter counts how many parafermions can occupy the same state, i.e. the occupation number of any state falls into the interval \left[0,\kappa-1\right]; therefore \kappa=2 corresponds to normal fermions, while \kappa\rightarrow\infty gives back the usual bosons. Furthermore, the partition function of a free, non-interacting \kappa-parafermion gas can be defined to be (Bakas, I., and M. J. Bowick, 1991, J. Math. Phys. 32(7), 1881):

Z_\kappa=\dfrac{\zeta (s)}{\zeta (\kappa s)}

Indeed, Bakas et al. proved, using the Dirichlet convolution \star, how one can introduce free mixing of parafermions with different orders which do not interact with each other

\displaystyle{f\star g=\sum_{d\vert n}f(d)g\left(\dfrac{n}{d}\right)}

where the symbol d\vert n means d is a divisor of n. This operation preserves the multiplicative property of the classically defined partition functions, i.e., Z_{\kappa_1\star \kappa_2}=Z_{\kappa_1}\star Z_{\kappa_2}. It is even more intriguing how interaction can be incorporated into the mixing by modifying the Dirichlet convolution with a kernel function or twisting factor

\displaystyle{f\odot g=\sum_{d\vert n}f(d)g\left( \dfrac{n}{d}\right) K(n,d)}

Using the unitary convolution Bakas establishes a pedagogically illuminating case, the mixing of two identical boson Riemann gases. He shows that

Z_\infty\star Z_\infty=\dfrac{\zeta ^2(s)}{\zeta(2s)}=\dfrac{\zeta (s)}{\zeta(2s)}\zeta (s)=Z_2Z_\infty=Z_FZ_B

This result has an amazing meaning. Two identical boson Riemann gases interacting with each other through the unitary twisting, are equivalent to mixing a fermion Riemann gas with a boson Riemann gas which do not interact with each other. Therefore, one of the original boson components suffers a transmutation/mutation into a fermion gas!

Remark (I): the Möbius function, which is the identity function with respect to the \star operation (i.e. free mixing), reappears in supersymmetric quantum field theories as a possible representation of the (-1)^F operator, where F is the fermion number operator! In this context, the fact that \mu (n)=0 for non-square-free numbers is the manifestation of the Pauli exclusion principle itself! In any QFT with fermions, (-1)^F is a unitary, hermitian, involutive operator where F is the fermion number operator and is equal to the sum of the lepton number plus the baryon number, i.e., F=B+L, for all particles in the Standard Model and in most SUSY QFTs. The action of this operator is to multiply bosonic states by 1 and fermionic states by -1. This is always a global internal symmetry of any QFT with fermions and corresponds to a rotation by an angle 2\pi. It splits the Hilbert space into two superselection sectors. Bosonic operators commute with (-1)^F whereas fermionic operators anticommute with it. This operator is, therefore, most useful in supersymmetric field theories.

Remark (II): potential attacks on the Riemann Hypothesis  may lead to advances in physics and/or mathematics, i.e., progress in Physmatics!

Remark (III): the energy of the ground state is taken to be zero and the energy spectrum of the excited state is E(n)=E_0\ln (p_n), where p_n, n=2,3,5,\ldots, runs over the prime numbers. Let N and E denote now the number of particles in the ground state and the total energy of the system, respectively. The fundamental theorem of arithmetic allows only one excited state configuration for a given energy

E=E_0\ln (n)

where n is an integer. It immediately means that this gas preserves its quantum nature at any temperature, since only one quantum state is permitted to be occupied. The number fluctuation of any state (even the ground state) is therefore zero. In contrast, the change in the number of particles in the ground state \delta n_0 predicted by the canonical ensemble is a smooth, non-vanishing function of the temperature, while the grand-canonical ensemble still exhibits a divergence. This discrepancy between the microcanonical (combinatorial) and the other two ensembles remains even in the thermodynamic limit.

One could argue that the Riemann gas is fictitious/unreal and its spectrum is unrealisable/unphysical. However, we physicists think otherwise, since the spectrum E_N=\ln (N) does not increase with N more rapidly than N^2, and therefore the existence of a quantum mechanical potential supporting this spectrum is possible (e.g., via the inverse scattering transform or supplementary tools). And of course the question is: what kind of system has such a spectrum?

Some tentative ideas for the potential, based on elementary Quantum Mechanics, will be given in the next section.

 

11. LOG-OSCILLATORS

Instead of considering the free Riemann gas, we could ask Quantum Mechanics whether there is some potential providing the logarithmic spectrum of the previous section. Indeed, there exists such a potential. Let us factorize any natural number in terms of its prime “atoms”:

N=p_1^{n_1}p_2^{n_2}\cdots p_m^{n_m}

Take the logarithm

\log N=\log \left(p_1^{n_1}p_2^{n_2}\cdots p_m^{n_m}\right)=n_1\log p_1+n_2\log p_2+\ldots+n_m\log p_m

\displaystyle{\log N=\sum_{i=1}^{m}n_i\log p_i}

where p_i are prime numbers (note that if we include “1” as a prime number it gives a zero contribution to the sum).

Now, suppose a logarithmic oscillator spectrum, i.e.,

\varepsilon_i=\log p_i with p_i=(1),2,3,5,7,11,13,\ldots,137,\ldots,\infty

with i=0,1,2,3,4,\ldots,\infty. In order to have a “riemann gas”/riemannium, we impose a spectrum labelled in the following fashion

\varepsilon_s =\log (2s+1) \forall s=0,1,2,3,\ldots,\infty

Equivalently, we could also define the spectrum of interacting riemannium gas as

\varepsilon_s=\log (s) \forall s=1,2,3,\ldots,\infty

In addition to this, suppose the next quantum postulates:

1st. Logarithmic potential:

V(x)=V_0\ln\dfrac{\vert x\vert}{L} with positive constants V_0, L>0

From the physical viewpoint, the positive constant V_0 means repulsive interaction (force).

2nd. Bohr-Sommerfeld quantization rule:

a) \displaystyle{I=\dfrac{1}{2\pi}\oint pdx=\hbar \left(s+\dfrac{1}{2}\right)}\; \forall s=0,1,\ldots,\infty

or equivalently we could also get

b) \displaystyle{I=\dfrac{1}{2\pi}\oint pdx=\hbar s}\; \forall s=1,2,\ldots,\infty

3rd. Turning point condition:

x_s=L\exp \left(\dfrac{\varepsilon_s}{V_0}\right)

In the case of 2a) we would deduce that

\displaystyle{\dfrac{\hbar \pi}{2}\left(s+\dfrac{1}{2}\right)=\int_0^{x_s}dx\sqrt{2m\left(\varepsilon_s-V_0\ln \dfrac{x}{L}\right)}}

so

\displaystyle{\dfrac{\hbar \pi}{2}\left(s+\dfrac{1}{2}\right)=\sqrt{2mV_0}\int_0^{x_s}dx\sqrt{-\ln \left(\dfrac{x}{x_s}\right)}=\sqrt{2mV_0}\,x_s\,\Gamma \left(\dfrac{3}{2}\right)}

and then

x_s=\sqrt{\dfrac{\pi}{2mV_0}}\hbar \left( s+\dfrac{1}{2}\right)

Then, using the turning point condition in this equation, we finally obtain

\boxed{\dfrac{\varepsilon_s}{V_0}=\ln (2s+1)+\ln \left(\dfrac{\hbar}{2L}\sqrt{\dfrac{\pi}{2mV_0}}\right)} \forall s=0,1,\ldots,\infty

In the case of 2b) we would obtain

\boxed{\dfrac{\varepsilon_s}{V_0}=\ln (s)+\ln \left(\dfrac{\hbar}{L}\sqrt{\dfrac{\pi}{2mV_0}}\right)} \forall s=1,2,\ldots,\infty

In summary, the logarithmic potential provides a model for the interacting Riemann gas!
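We can also check the Bohr-Sommerfeld result numerically: computing the action integral for the logarithmic potential and inverting it reproduces the ln(2s+1) spectrum up to the constant shift above. A minimal sketch in units \hbar = 1, with arbitrary illustrative values of m, V_0 and L:

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

hbar, m, V0, L = 1.0, 1.0, 1.0, 1.0   # arbitrary illustrative values

def action(eps):
    """(1/2pi) * closed integral of p dx for V(x) = V0*ln(|x|/L) at energy eps."""
    xs = L*np.exp(eps/V0)                                              # classical turning point
    p = lambda x: np.sqrt(np.maximum(0.0, 2*m*(eps - V0*np.log(x/L))))
    I, _ = quad(p, 0, xs)
    return 2*I/np.pi                                                   # = (1/2pi)*4*int_0^xs p dx

shift = np.log((hbar/(2*L))*np.sqrt(np.pi/(2*m*V0)))
for s in range(0, 6):
    eps = brentq(lambda e: action(e) - hbar*(s + 0.5), -5, 5)          # invert I(eps) = hbar(s+1/2)
    print(s, eps/V0, np.log(2*s + 1) + shift)                          # the two columns should agree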

 

12. LOG-POTENTIAL AND CONFINEMENT

Massive elementary particles (with mass m) can be understood as composite particles made of confined particles moving with some energy pc inside a sphere of radius R. We note that we do not define further properties of the constituent particles (e.g., whether they are rotating strings, particles, extended objects like branes, or some other exotic structure moving in circular orbits or any other pattern as trajectory inside the composite particle).

Let us make the hypothesis that there is some force F needed to counteract the centrifugal force F_c=\dfrac{\kappa c^2}{R}. The centrifugal force is equal to pc/R, i.e., the balancing force F is F=pc/R. Then, assuming the two forces are equal in magnitude, we get

F=F_c=\dfrac{A_1}{R}

where A_1 is some constant, and that equation holds regardless of the origin of the interaction. The potential energy U necessary to confine a constituent particle will be, in that case,

\displaystyle{U=\int \dfrac{A_1}{R}dR=A_1\int \dfrac{1}{R}dR=A_1\ln \dfrac{R}{R_\star}}

with R_\star some integration constant to be determined later. The mass assigned to the composite system (the “elementary particle”, truly a composite particle) by an external observer is:

m=\dfrac{\hbar}{cR}

The logarithmic potential energy is postulated to be proportional to m/R, and it provides

U=\dfrac{A_2 m}{R}

where A_2 is another constant. In fact, A_1, A_2 are parameters that do not depend, a priori, on the radius R but on the constituent particle properties and coupling constants, respectively. Indeed, for instance, we could fix the ratio A_2/A_1 to the constant c^2/G_N, where G_N is the gravitational constant. However, such a constraint is not required from first principles or from any clear physical reason. From the following equations:

m=\dfrac{\hbar}{cR} and U=\dfrac{A_2 m}{R}

we get \boxed{U=\dfrac{A_2 \hbar}{cR^2}}

Quantum Mechanics implies that the angular momentum should be quantized, so we can make the following generalization

U=\dfrac{A_2 \hbar}{cR^2}\rightarrow U_n=\dfrac{A_2 \hbar}{cR_n^2}=\dfrac{A_2 (n+1)\hbar}{cR_0^2}

\forall n=0,1,2,\ldots,\infty

so R_n^2=\dfrac{R_0^2}{n+1}\leftrightarrow R_n=\dfrac{R_0}{\sqrt{n+1}}

Using the previous integral and this last result, we obtain

\ln \left(\dfrac{R_\star}{R_0}\right)=-(n+1)\dfrac{R_\star^2}{R_0^2}

This is due to the fact that U_n=A_2\dfrac{\hbar}{cR_n^2}=\dfrac{A_2\hbar (n+1)}{cR_0^2} and U=A_2\ln \dfrac{R}{R_\star}

Combining these equations, we deduce the value of R_\star as a function of the parameters A_1,A_2

\boxed{R_\star=\sqrt{\dfrac{A_2\hbar}{A_1 c}}}

The ratio R_\star/R_0 can be calculated from the above equations as well, since

\ln \left(\dfrac{R_\star}{R_0}\right)=-(n+1)\dfrac{R_\star^2}{R_0^2} for the case n=0 implies that

\ln \left(\dfrac{R_\star}{R_0}\right)=-\dfrac{R_\star^2}{R_0^2}, and after exponentiation, it yields

\dfrac{R_\star}{R_0}=e^{-\frac{R_\star^2}{R_0^2}}

Introducing the variable x=\dfrac{R_\star}{R_0} we have to solve the equation x=e^{-x^2}

The solution is \phi=\dfrac{1}{x}=1.53158 from which the relationship between R_\star and R_0 can be easily obtained. Indeed, we can make more deductions from this result. From \ln \phi=1/\phi^2, then

R_n=R_\star e^{(n+1)\ln\phi}

If we take R_\star=\alpha R_0, with R_0=\hbar/mc, then

\alpha=m_0\sqrt{\dfrac{A_2 c}{A_1\hbar}} so

R_n=R_0e^{K\varphi_n} with K=\dfrac{1}{2\pi}\ln \phi, \varphi_n=2\pi (n+1)+\varphi_s and \varphi_s=2\pi \left(\dfrac{\ln \alpha}{\ln \phi}\right)

Equivalently, the masses would be dynamically generated from the above equations, since

m_n=\dfrac{\hbar}{R_nc} and m_0=\dfrac{\hbar}{R_0c}

so we would deduce a particle spectrum given by a logarithmic spiral, through the equation

\boxed{m_n=m_0e^{K\varphi_n}}

Remark: The shift K\rightarrow -K implies that the spiral begins with m_0 as the lowest mass instead of the largest one, i.e., it turns the spiral from the inside to the outside region and vice versa.

In summary, the logarithmic oscillator is also related to some kind of confined particles and it provides a toy model of confinement!
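
For concreteness, here is a minimal numerical sketch in Python of the toy model above: it solves x=e^{-x^2} by bisection, builds \phi=1/x and K=\ln\phi/2\pi, and evaluates the boxed spiral spectrum. The values of m_0 and \alpha are purely illustrative inputs of mine; they are not fixed by the toy model itself.

import math

# Solve x = exp(-x^2) by bisection; the root lies in (0, 1).
def solve_x(tol=1e-12):
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid - math.exp(-mid ** 2) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = solve_x()
phi = 1.0 / x                              # phi = 1.53158..., and ln(phi) = 1/phi^2
K = math.log(phi) / (2.0 * math.pi)

m0 = 1.0        # illustrative ground mass scale (arbitrary units)
alpha = 0.5     # illustrative ratio R_star/R_0
phi_s = 2.0 * math.pi * math.log(alpha) / math.log(phi)

# Spiral spectrum m_n = m0*exp(K*phi_n); the shift K -> -K reverses the spiral (see the remark).
for n in range(5):
    phi_n = 2.0 * math.pi * (n + 1) + phi_s
    print(n, m0 * math.exp(K * phi_n))

# Consistency check of the transcendental relation ln(phi) = 1/phi^2:
print(abs(math.log(phi) - 1.0 / phi ** 2))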

 

13. HARMONIC OSCILLATOR AND TSALLIS GAS

Is the link between classical statistical mechanics and the Riemann zeta function unique, or is it something more general? C. Tsallis explained long ago the connection between non-extensive Tsallis entropies and the Riemann zeta function, giving supplementary arguments to support the idea of a physical link between Physics, Statistical Mechanics and the Riemann hypothesis. His idea is the following.

A) Consider the harmonic oscillator with spectrum

E_n=\hbar\omega n

where E_n,\;\forall n=0,1,2,\ldots, are the H.O. eigenenergies.

B) Consider the Tsallis partition function

\displaystyle{Z_q (\beta )=\sum_{n=0}^{\infty}e_q^{-\beta E_n}=\sum_{n=0}^{\infty}e_q^{-\beta\hbar\omega n}}

where q>1 and the deformed q-exponential is defined as

e_q^z\equiv \left[1+(1-q)z\right]_+^{\frac{1}{1-q}}

and \left[\alpha\right]_+=\begin{cases}\alpha, & \alpha>0\\ 0, & \text{otherwise}\end{cases}

and the inverse of the deformed exponential is the q-logarithm

\ln_q z=\dfrac{z^{1-q}-1}{1-q}

It implies that

\boxed{\displaystyle{Z_q=\sum_{n=0}^{\infty}\dfrac{1}{\left[1+(q-1)\beta\hbar\omega n\right]^{\frac{1}{q-1}}}=\dfrac{1}{\left[(q-1)\beta\hbar \omega\right]^{\frac{1}{q-1}}}\sum_{n=0}^{\infty}\dfrac{1}{\left[\left(\dfrac{1}{(q-1)\beta\hbar\omega}\right)+n\right]^{\frac{1}{q-1}}}}}

Now, defining the Hurwitz zeta function as:

\displaystyle{\zeta (s,Q)=\sum_{n=0}^{\infty}\dfrac{1}{\left(Q+n\right)^{s}}=\dfrac{1}{Q^s}+\sum_{n=1}^{\infty}\dfrac{1}{\left(Q+n\right)^{s}}}

the last equation can be rewritten in a simple and elegant way:

\boxed{\displaystyle{Z_q=\dfrac{1}{\left[(q-1)\beta\hbar\omega\right]^{\frac{1}{q-1}}}\zeta \left(\dfrac{1}{q-1},\dfrac{1}{(q-1)\beta\hbar\omega}\right)}}

This system can be called the Tsallis gas or the Tsallisium. It is a q-deformed version (non-extensive) of the free Riemann gas. And it is related to the harmonic oscillator! The issue, of course, is the problematic limit q\rightarrow 1.
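
As a quick cross-check of the boxed formula, one can compare the Hurwitz-zeta closed form with a direct summation of the q-deformed Boltzmann weights. Here is a minimal sketch with mpmath, whose zeta(s, a) is precisely the Hurwitz zeta function; the values q=1.5 and \beta\hbar\omega=1 are illustrative, and the sum only converges for 1<q<2.

from mpmath import mp, mpf, zeta, nsum, inf

mp.dps = 30                  # working precision (decimal digits)

q = mpf('1.5')               # illustrative non-extensivity parameter, 1 < q < 2
bhw = mpf('1.0')             # illustrative value of beta*hbar*omega

s = 1 / (q - 1)              # Hurwitz argument s = 1/(q-1)
Q = 1 / ((q - 1) * bhw)      # Hurwitz shift Q = 1/((q-1)*beta*hbar*omega)

# Closed form: Z_q = [(q-1)*beta*hbar*omega]^(-1/(q-1)) * zeta(s, Q)
Z_closed = ((q - 1) * bhw) ** (-s) * zeta(s, Q)

# Direct summation of the q-deformed Boltzmann weights
Z_direct = nsum(lambda n: (1 + (q - 1) * bhw * n) ** (-s), [0, inf])

print(Z_closed, Z_direct)    # both give 2.5797... for these parameters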

Setting Q=1, we recover the Riemann zeta function from the Hurwitz zeta function:

\displaystyle{\zeta (s,1)\equiv \zeta (s)=\sum_{n=1}^{\infty}n^{-s}=\sum_{n=1}^{\infty}\dfrac{1}{n^s}=\prod_{p=2}^{\infty}\dfrac{1}{1-p^{-s}}=\prod_{p}\dfrac{1}{1-p^{-s}}}

or

\displaystyle{\zeta (s)=\dfrac{1}{1-2^{-s}}\dfrac{1}{1-3^{-s}}\ldots\dfrac{1}{1-137^{-s}}\ldots}
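
Just to see the Euler product at work numerically, here is a tiny sketch (for Re(s)>1, where the product converges); the prime cutoff N is arbitrary and only controls the truncation error.

from mpmath import mp, zeta

mp.dps = 20
s = 2                         # illustrative exponent with Re(s) > 1

# Sieve of Eratosthenes for primes up to N
N = 10000
is_prime = [True] * (N + 1)
is_prime[0] = is_prime[1] = False
for i in range(2, int(N ** 0.5) + 1):
    if is_prime[i]:
        for j in range(i * i, N + 1, i):
            is_prime[j] = False
primes = [i for i in range(2, N + 1) if is_prime[i]]

prod = mp.mpf(1)
for p in primes:              # truncated Euler product over primes up to N
    prod *= 1 / (1 - mp.mpf(p) ** (-s))

print(prod, zeta(s))          # both approach pi^2/6 = 1.6449...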

The above partition function of the Tsallis gas/Tsallisium connects directly the Riemann zeta function with Physics and non-extensive Statistical Mechanics. Indeed, C. Tsallis himself dedicated a nice slide on this theme to M. Berry.

Remark (I): The link between Riemann zeta function and the free Riemann gas/the interacting Riemann gas goes beyond classical statistical mechanics and it also appears in non-extensive statistical mechanics!

Remark (II): In general, the Riemann hypothesis gets entangled with the theory of harmonic oscillators through non-extensive statistical mechanics!

 

14. TSALLIS ENTROPIES IN A NUTSHELL

For readers not familiar with Tsallis generalized entropies, I would like to present the main definitions of this generalization of the classical statistical entropy (Boltzmann-Gibbs-Shannon), in a nutshell! I will have to discuss this kind of statistical mechanics in more detail in the future, but today I will only anticipate some bits of it.

Tsallis entropy (and its Statistical Mechanics/Thermodynamics) is based on the following entropy functionals:

1st. Discrete case.

\boxed{\displaystyle{S_q=k_B\dfrac{1-\displaystyle{\sum_{i=1}^W p_i^q}}{q-1}=-k_B\sum_{i=1}^Wp_i^q\ln_q p_i=k_B\sum_{i=1}^Wp_i\ln_q \left(\dfrac{1}{p_i}\right)}}

plus the normalization condition \boxed{\displaystyle{\sum_{i=1}^Wp_i=1}}

2nd. Continuous case.

\boxed{\displaystyle{S_q=-k_B\int dX\left[p(X)\right]^q\ln_q p(X)=k_B\int dX p(X)\ln_q\dfrac{1}{p(X)}}}

plus the normalization condition \boxed{\displaystyle{\int dX p(X)=1}}

3rd. Quantum case. Tsallis matrix density.

\boxed{\displaystyle{S_q=-k_BTr\rho^q\ln _q\rho\equiv k_BTr\rho \ln_q\dfrac{1}{\rho}}}

plus the normalization condition \boxed{Tr\rho=1}

In all three cases above, we have defined the q-logarithm as \ln_q z\equiv\dfrac{z^{1-q}-1}{1-q}, with \ln_1 z\equiv \ln z, and the three Tsallis entropies satisfy the non-additive property:

\boxed{\dfrac{S_q(A+B)}{k_B}=\dfrac{S_q (A)}{k_B}+\dfrac{S_q (B)}{k_B}+(1-q)\dfrac{S_q (A)}{k_B}\dfrac{S_q (B)}{k_B}}
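
Here is a minimal numerical sketch of these definitions, with k_B=1 and two illustrative probability distributions of my own choosing: it evaluates the discrete S_q in two of its equivalent forms and checks the non-additive composition rule for independent subsystems.

import numpy as np

def ln_q(z, q):
    """q-logarithm ln_q(z) = (z**(1-q) - 1)/(1-q), reducing to ln(z) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.log(z)
    return (z ** (1.0 - q) - 1.0) / (1.0 - q)

def tsallis_entropy(p, q):
    """Discrete Tsallis entropy S_q = (1 - sum p_i^q)/(q - 1), in units of k_B."""
    p = np.asarray(p, dtype=float)
    if abs(q - 1.0) < 1e-12:                 # Boltzmann-Gibbs-Shannon limit
        return -np.sum(p * np.log(p))
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

q = 1.7                                      # illustrative entropic index
pA = np.array([0.2, 0.3, 0.5])               # illustrative distributions
pB = np.array([0.6, 0.4])

SA, SB = tsallis_entropy(pA, q), tsallis_entropy(pB, q)

# Equivalent form S_q = -sum p_i^q ln_q(p_i) gives the same number:
print(SA, -np.sum(pA ** q * ln_q(pA, q)))

# Joint distribution of two INDEPENDENT systems: p_ij = pA_i * pB_j
pAB = np.outer(pA, pB).ravel()
SAB = tsallis_entropy(pAB, q)

# Non-additive composition rule (k_B = 1): S_q(A+B) = S_q(A) + S_q(B) + (1-q) S_q(A) S_q(B)
print(SAB, SA + SB + (1.0 - q) * SA * SB)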

15. BEYOND QM/QFT: ADELIC WORLDS

Theoretical physicists suspect that the Physics of spacetime at the Planck scale, or beyond it, will change or even become meaningless. There, the spacetime notion we are familiar with loses its meaning. Moreover, such changes in the fundamental structure of the Polyverse could even occur at larger length scales. We do not yet know where spacetime “emerges” as an effective theory of something deeper, but that it does is a natural guess given our current, limited knowledge of fundamental physics. Indeed, it is thought that the experimental device making measurements and the experimenter cannot be distinguished at the Planck scale. At this moment we do not know how the framework of cosmology and the Hilbert space formalism of Quantum Mechanics could be obtained within a single unified formalism at the Planck scale. It is one of the challenges of Quantum Gravity.

Many scientists think that the geometry and topology of sub-Planckian lengths should bear no relation to our current geometry or topology. We say and believe that geometry, topology, fields and the main features of macroscopic bodies “emerge” from the ultra-Planckian and “subquantum” realm. It is analogous to the colours of the rainbow emerging from atoms, or to how Thermodynamics emerges from Statistical Mechanics.

There are many proposed frameworks to go beyond the usual notions of space and time, but the p-adic analysis approach is a quite remarkable candidate, having several achievements in its favor.

Motivations for p-adic and adelic approaches as the ultimate substructure of the microscopic world arise from:

1) Divergences of QFT are believed to be absent with such number structures. Renormalization might even turn out to be unnecessary.

2) Since in p-adic analysis no single prime has a special status, it might be more natural and instructive to work with adeles instead of a purely p-adic approach.

3) There are two paths towards a p-adic/adelic QM/QFT. The first path considers particles in a p-adic potential well, and the goal is to find solutions with smoothly varying complex-valued wavefunctions. There, the solutions retain a certain familiarity with ordinary life and ordinary QM. The second path allows particles in p-adic potential wells, and the goal is to find p-adic valued wavefunctions. In this case, the physical interpretation is harder, yet the mathematics often exhibits surprising features and properties, and some people are trying to explore these novel and striking aspects.

Ordinary real (or even complex) numbers are familiar to everyone. Ostrowski’s theorem states that there are essentially only two possible completions of the rational numbers (the “fractions” you know very well). The two options depend on the metric we consider:

1) The real numbers. One completes the rationals by adding the limits of all Cauchy sequences to the set. Cauchy sequences are sequences of numbers whose elements become arbitrarily close to each other as the sequence progresses. Mathematically speaking, given any small positive distance, all but a finite number of elements of the sequence are less than that given distance from each other. Real numbers satisfy the triangle inequality \vert x+y\vert \leq \vert x\vert +\vert y\vert.

2) The p-adic numbers. The completions are different because of the two different ways of measuring distance. p-adic numbers satisfy a stronger version of the triangle inequality, called ultrametricity. For any p-adic numbers x, y it reads

\vert x+y\vert _p\leq \mbox{max}\{\vert x\vert_p ,\vert y \vert_p\}

Spaces where the above enhanced triangle inequality/ultrametricity arises are called ultrametric spaces.

In summary, there exist two different types of algebraic number systems. There is no other possible (nontrivial) norm beyond the real (absolute) norm or the p-adic norms. It is the power of Mathematics in action.

Then, a question follows immediately: how can we unify two such different notions of norm, distance and type of numbers? After all, they behave in very different ways. Trying to answer this question is how the concept of adele emerges. The ring of adeles is a framework where all those different structures are considered on an equal footing, in the same mathematical language. In fact, it is analogous to the way in which we unify space and time in relativistic theories!

Adele numbers are arrays consisting of both real (complex) and p-adic components! That is,

\mathbb{A}=\left( x_\infty, x_2,x_3,x_5,\ldots,x_p,\ldots\right)

where x_\infty is a real number and the x_p are p-adic numbers living in the p-adic fields \mathbb{Q}_p. Indeed, the infinity symbol is just a consequence of the fact that the real numbers can be thought of as the completion at “the prime at infinity”. Moreover, it is required that all but finitely many of the p-adic numbers x_p lie in the ring of p-adic integers \mathbb{Z}_p. The adele ring is therefore a restricted direct (cartesian) product. The idele group is defined as the group of invertible elements of the adelic ring:

\mathbb{I}=\mathbb{A}^\star =\{ x\in \mathbb{A}:\; x_\infty \in \mathbb{R}^{\star},\; x_p\in\mathbb{Q}_p^{\star}\;\forall p, \;\mbox{and}\;\; \vert x_p\vert _p=1\;\; \mbox{for all but finitely many primes } p\}

We can define calculus over the adelic ring in a way very similar to the real or complex case. For instance, we can define trigonometric functions, e^X, logarithms \log (x) and special functions like the Riemann zeta function. We can also perform integral transforms like the Mellin or the Fourier transform over this ring. Moreover, this ring has many interesting properties. For example, quadratic polynomials obey the Hasse local-global principle: a quadratic polynomial equation has a rational solution if and only if it has a solution in \mathbb{R} and in \mathbb{Q}_p for all primes p. Furthermore, the real and p-adic norms are related to each other by the remarkable adelic product formula/identity:

\displaystyle{\vert x\vert_\infty \prod_p\vert x\vert_p=1}

and where x is a nonzero rational number.
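
Here is a small sketch of how this works for rational numbers, with helper functions of my own (not any standard library API): it computes the p-adic absolute values \vert x\vert_p=p^{-v_p(x)}, verifies the product formula for an illustrative x, and also checks the ultrametric inequality quoted above.

from fractions import Fraction

def p_adic_valuation(n, p):
    """Exponent of the prime p in the nonzero integer n."""
    v, n = 0, abs(n)
    while n % p == 0:
        n //= p
        v += 1
    return v

def p_adic_abs(x, p):
    """p-adic absolute value |x|_p = p^(-v_p(x)) of a nonzero rational x."""
    v = p_adic_valuation(x.numerator, p) - p_adic_valuation(x.denominator, p)
    return Fraction(p) ** (-v)

def prime_factors(n):
    """Set of primes dividing the nonzero integer n (trial division)."""
    n, fac, d = abs(n), set(), 2
    while d * d <= n:
        while n % d == 0:
            fac.add(d)
            n //= d
        d += 1
    if n > 1:
        fac.add(n)
    return fac

x = Fraction(-50, 27)                        # illustrative nonzero rational
primes = prime_factors(x.numerator) | prime_factors(x.denominator)

prod = abs(x)                                # archimedean norm |x|_infinity
for p in primes:                             # |x|_p = 1 for every other prime
    prod *= p_adic_abs(x, p)
print(prod)                                  # exactly 1

# Ultrametric inequality |x+y|_p <= max(|x|_p, |y|_p), checked at p = 3:
y = Fraction(3, 4)
print(p_adic_abs(x + y, 3), max(p_adic_abs(x, 3), p_adic_abs(y, 3)))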

Beyond complex QM, where we can study the particle in a box or in a ring array of atoms, p-adic QM can also be used to handle fractal potential wells. Indeed, the analogue Schrödinger equation can be solved, and it has been useful, for instance, in the design of microchips and self-similar structures. It has been conjectured by Wu and Sprung, and by Hutchinson and van Zyl (see http://arXiv.org/abs/nlin/0304038v1 ), that the potential constructed from the non-trivial Riemann zeroes and from the prime number sequence has fractal properties. They have suggested that D=1.5 for the Riemann zeroes and D=1.8 for the prime numbers. Therefore, p-adic numbers are an excellent tool for constructing fractal potential wells.

On the other hand, following Feynman, we know that path integrals for quantum particles/entities exhibit fractal properties. Indeed, we can use path integrals even in the absence of a p-adic Schrödinger equation. Thus, the adelic version of Feynman’s path integral is a necessary and fundamental object for a general quantum theory beyond the common textbook version. However, we need to be very careful with certain details, in particular with the definition of derivatives and differentials, in order to do proper calculations. We can indeed do it, since both the adelic and the idelic rings have well-defined translation-invariant Haar measures

Dx=dx_\infty dx_2dx_3\cdots dx_p\cdots and Dx^\star=dx_\infty^\star dx_2^\star dx_3^\star\cdots dx_p^\star\cdots

These measures provide a way to compute Feynman path integrals over adelic/idelic spaces.  It turns out that Gaussian integrals satisfy a generalization of the adelic product formula introduced before, namely:

\displaystyle{\int_{\mathbb{R}}\chi_\infty (ax_\infty^2+bx_\infty)dx_\infty \prod_p \int_{\mathbb{Q}_p}\chi_p (ax_p^2+bx_p)dx_p=1}

where \chi is an additive character from the adeles to complex numbers \mathbb{C} given by the map:

\displaystyle{\chi (x)=\chi_\infty (x_\infty)\prod_p \chi_p (x_p)=e^{-2\pi ix_\infty}\prod_p e^{2\pi i\{x_p\}_p}}

and \{x_p\}_p is the fractional part of x_p in the ordinary p-adic expansion of x_p. This can be thought of as a strong generalization of the homomorphism \mathbb{Z}/n\mathbb{Z}\rightarrow\mathbb{C}^\times, k\mapsto e^{2\pi ik/n}. Then, the adelic path integral, with input parameters in the adelic ring \mathbb{A} and generating complex-valued wavefunctions, follows:

\displaystyle{K_{\mathbb{A}} (x'',t'';x',t') =\prod_\alpha \int_{(x' _\alpha ,t' _\alpha)}^{(x'' _\alpha ,t'' _\alpha)}\chi_\alpha \left(-\dfrac{1}{h}\int_{t' _\alpha}^{t''_\alpha}L(\dot{q} _\alpha ,q_\alpha ,t_\alpha )dt_\alpha \right) Dq_\alpha}

The eigenvalue problem over the adelic ring is given by:

U(t) \psi_\alpha (x)=\chi (E_\alpha (t))\psi_\alpha (x)

where U is the time-development operator, \psi_\alpha are adelic eigenfunctions, and E_\alpha is the adelic energy. Here the notation has been simplified by using the subscript \alpha, which stands for all primes including the prime at infinity. One notices the additive character \chi which allows these to be complex-valued integrals. The path integral can be generalized to p-adic time as well, i.e., to paths with fractal behaviour!

How is this p-adic/adelic stuff connected to the Riemannium and the Riemann zeta function? It can be shown that the ground state of the adelic quantum harmonic oscillator is

\displaystyle{\vert 0\rangle =\Psi_0 (x)=2^{1/4}e^{-\pi x_\infty^2}\prod_p \Omega (\vert x_p\vert_p)}

where \Omega \left(\vert x_p \vert _p\right) equals 1 if \vert x_p\vert_p\leq 1 (i.e., if x_p is a p-adic integer) and 0 otherwise. This result is strikingly similar to the ordinary complex-valued ground state. Applying the adelic Mellin transform, we can deduce that

\Phi (\alpha)=\sqrt{2}\Gamma \left(\dfrac{\alpha}{2}\right)\pi^{-\alpha/2}\zeta (\alpha)

where \Gamma, \zeta are, respectively, the gamma function and the Riemann zeta function. Due to the Tate formula, we get that

\Phi (\alpha)=\Phi (1-\alpha)

and from this the functional equation for the Riemann zeta function naturally emerges.
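
As a quick numerical sanity check of this Tate symmetry (away from the poles at \alpha=0,1), one can evaluate \Phi(\alpha) and \Phi(1-\alpha) with mpmath; the sample points below are arbitrary.

from mpmath import mp, mpf, sqrt, gamma, pi, zeta

mp.dps = 25

def Phi(a):
    """Phi(alpha) = sqrt(2) * Gamma(alpha/2) * pi^(-alpha/2) * zeta(alpha)."""
    return sqrt(2) * gamma(a / 2) * pi ** (-a / 2) * zeta(a)

# Tate's formula implies Phi(alpha) = Phi(1 - alpha); check a few sample points
for a in [mpf('0.3'), mpf('2.5'), mpf('0.5') + 14j]:
    print(Phi(a), Phi(1 - a))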

In conclusion: it is fascinating that such a simple physical system as the (adelic) harmonic oscillator is related to such a significant mathematical object as the Riemann zeta function.

 

16. STRINGS, FIELDS AND VACUUM

The Veneziano amplitude is also related to the Riemann zeta function and string theory. A nice application of the previous adelic formalism involves the adelic product formula in a different way. In string theory, one computes crossing-symmetric Veneziano amplitudes A(a,b) describing the scattering of four tachyons in the 26d open bosonic string. Indeed, the Veneziano amplitude can be written in terms of the Riemann zeta function in this way:

A_\infty (a,b)=g_\infty^2 \dfrac{\zeta (1-a)}{\zeta (a)}\dfrac{\zeta (1-b)}{\zeta (b)}\dfrac{\zeta (1-c)}{\zeta (c)}, where c is fixed by the crossing-symmetry constraint a+b+c=1.

These amplitudes are not easy to calculate. However, in 1987, an amazingly simple adelic product formula for this tachyonic scattering was found to be:

\displaystyle{A_\infty (a,b)\prod_p A_p (a,b)=1}

Using this formula, we can compute the four-point amplitudes/interaction vertices at tree level exactly, as the inverse of the much simpler p-adic amplitudes. This discovery generated quite a bit of activity in string theory, although it is not very popular as far as I know. Moreover, the whole landscape of the p-adic/adelic framework is not as easy for the closed bosonic string as it is for the open bosonic string (note that in a p-adic world there is no “closure”, but “clopen” sets instead of naive closed intervals). The role of the p-adic/adelic structure at the level of the string worldsheet has also been a source of controversy. However, there is some research along these lines at the current time.

Another nice topic is the vacuum energy and its physical manifestations. There are some very interesting physical effects involving the vacuum energy in both classical and quantum physics. The most important ones are the Casimir effect (a vacuum-induced force between “plates”), the Schwinger effect (particle creation in strong fields), the Unruh effect (thermal effects seen by a uniformly accelerated observer/frame), the Hawking effect (particle creation by Black Holes, due to Black Hole Thermodynamics in the corresponding gravitational/accelerated environment), and the cosmological constant effect (the vacuum energy expanding the Universe at an increasing rate on large scales; does it itself gravitate?). The Riemann zeta function and its generalizations appear in all these effects. It is not a mere coincidence. It is telling us something deeper that we cannot yet understand. As an example of why the zeta function matters in, e.g., the Casimir effect, let me say that zeta function regularization assigns a finite value to the following general sum:

\boxed{\displaystyle{\sum_{n\in \mathbb{Z}}\vert n\vert^d =2\zeta (-d)}}
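
A tiny mpmath illustration of this assignment: the analytic continuation \zeta(-d) gives finite values to these divergent sums, and, for d=1, the finite part of an exponentially regulated sum reproduces \zeta(-1)=-1/12, which is essentially the mechanism behind Casimir-type computations.

from mpmath import mp, mpf, zeta, exp, nsum, inf

mp.dps = 25

# The boxed assignment: sum over n in Z of |n|^d  "="  2*zeta(-d)
for d in [1, 2, 3]:
    print(d, 2 * zeta(-d))              # d=1 -> -1/6, d=2 -> 0, d=3 -> 1/60

# Heuristic check for d=1: sum_{n>=1} n*exp(-eps*n) = 1/eps^2 - 1/12 + O(eps^2),
# so the regulator-independent finite part reproduces zeta(-1) = -1/12.
for eps in [mpf('0.1'), mpf('0.05'), mpf('0.01')]:
    S = nsum(lambda n: n * exp(-eps * n), [1, inf])
    print(eps, S - 1 / eps ** 2)        # tends to -1/12 = -0.08333...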

Remark: I do know that I should likely have said “the cosmological constant problem”. But as it should be solved in the future, we can see the cosmological constant we observe (very, very much smaller than our current QFT calculations say) as “an effect” or “anomaly” to be explained. We know that the cosmological constant drives the current positive acceleration of the Universe, but it is really tiny. What makes it so small? We don’t know for sure.

Remark (II): What are the p-adic strings/branes? I. Arefeva, I. Volovich and B. Dragovich, among other physicists from Russia and Eastern Europe, have worked on non-local field theories and cosmologies using the Riemann zeta function as a model. It is a relatively unknown approach, but it is remarkable, very interesting and uncommon. I will have to tell you about these works, but not here, not today. I went too far, far away in this log. I apologize…

 

17. SUMMARY AND OUTLOOK

I have explained why I chose The Spectrum of Riemannium as my blog name, and I used the (partial) answer to show you some of the multiple connections and links of the Riemann zeta function (and its generalizations) with Mathematics and Physics. I am sure that solving the Riemann Hypothesis will require answering the question of what is the vibrating system behind the spectral properties of the Riemann zeroes. It is important for Physmatics! I would say more: it is of capital importance to theoretical physics as well.

Let me review the main links of the Riemann zeta function and its zeroes to Physmatics:

1) Riemann zeta values appear in atomic Physics and Statistical Physics.

2) The Riemannium has spectral properties similar to those of Random Matrix Theory.

3) The Hilbert-Polya conjecture states that there is some mysterious self-adjoint operator/Hamiltonian whose spectrum provides the zeroes. The Berry-Keating conjecture states that the “quantum” Hamiltonian corresponding to the Riemann hypothesis is the quantum counterpart (dual) of a (semi)classical Hamiltonian with classically chaotic dynamics.

4) The logarithmic potential provides a realization of certain kind of spectrum asymptotically similar to that of the free Riemann gas. It is also related to the issue of confinement of “fundamental” constituents inside “elementary” particles.

5) The primon gas is the Riemann gas associated to the prime numbers in a (Quantum) Statistical Mechanics approach. There are bosonic, fermionic and parafermionic/parabosonic versions of the free Riemann gas and some other generalizations using the Beurling gas and other tools from number theory.

6) The non-extensive Statistical Mechanics studied by C. Tsallis (and other people) provides a link between the harmonic oscillator and the Riemann hypothesis as well. The Tsallisium is the physical system obtained when we study the harmonic oscillator with a non-extensive Tsallis approach.

7) An adelic approach to QM and the harmonic oscillator produces the functional equation of the Riemann zeta function via the Tate formula. The link with p-adic numbers and p-adic zeta functions reveals certain fractal patterns in the Riemann zeroes, in the prime numbers and in the theory behind them. The periodicity or quasiperiodicity also relates this to some kind of (quasi)crystal, and maybe it could be used to explain some behaviour of the prime numbers, such as the one behind Goldbach’s conjecture.

8) A link between entropy, information theory and the Riemann zeta function is made through the notion of group entropy. Connections between the Veneziano amplitudes, tachyons, p-adic numbers and string theory arise naturally from the adelic product formula for string amplitudes.

9) The Riemann zeta function is also used in the regularization/definition of infinite determinants arising in the theory of differential operators and similar maps. The generalization of this framework is also important in number theory, through the use of generalizations of the Riemann zeta function and other similar arithmetical functions. The Riemann zeta function is, thus, one of the simplest examples of such arithmetical functions.

10) There are further links between the Riemann zeta function and “vacuum effects” like the Schwinger effect (pair creation in strong fields) or the Casimir effect (repulsive/attractive forces between close objects with “nothing” between them). The Riemann zeta function is also somehow related to SUSY, either through the close relationship between the zeta function and the Dirichlet eta function appearing in Fermi-Dirac statistics, or directly through the explicit relationship between the Möbius function and the (-1)^F operator appearing in supersymmetric field theories.

In summary, the Riemann zeta function is ubiquitous, and it appears, alone or through its generalizations, in very different fields: number theory, quantum physics, (semi)classical physics/dynamics, (quantum) chaos theory, information theory, QFT, string theory, statistical physics, fractals, quasicrystals, operator theory, renormalization and many other places. Is it an accident, or is it telling us something more important? I think the latter. Zeta functions are fundamental objects for the future of Physmatics, and the solution of the Riemann Hypothesis would perhaps provide a guide in the ultimate quest of both Physics and Mathematics (Physmatics): a complete and consistent description of the whole Polyverse.

Then, the main questions yet to be answered are:

A) What is the Riemann zeta function? What are the Riemannium/Tsallisium, and what kind of physical systems do they really represent? What is the physical system behind the Riemann non-trivial zeroes? What do the zeroes mean for the generalizations of the Riemann zeta function in the form of L-functions?

B) What is the Riemann-Hilbert-Polya operator? What is the space on which the Riemann operator acts?

C) Are the Riemann zeta function and its generalizations everywhere, as they seem to be, inside the deepest structures of the microscopic/macroscopic entities of the Polyverse?

I suppose you will now understand better why I decided to name my blog The Spectrum of Riemannium… And there are many other reasons I will not write here, since I could reveal my current research.

However, stay tuned!

Physmatics is out there and everywhere, like fractals and zeta functions, and it is full of wonderful mathematical structures and simple principles!