LOG#115. Bohr’s legacy (III).

Dedicated to Niels Bohr

and his atomic model

(1913-2013)

3rd part:

From gravatoms to dark matter

500px-GalacticRotation2.svg

Gravatoms

Imagine a proton and an electron bound together in a hydrogen atom by gravitational forces instead of electric forces. We have two interesting problems to solve here:

1st. Find the formula for the spectrum (energy levels) of such a gravitational atom (or gravatom), and the radius of the ground state for the lowest level in this gravitational Bohr atom/gravatom.

2nd. Find the numerical value of the Bohr radius for the gravitational atom, the “rydberg”, and the “largest” energy separation between the energy levels found in the previous calculation.

We will take the values of the following fundamental constants:

\hbar=1\mbox{.}06\cdot 10^{-34}Js, the reduced Planck constant.

m_p=1\mbox{.}67\cdot 10^{-27}kg, the proton mass.

m_e=9\mbox{.}11\cdot 10^{-31}kg, the electron mass.

G_N=6\mbox{.}67\cdot 10^{-11}Nm^2/kg^2, the gravitational Newton constant.

Let R be the radius of any electron orbit. The gravitational force between the electron and the proton is equal to:

(1) F_g=G_N\dfrac{m_pm_e}{R^2}

The centripetal force is necessary to keep the electron in any circular orbit. According to the gravatom hypothesis, it equals the gravitational force (the electric force is neglected):

(2) F_c=\dfrac{mv^2}{R}

(3) F_c=F_g\leftrightarrow \boxed{\dfrac{mv^2}{R}=G_N\dfrac{m_pm_e}{R^2}}

Using the hypothesis of the Bohr atomic model at this point, i.e., that “the allowed orbits are those for which the electron’s orbital angular momentum about the nucleus is an integral multiple of \hbar“, we get

(4) L=m_evR=n\hbar \forall n=1,2,\ldots,\infty

Then,

(5) v=\dfrac{n\hbar}{m_eR} and v^2=\dfrac{n^2\hbar^2}{m_e^2R^2}

From (3), we obtain

(6) \boxed{v^2=G_N\dfrac{m_p}{R}}

Comparing (5) with (6), we deduce that

(7) G_N\dfrac{m_p}{R}=\dfrac{n^2\hbar^2}{m_e^2R^2}

and thus

(8) \boxed{R_n=R(n)=n^2\dfrac{\hbar^2}{G_Nm_pm_e^2}}

This is the gravatom equivalent of Bohr radius in the common Bohr model for the hydrogen atom. To get the spectrum, we recall that total energy is the sum of kinetic and potential energy:

E=T+U=\dfrac{1}{2}m_ev^2-G_N\dfrac{m_pm_e}{R}

Using the value we obtained in (6), by direct substitution, we have

(9) E=\dfrac{1}{2}m_ev^2-G_N\dfrac{m_pm_e}{R}=-G_N\dfrac{m_pm_e}{2R}

and then

(10) E=-\dfrac{G_Nm_em_p}{2}\dfrac{G_Nm_pm_e^2}{n^2\hbar^2}

and so the spectrum of this gravatom is given by

(11) \boxed{E_n=E(n)=-G_N^2\dfrac{m_p^2m_e^3}{2n^2\hbar^2}}

For n=1 (the ground state), we have the analogue of the Bohr radius in the gravatom to be:

R_1=\dfrac{\hbar^2}{G_Nm_pm_e^2}=1\mbox{.}20\cdot 10^{29}m

For comparison, the radius of the known Universe is about R_U=4\mbox{.}4\cdot 10^{26}m. Therefore, R(\mbox{gravatom})>R_U! R_1 is huge because gravitational forces are much, much weaker than electrostatic forces. Moreover, the energy of the ground state n=1 for this gravatom is:

E_1=-G_N^2\dfrac{m_p^2m_e^3}{2\hbar^2}=-4\mbox{.}23\cdot 10^{-97}J

The energy separation between this level and the next gravitational level is about 1-1/4=3/4 of this quantity in absolute value, i.e.,

\Delta E=\vert E_2-E_1\vert =3\mbox{.}18\cdot 10^{-97}J=1\mbox{.}99\cdot 10^{-78}eV

This really tiny energy separation is beyond any currently possible measurement. Therefore, we cannot measure energy splittings in “gravatoms” with known techniques. Of course, gravatoms are a “toy model”, i.e., hypothetical systems (bubble Universes?).
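If you want to reproduce these numbers yourself, here is a minimal numerical sketch (in Python, with the rounded constants quoted at the beginning of this post; the function names are mine) checking the gravatom radius (8), the spectrum (11) and the n=1 to n=2 splitting:

# Minimal numerical check of the gravatom formulas (8) and (11) above,
# using the constants quoted at the beginning of the post.
hbar = 1.06e-34   # J s, reduced Planck constant
m_p  = 1.67e-27   # kg, proton mass
m_e  = 9.11e-31   # kg, electron mass
G_N  = 6.67e-11   # N m^2 / kg^2, gravitational Newton constant
eV   = 1.602e-19  # J per electron-volt

def R_gravatom(n):
    """Radius of the n-th gravatom orbit, eq. (8)."""
    return n**2 * hbar**2 / (G_N * m_p * m_e**2)

def E_gravatom(n):
    """Energy of the n-th gravatom level, eq. (11)."""
    return -G_N**2 * m_p**2 * m_e**3 / (2 * n**2 * hbar**2)

print(R_gravatom(1))                          # ~1.2e29 m, larger than the observable Universe
print(E_gravatom(1))                          # ~-4.2e-97 J
print(E_gravatom(2) - E_gravatom(1))          # ~3.2e-97 J
print((E_gravatom(2) - E_gravatom(1)) / eV)   # ~2e-78 eV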

Remark (I): The quantization of angular momentum provided the above gravatom spectrum. It is likely that a full Quantum Gravity theory provides additional corrections to the quantum potential, just in the same way that QED introduces logarithmic (vacuum polarization) corrections and others (due to relativity or additional quantum effects).

Remark (II): Variations in the above quantization rules can modify the spectrum.

Remark (III): In theories with extra dimensions, G_N is replaced by a higher effective value G_N^{eff} that depends on the compactification radius. So the effect of large enough extra dimensions could show up as “dark matter” if it is “big enough”. Can you estimate how large the compactification radius could be so that the separation between n=1 and n=2 for the gravatom could be measured with current technology? Hint: you need to know the tiniest energy separation we can measure with current experimental devices.

Remark (IV): In Verlinde’s entropic approach to gravity, extra corrections arise due to the choice of the functional form of the entropy. They can be due to extra dimensions and the (stringy) Generalized Uncertainty Principle as well.

Gravatoms and Dark Matter: a missing link

I will end this thread of 3 posts devoted to the centenary of Bohr’s model by recalling a connection between atomic physics and the famous Dark Matter problem! The calculations I performed above (which anyone with a solid, yet elementary, background in physics can reproduce) reveal a surprising link between microscopic gravity and the dark matter problem. I mean, the problem of gravatoms can be matched to the problem of dark matter if we substitute the mass of a galaxy for the proton mass! It is not an unlikely option that the whole Dark Matter problem turns out to be related to the right infrared/long-scale modified gravitational theory induced by quantum gravity. Of course, this claim is quite a statement! I have been working along this path for months… Even though MOND (MOdified Newtonian Dynamics) and MOG (MOdified Gravity) have been seen as controversial since Milgrom’s and Moffat’s pioneering works, I believe their biggest “to be or not to be” test is yet to come. Yes, even though some measurements like the Bullet Cluster observations and current simulations of galaxy formation require a component of dark matter, I firmly believe (similarly, I think, to V. Rubin’s opinion) that if the current and the next generation of experiments trying to discover the “dark matter particle/family of particles” fail, we should take this option more seriously than some people are willing to accept at the current time.

May the Bohr model and gravatoms be with you!


LOG#074. Dual units.

The last of my three posts tonight concerns a mysterious dual system of units developed by a rival blog (mainly in Spanish):

http://tardigrados.wordpress.com/

Its author launched an interesting but speculative dual system of units in which every physical quantity seems to have a lower and an upper bound. I have never seen such an idea published before, so I translated into English the table he posted there in Spanish:

DualTableAlbert1

DualTable-Albert2

Is duality the principle/symmetry behind this table? I don’t know for sure, but I came to some similar ideas in my own thoughts about “enhanced relativities”, so I find this table as mysterious as the \kappa_0 constant from Pavšič units.

What do you think?

See you soon in a new blog post!


LOG#073. The G2 system.

The second paper I am going to discuss today is this one:

http://inspirehep.net/record/844954?ln=en

In Note on the natural system of units, Sudarshan, Boya and Rivera introduce a new kind of “fundamental system of units”, which we could call the G2 system or the Boya-Rivera-Sudarshan system (BRS system for short). After a summary of the Gamow-Ivanenko-Landau-Okun cube (GILO cube) and the Planck natural units, they ask the following question:

Can we change the gravitational constant G_N for something else?

They ask this question because G_N seems to be a little different from h and c. Indeed, many researchers in quantum gravity prefer to trade G_N for the Planck length as a fundamental unit! The G2 system proposal is based on some kind of two-dimensional world. Sudarshan, Boya and Rivera search for a “new constant” G_2 such that G_2/r substitutes for G_N/r^2 in Newton’s gravitational law. \left[G_2\right]=L in this new “partial” fundamental system. Therefore, we have

F_N=G_2Mm/r

and the physical dimensions of time, length and mass are expressed in terms of G_2 as follows (we could use \hbar instead of h, that is not essential here as we do know from previous discussions) :

T=c^{-4}hG_2

L=c^{-3}hG_2

M=c^2/G_2

In fact, they remark that since G_2 derives from a 2+1 dimensional world, and Einstein Field Equations are generally “trivial” in 2+1 spacetime, G_2, surprisingly, is not related to gravitation at all! We are almost “free” to fix G_2 with some alternative procedure. As we wish to base the G2 system on well-known physics, the choice they make for G_2 is the trivial one (however, I am still thinking about what we could obtain with some non-trivial alternative definition of G_2):

\boxed{G_2=\dfrac{c^2}{M_P}=G_N/L_P \approx 4.1\cdot 10^{24}MKS=4.1\cdot 10^{25}CGS}

and any other expression equivalent to it. Please note that if we set the Planck length to one, we get G_N=G_2, so it is equivalent to speak about G_2 or G_N in a system of units where the Planck length is set to the unit. However, the proposal is independent of this fact since, as we said above, we could choose some other non-trivial definition for G_2, although I don’t know what kind of guide we could follow for such alternative, non-trivial definitions.
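As a quick numerical cross-check (a small Python sketch with rounded SI constants, not part of the original paper), one can verify that c^2/M_P and G_N/L_P give the same value quoted above:

# Quick check of the value G_2 = c^2/M_P = G_N/L_P quoted above (SI/MKS, rounded constants).
import math

c    = 2.998e8     # m/s
hbar = 1.055e-34   # J s
G_N  = 6.674e-11   # N m^2/kg^2

M_P = math.sqrt(hbar * c / G_N)     # Planck mass, ~2.18e-8 kg
L_P = math.sqrt(hbar * G_N / c**3)  # Planck length, ~1.62e-35 m

print(c**2 / M_P)   # ~4.1e24 in MKS units
print(G_N / L_P)    # the same number, as it must be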

The final remark I would like to make here is that, whatever we choose instead of G_N, it is ESSENTIAL to a quantum theory of gravity, provided such a theory exists, works and is “clear” from its foundational principles.

See you in my next blog post!


LOG#072. The hG system.

Brazil is experiencing an increase in scientific production. Today, I am going to explain this Brazilian paper http://arxiv.org/abs/0711.4276v2 concerning the number of fundamental constants.

The Okun cube of fundamental constants, first introduced by Gamow, Ivanenko and Landau, has raised, now with more intensity, old questions about what we should consider a fundamental “unit”. I have mentioned before the trialogue on fundamental constants between Veneziano, Duff and Okun himself, from more than a decade ago. Veneziano argued that 2 fundamental constants were enough to fix everything. This is not the “accepted” and “more popular” approach these days, but the Brazilian paper above defends such a claim!

What do they claim? They basically argue that what we need is a convention for space and time measurements and nothing else. Specifically, they say that every physical observable \mathcal{O}_i with i=1,2,\ldots can be expressed as follows:

(1) \boxed{\mathcal{O}_i=\Omega_i \sigma^{\alpha_i}\tau^{\beta_i}}

and where \alpha_i,\beta_i,\Omega_i are pure dimensionless numbers, while \sigma, \tau denote “basic units” of space and time. We could argue that these two last “fundamental units” of “space and time” were “quanta” of “space” and “time”, the mythical “choraons” and “chronons” some speculative theories of Quantum Gravity seem to suggest, but that would be a different story not related to this post!

After introducing the above statement, they discuss 2 procedures to measure with clocks and rulers, which they call the G-protocol and the h-protocol. They begin by assuming some quantity in the CGS system (note that the idea is completely general and they could use the MKSA or any other traditional system of units):

(2) \boxed{\mathcal{D}_i= \Delta_i T^{\alpha_i}L^{\beta_i}M^{\gamma_i}}

where \Delta_i,\alpha_i,\beta_i,\gamma_i are dimensionless constants. And then, the 2 protocols are defined:

1st. G-protocol. Multiply the above equation (2) by G^{\gamma_i} and identify \mathcal{O}_i with \mathcal{D}_i G^{\gamma_i}, denoted \mathcal{O}_i^{(G)}. Rewriting all the physical quantities and laws of this protocol in terms of \mathcal{O}_i^{(G)} instead of \mathcal{D}_i, we gain some bonuses:

i) The unit M from CGS “vanishes” or is “erased” from physical observables.

ii) G disappears from every physical law.

iii) Masses are measured in cm^3/s^2, which implies that from Newton’s gravitational law g=-G_Nm/d^2 we deduce that

\boxed{g=-m^{(G)}/d^2}

where m^{(G)}=mG are units with physical dimension L^3T^{-2}. G_N, the gravitational constant, is some kind of conversion factor between mass and “volume acceleration” L^3/T^2. This G-protocol applied to the Planck constant provides

\boxed{h^G=hG}

and it has dimensions of L^5/T^3.

2nd. h-protocol. From equation (2), if we divide by h^{\gamma_i} and identify \mathcal{D}_i/h^{\gamma_i} with \mathcal{O}_i^{(h)}, we get the so-called h-protocol. The consequences are:

i) M units disappear from physical laws and quantities, as before.

ii) h is erased and vanishes from every equation, law and quantity.

iii) Masses are measured in units of s/cm^2, e.g., from the Compton equation we get in the h-protocol

\Delta \lambda= \dfrac{1}{m^{(h)}c}\left(1-\cos\theta\right)

and where m^{(h)}=m/h are units of mass in the h-protocol with dimensions T/L^2. Therefore, h is the conversion factor between inverse areolar velocity s/cm^2 and mass g. In this protocol the inverse of the Compton length measures “inertia”, and indeed this fact fits with some recent proposals to define the kg independently of the old MKSA prototype (the famous platinum-iridium artifact, which is now known not to have exactly a 1 kg mass). Moreover, we also get that

G^{(h)}=Gh

and

G^{(h)}=h^{(G)}

The two protocols can be summarized in a nice table

Table1hgsystem

They also derive the mysterious relations between charge and mass that we saw in the previous post about Pavšič units, i.e., they also derive

e^{(G)}=2\cdot 10^{21}m_e^{(G)}

and it is equivalent to e=\kappa_0 m_e. Somehow, an electron is more electrical/capacitive than gravitational/elastic!
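A small sketch, assuming CGS-Gaussian values for e, m_e and G_N (and using the fact that charge carries mass dimension 1/2 in CGS), reproduces the ratio e^{(G)}\approx 2\cdot 10^{21} m_e^{(G)} quoted above:

# Small check of the G-protocol relation e^(G) ~ 2e21 m_e^(G) quoted above.
# CGS-Gaussian values are assumed (charge in esu, mass in g, G in cm^3 g^-1 s^-2).
import math

e_cgs  = 4.803e-10   # esu (statcoulomb)
me_cgs = 9.109e-28   # g
G_cgs  = 6.674e-8    # cm^3 g^-1 s^-2

# In CGS, charge has mass dimension 1/2, so the G-protocol maps e -> e*sqrt(G),
# while mass has mass dimension 1, so m_e -> m_e*G.
e_G  = e_cgs * math.sqrt(G_cgs)
me_G = me_cgs * G_cgs

print(e_G / me_G)   # ~2e21, i.e. e^(G) is about 2*10^21 times m_e^(G)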

Finally, in their conclusions, they remark that two constants, (c, h^{(G)}), instead of three (c,h,G_N), seem to be enough for physical theories, and this squashes or squeezes the Gamow-Ivanenko-Landau-Okun (GILO) cube into a nice plane. I include the two final figures for completeness, but I urge you to read their whole paper to get a global view before you look at them.

Fig1hgSystem

Fig2hgSystem

Are 2 fundamental constants enough? Are Veneziano (from a completely different viewpoint) and these Brazilian physicists right? Time will tell, but I find these thoughts interesting!

See you soon in another wonderful post about Physmatics and system of units!


LOG#070. Natural Units.

NaturalCup

Happy New Year 2013 to everyone and everywhere!

Let me apologize, first of all, for my absence… I have been busy, trying to find my path and way in my field, and I am still busy, but finally I could not resist giving the blog a new boost… After all, you should know that I have enough material to write many new things.

So, what’s next? I will dedicate some blog posts to discuss a nice topic I began before, talking about a classic paper on the subject here:

https://thespectrumofriemannium.wordpress.com/2012/11/18/log054-barrow-units/

The topic is going to be pretty simple: natural units in Physics.

First of all, let me point out that the choice of any system of units is, a priori, totally conventional. You are free to choose any kind of units for physical magnitudes. Of course, that is not very clever if you have to report data, since everyone should be able to understand what you do and report. Scientists have some definitions and popular systems of units that make the process much simpler than in daily life. Then, we need some general conventions about “units”. Indeed, the traditional wisdom is to use the international system of units, or SI (abbreviated from the French Le Système international d’unités). There, you can find seven fundamental magnitudes and seven fundamental (or “natural”) units:

1) Space: \left[ L\right]=\mbox{meter}=m

2) Time: \left[ T\right]=\mbox{second}=s

3) Mass: \left[ M\right]=\mbox{kilogram}=kg

4) Temperature: \left[ t\right]=\mbox{Kelvin degree}= K

5) Electric intensity: \left[ I\right]=\mbox{ampere}=A

6) Luminous intensity: \left[ I_L\right]=\mbox{candela}=cd

7) Amount of substance: \left[ n\right]=\mbox{mole}=mol(e)

The dependence between these 7 great units and even their definitions can be found here http://en.wikipedia.org/wiki/International_System_of_Units and references therein. I cannot resist showing you the beautiful graph of the 7 wonderful units that this wikipedia article gives for their “interdependence”:

SI_base_unit.svg

In Physics, when you build a radically new theory, it generally has the power to introduce a relevant scale or system of units. In particular, the Special Theory of Relativity and Quantum Mechanics are such theories. General Relativity and Statistical Physics (Statistical Mechanics) also have intrinsic “universal constants”, or, to be more precise, they allow the introduction of some “more convenient” system of units than those you have ever heard of (metric system, SI, MKS, cgs, …). When I spoke about Barrow units (see the previous comment above) in this blog, we realized that dimensionality (both mathematical and “physical”) and fundamental theories are bound to the choice of some “simpler” units. Those “simpler” units are what we usually call “natural units”. I am not a big fan of such terminology. It is a little bit confusing. Maybe it would be more interesting and appropriate to call them “adapted X units” or “scaled X units”, where X denotes “relativistic, quantum,…”. Anyway, the name “natural” is popular and it is likely impossible to change the habit.

In fact, we have to distinguish several “kinds” of natural units. First of all, let me list “fundamental and universal” constants in different theories accepted at current time:

1. Boltzmann constant: k_B.

Essential in Statistical Mechanics, both classical and quantum. It measures “entropy”/”information”. The fundamental equation is:

\boxed{S=k_B\ln \Omega}

It provides a link between the microphysics and the macrophysics ( it is the code behind the equation above). It can be understood somehow as a measure of the “energetic content” of an individual particle or state at a given temperature. Common values for this constant are:

k_B=1.3806488(13)\times 10^{-23}J/K = 8.6173324(78)\times 10^{-5}eV/K

k_B=1.3806488(13)\times 10^{-16}erg/K

Statistical Physics states that there is a minimum unit of entropy or a minimal value of energy at any given temperature. Physical dimensions of this constant are thus entropy, or since E=TS, \left[ k_B\right] =E/t=J/K, where t denotes here dimension of temperature.

2. Speed of light.  c.

From classical electromagnetism:

\boxed{c^2=\dfrac{1}{\varepsilon_0\mu_0}}

The speed of light, according to the postulates of special relativity, is a universal constant. It is frame INDEPENDENT. This fact is at the root of many of the surprising results of special relativity, and it took time to be understood. Moreover, it also connects space and time in a powerful unified formalism, so space and time merge into spacetime, as we do know and we have studied long ago in this blog. The spacetime interval in a D=3+1 dimensional space and two arbitrary events reads:

\Delta s^2=\Delta x^2+\Delta y^2+\Delta z^2-c^2\Delta t^2

In fact, you can observe that “c” is the conversion factor between time-like and space-like coordinates. How big is the speed of light? Well, it is a relatively large number from our common and ordinary perception. It is exactly:

\boxed{c=299,792,458m/s}

although you often take it as c\approx 3\cdot 10^{8}m/s=3\cdot 10^{10}cm/s. However, it is the speed of electromagnetic waves in vacuum, no matter where you are in this Universe/Polyverse. At least, experiments are consistent with such a statement. Moreover, it shows that c is also the conversion factor between energy and momentum, since

\mathbf{P}^2c^2-E^2=-m^2c^4

and c^2 is the conversion factor between rest mass and pure energy because, as everybody knows, E=mc^2! According to the special theory of relativity, normal matter can never exceed the speed of light. Therefore, the speed of light is the maximum velocity in Nature, at least if special relativity holds. Physical dimensions of c are \left[c\right]=LT^{-1}, where L denotes length dimension and T denotes time dimension (please, don’t confuse it with temperature despite the same capital letter for both symbols).

3. Planck’s constant. h or generally rationalized \hbar=h/2\pi.

Planck’s constant (or its rationalized version), is the fundamental universal constant in Quantum Physics (Quantum Mechanics, Quantum Field Theory). It gives

\boxed{E=h\nu=\hbar \omega}

Indeed, quanta are the minimal units of energy. That is, you can not divide further a quantum of light, since it is indivisible by definition! Furthermore, the de Broglie relationship relates momentum and wavelength for any particle, and it emerges from the combination of special relativity and the quantum hypothesis:

\lambda=\dfrac{h}{p}\leftrightarrow \bar{\lambda}=\dfrac{\hbar}{p}

In the case of massive particles, it yields

\lambda=\dfrac{h}{Mv}\leftrightarrow \bar{\lambda}=\dfrac{\hbar}{Mv}

In the case of massless particles (photons, gluons, gravitons,…)

\lambda=\dfrac{hc}{E} or \bar{\lambda}=\dfrac{\hbar c}{E}

Planck’s constant also appears to be essential to the uncertainty principle of Heisenberg:

\boxed{\Delta x \Delta p\geq \hbar/2}

\boxed{\Delta E \Delta t\geq \hbar/2}

\boxed{\Delta A\Delta B\geq \dfrac{1}{2}\vert\langle\left[A,B\right]\rangle\vert} for any pair of observables A, B

Some particularly important values of this constant are:

h=6.62606957(29)\times 10^{-34} J\cdot s
h=4.135667516(91)\times 10^{-15}eV\cdot s
h=6.62606957(29)\times 10^{-27} erg\cdot s
\hbar =1.054571726(47)\times 10^{-34} J\cdot s
\hbar =6.58211928(15)\times 10^{-16} eV\cdot s
\hbar= 1.054571726(47)\times 10^{-27}erg\cdot s

It is also useful to know that
hc=1.98644568\times 10^{-25}J\cdot m
hc=1.23984193 eV\cdot \mu m

or

\hbar c=0.1591549hc or \hbar c=197.327 eV\cdot nm

Planck constant has dimension of \mbox{Energy}\times \mbox{Time}=\mbox{position}\times \mbox{momentum}=ML^2T^{-1}. Physical dimensions of this constant coincide also with angular momentum (spin), i.e., with L=mvr.

4. Gravitational constant. G_N.

Apparently, it is not like the others, but it can also define a particular scale when combined with Special Relativity. Without entering into further details (since I have not discussed General Relativity yet in this blog), we can ask at what radius the escape velocity of a body equals the speed of light:

\dfrac{1}{2}mv^2-G_N\dfrac{Mm}{R}=0 with v=c implies a new length scale where relativistic gravitational effects appear, the so-called Schwarzschild radius R_S:

\boxed{R_S=\dfrac{2G_NM}{c^2}=\dfrac{2G_NM_{\odot}}{c^2}\left(\dfrac{M}{M_{\odot}}\right)\approx 2.95\left(\dfrac{M}{M_{\odot}}\right)km}
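A minimal sketch (Python, rounded SI constants, with the solar mass as an illustrative input) reproducing the “2.95 km per solar mass” figure above:

# Minimal sketch of the Schwarzschild radius R_S = 2 G_N M / c^2.
G_N   = 6.674e-11   # N m^2/kg^2
c     = 2.998e8     # m/s
M_sun = 1.989e30    # kg, solar mass (illustrative input)

def schwarzschild_radius(M):
    """Schwarzschild radius in meters for a mass M in kg."""
    return 2 * G_N * M / c**2

print(schwarzschild_radius(M_sun) / 1e3)       # ~2.95 km for one solar mass
print(schwarzschild_radius(10 * M_sun) / 1e3)  # scales linearly with the mass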

5. Electric fundamental charge. e.

It is generally chosen as fundamental charge the electric charge of the positron (positive charged “electron”). Its value is:

e=1.602176565(35)\times 10^{-19}C

where C denotes Coulomb. Of course, if you know about quarks carrying a fraction of this charge, you could ask why we prefer this one. Really, it is only a question of the history of Science, since electrons (and positrons) were discovered first. Quarks, with one third or two thirds of this amount of elementary charge, were discovered later, but you could define the fundamental unit of charge as a multiple or an integer fraction of this charge. Moreover, as far as we know, electrons are “elementary”/“fundamental” entities, so we can use this charge as the unit and define quark charges in terms of it too. Electric charge is not a fundamental unit in the SI system of units. Charge flow, or electric current, is.

An amazing property of the above 5 constants is that they are “universal”. And, for instance, energy is related with other magnitudes in theories where the above constants are present in a really wonderful and unified manner:

\boxed{E=N\dfrac{k_BT}{2}=Mc^2=TS=Pc=N\dfrac{h\nu}{2}=N\dfrac{\hbar \omega}{2}=\dfrac{R_Sc^4}{2G_N}=\hbar c k=\dfrac{hc}{\lambda}}

Caution: k is not the Boltzmann constant but the wave number.

There is a sixth “fundamental” constant related to electromagnetism, but it is also related to the speed of light, the electric charge and the Planck constant in a very subtle way. Let me introduce it to you too…

6. Coulomb constant. k_C.

This is a second constant related to classical electromagnetism, like the speed of light in vacuum. Coulomb’s constant, the electric force constant, or the electrostatic constant (denoted k_C) is a proportionality factor that takes part in equations relating electric force between  point charges, and indirectly it also appears (depending on your system of units) in expressions for electric fields of charge distributions. Coulomb’s law reads

F_C=k_C\dfrac{Qq}{r^2}

Its experimental value is

k_C=\dfrac{1}{4\pi \varepsilon_0}=\dfrac{c^2\mu_0}{4\pi}=c^2\cdot 10^{-7}H\cdot m^{-1}= 8.9875517873681764\cdot 10^9 Nm^2/C^2

Generally, the Coulomb constant is dropped and it is usually preferred to express everything using the electric permittivity of vacuum \varepsilon_0 and/or numerical factors depending on the number \pi if you choose the Gaussian system of units (read this wikipedia article http://en.wikipedia.org/wiki/Gaussian_system_of_units ), the CGS system, or some hybrid units based on them.

H.E.P. units

High Energy Physicists usually employ units in which velocity is measured in fractions of the speed of light in vacuum, and action/angular momentum in multiples of the Planck constant. These conditions are equivalent to setting

\boxed{c=1_c=1} \boxed{\hbar=1_\hbar=1}

Complementarily, or not, depending on your tastes and preferences, you can also set the Boltzmann’s constant to the unit as well

k_B=1_{k_B}=1

and thus the complete HEP system is defined if you set

\boxed{c=\hbar=k_B=1}

This “natural” system of units still lacks a scale of energy. Then, the electron-volt eV is generally added as an auxiliary quantity defining the reference energy scale, despite the fact that it is not a “natural unit” in the proper sense, because it is defined by a natural property, the electric charge, and the anthropogenic unit of electric potential, the volt. The SI-prefixed multiples of eV are used as well: keV, MeV, GeV, etc. Here, the eV is used as the reference energy quantity, and with the above choice of “elementary/natural units” (or any other auxiliary unit of energy), any quantity can be expressed. For example, a distance of 1 m can be expressed in terms of eV, in natural units, as

1m=\dfrac{1m}{\hbar c}\approx 5\mbox{.}07\cdot 10^{6}eV^{-1}

This system of units has remarkable conversion factors (checked in the short sketch after this list):

A) 1 eV^{-1} of length is equal to 1.97\cdot 10^{-7}m =(1\text{eV}^{-1})\hbar c

B) 1 eV of mass is equal to 1.78\cdot 10^{-36}kg=1\times \dfrac{eV}{c^2}

C) 1 eV^{-1} of time is equal to 6.58\cdot 10^{-16}s=(1\text{eV}^{-1})\hbar

D) 1 eV of temperature is equal to 1.16\cdot 10^4K=1eV/k_B

E) 1 unit of electric charge in the Lorentz-Heaviside system of units is equal to 5.29\cdot 10^{-19}C=e/\sqrt{4\pi\alpha}

F) 1 unit of electric charge in the Gaussian system of units is equal to 1.88\cdot 10^{-18}C=e/\sqrt{\alpha}
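Here is the promised short sketch (Python, rounded SI values; the variable names are mine) reproducing the conversion factors A)-F):

# Reproducing the HEP natural-unit conversion factors A)-F) above from hbar, c, k_B, e, alpha.
import math

hbar  = 1.0546e-34   # J s
c     = 2.998e8      # m/s
k_B   = 1.381e-23    # J/K
e     = 1.602e-19    # C
alpha = 1 / 137.036  # fine-structure constant
eV    = 1.602e-19    # J per electron-volt

print(hbar * c / eV)                         # A) 1 eV^-1 of length ~ 1.97e-7 m
print(eV / c**2)                             # B) 1 eV of mass ~ 1.78e-36 kg
print(hbar / eV)                             # C) 1 eV^-1 of time ~ 6.58e-16 s
print(eV / k_B)                              # D) 1 eV of temperature ~ 1.16e4 K
print(e / math.sqrt(4 * math.pi * alpha))    # E) Lorentz-Heaviside charge unit ~ 5.29e-19 C
print(e / math.sqrt(alpha))                  # F) Gaussian charge unit ~ 1.88e-18 C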

This system of units, therefore, leaves free only the energy scale (generally the electron-volt is chosen) and the electric measure of the fundamental charge. Every other unit can be related to energy/charge. It is truly remarkable that by doing this (turning the above three constants invisible) you can “unify” different magnitudes, since these conventions make them equivalent. For instance, with natural units:

1) Length=Time=1/Energy=1/Mass.

It is due to the equations x=ct, E=Mc^2 and E=hc/\lambda. Setting c and h (or \hbar) equal to one provides

x=t, E=M and E=1/\lambda.

Note that natural units turn invisible the units we set to the unit! That is the key of the procedure. It simplifies equations and expressions. Of course, you must be careful when you reintroduce constants!

2) Energy=Mass=Momentum=Temperature.

It is due to E=k_BT, E=Pc and E=Mc^2 again.

One extra bonus for theoretical physicists is that natural units allow one to build and write proper lagrangians and hamiltonians (certain mathematical operators containing the dynamics of the system encoded in them), or equivalently the action functional, with only the energy or “mass” dimension as a “free parameter”. Let me show how it works.

Natural units in HEP identify length and time dimensions. Thus \left[L\right]=\left[T\right]. Planck’s constant allows us to identify those 2 dimensions with 1/Energy (reciprocals of energy) physical dimensions. Therefore, in HEP units, we have

\boxed{\left[L\right]=\left[T\right]=\left[E\right]^{-1}}

The speed of light identifies energy and mass, and thus we can often hear about the “mass dimension” of a lagrangian in the following sense. HEP units can be thought of as defining “everything” in terms of energy, on purely dimensional grounds. That is, every physical dimension is (in HEP units) defined by a power of energy:

\boxed{\left[E\right]^n}

Thus, we can refer to any magnitude simply saying the power of such physical dimension (or you can think logarithmically to understand it easier if you wish). With this convention, and recalling that energy dimension is mass dimension, we have that

\left[L\right]=\left[T\right]=-1 and \left[E\right]=\left[M\right]=1

Using these arguments, the action functional is a pure dimensionless quantity, and thus, in D=4 spacetime dimensions, lagrangian densities must have dimension 4 (or dimension D in a general spacetime).

\displaystyle{S=\int d^4x \mathcal{L}\rightarrow \left[\mathcal{L}\right]=4}

\displaystyle{S=\int d^Dx \mathcal{L}\rightarrow \left[\mathcal{L}\right]=D}

In D=4 spacetime dimensions, it can be easily shown that

\left[\partial_\mu\right]=\left[\Phi\right]=\left[A^\mu\right]=1

\left[\Psi_D\right]=\left[\Psi_M\right]=\left[\chi\right]=\left[\eta\right]=\dfrac{3}{2}

where \Phi is a scalar field, A^\mu is a vector field (like the electromagnetic or non-abelian vector gauge fields), and \Psi_D, \Psi_M are a Dirac spinor and a Majorana spinor, while \chi, \eta are Weyl spinors (of different chiralities). Supersymmetry (or SUSY) allows for anticommuting c-numbers (or Grassmann numbers) and it forces us to introduce auxiliary parameters with mass dimension -1/2. They are the so-called SUSY transformation parameters \zeta_{SUSY}=\epsilon. There are some speculative spinors called ELKO fields that could be non-standard spinor fields with mass dimension one! But that is an advanced topic I am not going to discuss here today. In general D spacetime dimensions, a scalar (or vector) field has mass dimension (D-2)/2, and a spinor/fermionic field in D dimensions generally has mass dimension (D-1)/2 (excepting the auxiliary SUSY grassmannian parameters and the exotic idea of ELKO fields). This dimensional analysis is very useful when theoretical physicists build up interacting lagrangians, since we can guess the structure of the interactions by looking, on purely dimensional grounds, at every possible operator entering the action/lagrangian density! In summary, therefore, for any D (see the tiny helper sketch after the boxed formulas):

\boxed{\left[\Phi\right]=\left[A_\mu\right]=\dfrac{D-2}{2}\equiv E^{\frac{D-2}{2}}=M^{\frac{D-2}{2}}}

\boxed{\left[\Psi\right]=\dfrac{D-1}{2}\equiv E^{\frac{D-1}{2}}=M^{\frac{D-1}{2}}}
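As promised, here is a tiny helper sketch (Python; the function names are mine) implementing this mass-dimension counting for bosonic and standard fermionic fields, with the caveats about SUSY parameters and ELKO fields mentioned above:

# Mass-dimension counting in D spacetime dimensions, as in the boxed formulas above:
# bosonic fields have dimension (D-2)/2, standard fermionic fields (D-1)/2.
from fractions import Fraction

def boson_dim(D):
    """Mass dimension of a scalar or vector field in D dimensions."""
    return Fraction(D - 2, 2)

def fermion_dim(D):
    """Mass dimension of a standard (non-ELKO) spinor field in D dimensions."""
    return Fraction(D - 1, 2)

for D in (3, 4, 10, 11):
    print(D, boson_dim(D), fermion_dim(D))   # D=4 gives 1 and 3/2, as in the text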

Remark (for QFT experts only): Don’t confuse mass dimension with the final transverse polarization degrees or “degrees of freedom” of a particular field, i.e., “components” minus “gauge constraints”. E.g.: a gauge vector field has D-2 degrees of freedom in D dimensions. They are different concepts (although both closely related to the spacetime dimension where the field “lives”).

In summary:

i) HEP units are based on QM (Quantum Mechanics), SR (Special Relativity) and Statistical Mechanics (Entropy and Thermodynamics).

ii) HEP units need to introduce a free energy scale, and it generally drives us to use the eV or electron-volt as auxiliary energy scale.

iii) HEP units are useful to dimensional analysis of lagrangians (and hamiltonians) up to “mass dimension”.

Stoney Units

In Physics, the Stoney units form an alternative set of natural units named after the Irish physicist George Johnstone Stoney, who first introduced them, as we know them today, in 1881. However, he had presented the idea before that date, in 1874, in a lecture entitled “On the Physical Units of Nature” delivered to the British Association. They are the first historical example of natural units and, somehow, of a “unification scale”. Stoney units are rarely used for calculations in modern physics, but they are of historical interest, and some people, like Wilczek, have written about them (see, e.g., http://arxiv.org/abs/0708.4361). These units of measurement were designed so that certain fundamental physical constants are taken as the reference basis without the Planck scale being explicit, quite a remarkable fact! The set of constants that Stoney used as base units is the following:

A) Electric charge, e=1_e.

B) Speed of light in vacuum, c=1_c.

C) Gravitational constant, G_N=1_{G_N}.

D) The Reciprocal of Coulomb constant, 1/k_C=4\pi \varepsilon_0=1_{k_C^{-1}}=1_{4\pi \varepsilon_0}.

Stoney units are built when you set these four constants to the unit, i.e., equivalently, the Stoney System of Units (S) is determined by the assignments:

\boxed{e=c=G_N=4\pi\varepsilon_0=1}

Interestingly, in this system of units, the Planck constant is not equal to the unit and it is not “fundamental” (Wilczek remarked this fact here ) but:

\hbar=\dfrac{1}{\alpha}\approx 137.035999679

Today, Planck units are more popular than Stoney units in modern physics, and there are even many physicists who don’t know about the Stoney units! In fact, Stoney was one of the first scientists to understand that electric charge was quantized; from this quantization he deduced the units that are now named after him.

The Stoney length and the Stoney energy are collectively called the Stoney scale, and they are not far from the Planck length and the Planck energy, the Planck scale. The Stoney scale and the Planck scale are the length and energy scales at which quantum processes and gravity occur together. At these scales, a unified theory of physics is thus likely required. The only notable attempt to construct such a theory from the Stoney scale was that of H. Weyl, who associated a gravitational unit of charge with the Stoney length and who appears to have inspired Dirac’s fascination with the large number hypothesis. Since then, the Stoney scale has been largely neglected in the development of modern physics, although it is occasionally discussed to this day. Wilczek likes to point out that, in Stoney units, QM would be an emergent phenomenon/theory, since the Planck constant would not be present directly but only as a combination of different constants. On the other hand, the Planck scale is valid for all known interactions, and does not give prominence to the electromagnetic interaction, as the Stoney scale does. That is, in Stoney units, gravitation and electromagnetism are on an equal footing, unlike in Planck units, where only the speed of light is used and there is no further connection to electromagnetism, at least not in a clean way like in the Stoney units. Be aware that sometimes, though rarely, Planck units are referred to as Planck-Stoney units.

What are the most interesting Stoney system values? Here you are the most remarkable results:

1) Stoney Length, L_S.

\boxed{L_S=\sqrt{\dfrac{G_Ne^2}{(4\pi\varepsilon_0)c^4}}\approx 1.38\cdot 10^{-36}m}

2) Stoney Mass, M_S.

\boxed{M_S=\sqrt{\dfrac{e^2}{G_N(4\pi\varepsilon_0)}}\approx 1.86\cdot 10^{-9}kg}

3) Stoney Energy, E_S.

\boxed{E_S=M_Sc^2=\sqrt{\dfrac{e^2c^4}{G_N(4\pi\varepsilon_0)}}\approx 1.67\cdot 10^8 J=1.04\cdot 10^{18}GeV}

4) Stoney Time, t_S.

\boxed{t_S=\sqrt{\dfrac{G_Ne^2}{c^6(4\pi\varepsilon_0)}}\approx 4.61\cdot 10^{-45}s}

5) Stoney Charge, Q_S.

\boxed{Q_S=e\approx 1.60\cdot 10^{-19}C}

6) Stoney Temperature, T_S.

\boxed{T_S=E_S/k_B=\sqrt{\dfrac{e^2c^4}{G_Nk_B^2(4\pi\varepsilon_0)}}\approx 1.21\cdot 10^{31}K}
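A minimal numerical sketch (Python, rounded SI constants) reproducing the Stoney scale quantities listed above:

# Stoney scale quantities from SI constants (rounded values).
import math

c      = 2.998e8      # m/s
G_N    = 6.674e-11    # N m^2/kg^2
e      = 1.602e-19    # C
k_B    = 1.381e-23    # J/K
eps4pi = 4 * math.pi * 8.854e-12   # 4*pi*epsilon_0

L_S = math.sqrt(G_N * e**2 / (eps4pi * c**4))   # Stoney length ~ 1.38e-36 m
M_S = math.sqrt(e**2 / (G_N * eps4pi))          # Stoney mass   ~ 1.86e-9 kg
E_S = M_S * c**2                                # Stoney energy ~ 1.67e8 J
t_S = math.sqrt(G_N * e**2 / (c**6 * eps4pi))   # Stoney time   ~ 4.6e-45 s
T_S = E_S / k_B                                 # Stoney temperature ~ 1.2e31 K

print(L_S, M_S, E_S, t_S, T_S)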

Planck Units

The reference constants to this natural system of units (generally denoted by P) are the following 4 constants:

1) Gravitational constant. G_N

2) Speed of light. c.

3) Planck constant or rationalized Planck constant. \hbar.

4) Boltzmann constant. k_B.

The Planck units are got when you set these 4 constants to the unit, i.e.,

\boxed{G_N=c=\hbar=k_B=1}

It is often said that Planck units are a system of natural units that is not defined in terms of properties of any prototype, physical object, or even features of any fundamental particle. They only refer to the basic structure of the laws of physics: c and G are part of the structure of classical spacetime in the relativistic theory of gravitation, also known as general relativity, and ℏ captures the relationship between energy and frequency which is at the foundation of elementary quantum mechanics. This is the reason why Planck units are particularly useful and common in theories of quantum gravity, including string theory and loop quantum gravity.

This system defines some limit magnitudes, as follows:

1) Planck Length, L_P.

\boxed{L_P=\sqrt{\dfrac{G_N\hbar}{c^3}}\approx 1.616\cdot 10^{-35}m}

2) Planck Time, t_P.

\boxed{t_P=L_P/c=\sqrt{\dfrac{G_N\hbar}{c^5}}\approx 5.391\cdot 10^{-44}s}

3) Planck Mass, M_P.

\boxed{M_P=\sqrt{\dfrac{\hbar c}{G_N}}\approx 2.176\cdot 10^{-8}kg}

4) Planck Energy, E_P.

\boxed{E_P=M_Pc^2=\sqrt{\dfrac{\hbar c^5}{G_N}}\approx 1.96\cdot 10^9J=1.22\cdot 10^{19}GeV}

5) Planck charge, Q_P.

In Lorentz-Heaviside electromagnetic units

\boxed{Q_P=\sqrt{\hbar c \varepsilon_0}=\dfrac{e}{\sqrt{4\pi\alpha}}\approx 5.291\cdot 10^{-19}C}

In Gaussian electromagnetic units

\boxed{Q_P=\sqrt{\hbar c (4\pi\varepsilon_0)}=\dfrac{e}{\sqrt{\alpha}}\approx 1.876\cdot 10^{-18}C}

6) Planck temperature, T_P.

\boxed{T_P=E_P/k_B=\sqrt{\dfrac{\hbar c^5}{G_Nk_B^2}}\approx 1.417\cdot 10^{32}K}
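A minimal numerical sketch (Python, rounded SI constants) reproducing the basic Planck quantities listed above:

# Basic Planck quantities from SI constants (rounded values).
import math

c    = 2.998e8      # m/s
G_N  = 6.674e-11    # N m^2/kg^2
hbar = 1.0546e-34   # J s
k_B  = 1.381e-23    # J/K

L_P = math.sqrt(hbar * G_N / c**3)   # Planck length ~ 1.616e-35 m
t_P = L_P / c                        # Planck time   ~ 5.39e-44 s
M_P = math.sqrt(hbar * c / G_N)      # Planck mass   ~ 2.18e-8 kg
E_P = M_P * c**2                     # Planck energy ~ 1.96e9 J
T_P = E_P / k_B                      # Planck temperature ~ 1.42e32 K

print(L_P, t_P, M_P, E_P, E_P / 1.602e-10, T_P)   # 1.602e-10 J = 1 GeV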

From these “fundamental” magnitudes we can build many derived quantities in the Planck System:

1) Planck area.

A_P=L_P^2=\dfrac{\hbar G_N}{c^3}\approx 2.612\cdot 10^{-70}m^2

2) Planck volume.

V_P=L_P^3=\left(\dfrac{\hbar G_N}{c^3}\right)^{3/2}\approx 4.22\cdot 10^{-105}m^3

3) Planck momentum.

P_P=M_Pc=\sqrt{\dfrac{\hbar c^3}{G_N}}\approx 6.52485 kgm/s

A relatively “small” momentum!

4) Planck force.

F_P=E_P/L_P=\dfrac{c^4}{G_N }\approx 1.21\cdot 10^{44}N

It is independent from Planck constant! Moreover, the Planck acceleration is

a_P=F_P/M_P=\sqrt{\dfrac{c^7}{G_N\hbar}}\approx 5.561\cdot 10^{51}m/s^2

5) Planck Power.

\mathcal{P}_P=\dfrac{c^5}{G_N}\approx 3.628\cdot 10^{52}W

6) Planck density.

\rho_P=\dfrac{c^5}{\hbar G_N^2}\approx 5.155\cdot 10^{96}kg/m^3

Planck density energy would be equal to

\rho_P c^2=\dfrac{c^7}{\hbar G_N^2}\approx 4.6331\cdot 10^{113}J/m^3

7) Planck angular frequency.

\omega_P=\sqrt{\dfrac{c^5}{\hbar G_N}}\approx 1.85487\cdot 10^{43}Hz

8) Planck pressure.

p_P=\dfrac{F_P}{A_P}=\dfrac{c^7}{G_N^2\hbar}=\rho_P c^2\approx 4.6331\cdot 10^{113}Pa

Note that Planck pressure IS the Planck density energy!

9) Planck current.

I_P=Q_P/t_P=\sqrt{\dfrac{4\pi\varepsilon_0 c^6}{G_N}}\approx 3.4789\cdot 10^{25}A

10) Planck voltage.

v_P=E_P/Q_P=\sqrt{\dfrac{c^4}{4\pi\varepsilon_0 G_N}}\approx 1.04295\cdot 10^{27}V

11) Planck impedance.

Z_P=v_P/I_P=\dfrac{\hbar}{Q_P^2}=\dfrac{1}{4\pi \varepsilon_0 c}\approx 29.979\Omega

A relatively small impedance!

12) Planck capacitance.

C_P=Q_P/v_P=4\pi\varepsilon_0\sqrt{\dfrac{\hbar G_N}{ c^3}} \approx 1.798\cdot 10^{-45}F

Interestingly, it depends on the gravitational constant!

Some Planck units are suitable for measuring quantities that are familiar from daily experience. In particular:

1 Planck mass is about 22 micrograms.

1 Planck momentum is about 6.5 kg m/s

1 Planck energy is about 500kWh.

1 Planck charge is about 11 elementary (electronic) charges.

1 Planck impedance is almost 30 ohms.

Moreover:

i) A speed of 1 Planck length per Planck time is the speed of light, the maximum possible speed in special relativity.

ii) To understand the Planck Era and “before” (if that makes sense), supposing QM still holds there, we would need a quantum theory of gravity. There is no such theory right now, though. Therefore, we have to wait to see whether these ideas are right or not.

iii) It is believed that at the Planck temperature the symmetry of the Universe was “perfect” in the sense that the four fundamental forces were somehow “unified”. We have only some vague notions about what that theory of everything (TOE) would look like.

The physical dimensions of the known Universe in terms of Planck units are “dramatic”:

i) Age of the Universe is about t_U=8.0\cdot 10^{60} t_P.

ii) Diameter of the observable Universe is about d_U=5.4\cdot 10^{61}L_P

iii) Current temperature of the Universe is about 1.9 \cdot 10^{-32}T_P

iv) The observed cosmological constant is about 5.6\cdot 10^{-122}t_P^{-2}

v) The mass of the (observable) Universe is about 10^{60}M_P.

vi) The Hubble constant is 71km/s/Mpc\approx 1.23\cdot 10^{-61}t_P^{-1}

Schrödinger Units

The Schrödinger units do not explicitly contain c, the speed of light in vacuum. However, c is hidden inside the permittivity of free space (the electric constant or vacuum permittivity), since \varepsilon_0=1/(\mu_0c^2). So, even though the speed of light is not apparent in the Schrödinger units, it is buried within their terms and therefore influences their numerical values. The essence of the Schrödinger units is the following set of constants:

A) Gravitational constant G_N.

B) Planck constant \hbar.

C) Boltzmann constant k_B.

D) Coulomb constant or equivalently the electric permitivity of free space/vacuum k_C=1/4\pi\varepsilon_0.

E) The electric charge of the positron e.

In this system (denoted \psi) we have

\boxed{e=G_N=\hbar =k_B =k_C =1}

1) Schrödinger Length L_{Sch}.

L_\psi=\sqrt{\dfrac{\hbar^4 G_N(4\pi\varepsilon_0)^3}{e^6}}\approx 2.593\cdot 10^{-32}m

2) Schrödinger time t_{Sch}.

t_\psi=\sqrt{\dfrac{\hbar^6 G_N(4\pi\varepsilon_0)^5}{e^{10}}}\approx 1.185\cdot 10^{-38}s

3) Schrödinger mass M_{Sch}.

M_\psi=\sqrt{\dfrac{e^2}{G_N(4\pi\varepsilon_0)}}\approx 1.859\cdot 10^{-9}kg

4) Schrödinger energy E_{Sch}.

E_\psi=\sqrt{\dfrac{e^{10}}{\hbar^4(4\pi\varepsilon_0)^5G_N}}\approx 8890 J=5.55\cdot 10^{13}GeV

5) Schrödinger charge Q_{Sch}.

Q_\psi =e=1.602\cdot 10^{-19}C

6) Schrödinger temperature T_{Sch}.

T_\psi=E_\psi/k_B=\sqrt{\dfrac{e^{10}}{\hbar^4(4\pi\varepsilon_0)^5G_Nk_B^2}}\approx 6.445\cdot 10^{26}K
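Note that the Schrödinger scale is just the Planck scale rescaled by powers of \alpha (e.g., L_\psi=L_P\alpha^{-3/2}, t_\psi=t_P\alpha^{-5/2}, M_\psi=\sqrt{\alpha}M_P, E_\psi=\alpha^{5/2}E_P). A small sketch (Python), reusing the Planck values computed earlier, checks the numbers quoted above:

# Schrödinger scale as the Planck scale rescaled by powers of alpha.
import math

alpha = 1 / 137.036
L_P, t_P, M_P, E_P = 1.616e-35, 5.391e-44, 2.176e-8, 1.956e9   # SI values from the Planck section

print(L_P / alpha**1.5)          # ~2.59e-32 m
print(t_P / alpha**2.5)          # ~1.19e-38 s
print(math.sqrt(alpha) * M_P)    # ~1.86e-9 kg (it coincides with the Stoney mass)
print(alpha**2.5 * E_P)          # ~8.9e3 J ~ 5.6e13 GeV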

Atomic Units

There are two alternative systems of atomic units, closely related:

1) Hartree atomic units: 

\boxed{e=m_e=\hbar=k_B=1} and \boxed{c=\alpha^{-1}}

2) Rydberg atomic units:

\boxed{\dfrac{e}{\sqrt{2}}=2m_e=\hbar=k_B=1} and \boxed{c=2\alpha^{-1}}

There, m_e is the electron mass and \alpha is the electromagnetic fine structure constant. These units are designed to simplify atomic and molecular physics and chemistry, especially for quantities related to the hydrogen atom, and they are widely used in these fields. The Hartree units were first proposed by Douglas Hartree, and they are more common than the Rydberg units.

The units are adapted to characterize the behavior of an electron in the ground state of a hydrogen atom. For example, using the Hartree convention, in the Bohr model of the hydrogen atom an electron in the ground state has orbital velocity = 1, orbital radius = 1, angular momentum = 1, ionization energy equal to 1/2, and so on.

Some quantities in the Hartree system of units are:

1) Atomic Length (also called the Bohr radius):

L_A=a_0=\dfrac{\hbar^2 (4\pi\varepsilon_0)}{m_ee^2}\approx 5.292\cdot 10^{-11}m=0.5292\AA

2) Atomic Time:

t_A=\dfrac{\hbar^3(4\pi\varepsilon_0)^2}{m_ee^4}\approx 2.419\cdot 10^{-17}s

3) Atomic Mass:

M_A=m_e\approx 9.109\cdot 10^{-31}kg

4) Atomic Energy:

E_A=\alpha^2m_ec^2=\dfrac{m_ee^4}{\hbar^2(4\pi\varepsilon_0)^2} \approx 4.36\cdot 10^{ -18}J=27.2eV=2\times(13.6)eV=2Ry

5) Atomic electric Charge:

Q_A=q_e=e\approx 1.602\cdot 10^{-19}C

6) Atomic temperature:

T_A=E_A/k_B=\dfrac{m_ee^4}{\hbar^2(4\pi\varepsilon_0)^2k_B}\approx 3.158\cdot 10^5K

The fundamental unit of energy is called the Hartree energy in the Hartree system and the Rydberg energy in the Rydberg system. They differ by a factor of 2. The speed of light is relatively large in atomic units (137 in Hartree or 274 in Rydberg), which comes from the fact that an electron in hydrogen tends to move much more slowly than the speed of light. The gravitational constant is extremely small in atomic units (about 2\cdot 10^{-43}), which comes from the fact that the gravitational force between two electrons is far weaker than the Coulomb force. The unit of length, L_A, is the well-known Bohr radius, a_0.

The values of c and e shown above imply that e=\sqrt{\alpha \hbar c}, as in Gaussian units, not Lorentz-Heaviside units. However, hybrids of the Gaussian and Lorentz–Heaviside units are sometimes used, leading to inconsistent conventions for magnetism-related units. Be aware of these issues!
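A minimal numerical sketch (Python, rounded SI constants) reproducing the Hartree atomic units quoted above:

# Hartree atomic units from SI constants (rounded values).
import math

hbar   = 1.0546e-34   # J s
m_e    = 9.109e-31    # kg
e      = 1.602e-19    # C
eps4pi = 4 * math.pi * 8.854e-12   # 4*pi*epsilon_0
k_B    = 1.381e-23    # J/K
eV     = 1.602e-19    # J per electron-volt

a_0 = hbar**2 * eps4pi / (m_e * e**2)      # Bohr radius ~ 5.29e-11 m
t_A = hbar**3 * eps4pi**2 / (m_e * e**4)   # atomic time ~ 2.42e-17 s
E_A = m_e * e**4 / (hbar**2 * eps4pi**2)   # Hartree energy ~ 4.36e-18 J ~ 27.2 eV
T_A = E_A / k_B                            # atomic temperature ~ 3.16e5 K

print(a_0, t_A, E_A, E_A / eV, T_A)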

QCD Units

In the framework of Quantum Chromodynamics, a quantum field theory (QFT) we know as QCD, we can define the QCD system of units based on:

1) QCD Length L_{QCD}.

L_{QCD}=\dfrac{\hbar}{m_pc}\approx 2.103\cdot 10^{-16}m

and where m_p is the proton mass (please, don’t confuse it with the Planck mass M_P).

2) QCD Time t_{QCD}.

t_{QCD}=\dfrac{\hbar}{m_pc^2}\approx 7.015\cdot 10^{-25}s

3) QCD Mass M_{QCD}.

M_{QCD}=m_p\approx 1.673\cdot 10^{-27}kg

4) QCD Energy E_{QCD}.

E_{QCD}=M_{QCD}c^2=m_pc^2\approx 1.504\cdot 10^{-10}J=938.6MeV=0.9386GeV

Thus, QCD energy is about 1 GeV!

5) QCD Temperature T_{QCD}.

T_{QCD}=E_{QCD}/k_B=\dfrac{m_pc^2}{k_B}\approx 1.089\cdot 10^{13}K

6) QCD Charge Q_{QCD}.

In Heaviside-Lorentz units:

Q_{QCD}=\dfrac{1}{\sqrt{4\pi\alpha}}e\approx 5.292\cdot 10^{-19}C

In Gaussian units:

Q_{QCD}=\dfrac{1}{\sqrt{\alpha}}e\approx 1.876\cdot 10^{-18}C

Geometrized Units

The geometrized unit system, used in general relativity, is not a completely defined system. In this system, the base physical units are chosen so that the speed of light and the gravitational constant are set equal to unity. Other units may be treated however desired. By normalizing appropriate other units, geometrized units become identical to Planck units. That is, we set:

\boxed{G_N=c=1}

and the remaining constants are set to the unit according to your needs and tastes.

Conversion Factors

This table from wikipedia is very useful:

ConversionTableNatUnits

where:

i) \alpha is the fine-structure constant, approximately 0.007297.

ii) \alpha_G=\dfrac{m_e^2}{M_P^2}\approx 1.752\cdot 10^{-45} is the gravitational fine-structure constant.

Some conversion factors for geometrized units are also available:

Conversion from kg, s, C, K into m:

G_N/c^2  [m/kg]

c [m/s]

\sqrt{G_N/(4\pi\varepsilon_0)}/c^2 [m/C]

G_Nk_B/c^4 [m/K]

Conversion from m, s, C, K into kg:

c^2/G_N [kg/m]

c^3/G_N [kg/s]

1/\sqrt{G_N4\pi\varepsilon_0} [kg/C]

k_B/c^2[kg/K]

Conversion from m, kg, C, K into s

1/c [s/m]

G_N/c^3[s/kg]

\sqrt{\dfrac{G_N}{4\pi\varepsilon_0}}/c^3 [s/C]

G_Nk_B/c^5 [s/K]

Conversion from m, kg, s, K into C

c^2/\sqrt{\dfrac{G_N}{4\pi\varepsilon_0}}[C/m]

(G_N4\pi\varepsilon_0)^{1/2} [C/kg]

c^3/(G_N/(4\pi\varepsilon_0))^{1/2}[C/s]

k_B\sqrt{G_N4\pi\varepsilon_0}/c^2   [C/K]

Conversion from m, kg, s, C into K

c^4/(G_Nk_B)[K/m]

c^2/k_B [K/kg]

c^5/(G_Nk_B) [K/s]

c^2/(k_B\sqrt{G_N4\pi\varepsilon_0}) [K/C]

Or you can read off factors from this table as well:

GeosystemConvFactors1

and

GeomSystemEMfactors
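As a quick illustration of how these conversion factors work (a small Python sketch; the choice of the solar mass as the example is mine), here is a mass expressed in meters and in seconds:

# Applying the geometrized-unit conversion factors above:
# a mass in kg becomes a length when multiplied by G_N/c^2, and a time when multiplied by G_N/c^3.
G_N   = 6.674e-11   # N m^2/kg^2
c     = 2.998e8     # m/s
M_sun = 1.989e30    # kg, solar mass (illustrative example)

print(M_sun * G_N / c**2)   # solar mass in meters  ~ 1.48e3 m (half its Schwarzschild radius)
print(M_sun * G_N / c**3)   # solar mass in seconds ~ 4.9e-6 s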

Advantages and Disadvantages of Natural Units

Natural units have some advantages (“Pro”):

1) Equations and mathematical expressions are simpler in Natural Units.

2) Natural units allow for the match between apparently different physical magnitudes.

3) Some natural units are independent from “prototypes” or “external patterns” beyond some clever and trivial conventions.

4) They can help to unify different physical concepts.

However, natural units have also some disadvantages (“Cons”):

1) They generally provide less precise measurements or quantities.

2) They can be ill-defined/redundant and carry some ambiguity. This is also caused by the fact that some natural units differ by numerical factors of pi and/or pure numbers, so they cannot help us to understand the origin of some pure numbers (dimensionless prefactors) in general.

Moreover, you must not forget that natural units are “human” in the sense that you can adapt them to your own needs, and indeed you can create your own particular system of natural units! However, having said this, you can understand the main key point: fundamental theories are what finally hint at which “numbers”/“magnitudes” determine a system of “natural units”.

Remark: the smart designer of a system of natural units must choose a few of these constants to normalize (set equal to 1). It is not possible to normalize just any set of constants. For example, the mass of a proton and the mass of an electron cannot both be normalized: if the mass of an electron is defined to be 1, then the mass of a proton has to be \approx 6\pi^5\approx 1836. In a less trivial example, the fine-structure constant, α≈1/137, cannot be set to 1, because it is a dimensionless number. The fine-structure constant is related to other fundamental constants through a very well-known equation:

\alpha=\dfrac{k_Ce^2}{\hbar c}

where k_C is the Coulomb constant, e is the positron electric charge (elementary charge), ℏ is the reduced Planck constant, and c is again the speed of light in vacuum. It is believed that in a normal theory it is not possible to simultaneously normalize all four of the constants c, ℏ, e, and k_C.
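A quick numerical check of this relation (Python, rounded SI constants):

# Check of alpha = k_C e^2 / (hbar c) ~ 1/137.
k_C  = 8.988e9      # N m^2/C^2, Coulomb constant
e    = 1.602e-19    # C
hbar = 1.0546e-34   # J s
c    = 2.998e8      # m/s

alpha = k_C * e**2 / (hbar * c)
print(alpha, 1 / alpha)   # ~0.00730, ~137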

Fritzsch-Xing  plot

Fritzsch and Xing have developed a very beautiful plot of the fundamental constants in Nature (those coming from gravitation and the Standard Model). I cannot avoid including it here in the 2 versions I have seen. The first one is “serious”, with 29 “fundamental constants”:

Fritsch-XingPlotFundConstants

However, I prefer the “fun version” of this plot. This second version is very cool and it includes 28 “fundamental constants”:

fritz-xingPlotFromTalk

The Okun Cube

Long ago, L.B. Okun provided a very interesting way to think about the Planck units and their meaning, at least from the current knowledge of physics! He imagined a cube in 3d with 3 different axes. Planck units are defined, as we have seen above, by the 3 constants c, \hbar, G_N plus the Boltzmann constant. Imagine we arrange one axis for c-units, one axis for \hbar-units and one more for G_N-units. The result is a wonderful cube:

OkunCube2

Equivalently, it is sometimes drawn as the following sketch (note that the Planck constant is NOT rationalized in the next cube, but that does not matter for this graphical representation):

Okuncube

Classical physics (CP) corresponds to the vanishing of the 3 constants, i.e., to the origin (0,0,0).

Newtonian mechanics (NM) , or more precisely newtonian gravity plus classical mechanics, corresponds to the “point” (0,0,G_N).

Special relativity (SR) corresponds to the point (0,1/c,0), i.e., to “points” where relativistic effects are important due to velocities close to the speed of light.

Quantum mechanics (QM) corresponds to the point (h,0,0), i.e., to “points” where the action/angular momentum fundamental unit is important, like the photoelectric effect or the blackbody radiation.

Quantum Field Theory (QFT) corresponds to the point (h,1/c,0), i.e., to “points” where both SR and QM are important, that is, to situations where you can create/annihilate pairs, the “particle” number is not conserved (but the particle minus antiparticle number IS), and subatomic particles manifest themselves simultaneously with quantum and relativistic features.

Quantum Gravity (QG) would correspond to the point (h,0,G_N) where gravity is quantum itself. We have no theory of quantum gravity yet, but some speculative trials are effective versions of (super)-string theory/M-theory, loop quantum gravity (LQG) and some others.

Finally, the Theory Of Everything (TOE) would be the theory at the last free corner, the one arising at the vertex (h,1/c,G_N). Superstring theories/M-theory are the only serious candidates for a TOE so far. LQG does not generally introduce matter fields (some recent attempts are pushing in that direction, though), so it is not a TOE candidate right now.

Some final remarks and questions

1) Are fundamental “constants” really constant? Do they vary with energy or time?

2) How many fundamental constants are there? This question has provoked lots of discussions. One of the most famous was this one:

http://arxiv.org/abs/physics/0110060

The trialogue (or dialogue, if you are precise with words) above discussed the opinions of 3 eminent physicists about the number of fundamental constants: Michael Duff suggested zero, Gabriele Veneziano argued that there are only 2 fundamental constants, while L.B. Okun defended that there are 3 fundamental constants.

3) Should the cosmological constant be included as a new fundamental constant? The cosmological constant behaves as a constant in current cosmological measurements and fits to cosmological data, but is it truly constant? It seems to be… but we are not sure. Quintessence models (some of them related to inflationary Universes) suggest that it could vary very slowly on cosmological scales. However, the data strongly suggest that

P_\Lambda=-\rho_\Lambda c^2

It is simple, but the ultimate nature of such a “fluid” is not understood, because we don’t know what kind of “stuff” (either particles or fields) can make the cosmological constant so tiny and yet so abundant (about 72% of the Universe is “dark energy”/cosmological constant), as it seems to be. We do know it cannot be made of “known particles”. Dark energy behaves as a repulsive force, some kind of pressure/antigravitation on cosmological scales. We suspect it could be some kind of scalar field, but there are many other alternatives that “mimic” a cosmological constant. If we identify the cosmological constant with the vacuum energy, we obtain about 122 orders of magnitude of mismatch between theory and observations. A really bad “prediction”, one of the worst predictions in the history of physics!

Be natural and stay tuned!


LOG#057. Naturalness problems.

yogurt_berries

In this short blog post, I am going to list some of the greatest “naturalness” problems in Physics. It has nothing to do with some delicious natural dishes I like, but there is a natural beauty and sweetness related to naturalness problems in Physics. In fact, they include some hierarchy problems and additional problems related to stunning free values of parameters in our theories.

Naturalness problems arise when the “naturally expected” property of some free parameters or fundamental “constants”, namely to appear as quantities of order one, is violated, and those parameters or constants appear instead as very large or very small quantities. That is, naturalness problems are problems of the tuning of “scales” of length, energy, field strength, … A value of 0.99 or 1.1, or even 0.7 and 2.3, is “more natural” than, e.g., 100000, 10^{-4},10^{122}, 10^{23},\ldots Equivalently, imagine that the value of every fundamental and measurable physical quantity X lies in the real interval \left[ 0,\infty\right). Then, 1 (or something very close to this value) is a “natural” value of the parameters, while the two extrema 0 or \infty are “unnatural”. As we do know, in Physics, zero values are usually explained by some “fundamental symmetry”, while extremely large parameters or even \infty can be shown to be “unphysical” or “unnatural”. In fact, renormalization in QFT was invented to avoid quantities that are “infinite” at first sight, and regularization provides some prescriptions to assign “natural numbers” to quantities that are formally ill-defined or infinite. However, naturalness goes beyond those last comments, and it arises in very different scenarios and physical theories. It is quite remarkable that naturalness can be phrased in terms of numbers/constants/parameters around 3 of the most important “numbers” in Mathematics:

(0, 1, \infty)

REMEMBER: Naturalness of X thus means being 1 or close to it, while values approaching 0 or \infty are unnatural. Therefore, if some day you hear a physicist talking/speaking/lecturing about “naturalness”, remember the triple (0,1,\infty) and then assign “some magnitude/constant/parameter” a quantity close to one of those numbers. If it approaches 1, the parameter is natural, and it is unnatural if it approaches either of the other two numbers, zero or infinity!

I have never seen a systematic classification of naturalness problems into types. I am going to do it here today. We could classify naturalness problems into:

1st. Hierarchy problems. They are naturalness problems related to the mass/energy spectrum or the energy scales of interactions and fundamental particles.

2nd. Nullity/Smallness problems. These are naturalness problems related to free parameters which are, surprisingly, close to zero/null value, even when we know of no deep reason why that should happen.

3rd. Large number problems (or hypotheses). This class of problems can be equivalently thought of as the reciprocal of nullity problems, but they arise naturally in cosmological contexts, when we consider a large number of particles (e.g., in “statistical physics”), or when we compare two theories living in very different “parameter spaces”. Dirac pioneered this class of hypotheses when he noticed some large-number coincidences relating quantities appearing in particle physics and cosmology. This Dirac large number hypothesis is an old example of this kind of naturalness problem.

4th. Coincidence problems. This 4th type of problem is related to why some apparently unrelated parameters with the same dimensions turn out to be of similar order of magnitude.

The following list of concrete naturalness problems is not going to be complete, but it can serve as a guide of what theoretical physicists are trying to understand better:

1. The little hierarchy problem. From the phenomenon called neutrino oscillations (NO) and neutrino oscillation experiments (NOSEX), we can know the differences between the squared masses of neutrinos. Furthermore, cosmological measurements allow us to put tight bounds on the total mass (energy) of light neutrinos in the Universe. The most conservative estimations give m_\nu \leq 10 eV, and even m_\nu \sim 1eV as an upper bound is quite likely to be true. On the other hand, NOSEX seems to say that there are two mass-squared differences, \Delta m^2_1\sim 10^{-3}eV^2 and \Delta m^2_2\sim 10^{-5}eV^2. However, we don’t know yet what kind of spectrum neutrinos have (normal, inverted or quasi-degenerate). Taking a neutrino mass of about 1 meV as a reference, the little hierarchy problem is the question of why neutrino masses are so light when compared with those of the remaining leptons, quarks and gauge bosons (excepting, of course, the gluon and photon, massless due to gauge invariance).

Why is m_\nu << m_e,m_\mu, m_\tau, m_Z,M_W, m_{proton}?

We don’t know! Let me quote a wonderful sentence of a very famous short story by Asimov to describe this result and problem:

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”
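As a tiny numerical illustration of how extreme this hierarchy is, here is a minimal Python sketch; the mass-squared differences and the 1 meV reference are the rough order-of-magnitude figures quoted above (not precision data), and the W mass is only included for comparison:

# Rough comparison of a meV-scale neutrino mass with heavier Standard Model masses.
# Numbers are the approximate values quoted above, not precision data.
import math

dm2_atm  = 1e-3      # eV^2, "atmospheric-like" mass-squared difference (order of magnitude)
dm2_sol  = 1e-5      # eV^2, "solar-like" mass-squared difference (order of magnitude)
m_nu_ref = 1e-3      # eV, reference neutrino mass (~1 meV)
m_e      = 0.511e6   # eV, electron rest energy
m_W      = 80.4e9    # eV, W boson mass

print(math.sqrt(dm2_atm), math.sqrt(dm2_sol))  # ~0.03 eV and ~0.003 eV mass scales
print(m_nu_ref / m_e)                          # ~2e-9
print(m_nu_ref / m_W)                          # ~1e-14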

2. The gauge hierarchy problem. The electroweak (EW) scale can be generally represented by the Z or W boson mass scale. Interestingly, from this summer’s results, the Higgs boson mass seems to be of the same order of magnitude, more or less, as the gauge boson masses. Then, the electroweak scale is about M_Z\sim M_W \sim \mathcal{O} (100GeV), and likely of the order of the Higgs mass as well. On the other hand, the Planck scale, where we expect (naively or not, that is another question!) quantum effects of gravity to naturally arise, is provided by the Planck mass scale:

M_P=\sqrt{\dfrac{\hbar c^5}{8\pi G}}=2.4\cdot 10^{18}GeV=2.4\cdot 10^{15}TeV

or more generally, dropping the 8\pi factor

M_P =\sqrt{\dfrac{\hbar c^5}{G}}=1.22\cdot 10^{19}GeV=1.22\cdot 10^{16}TeV
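As a quick numerical check of the two Planck scales above, here is a minimal Python sketch (approximate SI values for \hbar, c and G are assumed):

# Planck mass scale, with and without the 8*pi factor, expressed in GeV.
import math

hbar = 1.055e-34    # J*s
c    = 3.00e8       # m/s
G    = 6.674e-11    # N*m^2/kg^2
GeV  = 1.602e-10    # J per GeV

M_P_full    = math.sqrt(hbar * c**5 / G) / GeV               # ~1.22e19 GeV
M_P_reduced = math.sqrt(hbar * c**5 / (8*math.pi*G)) / GeV   # ~2.4e18 GeV
print(M_P_full, M_P_reduced)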

Why is the EW mass (energy) scale so small compared to the Planck mass, i.e., why are the masses M_{EW}<<M_P so different? The problem is hard, since we do know that EW masses, e.g., for scalar particles like the Higgs particle (not protected by any SM gauge symmetry), should receive quantum contributions of order \mathcal{O}(M_P^2).

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

3. The cosmological constant (hierarchy) problem. The cosmological constant \Lambda, from the so-called Einstein’s field equations of classical relativistic gravity

\mathcal{R}_{\mu\nu}-\dfrac{1}{2}g_{\mu\nu}\mathcal{R}+\Lambda g_{\mu\nu}=\dfrac{8\pi G}{c^4}\mathcal{T}_{\mu\nu}

is estimated to be about \mathcal{O} (10^{-47})GeV^4 from cosmological fitting procedures. The Standard Cosmological Model, with the CMB and other parallel measurements like large scale structure or supernovae data, agrees with such a cosmological constant value. However, in the framework of Quantum Field Theory, it should receive quantum corrections coming from the vacuum energies of the fields. Those contributions are unnaturally big, about \mathcal{O}(M_P^4), or in the framework of supersymmetric field theories, \mathcal{O}(M^4_{SUSY}) after SUSY symmetry breaking. Then, the problem is:

Why is \rho_\Lambda^{obs}<<\rho_\Lambda^{th}? Even with TeV or PeV fundamental SUSY (or higher) we have a serious mismatch here! The mismatch is about 60 orders of magnitude even in the best known theory! And it is about 122-123 orders of magnitude if we compare directly the cosmological constant vacuum energy we observe with the cosmological constant we calculate (naively or not) with our current best theories using QFT or supersymmetric QFT! This problem is therefore both a hierarchy problem and a large number problem. Again, and sadly, we don’t know why there is such a big gap between mass scales of the same thing! This is the biggest problem in theoretical physics and one of the worst predictions/failures in the history of Physics. However,

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”
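To see where the famous 122-123 figure comes from, here is a one-line order-of-magnitude estimate in Python; it naively takes the vacuum energy density to be M_P^4 in natural units and compares it with the 10^{-47} GeV^4 scale quoted above (a rough illustration, not a rigorous computation):

# Naive vacuum energy estimate (~M_P^4 in natural units) vs. the observed
# cosmological constant scale. Purely an order-of-magnitude illustration.
import math

rho_obs   = 1e-47           # GeV^4, observed scale quoted above
rho_naive = (1.22e19)**4    # GeV^4, ~M_P^4 with the full Planck mass
print(math.log10(rho_naive / rho_obs))   # ~123 orders of magnitude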

4. The strong CP problem/puzzle. From measurements of the neutron electric dipole moment, theoretical physicists can constrain the so-called \theta-angle of QCD (Quantum Chromodynamics). The theta angle gives an extra contribution to the QCD lagrangian:

\mathcal{L}_{\mathcal{QCD}}\supset -\dfrac{1}{4g_s^2}G_{\mu\nu}G^{\mu\nu}+\dfrac{\theta}{16\pi^2}G^{\mu\nu}\tilde{G}_{\mu\nu}

The theta angle is not provided by the SM framework and it is a free parameter. Experimentally,

\theta <10^{-12}

while, from the theoretical side, it could be any number in the interval \left[-\pi,\pi\right]. Why is \theta so close to zero? That is the strong CP problem! Once again, we don’t know. Perhaps a new symmetry?

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

5. The flatness problem/puzzle. In the Standard Cosmological Model, also known as the \Lambda CDM model, the curvature of the Universe is related to the critical density and the Hubble “constant”:

\dfrac{1}{R^2}=H^2\left(\dfrac{\rho}{\rho_c}-1\right)

There, \rho is the total energy density contained in the whole Universe and \rho_c=\dfrac{3H^2}{8\pi G} is the so-called critical density. The flatness problem arises when we deduce from cosmological data that:

\left(\dfrac{1}{R^2}\right)_{data}\sim 0.01

At the Planck scale era, we can even calculate that

\left(\dfrac{1}{R^2}\right)_{Planck\;\; era}\sim\mathcal{O}(10^{-61})

This result means that the Universe is “flat”. However, why did the Universe have such a small curvature? Why is the curvature still so “small” today? We don’t know. However, cosmologists working on this problem say that “inflation” and “inflationary” cosmological models can (at least in principle) solve this problem. There are even more radical (and stranger) theories, such as varying-speed-of-light theories, trying to explain this, but they are less popular than inflationary cosmologies/theories. Indeed, inflationary theories are popular because they include scalar fields, similar in Nature to the scalar particles that arise in the Higgs mechanism and other theories beyond the Standard Model (BSM). We don’t know yet if inflation theory is right, so

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”
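For the curious, the 10^{-61} figure can be roughly reproduced with a back-of-the-envelope scaling argument. The following Python sketch is my own rough estimate (not a precision computation): it assumes \vert\Omega -1\vert grows like t during radiation domination and like t^{2/3} during matter domination, with approximate standard values for the Planck time, the time of matter-radiation equality and the present age of the Universe:

# Back-of-the-envelope estimate of the flatness fine-tuning at the Planck era.
# Assumes |Omega - 1| grows like t during radiation domination and like t^(2/3)
# during matter domination; the times below are rough standard values.
t_Planck = 5.4e-44    # s, Planck time
t_eq     = 1.6e12     # s, matter-radiation equality (~50,000 yr)
t_now    = 4.3e17     # s, present age of the Universe (~13.8 Gyr)

growth = (t_eq / t_Planck) * (t_now / t_eq)**(2.0/3.0)
print(0.01 / growth)  # ~1e-61, the curvature required at the Planck era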

6. The flavour problem/puzzle. The ratios of successive SM fermion mass eigenvalues (e.g., for the charged leptons: the electron, muon, and tau), as well as the angles appearing in one gadget called the CKM (Cabibbo-Kobayashi-Maskawa) matrix, are roughly of the same order of magnitude. The issue is harder to quantify (but it is likely to hold as well) for constituent quark masses. However, why do they follow this particular pattern/spectrum and structure? Even more, there is a mysterious lepton-quark complementarity. The analogue of the CKM matrix in the leptonic sector is called the PMNS matrix (Pontecorvo-Maki-Nakagawa-Sakata matrix), and it describes the neutrino oscillation phenomenology. It turns out that the angles of the PMNS matrix are roughly complementary to those in the CKM matrix (remember that two angles are said to be complementary when they add up to 90 sexagesimal degrees). What is the origin of this lepton(neutrino)-quark(constituent) complementarity? In fact, the two questions are related since, roughly speaking, the mixing angles are related to the ratios of masses (of quarks and neutrinos). Therefore, this problem, if solved, could shed light on the issue of the particle spectrum, or at least it could help us understand the relationship between quark masses and neutrino masses. Of course, we don’t know how to solve this puzzle at the current time. And once again:

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

7. Cosmic matter-dark energy coincidence. At the current time, the densities of matter and vacuum energy are roughly of the same order of magnitude, i.e., \rho_M\sim\rho_\Lambda=\rho_{DE}. Why now? We do not know!

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”
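Numerically, the “coincidence” is simply that the ratio of the two densities is of order one today; a minimal sketch (assuming a flat Universe and the rough value \Omega_\Lambda\approx 0.76 used later in this series):

# Matter vs. dark energy densities today (flat Universe assumed, rough parameters).
Omega_L = 0.76
Omega_M = 1.0 - Omega_L
print(Omega_M / Omega_L)  # ~0.3, i.e. the same order of magnitude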

And my weblog is only just beginning! See you soon in my next post! 🙂


LOG#056. Gravitational alpha(s).

alpha

The topic today is to review a beautiful paper and to discuss its relevance for theoretical physics. The paper is: Comment on the cosmological constant and a gravitational alpha by R.J.Adler. You can read it here: http://arxiv.org/abs/1110.3358

One of the most intriguing and mysterious numbers in Physics is the electromagnetic fine structure constant \alpha_{EM}. Its value is given by

\alpha_{EM}=7.30\cdot 10^{-3}

or equivalently

\alpha_{EM}^{-1}=\dfrac{1}{\alpha_{EM}}=137

Of course, I am assuming that the coupling constant is measured at ordinary energies, since we know that the coupling constants are not really constant but they vary slowly with energy. However, I am not going to talk about the renormalization (semi)group in this post.

Why is the fine structure constant important? Well, we can understand it if we insert the values of the constants that make up the electromagnetic alpha constant:

\alpha_{EM}=\dfrac{e^2}{\hbar c}

with e being the elementary electron charge, \hbar the Planck constant divided by two pi, c the speed of light, and where we are using units with K_C=\dfrac{1}{4\pi \varepsilon_0}=1. Here K_C is the Coulomb constant, generally with a value 9\cdot 10^9Nm^2/C^2, but we rescale units so that it takes the value one. We will discuss more about frequently used systems of units soon.
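In SI units, the same number can be recovered as e^2/(4\pi\varepsilon_0\hbar c); a minimal Python check with approximate SI values:

# Fine structure constant from approximate SI constants.
import math

e    = 1.602e-19    # C, elementary charge
eps0 = 8.854e-12    # F/m, vacuum permittivity
hbar = 1.055e-34    # J*s
c    = 3.00e8       # m/s

alpha = e**2 / (4*math.pi*eps0*hbar*c)
print(alpha, 1/alpha)  # ~7.3e-3 and ~137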

As the electromagnetic alpha constant depends on the electric charge, the Coulomb electromagnetic constant (rescaled to one in some “clever” units), the Planck constant (rationalized by 2\pi since \hbar=h/2\pi) and the speed of light, it encodes some deep information about the Universe inside of it. The electromagnetic alpha \alpha_{EM} is quantum and relativistic itself, and it is also related to elementary charges. Why alpha has the value it has is a complete mystery. Many people have tried to elucidate why it has the value it has today, but there is no known reason why it should have that value. Of course, the same happens with some other constants, but this one is particularly important since it is involved in some important numbers in atomic physics and in the most elementary atom, the hydrogen atom.

In atomic physics, there are two common and “natural” scales of length. The first length scale is given by the Compton wavelength of the electron. Using the de Broglie equation, we get that the Compton wavelength is the wavelength of a photon whose energy is the same as the rest mass energy of the particle, or mathematically speaking:

\boxed{\lambda=\dfrac{h}{p}=\dfrac{h}{mc}}

Usually, physicists employ the “reduced” or “rationalized” Compton’s wavelength. Plugging the electron mass, we get the electron reduced Compton’s wavelength:

\boxed{\lambda_C=\dfrac{\lambda}{2\pi}=\dfrac{\hbar}{m_ec}=3.86\cdot 10^{-13}m}

The second natural scale of length in atomic physics is the so-called Bohr radius. It is given by the formula:

\boxed{a_B=\dfrac{\hbar^2}{m_e e^2}=5.29\cdot 10^{-11}m}

Therefore, there is a natural ratio between those two length scales, and it turns out to be precisely the electromagnetic fine structure constant \alpha_{EM}:

\boxed{R_\alpha=\dfrac{\mbox{Reduced Compton's wavelength}}{\mbox{Bohr radius}}=\dfrac{\lambda_C}{a_B}=\dfrac{\left(\hbar/m_e c\right)}{\left(\hbar^2/m_ee^2\right)}=\dfrac{e^2}{\hbar c}=\alpha_{EM}=7.30\cdot 10^{-3}}

Furthermore, we can show that the electromagnetic alpha is also related to the ratio between the electron energy in the fundamental orbit of the hydrogen atom and the electron rest energy. These two energy scales are given by:

1) Rydberg’s energy (the electron binding energy in the fundamental orbit/orbital of the hydrogen atom):

\boxed{E_H=\dfrac{m_ee^4}{2\hbar^2}=13.6eV}

2) Electron rest energy:

\boxed{E_0=m_ec^2}

Then, the ratio of those two “natural” energies in atomic physics reads:

\boxed{R'_E=\dfrac{\mbox{Rydberg's energy}}{\mbox{Electron rest energy}}=\dfrac{m_ee^4/2\hbar^2}{m_ec^2}=\dfrac{1}{2}\left(\dfrac{e^2}{\hbar c}\right)^2=\dfrac{\alpha_{EM}^2}{2}=2.66\cdot 10^{-5}}

or equivalently

\boxed{\dfrac{1}{R'_E}=37600=3.76\cdot 10^4}
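Both atomic ratios can be verified directly from the rounded numbers quoted above; a short Python sketch:

# Atomic length and energy ratios vs. powers of alpha (rounded values from the text).
lambda_C = 3.86e-13    # m, reduced Compton wavelength of the electron
a_B      = 5.29e-11    # m, Bohr radius
E_H      = 13.6        # eV, Rydberg energy
E_0      = 0.511e6     # eV, electron rest energy
alpha    = 7.30e-3

print(lambda_C / a_B, alpha)       # both ~7.3e-3
print(E_H / E_0, alpha**2 / 2)     # both ~2.66e-5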

R.J.Adler’s paper remarks that there are cosmological/microscopic analogues of the above two ratios, and they involve the infamous Einstein cosmological constant. In Cosmology, we have two natural (ultimate?) length scales:

1st. The (ultra)microscopic, ultrahigh-energy (“ultraviolet”, UV regulator) relevant Planck length L_P, or equivalently its squared value L_P^2. Its value is given by:

\boxed{L_P^2=\dfrac{G\hbar}{c^3}\leftrightarrow L_P=\sqrt{\dfrac{G\hbar}{c^3}}=1.62\cdot 10^{-35}m}

This natural length can NOT arise in any “classical” theory of gravity, since it involves the Planck constant \hbar.

2nd. The (ultra)macroscopic, ultra-low-energy (“infrared”, IR regulator) relevant cosmological constant/de Sitter radius. They are usually denoted by \Lambda and R_{dS} respectively, and they are related to each other in a simple way. The dimensions of the cosmological constant are given by

\boxed{\left[\Lambda \right]=\left[ L^{-2}\right]=(\mbox{Length})^{-2}}

The de Sitter radius and the cosmological constant are related through a simple equation:

\boxed{R_{dS}=\sqrt{\dfrac{3}{\Lambda}}\leftrightarrow R^2_{dS}=\dfrac{3}{\Lambda}\leftrightarrow \Lambda =\dfrac{3}{R^2_{dS}}}

The de Sitter radius is obtained from cosmological measurements thanks to the so-called Hubble parameter (or Hubble “constant”, although we do know that the Hubble “constant” is not really a constant; calling it so is a common abuse of language) H. From cosmological data we obtain (we use the paper’s value without loss of generality):

H=\dfrac{73km/s}{Mpc}

This measured value allows us to derive the Hubble length parameter

L_H=\dfrac{c}{H}=1.27\cdot 10^{26}m

Moreover, the data also imply some energy density associated with the cosmological “constant”, generally called Dark Energy. This energy density deduced from the data is written as:

\Omega_\Lambda =\Omega^{data}_{\Lambda}

and from this, it can also be shown that

R_{dS}=\dfrac{L_H}{\sqrt{\Omega_\Lambda}}=1.46\cdot 10^{26}m

where we have introduced the experimentally deduced value \Omega_\Lambda\approx 0.76 from the cosmological parameter global fits. In fact, the cosmological constant allows us to define a beautiful and elegant quantity that we can call the gravitational alpha/gravitational cosmological fine structure constant \alpha_G:

\boxed{\alpha_G\equiv \dfrac{\mbox{Planck's length}}{\mbox{normalized de Sitter radius}}=\dfrac{L_P}{\dfrac{R_{dS}}{\sqrt{3}}}=\dfrac{\sqrt{\dfrac{G\hbar}{c^3}}}{\sqrt{\dfrac{1}{\Lambda}}}=\sqrt{\dfrac{G\hbar\Lambda}{c^3}}}

or equivalently, defining the cosmological length associated to the cosmological constant as

L^2_\Lambda=\dfrac{1}{\Lambda}=\dfrac{R^2_{dS}}{3}\leftrightarrow L_\Lambda=\sqrt{\dfrac{1}{\Lambda}}=\dfrac{R_{dS}}{\sqrt{3}}

\boxed{\alpha_G\equiv \dfrac{\mbox{Planck's length}}{\mbox{Cosmological length}}=\dfrac{L_P}{L_\Lambda}=\dfrac{\sqrt{\dfrac{G\hbar}{c^3}}}{\sqrt{\dfrac{1}{\Lambda}}}=\sqrt{\dfrac{G\hbar\Lambda}{c^3}}=L_P\sqrt{\Lambda}=\dfrac{\sqrt{3}\,L_P}{R_{dS}}}

If we plug in the numerical values of the constants, we easily obtain the gravitational cosmological alpha value and its inverse:

\boxed{\alpha_G=1.91\cdot 10^{-61}\leftrightarrow \alpha_G^{-1}=\dfrac{1}{\alpha_G}=5.24\cdot 10^{60}}
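These numbers can be reproduced with a few lines of Python; a minimal sketch using the approximate constants and cosmological values quoted above:

# Gravitational (cosmological) fine structure constant from the Planck length
# and the de Sitter radius (approximate SI values and cosmological parameters).
import math

hbar = 1.055e-34      # J*s
c    = 3.00e8         # m/s
G    = 6.674e-11      # N*m^2/kg^2
Mpc  = 3.086e22       # m
H    = 73e3 / Mpc     # 1/s, Hubble parameter (73 km/s/Mpc)
Omega_L = 0.76

L_P  = math.sqrt(G * hbar / c**3)    # ~1.62e-35 m, Planck length
L_H  = c / H                         # ~1.27e26 m, Hubble length
R_dS = L_H / math.sqrt(Omega_L)      # ~1.46e26 m, de Sitter radius

alpha_G = math.sqrt(3) * L_P / R_dS  # equivalently sqrt(G*hbar*Lambda/c^3)
print(L_P, L_H, R_dS)
print(alpha_G, 1/alpha_G)            # ~1.9e-61 and ~5.2e60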

They are really small and large numbers, respectively! Following the atomic analogy, we can also form a ratio between two cosmologically relevant energy densities:

1st. The Planck energy density.

Planck’s energy is defined as

\boxed{E_P=\dfrac{\hbar c}{L_P}=\sqrt{\dfrac{\hbar c^5}{G}}=1.22\cdot 10^{19}GeV=1.22\cdot 10^{16}TeV}

The Planck energy density \rho_P is defined as the energy density corresponding to one Planck energy inside a Planck cube of side L_P, i.e., it is the Planck energy concentrated inside a cube with volume V=L_P^3. Mathematically speaking, it is

\boxed{\rho_P=\dfrac{E_P}{L_P^3}=\dfrac{c^7}{\hbar G^2}=2.89\cdot 10^{123}\dfrac{GeV}{m^3}}

It is a huge energy density!

Remark: Energy density has the same dimensions as pressure in special relativistic hydrodynamics. That is,

\mathcal{P}_P=\rho_P=\tilde{\rho}_P c^2=4.63\cdot 10^{113}Pa

with Pa denoting pascals (1Pa=1N/m^2) and where \tilde{\rho}_P represents here the matter (not energy) density (with units of kg/m^3). Of course, turning a matter density into an energy density requires a multiplication by c^2. This equivalence between vacuum pressure and energy density is one of the reasons why some astrophysicists, cosmologists and theoretical physicists call the “dark energy/cosmological constant” term “vacuum pressure” in the study of the cosmic components derived from the total energy density \Omega.

2nd. The cosmological constant energy density.

Using the Einstein field equations, it can be shown that the cosmological constant gives a contribution to the stress-energy-momentum tensor. The component T^{0}_{\;\; 0} is related to the dark energy (a.k.a. the cosmological constant) and allows us to define the energy density

\boxed{\rho_\Lambda =T^{0}_{\;\; 0}=\dfrac{\Lambda c^4}{8\pi G}}

Using the previous equations for G as a function of the Planck length, the Planck constant and the speed of light, together with the definitions of the Planck energy and the de Sitter radius, we can rewrite the above energy density as follows:

\boxed{\rho_\Lambda=\dfrac{3}{8\pi}\left(\dfrac{E_P}{L_PR^2_{dS}}\right)=4.21 \dfrac{GeV}{m^3}}

Thus, we can evaluate the ratio between these two energy densities! It gives

\boxed{R_\rho =\dfrac{\mbox{CC energy density}}{\mbox{Planck's energy density}}=\dfrac{\rho_\Lambda}{\rho_P}=\left( \dfrac{3}{8\pi}\right)\left(\dfrac{L_P}{R_{dS}}\right)^2=\left(\dfrac{1}{8\pi}\right)\alpha_G^2=1.45\cdot 10^{-123}}

and the inverse ratio will be

\boxed{\dfrac{1}{R_\rho}=6.90\cdot 10^{122}}
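Again, a short Python sketch reproduces both energy densities and their ratio from the quantities defined above (approximate values):

# Planck energy density vs. cosmological constant energy density (GeV/m^3).
import math

E_P  = 1.22e19    # GeV, Planck energy
L_P  = 1.62e-35   # m, Planck length
R_dS = 1.46e26    # m, de Sitter radius

rho_P = E_P / L_P**3                              # ~2.9e123 GeV/m^3
rho_L = (3/(8*math.pi)) * E_P / (L_P * R_dS**2)   # ~4.2 GeV/m^3
print(rho_P, rho_L, rho_L / rho_P)                # ratio ~1.45e-123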

So, we have obtained two additional really tiny and huge values for R_\rho and its inverse, respectively. Note that the powers appearing in the ratios of cosmological lengths and cosmological energy densities follow the same scaling as in the atomic case with the electromagnetic alpha! In the electromagnetic case, we obtained R_\alpha\sim \alpha_{EM} and R'_E\sim \alpha_{EM}^2. The gravitational/cosmological analogue ratios follow the same rule, R\sim \alpha_G and R_\rho\sim \alpha_G^2, but the surprise comes from the values of the gravitational alphas and ratios themselves. Some comments are straightforward:

1) Understanding atomic physics involved the discovery of the Planck constant and the quantities associated with it at the fundamental quantum level (the Bohr radius, the Rydberg constant,…). Understanding the cosmological constant value and the mismatch or stunning ratios between the equivalent relevant quantities likely requires that \Lambda can be viewed as a new “fundamental constant” and/or that it plays a dynamical role somehow (e.g., varying in some unknown way with energy or local position).

2) Currently, the cosmological parameters and fits suggest that \Lambda is “constant”, but we can not be totally sure it has not varied slowly with time. There is a related idea called quintessence, in which the cosmological “constant” is related to some dynamical field and/or to inflation. However, present data say that the cosmological constant IS truly constant. How can it be so? We are not sure, since our physical theories can hardly explain the cosmological constant, its value, and why its current energy density is radically different from the vacuum energy estimates coming from Quantum Field Theories.

3) The mysterious value

\boxed{\alpha_G=\sqrt{\dfrac{G\hbar\Lambda}{c^3}}=1.91\cdot 10^{-61}}

is an equivalent way to express the biggest issue in theoretical physics: a naturalness problem called the cosmological constant problem.

In the literature, there have been alternative definitions of “gravitational fine structure constants”, unrelated to the above gravitational (cosmological) fine structure constant or gravitational alpha. Let me write down some of these alternative gravitational alphas:

1) Gravitational alpha prime. It is defined as the ratio between the squared electron rest mass and the squared Planck mass:

\boxed{\alpha'_G=\dfrac{Gm_e^2}{\hbar c}=\left(\dfrac{m_e}{m_P}\right)^2=1.75\cdot 10^{-45}}

\boxed{\alpha_G^{'-1}=\dfrac{1}{\alpha_G^{'}}=5.71\cdot 10^{44}}

Note that m_e=0.511MeV. Since m_{proton}=1836m_e, we can also use the proton rest mass instead of the electron mass to get a new gravitational alpha.

2) Gravitational alpha double prime. It is defined as the ratio between the squared proton rest mass and the squared Planck mass:

\boxed{\alpha''_G=\dfrac{Gm_{prot}^2}{\hbar c}=\left(\dfrac{m_{prot}}{m_P}\right)^2=5.90\cdot 10^{-39}}

and the inverse value

\boxed{\alpha_G^{''-1}=\dfrac{1}{\alpha_G^{''}}=1.69\cdot 10^{38}}

Finally, we could guess an intermediate gravitational alpha, mixing the electron and proton masses.

3) Gravitational alpha triple prime. It is defined as the ratio between the product of the electron and proton rest masses and the squared Planck mass:

\boxed{\alpha'''_G=\dfrac{Gm_{prot}m_e}{\hbar c}=\dfrac{m_{prot}m_e}{m_P^2}=3.22\cdot 10^{-42}}

and the inverse value

\boxed{\alpha_G^{'''-1}=\dfrac{1}{\alpha^{'''}_G}=3.11\cdot 10^{41}}
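For completeness, all four gravitational alphas can be computed at once from G, \hbar, c, the particle masses and the de Sitter radius; a minimal Python sketch with approximate values:

# The four gravitational alphas discussed above (approximate SI values).
import math

G      = 6.674e-11   # N*m^2/kg^2
hbar   = 1.055e-34   # J*s
c      = 3.00e8      # m/s
m_e    = 9.11e-31    # kg, electron mass
m_prot = 1.67e-27    # kg, proton mass
R_dS   = 1.46e26     # m, de Sitter radius, so Lambda = 3/R_dS^2
Lam    = 3 / R_dS**2

alpha_G    = math.sqrt(G * hbar * Lam / c**3)   # ~1.9e-61
alpha_Gp   = G * m_e**2 / (hbar * c)            # ~1.75e-45
alpha_Gppp = G * m_prot * m_e / (hbar * c)      # ~3.2e-42
alpha_Gpp  = G * m_prot**2 / (hbar * c)         # ~5.9e-39
print(alpha_G, alpha_Gp, alpha_Gppp, alpha_Gpp)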

We can compare the 4 gravitational alphas and their inverse values, and additionally compare them with \alpha_{EM}. We get

\alpha_G <\alpha_G^{'} <\alpha_G^{'''} < \alpha_G^{''}<\alpha_{EM}

\alpha_{EM}^{-1}<\alpha^{''-1}_G <\alpha^{'''-1}_G <\alpha^{'-1}_G < \alpha^{-1}_G

These inequalities mean that the electromagnetic fine structure constant \alpha_{EM} is (at ordinary energies) 42 orders of magnitude bigger than \alpha_G^{'}, 39 orders of magnitude bigger than \alpha_G^{'''}, 36 orders of magnitude bigger than \alpha_G^{''} and, of course, 58 orders of magnitude bigger than \alpha_G. Indeed, we could extend this analysis to include the “fine structure constant” of Quantum Chromodynamics (QCD) as well. It would be given by:

\boxed{\alpha_s=\dfrac{g_s^2}{\hbar c}=1}

since generally we define g_s=1. We note that \alpha_s >\alpha_{EM} by about 2 orders of magnitude. However, as strong nuclear forces are short-range interactions, they only matter inside atomic nuclei, where confinement and color forces dominate over every other fundamental interaction. Interestingly, at high energies, the QCD coupling constant has a property called asymptotic freedom. But that is another story, not to be discussed here! If we take the strong coupling alpha into account, the full hierarchy of alphas is given by:

\alpha_G <\alpha_G^{'} <\alpha_G^{'''} < \alpha_G^{''}<\alpha_{EM}<\alpha_s

\alpha_s^{-1}<\alpha_{EM}^{-1}<\alpha^{''-1}_G <\alpha^{'''-1}_G <\alpha^{'-1}_G < \alpha^{-1}_G

Fascinating! Isn’t it? Stay tuned!!!

ADDENDUM: After I finished this post, I discovered a striking (and interesting in itself) connection between \alpha_{EM} and \alpha_{G}. The relation or coincidence is the following relationship:

\dfrac{1}{\alpha_{EM}}\approx \ln \left( \dfrac {1}{16\alpha_G}\right)

Is this relationship fundamental or accidental? The answer is unknown. However, since the electric charge (via the electromagnetic alpha) is not a priori related in any known way to the gravitational constant or the Planck mass (or to the cosmological constant via the above gravitational alpha), I find such a coincidence, holding up to 5 significant digits, particularly stunning! Anyway, there are many unexplained numerical coincidences that are completely accidental and meaningless, and so it is not clear why this numerical result should be relevant for the connection between electromagnetism and gravity/cosmology, but it is interesting at least as a curiosity and a “joke” of Nature.
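A quick numerical check of this coincidence (using the approximate values of \alpha_{EM} and \alpha_G given above):

# Numerical check of 1/alpha_EM vs. ln(1/(16*alpha_G)).
import math

alpha_EM = 7.297e-3
alpha_G  = 1.91e-61

print(1/alpha_EM)                 # ~137.04
print(math.log(1/(16*alpha_G)))   # ~137.04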

ADDENDUM (II):

Some quotes about the electromagnetic alpha from wikipedia http://en.wikipedia.org/wiki/Fine-structure_constant

“(…)There is a most profound and beautiful question associated with the observed coupling constant, e – the amplitude for a real electron to emit or absorb a real photon. It is a simple number that has been experimentally determined to be close to 0.08542455. (My physicist friends won’t recognize this number, because they like to remember it as the inverse of its square: about 137.03597 with about an uncertainty of about 2 in the last decimal place. It has been a mystery ever since it was discovered more than fifty years ago, and all good theoretical physicists put this number up on their wall and worry about it.) Immediately you would like to know where this number for a coupling comes from: is it related to pi or perhaps to the base of natural logarithms? Nobody knows. It’s one of the greatest damn mysteries of physics: a magic number that comes to us with no understanding by man. You might say the “hand of God” wrote that number, and “we don’t know how He pushed his pencil.” We know what kind of a dance to do experimentally to measure this number very accurately, but we don’t know what kind of dance to do on the computer to make this number come out, without putting it in secretly! (…)”. R.P.Feynman, QED: The Strange Theory of Light and Matter, Princeton University Press, p.129.

“(…) If alpha [the fine-structure constant] were bigger than it really is, we should not be able to distinguish matter from ether [the vacuum, nothingness], and our task to disentangle the natural laws would be hopelessly difficult. The fact however that alpha has just its value 1/137 is certainly no chance but itself a law of nature. It is clear that the explanation of this number must be the central problem of natural philosophy.(…)” Max Born, in A.I. Miller’s book Deciphering the Cosmic Number: The Strange Friendship of Wolfgang Pauli and Carl Jung. p. 253. Publisher W.W. Norton & Co.(2009).

“(…)The mystery about α is actually a double mystery. The first mystery – the origin of its numerical value α ≈ 1/137 has been recognized and discussed for decades. The second mystery – the range of its domain – is generally unrecognized.(…)” Malcolm H. MacGregor (2007). The Power of Alpha.