LOG#111. Basic Cosmology (VI).


The topic today: problems in the Standard Cosmological Model (LCDM), inflation and scalar fields!

STANDARD COSMOLOGICAL MODEL: ISSUES

Despite the success of the Standard Cosmological Model (or LCDM), today it is widely accepted that it is not complete. Even if its main features and observables are known, there are some questions we cannot understand within that framework.

Firstly, we have the horizon problem. For any comoving horizon, we obtain

\eta=\int_0^t\dfrac{dt'}{a(t')}\propto \begin{cases}a^{1/2}\;\;\mbox{for a MD Universe}\\ a\;\;\mbox{for a RD Universe}\end{cases}

Note that a\propto t^{2/3} for a MD Universe and a\propto t^{1/2} for a RD Universe.
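These scalings can be checked with a quick numerical integration (a toy sketch in arbitrary units, with hypothetical normalizations H\propto a^{-3/2} for MD and H\propto a^{-2} for RD):

```python
# Numerical check of the comoving-horizon scalings eta ~ a^(1/2) (MD) and eta ~ a (RD).
# Units and normalizations are arbitrary; only the scaling with a matters here.

def comoving_horizon(a_end, hubble, n=100000):
    """Integrate eta = int_0^a da' / (a'^2 H(a')) with a simple midpoint rule."""
    da = a_end / n
    eta = 0.0
    for i in range(n):
        a = (i + 0.5) * da          # midpoint of each sub-interval
        eta += da / (a * a * hubble(a))
    return eta

H_md = lambda a: a ** -1.5          # matter domination: H ~ a^(-3/2)
H_rd = lambda a: a ** -2.0          # radiation domination: H ~ a^(-2)

# Doubling the scale factor should multiply eta by sqrt(2) (MD) and by 2 (RD).
ratio_md = comoving_horizon(2.0, H_md) / comoving_horizon(1.0, H_md)
ratio_rd = comoving_horizon(2.0, H_rd) / comoving_horizon(1.0, H_rd)
print(ratio_md)   # ~1.414, i.e. eta grows as a^(1/2)
print(ratio_rd)   # ~2.0,   i.e. eta grows as a
```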

According to the CMB observations, today our Universe is very close to being isotropic, since the deviations are tiny

\dfrac{\delta T}{T}\approx 10^{-5}

The issue, of course, is: how can this be true? Recall that the largest scales observed today have entered the horizon only recently, long after decoupling. Moreover, microscopic causal physics cannot produce it! So, where does the above tiny anisotropy come from? E.g., for a distant galaxy, we get

\left(\dfrac{\delta T}{T}\right)_{\lambda_{galaxy}}

and this scale was outside the horizon in the past! Please, note that the entropy within a horizon volume is about

S_H=s\dfrac{4\pi}{3}d_H^3=\begin{cases}0.05\,g_\star^{-1/2}\left(\dfrac{M_p}{T}\right)^3,\;\; RD\\ 3\cdot 10^{87}\left(\Omega_0h^2\right)^{-3/2}(1+z)^{-3/2}, \;\; MD\end{cases}

and so

S_H(t=t_0)=10^{88}

S_H(t=t_{rec})=10^{83}=10^5V_H(today)

S_H(t=t_{BBN})=10^{63}

Another problem is the flatness problem. The physical radius of curvature is given by the term

R_k=a(t)\vert k\vert^{-1/2}=\dfrac{H^{-1}}{\vert \Omega-1\vert^{1/2}}

The total energy density as a function of the scale factor is given by

\Omega-1=\dfrac{k}{H^2a^2}\propto \dfrac{1}{\rho a^2}=\begin{cases}a\;\;\mbox{MD}\\ a^2\;\;\mbox{RD}\end{cases}

When primordial nucleosynthesis happened, about t=t_{BBN}\sim 1s, it gave \vert \Omega-1\vert\leq 10^{-16} and thus R_k\geq 10^8H^{-1}. On the other hand, at the Planck time, i.e., at t=t_p\approx 10^{-43}s, it gives the value \vert \Omega-1\vert\leq 10^{-60}, and then R_k\geq 10^{30}H^{-1}. This large mismatch means that the Big Bang Universe requires VERY SPECIAL initial conditions; otherwise the Universe would not be (apparently) flat as we observe it at the current time. Note that if \Omega\sim 1 and R_k\sim H^{-1} at the Planck time, then there are two options:

1) k>0: the Universe recollapses within a few 10^{-43}s.

2) k<0: the Universe reaches 3K at t=10^{-11}s.

Therefore, the natural time scale for cosmology (at least in the very early Universe and before) is the Planck time, 10^{-43}s. The current age of the Universe is about 10^{60}t_p.
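The back-extrapolation behind those bounds is simple order-of-magnitude arithmetic. A rough sketch (with illustrative round values for the equality temperature and today's flatness bound, so the exact exponents differ slightly from the quoted ones):

```python
# Rough back-extrapolation of |Omega - 1|, illustrating the flatness problem.
# |Omega - 1| ~ a during matter domination and ~ a^2 ~ T^-2 during radiation domination.
# Temperatures in eV; all inputs are illustrative round values.

omega_today = 1e-2          # |Omega - 1| <~ 0.01 from current observations
a_eq = 1.0 / 3400.0         # scale factor at matter-radiation equality (a_0 = 1)
T_eq = 0.8                  # temperature at equality, ~0.8 eV
T_bbn = 1e6                 # BBN temperature, ~1 MeV
T_planck = 1.22e28          # Planck temperature, ~1.22e19 GeV

# Matter era: |Omega - 1| shrinks like a going back to equality.
omega_eq = omega_today * a_eq
# Radiation era: |Omega - 1| shrinks like T^-2 going further back.
omega_bbn = omega_eq * (T_eq / T_bbn) ** 2
omega_planck = omega_eq * (T_eq / T_planck) ** 2

print(f"|Omega-1| at BBN    ~ {omega_bbn:.1e}")
print(f"|Omega-1| at Planck ~ {omega_planck:.1e}")
```

The tiny numbers that come out are the "VERY SPECIAL initial conditions" referred to above.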

An important, yet unsolved and terrific, problem is the cosmological constant problem. The vacuum energy we observe today (in the form of dark energy) is given by

T_{\mu\nu}=\Lambda g_{\mu\nu}

The equation of state for the vacuum energy is known to be p=\omega \rho_\Lambda=-\rho_\Lambda, i.e., \omega_\Lambda=-1. It yields

a(t)\propto e^{Ht}

i.e., the so called de Sitter space (dS) or de Sitter Universe. It is a maximally symmetric spacetime. In a dS Universe, we obtain

H=\left(\dfrac{1}{3}\kappa^2\Lambda\right)^{1/2}=const.,\;\;\; \kappa^2=8\pi G_N

Observations provide (via dark energy) that \rho_{vac}=\rho_\Lambda\sim\rho_c\sim (3\cdot 10^{-3}eV)^4

Theoretical (and “natural”) Quantum Field Theory (QFT) calculations give

\rho_{vac}=\rho_\Lambda\sim \Lambda^4_{cut-off}\sim M_P^4\sim 10^{120}\rho_c

or so (the mismatch can even reach 122 or 123 orders of magnitude!). This problem is far beyond our current knowledge of QFT. It (likely) requires new physics, or rethinking QFT and/or the observed value of the vacuum energy density. It is a hint that our understanding of the Universe is not complete.
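The quoted mismatch is just the fourth power of the ratio of the two energy scales; a one-line check:

```python
import math

# Order-of-magnitude arithmetic behind the cosmological constant problem.
# Energy scales in eV; in natural units an energy density goes like (scale)^4.

scale_obs = 3e-3            # observed dark-energy scale, ~3 meV
scale_planck = 1.22e28      # Planck scale, ~1.22e19 GeV

# The mismatch of the energy DENSITIES is the 4th power of the scale ratio.
orders = 4 * math.log10(scale_planck / scale_obs)
print(f"rho_vac(QFT) / rho_vac(obs) ~ 10^{orders:.0f}")   # ~ 10^122
```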

INFLATION

One of the most “simple” and elegant solutions to the flatness problem and the horizon problem is the inflationary theory. What is inflation? Let me explain it here. If the (early) Universe experienced a stage of “very fast” expansion, we can solve the horizon problem! However, there is a problem (you can call Houston if you want to…). If we do want a quick expansion in the early Universe, we need “negative pressure” to realize that scenario. Negative pressure can be obtained in a scalar field theory (there are some alternatives with a repulsive vector field and/or higher-order tensors like a 3-form antisymmetric field, but the simplest solution is given by scalar fields).

The solution to the horizon problem in the inflationary theory proceeds as follows. Firstly, for the comoving horizon, we get

\eta=\int_0^tdt'/a(t')=\int_0^a\dfrac{da'}{a'}\dfrac{1}{a'H(a')}

where the distance over which particles can travel in the course of one expansion time, the comoving Hubble radius (aH)^{-1}, is encoded in the fraction

\dfrac{1}{a'H(a')}

We have to make a distinction between the comoving horizon \eta and the comoving Hubble radius 1/aH. If we observe two particles whose comoving distance is r, then

1st. If r>\eta, we can never have a communication between those particles.

2nd. If r>1/aH, then those two particles cannot communicate NOW.

It shows that it is possible to have \eta>>1/aH at present. Particles with 1/aH<r<\eta cannot communicate today BUT they could have been in causal contact early on. We only need 1/aH in the early Universe to have been much larger than it is today; that is, \eta must get its contributions mostly from early epochs. However, in both RD (Radiation Dominated) and MD (Matter Dominated) Universes, 1/aH increases with time, so the late-epoch contributions dominate over cosmological time scales. Then, a solution to the horizon problem is that in the early Universe, during an “inflationary” (very fast) phase, for at least a brief period of time (how long is model dependent) the comoving Hubble radius DECREASED.

How can the scale factor evolve in order to solve the horizon problem? We do know that (aH)^{-1} must decrease, so aH must increase! Therefore,

\dfrac{d}{dt}(aH)=\dfrac{d\dot{a}}{dt}=\dfrac{d^2a}{dt^2}>0

i.e., we get an (positively) accelerating expansion or “inflation”.

How can we understand quantitatively inflation? Firstly, suppose that the energy scale of inflation is about \sim 10^{15}GeV. It is about the Grand Unified Theory (GUT) energy scale, and close to the Quantum Gravity scale (the Planck scale) about \sim 10^{19}GeV. Obviously, it only matters at very high energies, very short distances or very tiny time scales after the Big Bang. Then,

(aH)^{-1}\vert_{T=10^{15}GeV}=10^{-28}(aH)^{-1}\vert_{T=T_0}

For a RD Universe, H\propto a^{-2} and thus

\dfrac{a_0H_0}{a_eH_e}=\dfrac{a_0}{a_e}\dfrac{a_e^2}{a_0^2}=\dfrac{a_e}{a_0}\approx \dfrac{T_0}{10^{15}GeV}

During inflation, the comoving Hubble radius had to decrease by at least 28 orders of magnitude. The most common way to build such an inflationary model is to arrange H\approx const. and

\dfrac{\dot{a}}{a}=H=constant

It gives a(t)=e^{H(t-t_e)} for t<t_e, where t_e is the time when inflation ends. In fact, we obtain that

(aH)^{-1}\propto e^{-Ht}, and thus the factor 10^{28} can be understood from the so-called “e-folds”, or exponential factors, in a very simple way: note that 10^{28}\approx e^{64.5}! Therefore, about 64 “e-folds” (powers of the number “e”) provide the necessary factor of 10^{28} we were searching for.
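The e-fold count itself is a one-line computation:

```python
import math

# How many e-folds shrink the comoving Hubble radius by 28 orders of magnitude?
# N = ln(10^28) = 28 * ln(10).

N = 28 * math.log(10)
print(f"required e-folds: {N:.1f}")   # ~64.5, matching 10^28 ~ e^64
```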

Remark: The comoving horizon is very similar to an effective time parameter!

\eta_{total}=\int_0^{t_e}\dfrac{dt'}{a(t')}+\int_{t_e}^t\dfrac{dt'}{a(t')}=\eta_{prim}+\eta=(\mbox{very large number})+\mbox{(new time parameter)}

NEGATIVE PRESSURE

In order to allow inflation, we require that the “strong energy condition” be violated. That is, we ask that (for inflation to be possible)

\dfrac{\ddot{a}}{a}=-\dfrac{4\pi G_N}{3}(\rho+3p)>0

It implies that \rho+3p<0 or that p<-\dfrac{1}{3}\rho<0
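This sign condition can be sketched numerically for a perfect fluid with p=\omega\rho (arbitrary units, taking G_N=\rho=1):

```python
import math

# Sign check of the acceleration equation a''/a = -(4 pi G / 3) * rho * (1 + 3w).
# Acceleration (a'' > 0) requires w < -1/3 for positive energy density.

def acceleration(w, rho=1.0, G=1.0):
    """Sign-carrying a''/a for a perfect fluid with equation of state p = w * rho."""
    return -(4.0 * math.pi * G / 3.0) * rho * (1.0 + 3.0 * w)

print(acceleration(-1.0) > 0)      # cosmological constant: accelerates (True)
print(acceleration(-1.0 / 3.0))    # borderline case: exactly zero
print(acceleration(1.0 / 3.0) > 0) # radiation: decelerates (False)
```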

Obviously, there is no particle/field in the Standard Model (at least until the discovery of the Higgs field in 2012) able to do that. So, if inflation happened, an unknown field/particle is responsible for it! The simplest implementation of inflation uses, as we said before, some scalar field \phi. Let us define a scalar field \phi(x^\mu). Its lagrangian is generally given by

\mathcal{L}=-\dfrac{1}{2}\partial_\mu\phi\partial^\mu\phi -V(\phi)

The energy-momentum tensor for this scalar field can be easily obtained by the well-known prescription

T_{\mu\nu}=\partial_\mu\phi\partial_\nu\phi+g_{\mu\nu}\mathcal{L}

Thus, we get

\rho=-T^{0}_{\;\;\;0}=\dfrac{1}{2}\dot{\phi}^2+V(\phi)

p=T^i_{\;\;\; i}=\dfrac{1}{2}\dot{\phi}^2-V(\phi)

Negative pressure is obtained whenever we have V(\phi)>\dfrac{1}{2}\dot{\phi}^2

Therefore:

1st. A scalar field with negative pressure is trapped into a “false vacuum”.

2nd. A scalar field “slow-rolling” toward its true vacuum provides a simple model for inflation.
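A minimal numerical check of the negative-pressure condition, using the expressions for \rho and p above (arbitrary units):

```python
# Energy density and pressure of a homogeneous scalar field:
#   rho = phi_dot^2 / 2 + V,   p = phi_dot^2 / 2 - V.
# Pressure is negative exactly when V(phi) > phi_dot^2 / 2.

def scalar_rho_p(phi_dot, V):
    kinetic = 0.5 * phi_dot ** 2
    return kinetic + V, kinetic - V

# Potential-dominated ("slow-roll") field: negative pressure, w close to -1.
rho, p = scalar_rho_p(phi_dot=0.1, V=1.0)
print(p < 0, p / rho)      # True, w ~ -0.99

# Kinetic-dominated field: positive pressure, no inflation.
rho2, p2 = scalar_rho_p(phi_dot=2.0, V=1.0)
print(p2 < 0)              # False
```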

The evolution of the scalar field in the expanding Universe is given by a simple equation

\ddot{\phi}+3H\dot{\phi}+V'(\phi)=0

Models where the scalar field “slow-rolls” provide H\approx constant (a slowly varying function of time!). Changing the time variable t\longrightarrow \eta implies that

\eta\equiv \int_{a_e}^a\dfrac{da}{Ha^2}\approx \dfrac{1}{H}\int_{a_e}^a\dfrac{da}{a^2}\approx -\dfrac{1}{aH}

and it becomes negative during the inflationary phase! The slow-roll parameters

\varepsilon=\dfrac{d}{dt}\left(\dfrac{1}{H}\right)=-\dfrac{\dot{H}}{H^2}

where the dot means derivative with respect to the cosmic time t. Moreover, using the equation of motion \ddot{\phi}=-3H\dot{\phi}-V'(\phi), we also get

\delta=\bar{\eta}=\dfrac{1}{H}\dfrac{\ddot{\phi}}{\dot{\phi}}=-\dfrac{1}{H\dot{\phi}}\left(3H\dot{\phi}+V'(\phi)\right)

Remark: \varepsilon<<1 during inflation, while \varepsilon=2 in a RD Universe; indeed, \varepsilon<1 can be taken as the definition of an inflationary phase!
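The slow-roll dynamics governed by \ddot{\phi}+3H\dot{\phi}+V'(\phi)=0 can be sketched with a simple Euler integration (a toy model with constant H and a hypothetical quadratic potential V=m^2\phi^2/2, all in arbitrary units):

```python
# Euler integration of phi'' + 3 H phi' + V'(phi) = 0 with V = m^2 phi^2 / 2
# on a toy de Sitter background (constant H). After a brief transient, the
# velocity locks onto the slow-roll attractor phi' ~ -V'(phi) / (3H).

H = 1.0
m = 0.1
phi, phi_dot = 10.0, 0.0
dt = 0.001

for _ in range(10000):                            # integrate up to t = 10
    phi_ddot = -3.0 * H * phi_dot - m ** 2 * phi  # V'(phi) = m^2 phi
    phi_dot += phi_ddot * dt
    phi += phi_dot * dt

slow_roll = -m ** 2 * phi / (3.0 * H)             # attractor prediction -V'/(3H)
print(phi_dot, slow_roll)                         # nearly equal
```

The Hubble friction term 3H\dot{\phi} is what erases the memory of the initial velocity and drives the field onto the attractor.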

Quintessence and phantom energy

For any scalar field, and the pressure and energy density given above, we can calculate the \omega quantity:

\omega=\dfrac{p_\omega}{\rho_\omega}=\dfrac{\dfrac{1}{2}\dot{\phi}^2-V(\phi)}{\dfrac{1}{2}\dot{\phi}^2+V(\phi)}

In the case that the kinetic term is negligible, we obtain the dark energy fluid value \omega_\Lambda=-1. However, this equation is much more general. If we have a slow-rolling scalar field over cosmological times (a very slowly time-dependent scalar field), it could mimic the cosmological constant behaviour (we are ignoring some technical problems here). If the kinetic term is NOT negligible, the \omega value can differ from the standard value of -1 (dark energy/cosmological constant/vacuum energy). However, current observations support a pure cosmological constant term. Moreover, this general scalar field is commonly referred to as “quintessence” if -1<\omega_\phi<-1/3 and as “phantom energy” if \omega_\phi<-1. Dark energy, quintessence and phantom energy models can all affect the future of the Universe. Instead of a Big Freeze (the thermal death of the Universe), the scalar-dominated Universe can even destroy atoms/matter/galaxies at some point in the future (at least on very general grounds) and terminate the Universe in a Big (or Little) Rip. Thus, let me review the possible destinies of the Universe according to modern Cosmology:

1) Big Crunch (or recollapse of the Universe). The Universe recollapses to a final singularity after some time, if it is dense enough. Current observations don’t favour this case.

2) Big Freeze (or thermal death of the Universe). The Universe expands forever cooling itself until it reaches a temperature close to the absolute zero. It was believed that it was the only possible option with the given curvature until the discovery of dark energy in 1998.

3) Big Rip (or Little Rip, depending on the nature of the scalar field and the concrete model). Vacuum energy expands the Universe at an increasing rate until it “rips” even fundamental particles/atoms/matter and galaxies apart from each other. It is a new possibility due to the existence of scalar fields and/or dark energy, a mysterious energy that makes the Universe expand at an increasing rate, overcoming the gravitational pull of galaxies and clusters!

A more exotic option is that the Universe suffers “oscillations” of positively accelerated expansion and negatively accelerated expansion (oscillatory/cyclic eternal Universes)… But that is another story…
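The \omega-based naming scheme above can be summarized in a small helper (a labeling sketch, not a dynamical model):

```python
# Classifying a scalar "dark fluid" by its equation-of-state parameter w,
# following the ranges quoted above: w = -1 (cosmological constant),
# -1 < w < -1/3 (quintessence), w < -1 (phantom energy).

def classify_dark_fluid(w, tol=1e-9):
    if abs(w + 1.0) < tol:
        return "cosmological constant"
    if w < -1.0:
        return "phantom energy"
    if -1.0 < w < -1.0 / 3.0:
        return "quintessence"
    return "ordinary fluid (no acceleration)"

print(classify_dark_fluid(-1.0))   # cosmological constant
print(classify_dark_fluid(-0.8))   # quintessence
print(classify_dark_fluid(-1.2))   # phantom energy
```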

See you in my final basic cosmological post!


LOG#057. Naturalness problems.


In this short blog post, I am going to list some of the greatest “naturalness” problems in Physics. It has nothing to do with some delicious natural dishes I like, but there is a natural beauty and sweetness related to naturalness problems in Physics. In fact, they include some hierarchy problems and additional problems related to stunning free values of parameters in our theories.

Naturalness problems arise when the “naturally expected” property of some free parameters or fundamental “constants” to appear as quantities of order one is violated, and thus those parameters or constants appear to be very large or very small quantities. That is, naturalness problems are problems of unnatural tuning of “scales” of length, energy, field strength, … A value of 0.99 or 1.1, or even 0.7 and 2.3, is “more natural” than, e.g., 100000, 10^{-4},10^{122}, 10^{23},\ldots Equivalently, imagine that the value of every fundamental and measurable physical quantity X lies in the real interval \left[0,\infty\right). Then, 1 (or values very close to it) are “natural” values of the parameters, while the two extrema 0 or \infty are “unnatural”. As we do know, in Physics, zero values are usually explained by some “fundamental symmetry”, while extremely large parameters or even \infty can be shown to be “unphysical” or “unnatural”. In fact, renormalization in QFT was invented to avoid quantities that are “infinite” at first sight, and regularization provides some prescriptions to assign “natural numbers” to quantities that are formally ill-defined or infinite. However, naturalness goes beyond those last comments, and it arises in very different scenarios and physical theories. It is quite remarkable that naturalness can be framed around 3 of the most important “numbers” in Mathematics:

(0, 1, \infty)

REMEMBER: Naturalness of X is, thus, being 1 or close to it, while values approaching 0 or \infty are unnatural. Therefore, if some day you hear a physicist talking/speaking/lecturing about “naturalness”, remember the triple (0,1,\infty) and then assign “some magnitude/constant/parameter” a quantity close to one of those numbers. If it approaches 1, the parameter is natural, and unnatural if it approaches either of the other two numbers, zero or infinity!

I have never seen a systematic classification of naturalness problems into types. I am going to do it here today. We could classify naturalness problems into:

1st. Hierarchy problems. They are naturalness problems related to the mass/energy spectrum or the energy scale of interactions and fundamental particles.

2nd. Nullity/Smallness problems. These are naturalness problems related to free parameters which are, surprisingly, close to zero/null value, even when we have no knowledge of a deep reason to understand why it happens.

3rd. Large number problems (or hypotheses). This class of problems can be equivalently thought of as reciprocal nullity problems, but they arise naturally themselves in cosmological contexts, when we consider a large number of particles (e.g., in “statistical physics”), or when we face two theories in very different “parameter spaces”. Dirac pioneered this class of hypotheses when he realized that some large number coincidences relate quantities appearing in particle physics and cosmology. His large number hypothesis is an old example of this kind of naturalness problem.

4th. Coincidence problems. This 4th type of problems is related to why some different parameters of the same magnitude are similar in order of magnitude.

The following list of concrete naturalness problems is not going to be complete, but it can serve as a guide of what theoretical physicists are trying to understand better:

1. The little hierarchy problem. From the phenomenon called neutrino oscillations (NO) and neutrino oscillation experiments (NOSEX), we can know the difference between the squared masses of neutrinos. Furthermore, cosmological measurements allow us to put tight bounds on the total mass (energy) of light neutrinos in the Universe. The most conservative estimations give m_\nu \leq 10 eV, and even m_\nu \sim 1eV as an upper bound is quite likely to be true. On the other hand, NOSEX seems to say that there are two mass differences, \Delta m^2_1\sim 10^{-3}eV^2 and \Delta m^2_2\sim 10^{-5}eV^2. However, we don’t know yet what kind of spectrum neutrinos have (normal, inverted or quasi-degenerate). Taking a neutrino mass of about 1 meV as a reference, the little hierarchy problem is the question of why neutrino masses are so light when compared with the remaining leptons, quarks and gauge bosons (excepting, of course, the gluon and photon, massless due to gauge invariance).

Why is m_\nu << m_e,m_\mu, m_\tau, m_Z,M_W, m_{proton}?

We don’t know! Let me quote a wonderful sentence of a very famous short story by Asimov to describe this result and problem:

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”
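To put rough numbers on this neutrino-electron hierarchy, take the square root of the quoted mass-squared difference as an illustrative neutrino mass scale:

```python
import math

# Rough size of the neutrino-to-electron mass hierarchy, using the quoted
# mass-squared differences (in eV^2) as illustrative order-of-magnitude inputs.

dm2_atm = 1e-3                  # eV^2 (quoted order of magnitude)
m_e = 0.511e6                   # electron mass, ~511 keV in eV

m_nu = math.sqrt(dm2_atm)       # ~0.03 eV: a lower bound on the heaviest neutrino
print(f"m_nu ~ {m_nu:.2f} eV, m_nu/m_e ~ {m_nu / m_e:.0e}")
```

Even this lower bound sits seven to eight orders of magnitude below the electron mass, which is the "little hierarchy" in question.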

2. The gauge hierarchy problem. The electroweak (EW) scale can be generally represented by the Z or W boson mass scale. Interestingly, from this summer’s results, the Higgs boson mass seems to be, more or less, of the same order of magnitude as the gauge bosons. Then, the electroweak scale is about M_Z\sim M_W \sim \mathcal{O}(100GeV). Likely, it is also of the order of the Higgs mass. On the other hand, the Planck scale, where we expect (naively or not, that is another question!) quantum effects of gravity to naturally arise, is provided by the Planck mass scale:

M_P=\sqrt{\dfrac{\hbar c}{8\pi G}}=2.4\cdot 10^{18}GeV=2.4\cdot 10^{15}TeV

or more generally, dropping the 8\pi factor

M_P =\sqrt{\dfrac{\hbar c}{G}}=1.22\cdot 10^{19}GeV=1.22\cdot 10^{16}TeV
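Both numbers can be reproduced from the SI values of \hbar, c and G (standard CODATA-style values; the conversion to GeV uses the electron-volt in joules):

```python
import math

# Computing the Planck energy scale sqrt(hbar c^5 / G) from SI constants
# and converting to GeV; reproduces the ~1.22e19 GeV quoted above.

hbar = 1.054571817e-34      # J s
c = 2.99792458e8            # m / s
G = 6.67430e-11             # m^3 kg^-1 s^-2
eV = 1.602176634e-19        # J

E_planck_J = math.sqrt(hbar * c ** 5 / G)       # Planck energy in joules
E_planck_GeV = E_planck_J / (1e9 * eV)
print(f"M_P c^2 ~ {E_planck_GeV:.3e} GeV")      # ~1.22e19 GeV

# The "reduced" Planck mass divides by sqrt(8 pi):
print(f"reduced M_P ~ {E_planck_GeV / math.sqrt(8 * math.pi):.2e} GeV")  # ~2.4e18 GeV
```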

Why is the EW mass (energy) scale so small compared to the Planck mass, i.e., why are the masses M_{EW}<<M_P so different? The problem is hard, since we do know that EW masses, e.g., for scalar particles like the Higgs particle (not protected by any SM gauge symmetry), should receive quantum contributions of order \mathcal{O}(M_P^2).

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

3. The cosmological constant (hierarchy) problem. The cosmological constant \Lambda, from the so-called Einstein’s field equations of classical relativistic gravity

\mathcal{R}_{\mu\nu}-\dfrac{1}{2}g_{\mu\nu}\mathcal{R}=8\pi G\mathcal{T}_{\mu\nu}+\Lambda g_{\mu\nu}

is estimated to be about \mathcal{O} (10^{-47})GeV^4 from the cosmological fitting procedures. The Standard Cosmological Model, with the CMB and other parallel measurements like large scale structures or supernovae data, agree with such a cosmological constant value. However, in the framework of Quantum Field Theories, it should receive quantum corrections coming from vacuum energies of the fields. Those contributions are unnaturally big, about \mathcal{O}(M_P^4) or in the framework of supersymmetric field theories, \mathcal{O}(M^4_{SUSY}) after SUSY symmetry breaking. Then, the problem is:

Why is \rho_\Lambda^{obs}<<\rho_\Lambda^{th}? Even with TeV or PeV fundamental SUSY (or higher) we have a serious mismatch here! The mismatch is about 60 orders of magnitude even in the best known theory! And it is about 122-123 orders of magnitude if we compare directly the cosmological constant vacuum energy we observe with the one we calculate (naively or not) with our current best theories using QFT or supersymmetric QFT! Then, this problem is a hierarchy problem and a large number problem as well. Again, and sadly, we don’t know why there is such a big gap between mass scales of the same thing! This is the biggest problem in theoretical physics and one of the worst predictions/failures in the history of Physics. However,

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

4. The strong CP problem/puzzle. From neutron electric dipole moment measurements, theoretical physicists can constrain the so-called \theta-angle of QCD (Quantum Chromodynamics). The theta angle gives an extra contribution to the QCD lagrangian:

\mathcal{L}_{\mathcal{QCD}}\supset -\dfrac{1}{4g_s^2}G_{\mu\nu}G^{\mu\nu}+\dfrac{\theta}{16\pi^2}G^{\mu\nu}\tilde{G}_{\mu\nu}

The theta angle is not provided by the SM framework and it is a free parameter. Experimentally,

\theta <10^{-12}

while, from the theoretical side, it could be any number in the interval \left[-\pi,\pi\right]. Why is \theta so close to the zero/null value? That is the strong CP problem! Once again, we don’t know. Perhaps a new symmetry?

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

5. The flatness problem/puzzle. In the Standard Cosmological Model, also known as the \Lambda CDM model, the curvature of the Universe is related to the critical density and the Hubble “constant”:

\dfrac{1}{R^2}=H^2\left(\dfrac{\rho}{\rho_c}-1\right)

There, \rho is the total energy density contained in the whole Universe and \rho_c=\dfrac{3H^2}{8\pi G} is the so-called critical density. The flatness problem arises when we deduce from cosmological data that:

\left(\dfrac{1}{R^2}\right)_{data}\sim 0.01

At the Planck scale era, we can even calculate that

\left(\dfrac{1}{R^2}\right)_{Planck\;\; era}\sim\mathcal{O}(10^{-61})

This result means that the Universe is “flat”. However, why did the Universe have such a small curvature? Why is the current curvature still so “small”? We don’t know. However, cosmologists working on this problem say that “inflation” and “inflationary” cosmological models can (at least in principle) solve this problem. There are even more radical (and stranger) theories, such as varying-speed-of-light theories, trying to explain this, but they are less popular than inflationary cosmologies/theories. Indeed, inflationary theories are popular because they include scalar fields, similar in Nature to the scalar particles that arise in the Higgs mechanism and other beyond the Standard Model (BSM) theories. We don’t know if inflation theory is right yet, so

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

6. The flavour problem/puzzle. The ratios of successive SM fermion mass eigenvalues (the electron, muon, and tau), as well as the angles appearing in one gadget called the CKM (Cabibbo-Kobayashi-Maskawa) matrix, are roughly of the same order of magnitude. The issue is harder to settle (but it is likely true as well) for constituent quark masses. However, why do they follow this particular pattern/spectrum and structure? Even more, there is a mysterious lepton-quark complementarity. The analogue of the CKM matrix in the leptonic sector is called the PMNS matrix (Pontecorvo-Maki-Nakagawa-Sakata matrix), and it describes the neutrino oscillation phenomenology. It shows that the angles of the PMNS matrix are roughly complementary to those in the CKM matrix (remember that two angles are said to be complementary when they add up to 90 sexagesimal degrees). What is the origin of this lepton(neutrino)-quark(constituent) complementarity? In fact, the two questions are related since, roughly speaking, the mixing angles are related to the ratios of masses (quarks and neutrinos). Therefore, this problem, if solved, could shed light on the issue of the particle spectrum, or at least it could help us understand the relationship between quark masses and neutrino masses. Of course, we don’t know how to solve this puzzle at the current time. And once again:

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

7. Cosmic matter-dark energy coincidence. At the current time, the densities of matter and vacuum energy are roughly of the same order of magnitude, i.e., \rho_M\sim\rho_\Lambda=\rho_{DE}. Why now? We do not know!

“THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

And my weblog is only just beginning! See you soon in my next post! 🙂