Happy New Year 2013 to everyone and everywhere!
Let me apologize, first of all, for my absence… I have been busy trying to find my path and way in my field, and I am still busy, but finally I could not resist a new blog boost… After all, you should know that I have enough material to write many new things.
So, what’s next? I will dedicate some blog posts to discuss a nice topic I began before, talking about a classic paper on the subject here:
The topic is going to be pretty simple: natural units in Physics.
First of all, let me point out that the choice of any system of units is, a priori, totally conventional. You are free to choose any kind of units for physical magnitudes. Of course, that is not very clever if you have to report data, since everyone should be able to understand what you do and report. Scientists have some definitions and popular systems of units that make the process much simpler than in daily life. Then, we need some general conventions about “units”. Indeed, the traditional wisdom is to use the International System of Units, or SI (abbreviated from the French: Le Système international d’unités). There, you can find seven fundamental magnitudes and seven fundamental (or “base”) units:
1) Length: the meter (m).
2) Mass: the kilogram (kg).
3) Time: the second (s).
4) Temperature: the kelvin (K).
5) Electric intensity (electric current): the ampere (A).
6) Luminous intensity: the candela (cd).
7) Amount of substance: the mole (mol).
The interdependence between these 7 great units and even their definitions can be found here http://en.wikipedia.org/wiki/International_System_of_Units and references therein. I cannot resist showing you the beautiful graph of the 7 wonderful units that this wikipedia article provides about their “interdependence”:
In Physics, when you build a radically new theory, it generally has the power to introduce a relevant scale or system of units. Especially, the Special Theory of Relativity and Quantum Mechanics are such theories. General Relativity and Statistical Physics (Statistical Mechanics) also have intrinsic “universal constants”, or, to be more precise, they allow the introduction of some “more convenient” system of units than those you have ever heard of (metric system, SI, MKS, cgs, …). When I spoke about Barrow units (see previous comment above) in this blog, we realized that dimensionality (both mathematical and “physical”) and fundamental theories are bound to the choice of some “simpler” units. Those “simpler” units are what we usually call “natural units”. I am not a big fan of such terminology. It is a little bit confusing. Maybe it would be more interesting and appropriate to call them “adapted X units” or “scaled X units”, where X denotes “relativistic, quantum,…”. Anyway, the name “natural” is popular and it is likely impossible to change the habit.
In fact, we have to distinguish several “kinds” of natural units. First of all, let me list the “fundamental and universal” constants in the different theories accepted at the current time:
1. Boltzmann constant: $k_B$.
Essential in Statistical Mechanics, both classical and quantum. It measures “entropy”/“information”. The fundamental equation is:
$S = k_B \ln \Omega$
It provides a link between the microphysics and the macrophysics ($k_B$ is the code behind the equation above). It can be understood somehow as a measure of the “energetic content” of an individual particle or state at a given temperature. A common value for this constant is:
$k_B \approx 1.3806 \times 10^{-23}\,\mbox{J/K}$
Statistical Physics states that there is a minimum unit of entropy or a minimal value of energy at any given temperature. Physical dimensions of this constant are thus those of entropy, or energy divided by temperature: since $[S] = [k_B]$, $[k_B] = ML^2T^{-2}t^{-1}$, where t denotes here the dimension of temperature.
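As a quick numerical check, here is a minimal Python sketch of $k_B$ as the bridge between temperature and energy (the 300 K “room temperature” is just an assumed reference value):

```python
k_B = 1.380649e-23     # Boltzmann constant in J/K (exact in the 2019 SI)
eV  = 1.602176634e-19  # one electron-volt in joules (exact)

T_room = 300.0                 # an assumed "room temperature" in kelvin
E_thermal = k_B * T_room       # thermal energy scale in joules
E_thermal_eV = E_thermal / eV  # the same energy in electron-volts
print(E_thermal_eV)            # roughly 0.026 eV, the famous "25 meV" thermal scale
```

This “25 meV at room temperature” number is ubiquitous in condensed matter and semiconductor physics.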
2. Speed of light: $c$.
From classical electromagnetism:
$c = \dfrac{1}{\sqrt{\varepsilon_0\mu_0}}$
The speed of light, according to the postulates of special relativity, is a universal constant. It is frame INDEPENDENT. This fact is at the root of many of the surprising results of special relativity, and it took time to be understood. Moreover, it also connects space and time in a powerful unified formalism, so space and time merge into spacetime, as we do know and have studied long ago in this blog. The spacetime interval in a D=3+1 dimensional spacetime, for two arbitrary events, reads:
$ds^2 = c^2dt^2 - dx^2 - dy^2 - dz^2$
In fact, you can observe that “c” is the conversion factor between time-like and space-like coordinates. How big is the speed of light? Well, it is a relatively large number from our common and ordinary perception. It is exactly:
$c = 299\,792\,458\,\mbox{m/s}$
although you often take it as $c \approx 3 \times 10^8\,\mbox{m/s}$. However, it is the speed of electromagnetic waves in vacuum, no matter where you are in this Universe/Polyverse. At least, experiments are consistent with such a statement. Moreover, c is also the conversion factor between energy and momentum, since
$E^2 = (pc)^2 + (mc^2)^2$
and it is the conversion factor between rest mass and pure energy, because, as everybody knows, $E = mc^2$! According to the special theory of relativity, normal matter can never exceed the speed of light. Therefore, the speed of light is the maximum velocity in Nature, at least if special relativity holds. Physical dimensions of c are $LT^{-1}$, where L denotes length dimension and T denotes time dimension (please, don’t confuse it with temperature despite the same capital letter for both symbols).
3. Planck’s constant: $h$, or generally its rationalized version $\hbar = h/2\pi$.
Planck’s constant (or its rationalized version) is the fundamental universal constant in Quantum Physics (Quantum Mechanics, Quantum Field Theory). It gives
$E = hf = \hbar\omega$
Indeed, quanta are the minimal units of energy. That is, you cannot divide further a quantum of light, since it is indivisible by definition! Furthermore, the de Broglie relationship relates momentum and wavelength for any particle, and it emerges from the combination of special relativity and the quantum hypothesis:
$\lambda = \dfrac{h}{p}$
In the case of massive particles, it yields
$\lambda = \dfrac{h}{\gamma mv}$
In the case of massless particles (photons, gluons, gravitons,…)
$\lambda = \dfrac{h}{p} = \dfrac{hc}{E}$
Planck’s constant also appears to be essential to the Heisenberg uncertainty principle:
$\Delta x \Delta p \geq \dfrac{\hbar}{2}$
Some particularly important values of this constant are:
$h \approx 6.626 \times 10^{-34}\,\mbox{J}\cdot\mbox{s} \qquad \hbar \approx 1.055 \times 10^{-34}\,\mbox{J}\cdot\mbox{s}$
It is also useful to know that
$\hbar c \approx 197.33\,\mbox{MeV}\cdot\mbox{fm}$
Planck’s constant has dimensions of $\mbox{energy}\times\mbox{time}$. Physical dimensions of this constant coincide also with those of angular momentum (spin), i.e., with $ML^2T^{-1}$.
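The memorable $\hbar c$ combination is worth checking numerically; a minimal Python sketch with CODATA-style values:

```python
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c    = 299792458.0       # speed of light, m/s (exact)
eV   = 1.602176634e-19   # electron-volt in joules (exact)

# hbar*c expressed in MeV*fm: divide by one MeV (in J) and by one fm (in m)
hbar_c_MeV_fm = hbar * c / (eV * 1e6) / 1e-15
print(hbar_c_MeV_fm)     # close to the memorable 197.33 MeV*fm
```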
4. Gravitational constant: $G_N$.
Apparently, it is not like the others, but it can also define some particular scale when combined with Special Relativity. Without entering into further details (since I have not discussed General Relativity yet in this blog), we can calculate the escape velocity of a body and set it equal to the speed of light,
$v_{esc} = \sqrt{\dfrac{2G_NM}{R}} = c$
which implies a new length scale where gravitational relativistic effects do appear, the so-called Schwarzschild radius $R_S$:
$R_S = \dfrac{2G_NM}{c^2}$
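To see the scale this radius sets, here is a small Python sketch (the rounded solar mass value is an assumed input):

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 299792458.0     # speed of light, m/s
M_sun = 1.989e30    # solar mass in kg (assumed rounded value)

r_s = 2.0 * G * M_sun / c**2   # Schwarzschild radius of the Sun, in metres
print(r_s / 1000.0)            # about 3 km
```

Compress the Sun below roughly 3 km and it becomes a black hole.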
5. Electric fundamental charge: $e$.
The electric charge of the positron (the positively charged “electron”) is generally chosen as the fundamental charge. Its value is:
$e \approx 1.602 \times 10^{-19}\,\mbox{C}$
where C denotes Coulomb. Of course, if you know about quarks, with a fraction of this charge, you could ask why we prefer this one. Really, it is only a question of the history of Science, since electrons (and positrons) were discovered first. Quarks, with one third or two thirds of this amount of elementary charge, were discovered later, but you could define the fundamental unit of charge as a multiple or an entire fraction of this charge. Moreover, as far as we know, electrons are “elementary”/“fundamental” entities, so we can use this charge as a unit and define quark charges in terms of it too. Electric charge is not a fundamental unit in the SI system of units. Charge flow, or electric current, is.
An amazing property of the above 5 constants is that they are “universal”. And, for instance, energy is related with other magnitudes in theories where the above constants are present in a really wonderful and unified manner:
$E = mc^2 = hf = \hbar\omega = \hbar ck = k_BT$
Caution: k is not the Boltzmann constant here, but the wave number.
There is a sixth “fundamental” constant related to electromagnetism, but it is also related to the speed of light, the electric charge and Planck’s constant in a very subtle way. Let me introduce it to you too…
6. Coulomb constant: $k_C = \dfrac{1}{4\pi\varepsilon_0}$.
This is a second constant related to classical electromagnetism, like the speed of light in vacuum. Coulomb’s constant, the electric force constant, or the electrostatic constant (denoted $k_C$) is a proportionality factor that takes part in the equations relating the electric force between point charges, and indirectly it also appears (depending on your system of units) in expressions for electric fields of charge distributions. Coulomb’s law reads
$F = k_C\dfrac{q_1q_2}{r^2}$
Its experimental value is
$k_C \approx 8.988 \times 10^9\,\mbox{N}\cdot\mbox{m}^2/\mbox{C}^2$
Generally, the Coulomb constant is dropped, and it is usually preferred to express everything using the electric permittivity of vacuum and/or numerical factors depending on the number π, if you choose the Gaussian system of units (read this wikipedia article http://en.wikipedia.org/wiki/Gaussian_system_of_units ), the CGS system, or some hybrid units based on them.
High Energy Physicists usually employ units in which velocity is measured in fractions of the speed of light in vacuum, and action/angular momentum in multiples of the rationalized Planck constant. These conditions are equivalent to setting
$c = \hbar = 1$
Complementarily, or not, depending on your tastes and preferences, you can also set the Boltzmann constant to one as well,
$k_B = 1$
and thus the complete HEP system is defined if you set
$c = \hbar = k_B = 1$
This “natural” system of units still lacks a scale of energy. Then, the electron-volt is generally added as an auxiliary quantity defining the reference energy scale, despite the fact that it is not a “natural unit” in the proper sense, because it is defined through a natural property, the electric charge, and the anthropogenic unit of electric potential, the volt. The SI-prefixed multiples of the eV are used as well: keV, MeV, GeV, etc. Here, the eV is used as the reference energy quantity, and with the above choice of “elementary/natural units” (or any other auxiliary unit of energy), any quantity can be expressed. For example, a distance of 1 m can be expressed in terms of eV, in natural units, as
$1\,\mbox{m} \approx 5.07 \times 10^6\,\mbox{eV}^{-1}$
This system of units has remarkable conversion factors:
A) 1 eV⁻¹ of length is equal to $\dfrac{\hbar c}{1\,\mbox{eV}} \approx 1.97 \times 10^{-7}\,\mbox{m}$
B) 1 eV of mass is equal to $\dfrac{1\,\mbox{eV}}{c^2} \approx 1.78 \times 10^{-36}\,\mbox{kg}$
C) 1 eV⁻¹ of time is equal to $\dfrac{\hbar}{1\,\mbox{eV}} \approx 6.58 \times 10^{-16}\,\mbox{s}$
D) 1 eV of temperature is equal to $\dfrac{1\,\mbox{eV}}{k_B} \approx 1.16 \times 10^4\,\mbox{K}$
E) 1 unit of electric charge in the Lorentz-Heaviside system of units is equal to $\sqrt{\varepsilon_0\hbar c} \approx 5.29 \times 10^{-19}\,\mbox{C}$
F) 1 unit of electric charge in the Gaussian system of units is equal to $\sqrt{4\pi\varepsilon_0\hbar c} \approx 1.88 \times 10^{-18}\,\mbox{C}$
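All the conversion factors above follow from just $\hbar$, $c$ and $k_B$; here is a minimal Python sketch reproducing them (CODATA-style values assumed):

```python
hbar = 1.054571817e-34   # J*s
c    = 299792458.0       # m/s
eV   = 1.602176634e-19   # J
k_B  = 1.380649e-23      # J/K

hbar_c = hbar * c / eV           # eV*m; "hbar*c = 1" ties length to 1/energy
length_per_inv_eV = hbar_c       # metres in 1 eV^-1 of length   (~1.97e-7 m)
time_per_inv_eV   = hbar / eV    # seconds in 1 eV^-1 of time    (~6.58e-16 s)
mass_per_eV       = eV / c**2    # kilograms in 1 eV of mass     (~1.78e-36 kg)
kelvin_per_eV     = eV / k_B     # kelvins in 1 eV of temperature (~1.16e4 K)

one_metre_in_inv_eV = 1.0 / hbar_c   # ~5.07e6 eV^-1, the value quoted above
```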
This system of units, therefore, leaves free only the energy scale (generally the electron-volt is chosen) and the electric measure of the fundamental charge. Every other unit can be related to energy/charge. It is truly remarkable that by doing this (turning the above three constants invisible) you can “unify” different magnitudes, due to the fact that these conventions make them equivalent. For instance, with natural units:
$[E] = [m] = [T] = [L]^{-1} = [t]^{-1}$
It is due to the $E = mc^2$, $E = \hbar\omega$ and $E = k_BT$ equations. Setting $c = \hbar = 1$ and $k_B = 1$ provides
$E = m$, $E = \omega = \dfrac{1}{t}$ and $E = T$.
Note that natural units turn invisible the constants we set to one! That is the key of the procedure. It simplifies equations and expressions. Of course, you must be careful when you reintroduce the constants!
One extra bonus for theoretical physicists is that natural units allow us to build and write proper lagrangians and hamiltonians (certain mathematical operators containing the dynamics of the system encoded in them), or equivalently the action functional, with only the energy or “mass” dimension as a “free parameter”. Let me show how it works.
Natural units in HEP identify length and time dimensions. Thus $[L] = [T]$. Planck’s constant allows us to identify those 2 dimensions with 1/Energy (reciprocal energy) physical dimensions. Therefore, in HEP units, we have
$[L] = [T] = [E]^{-1}$
The speed of light identifies energy and mass, and thus we can often hear about the “mass dimension” of a lagrangian in the following sense. HEP units can be thought of as defining “everything” in terms of energy, on purely dimensional grounds. That is, every physical dimension is (in HEP units) defined by a power of energy:
$[X] = [E]^n$
Thus, we can refer to any magnitude simply by stating the power n of such a physical dimension (or you can think logarithmically to understand it more easily if you wish). With this convention, and recalling that energy dimension is mass dimension, we have that
$[L] = [T] = [m]^{-1} = [E]^{-1}$
Using these arguments, the action functional is a pure dimensionless quantity, and thus, in D=4 spacetime dimensions, lagrangian densities must have mass dimension 4 (or dimension D in a general spacetime).
In D=4 spacetime dimensions, it can be easily shown that
$[\phi] = [A_\mu] = 1 \qquad [\psi] = \dfrac{3}{2}$
where $\phi$ is a scalar field, $A_\mu$ is a vector field (like the electromagnetic or non-abelian vector gauge fields), and $\psi$ denotes a Dirac spinor, a Majorana spinor, or Weyl spinors (of different chiralities). Supersymmetry (or SUSY) allows for anticommuting c-numbers (or Grassmann numbers) and it forces us to introduce auxiliary parameters with mass dimension $-1/2$. They are the so-called SUSY transformation parameters. There are some speculative spinors called ELKO fields that could be non-standard spinor fields with mass dimension one! But it is an advanced topic I am not going to discuss here today. In general D spacetime dimensions, a scalar (or vector) field has mass dimension $(D-2)/2$, and a spinor/fermionic field in D dimensions generally has mass dimension $(D-1)/2$ (excepting the auxiliary SUSY Grassmannian fields and the exotic idea of ELKO fields). This dimensional analysis is very useful when theoretical physicists build up interacting lagrangians, since we can guess the structure of the interactions by applying purely dimensional arguments to every possible operator entering into the action/lagrangian density! In summary, therefore, for any D:
$[\phi] = [A_\mu] = \dfrac{D-2}{2} \qquad [\psi] = \dfrac{D-1}{2}$
Remark (for QFT experts only): Don’t confuse the mass dimension with the final transverse polarization degrees or “degrees of freedom” of a particular field, i.e., “components” minus “gauge constraints”. E.g.: a gauge vector field has $D-2$ degrees of freedom in D dimensions. They are different concepts (although both are closely related to the spacetime dimension where the field “lives”).
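The counting rules above are easy to encode; a small Python sketch (the function name and interface are mine, not standard, and the exotic cases are deliberately excluded):

```python
def mass_dimension(field: str, D: int = 4) -> float:
    """Canonical mass dimension of a free field in D spacetime dimensions,
    in HEP natural units. Excludes SUSY Grassmann parameters and the
    speculative ELKO spinors discussed in the text."""
    if field in ("scalar", "vector"):
        return (D - 2) / 2.0
    if field in ("dirac", "majorana", "weyl"):
        return (D - 1) / 2.0
    raise ValueError(f"unknown field type: {field}")

print(mass_dimension("scalar"))        # 1.0 in D=4
print(mass_dimension("dirac"))         # 1.5 in D=4
print(mass_dimension("vector", D=10))  # 4.0 in D=10
```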
i) HEP units are based on QM (Quantum Mechanics), SR (Special Relativity) and Statistical Mechanics (Entropy and Thermodynamics).
ii) HEP units need to introduce a free energy scale, and it generally drives us to use the eV or electron-volt as auxiliary energy scale.
iii) HEP units are useful for the dimensional analysis of lagrangians (and hamiltonians) in terms of the “mass dimension”.
In Physics, the Stoney units form an alternative set of natural units named after the Irish physicist George Johnstone Stoney, who first introduced them as we know them today in 1881. However, he presented the idea before that date, in 1874, in a lecture entitled “On the Physical Units of Nature” delivered to the British Association. They are the first historical example of natural units and, somehow, of a “unification scale”. Stoney units are rarely used for calculations in modern physics; they are mainly of historical interest, but some people like Wilczek have written about them (see, e.g., http://arxiv.org/abs/0708.4361). These units of measurement were designed so that certain fundamental physical constants are taken as the reference basis without the Planck scale being explicit, quite a remarkable fact! The set of constants that Stoney used as base units is the following:
A) Electric charge, $e$.
B) Speed of light in vacuum, $c$.
C) Gravitational constant, $G_N$.
D) The reciprocal of the Coulomb constant, $1/k_C = 4\pi\varepsilon_0$.
Stoney units are built when you set these four constants to one, i.e., equivalently, the Stoney System of Units (S) is determined by the assignments:
$c = G_N = e = k_C = 1$
Interestingly, in this system of units, the Planck constant is not equal to one and it is not “fundamental” (Wilczek remarked this fact in the paper linked above), but:
$\hbar = \dfrac{1}{\alpha} \approx 137$ (in Stoney units)
Today, Planck units are more popular than Stoney units in modern physics, and there are even many physicists who don’t know about the Stoney Units! In fact, Stoney was one of the first scientists to understand that electric charge was quantized; from this quantization he deduced the units that are now named after him.
The Stoney length and the Stoney energy are collectively called the Stoney scale, and they are not far from the Planck length and the Planck energy, the Planck scale. The Stoney scale and the Planck scale are the length and energy scales at which quantum processes and gravity occur together. At these scales, a unified theory of physics is thus likely required. The only notable attempt to construct such a theory from the Stoney scale was that of H. Weyl, who associated a gravitational unit of charge with the Stoney length and who appears to have inspired Dirac’s fascination with the large number hypothesis. Since then, the Stoney scale has been largely neglected in the development of modern physics, although it is occasionally discussed to this day. Wilczek likes to point out that, in Stoney Units, QM would be an emergent phenomenon/theory, since the Planck constant wouldn’t be present directly but only as a combination of different constants. On the other hand, the Planck scale is valid for all known interactions, and does not give prominence to the electromagnetic interaction, as the Stoney scale does. That is, in Stoney Units, both gravitation and electromagnetism are on an equal footing, unlike the Planck units, where only the speed of light is used and there is no further connection to electromagnetism, at least not in a clean way like in the Stoney Units. Be aware: sometimes, rarely though, Planck units are referred to as Planck-Stoney units.
What are the most interesting Stoney system values? Here you are the most remarkable results:
1) Stoney Length, $L_S = \sqrt{\dfrac{G_Nk_Ce^2}{c^4}} \approx 1.38 \times 10^{-36}\,\mbox{m}$.
2) Stoney Mass, $M_S = \sqrt{\dfrac{k_Ce^2}{G_N}} \approx 1.86 \times 10^{-9}\,\mbox{kg}$.
3) Stoney Energy, $E_S = M_Sc^2 \approx 1.67 \times 10^8\,\mbox{J} \approx 1.04 \times 10^{18}\,\mbox{GeV}$.
4) Stoney Time, $t_S = \dfrac{L_S}{c} \approx 4.6 \times 10^{-45}\,\mbox{s}$.
5) Stoney Charge, $q_S = e \approx 1.60 \times 10^{-19}\,\mbox{C}$.
6) Stoney Temperature, $T_S = \dfrac{E_S}{k_B} \approx 1.2 \times 10^{31}\,\mbox{K}$.
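The Stoney scale follows directly from the four reference constants; here is a minimal Python sketch with CODATA-style values:

```python
import math

G   = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
c   = 299792458.0            # speed of light, m/s
e   = 1.602176634e-19        # elementary charge, C
k_C = 8.9875517873681764e9   # Coulomb constant, N*m^2/C^2

L_S = math.sqrt(G * k_C * e**2 / c**4)   # Stoney length, ~1.38e-36 m
m_S = math.sqrt(k_C * e**2 / G)          # Stoney mass,   ~1.86e-9 kg
t_S = L_S / c                            # Stoney time,   ~4.6e-45 s
E_S = m_S * c**2                         # Stoney energy, ~1.67e8 J
```

Each Stoney quantity is a factor $\sqrt{\alpha} \approx 0.085$ times the corresponding Planck quantity.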
The reference constants of this natural system of units (generally denoted by P) are the following 4 constants:
1) Gravitational constant: $G_N$.
2) Speed of light: $c$.
3) Planck constant or rationalized Planck constant: $\hbar$.
4) Boltzmann constant: $k_B$.
The Planck units are obtained when you set these 4 constants to one, i.e.,
$G_N = c = \hbar = k_B = 1$
It is often said that Planck units are a system of natural units that is not defined in terms of properties of any prototype, physical object, or even features of any fundamental particle. They only refer to the basic structure of the laws of physics: c and G are part of the structure of classical spacetime in the relativistic theory of gravitation, also known as general relativity, and ℏ captures the relationship between energy and frequency which is at the foundation of elementary quantum mechanics. This is the reason why Planck units are particularly useful and common in theories of quantum gravity, including string theory or loop quantum gravity.
This system defines some limit magnitudes, as follows:
1) Planck Length, $L_P = \sqrt{\dfrac{\hbar G_N}{c^3}} \approx 1.616 \times 10^{-35}\,\mbox{m}$.
2) Planck Time, $t_P = \sqrt{\dfrac{\hbar G_N}{c^5}} \approx 5.39 \times 10^{-44}\,\mbox{s}$.
3) Planck Mass, $M_P = \sqrt{\dfrac{\hbar c}{G_N}} \approx 2.18 \times 10^{-8}\,\mbox{kg}$.
4) Planck Energy, $E_P = M_Pc^2 \approx 1.96 \times 10^9\,\mbox{J} \approx 1.22 \times 10^{19}\,\mbox{GeV}$.
5) Planck charge, $q_P$.
In Lorentz-Heaviside electromagnetic units $q_P = \sqrt{\varepsilon_0\hbar c} \approx 5.29 \times 10^{-19}\,\mbox{C}$
In Gaussian electromagnetic units $q_P = \sqrt{4\pi\varepsilon_0\hbar c} \approx 1.88 \times 10^{-18}\,\mbox{C}$
6) Planck temperature, $T_P = \dfrac{E_P}{k_B} \approx 1.42 \times 10^{32}\,\mbox{K}$.
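A minimal Python sketch computing the base Planck quantities from the four reference constants:

```python
import math

hbar = 1.054571817e-34   # J*s
G    = 6.674e-11         # m^3 kg^-1 s^-2
c    = 299792458.0       # m/s
k_B  = 1.380649e-23      # J/K

L_P = math.sqrt(hbar * G / c**3)   # Planck length,      ~1.62e-35 m
t_P = math.sqrt(hbar * G / c**5)   # Planck time,        ~5.39e-44 s
M_P = math.sqrt(hbar * c / G)      # Planck mass,        ~2.18e-8 kg
E_P = M_P * c**2                   # Planck energy,      ~1.96e9 J
T_P = E_P / k_B                    # Planck temperature, ~1.42e32 K
```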
From these “fundamental” magnitudes we can build many derived quantities in the Planck System:
1) Planck area. $L_P^2 \approx 2.6 \times 10^{-70}\,\mbox{m}^2$
2) Planck volume. $L_P^3 \approx 4.2 \times 10^{-105}\,\mbox{m}^3$
3) Planck momentum. $p_P = M_Pc \approx 6.5\,\mbox{kg}\cdot\mbox{m/s}$
A relatively “small” momentum!
4) Planck force. $F_P = \dfrac{c^4}{G_N} \approx 1.2 \times 10^{44}\,\mbox{N}$
It is independent of the Planck constant! Moreover, the Planck acceleration is
$a_P = \dfrac{c}{t_P} \approx 5.6 \times 10^{51}\,\mbox{m/s}^2$
5) Planck Power. $P_P = \dfrac{c^5}{G_N} \approx 3.6 \times 10^{52}\,\mbox{W}$
6) Planck density. $\rho_P = \dfrac{c^5}{\hbar G_N^2} \approx 5.2 \times 10^{96}\,\mbox{kg/m}^3$
The Planck energy density would be equal to
$\rho_Pc^2 \approx 4.6 \times 10^{113}\,\mbox{J/m}^3$
7) Planck angular frequency. $\omega_P = \dfrac{1}{t_P} \approx 1.9 \times 10^{43}\,\mbox{s}^{-1}$
8) Planck pressure. $\Pi_P = \dfrac{F_P}{L_P^2} = \rho_Pc^2 \approx 4.6 \times 10^{113}\,\mbox{Pa}$
Note that the Planck pressure IS the Planck energy density!
9) Planck current. $I_P = \dfrac{q_P}{t_P} \approx 3.5 \times 10^{25}\,\mbox{A}$
10) Planck voltage. $V_P = \dfrac{E_P}{q_P} \approx 1.0 \times 10^{27}\,\mbox{V}$
11) Planck impedance. $Z_P = \dfrac{V_P}{I_P} = \dfrac{1}{4\pi\varepsilon_0c} \approx 30\,\Omega$
A relatively small impedance!
12) Planck capacitance. $C_P = \dfrac{q_P}{V_P} = 4\pi\varepsilon_0L_P \approx 1.8 \times 10^{-45}\,\mbox{F}$
Interestingly, it depends on the gravitational constant!
Some Planck units are suitable for measuring quantities that are familiar from daily experience. In particular:
1 Planck mass is about 22 micrograms.
1 Planck momentum is about 6.5 kg·m/s.
1 Planck energy is about 500 kWh.
1 Planck charge is about 11 elementary (electronic) charges.
1 Planck impedance is almost 30 ohms.
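Those everyday figures can be checked quickly; a Python sketch (the Planck mass and Gaussian Planck charge values are taken from the lists above):

```python
import math

M_P  = 2.176e-8           # Planck mass, kg (from the list above)
c    = 299792458.0        # speed of light, m/s
e    = 1.602176634e-19    # elementary charge, C
q_P  = 1.876e-18          # Planck charge (Gaussian convention), C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

E_P_kWh = M_P * c**2 / 3.6e6              # Planck energy in kWh (~540)
p_P     = M_P * c                         # Planck momentum (~6.5 kg m/s)
n_e     = q_P / e                         # Planck charge in units of e (~11.7)
Z_P     = 1.0 / (4 * math.pi * eps0 * c)  # Planck impedance (~30 ohms)
```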
i) A speed of 1 Planck length per Planck time is the speed of light, the maximum possible speed in special relativity.
ii) To understand the Planck Era and “before” (if that makes sense), supposing QM still holds there, we need a quantum theory of gravity to be available. There is no such theory, though, right now. Therefore, we have to wait to see whether these ideas are right or not.
iii) It is believed that at the Planck temperature, the whole symmetry of the Universe was “perfect” in the sense that the four fundamental forces were “unified” somehow. We have only some vague notions about how that theory of everything (TOE) would be.
The physical dimensions of the known Universe in terms of Planck units are “dramatic”:
i) Age of the Universe is about $8 \times 10^{60}\,t_P$.
ii) Diameter of the observable Universe is about $5.4 \times 10^{61}\,L_P$.
iii) Current temperature of the Universe is about $1.9 \times 10^{-32}\,T_P$.
iv) The observed cosmological constant is about $10^{-122}$ in Planck units ($L_P^{-2}$).
v) The mass of the observable Universe is about $10^{60}\,M_P$.
vi) The Hubble constant is about $1.2 \times 10^{-61}\,t_P^{-1}$.
The Schrödinger Units do not explicitly contain c, the speed of light in vacuum. However, c is hidden within the permittivity of free space [i.e., the electric constant or vacuum permittivity], since the vacuum permittivity is the reciprocal of the magnetic constant times the speed of light squared. So, even though the speed of light is not apparent in the Schrödinger units, it does exist buried within their terms and therefore influences their numerical values. The essence of the Schrödinger units is the following set of constants:
A) Gravitational constant $G_N$.
B) Planck constant $\hbar$.
C) Boltzmann constant $k_B$.
D) Coulomb constant $k_C$, or equivalently the electric permittivity of free space/vacuum $\varepsilon_0$.
E) The electric charge of the positron $e$.
In this system we have
1) Schrödinger Length $l_{S} = \sqrt{\dfrac{\hbar^4G_N}{(k_Ce^2)^3}} = \dfrac{L_P}{\alpha^{3/2}} \approx 2.6 \times 10^{-32}\,\mbox{m}$.
2) Schrödinger time $t_{S} = \sqrt{\dfrac{\hbar^6G_N}{(k_Ce^2)^5}} = \dfrac{t_P}{\alpha^{5/2}} \approx 1.2 \times 10^{-38}\,\mbox{s}$.
3) Schrödinger mass $m_{S} = \sqrt{\dfrac{k_Ce^2}{G_N}} \approx 1.86 \times 10^{-9}\,\mbox{kg}$ (the same as the Stoney mass).
4) Schrödinger energy $E_{S} = \dfrac{\hbar}{t_{S}} \approx 8.9 \times 10^{3}\,\mbox{J}$.
5) Schrödinger charge $q_{S} = e \approx 1.60 \times 10^{-19}\,\mbox{C}$.
6) Schrödinger temperature $T_{S} = \dfrac{E_{S}}{k_B} \approx 6.4 \times 10^{26}\,\mbox{K}$.
There are two alternative systems of atomic units, closely related:
1) Hartree atomic units: $\hbar = e = m_e = k_C = 1$
2) Rydberg atomic units: $\hbar = k_C = 1$, $m_e = \dfrac{1}{2}$, $e = \sqrt{2}$
There, $m_e$ is the electron mass and $\alpha$ is the electromagnetic fine-structure constant. These units are designed to simplify atomic and molecular physics and chemistry, especially the quantities related to the hydrogen atom, and they are widely used in these fields. The Hartree units were first proposed by Douglas Hartree, and they are more common than the Rydberg units.
The units are adapted to characterize the behavior of an electron in the ground state of a hydrogen atom. For example, using the Hartree convention, in the Bohr model of the hydrogen atom, an electron in the ground state has orbital velocity = 1, orbital radius = 1, angular momentum = 1, ionization energy equal to 1/2, and so on.
Some quantities in the Hartree system of units are:
1) Atomic Length (also called the Bohr radius): $a_0 = \dfrac{\hbar^2}{m_ek_Ce^2} \approx 5.29 \times 10^{-11}\,\mbox{m}$
2) Atomic Time: $t_A = \dfrac{\hbar}{E_h} \approx 2.42 \times 10^{-17}\,\mbox{s}$
3) Atomic Mass: $m_A = m_e \approx 9.11 \times 10^{-31}\,\mbox{kg}$
4) Atomic Energy: $E_h = \dfrac{m_ek_C^2e^4}{\hbar^2} \approx 4.36 \times 10^{-18}\,\mbox{J} \approx 27.2\,\mbox{eV}$
5) Atomic electric Charge: $q_A = e \approx 1.60 \times 10^{-19}\,\mbox{C}$
6) Atomic temperature: $T_A = \dfrac{E_h}{k_B} \approx 3.16 \times 10^5\,\mbox{K}$
The fundamental unit of energy is called the Hartree energy in the Hartree system and the Rydberg energy in the Rydberg system. They differ by a factor of 2. The speed of light is relatively large in atomic units ($1/\alpha \approx 137$ in Hartree units, or $2/\alpha \approx 274$ in Rydberg units), which comes from the fact that an electron in hydrogen tends to move much slower than the speed of light. The gravitational constant is extremely small in atomic units (about $10^{-45}$), which comes from the fact that the gravitational force between two electrons is far weaker than the Coulomb force between them. The unit of length, $L_A$, is the well-known Bohr radius, $a_0$.
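A sketch of the Hartree quantities mentioned here (Python, CODATA-style values; $a_0$ and $E_h$ follow from the listed formulas):

```python
hbar = 1.054571817e-34       # J*s
m_e  = 9.1093837015e-31      # electron mass, kg
e    = 1.602176634e-19       # elementary charge, C
k_C  = 8.9875517873681764e9  # Coulomb constant
c    = 299792458.0           # m/s

a0  = hbar**2 / (m_e * k_C * e**2)   # Bohr radius, ~5.29e-11 m
E_h = k_C * e**2 / a0                # Hartree energy, ~27.2 eV
v0  = k_C * e**2 / hbar              # atomic unit of velocity = alpha*c
print(c / v0)                        # ~137: the speed of light in Hartree units
```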
The values of c and e shown above imply that $e^2 = \alpha\hbar c$, as in Gaussian units, not Lorentz-Heaviside units. However, hybrids of the Gaussian and Lorentz–Heaviside units are sometimes used, leading to inconsistent conventions for magnetism-related units. Be aware of these issues!
In the framework of Quantum Chromodynamics, a quantum field theory (QFT) we know as QCD, we can define the QCD system of units based on:
1) QCD Length $L_{QCD} = \dfrac{\hbar}{m_pc} \approx 2.1 \times 10^{-16}\,\mbox{m}$
where $m_p$ is the proton mass (please, don’t confuse it with the Planck mass $M_P$).
2) QCD Time $t_{QCD} = \dfrac{\hbar}{m_pc^2} \approx 7.0 \times 10^{-25}\,\mbox{s}$
3) QCD Mass $m_{QCD} = m_p \approx 1.67 \times 10^{-27}\,\mbox{kg}$
4) QCD Energy $E_{QCD} = m_pc^2 \approx 938\,\mbox{MeV}$
Thus, QCD energy is about 1 GeV!
5) QCD Temperature $T_{QCD} = \dfrac{m_pc^2}{k_B} \approx 1.1 \times 10^{13}\,\mbox{K}$
6) QCD Charge $e$.
In Heaviside-Lorentz units: $e = \sqrt{4\pi\alpha} \approx 0.303$
In Gaussian units: $e = \sqrt{\alpha} \approx 0.0854$
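A sketch of the QCD-unit scales from the proton mass (Python, CODATA-style values):

```python
hbar = 1.054571817e-34   # J*s
c    = 299792458.0       # m/s
m_p  = 1.67262192e-27    # proton mass, kg
eV   = 1.602176634e-19   # J

L_qcd = hbar / (m_p * c)         # reduced proton Compton wavelength, ~2.1e-16 m
t_qcd = hbar / (m_p * c**2)      # the light-crossing time of that length, ~7e-25 s
E_qcd = m_p * c**2 / (1e9 * eV)  # proton rest energy in GeV, ~0.938
```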
The geometrized unit system, used in general relativity, is not a completely defined system. In this system, the base physical units are chosen so that the speed of light and the gravitational constant are set equal to one. Other units may be treated however desired. By normalizing appropriate other units, geometrized units become identical to Planck units. That is, we set:
$c = G_N = 1$
and the remaining constants are set to one according to your needs and tastes.
This table from wikipedia is very useful:
i) $\alpha = \dfrac{k_Ce^2}{\hbar c}$ is the fine-structure constant, approximately 0.007297.
ii) $\alpha_G = \dfrac{G_Nm_e^2}{\hbar c} \approx 1.75 \times 10^{-45}$ is the gravitational fine-structure constant.
Some conversion factors for geometrized units are also available:
Conversion from kg, s, C, K into m: multiply by $G_N/c^2$, $c$, $\sqrt{G_Nk_C}/c^2$ and $G_Nk_B/c^4$, respectively.
Conversion from m, s, C, K into kg: multiply by $c^2/G_N$, $c^3/G_N$, $\sqrt{k_C/G_N}$ and $k_B/c^2$, respectively.
Conversion from m, kg, C, K into s: multiply by $1/c$, $G_N/c^3$, $\sqrt{G_Nk_C}/c^3$ and $G_Nk_B/c^5$, respectively.
Conversion from m, kg, s, K into C: multiply by $c^2/\sqrt{G_Nk_C}$, $\sqrt{G_N/k_C}$, $c^3/\sqrt{G_Nk_C}$ and $k_B\sqrt{G_N/k_C}/c^2$, respectively.
Conversion from m, kg, s, C into K: multiply by $c^4/(G_Nk_B)$, $c^2/k_B$, $c^5/(G_Nk_B)$ and $c^2\sqrt{k_C/G_N}/k_B$, respectively.
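Geometrized conversions in action (Python sketch; the rounded solar mass is an assumed input):

```python
G = 6.674e-11     # gravitational constant
c = 299792458.0   # speed of light

kg_to_m = G / c**2   # metres per kilogram, ~7.4e-28
s_to_m  = c          # metres per second of time

M_sun = 1.989e30                 # kg (assumed rounded value)
sun_length = M_sun * kg_to_m     # ~1.48e3 m
print(sun_length)                # "the Sun is about one and a half kilometres"
```

Note that, in geometrized units, half of this length is exactly the Schwarzschild radius formula $R_S = 2M$.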
Or you can read off factors from this table as well:
Advantages and Disadvantages of Natural Units
Natural units have some advantages (“Pro”):
1) Equations and mathematical expressions are simpler in Natural Units.
2) Natural units allow for the match between apparently different physical magnitudes.
3) Some natural units are independent from “prototypes” or “external patterns” beyond some clever and trivial conventions.
4) They can help to unify different physical concepts.
However, natural units have also some disadvantages (“Cons”):
1) They generally provide less precise measurements or quantities.
2) They can be ill-defined/redundant and carry some ambiguity. This is also caused by the fact that some natural units differ by numerical factors of π and/or pure numbers, so they cannot help us to understand the origin of some pure numbers (adimensional prefactors) in general.
Moreover, you must not forget that natural units are “human” in the sense that you can adapt them to your own needs, and indeed, you can create your own particular system of natural units! However, having said this, you can understand the main key point: it is fundamental theories that finally hint at what “numbers”/“magnitudes” determine a system of “natural units”.
Remark: the smart designer of a system of natural units must choose a few of these constants to normalize (set equal to 1). It is not possible to normalize just any set of constants. For example, the mass of a proton and the mass of an electron cannot both be normalized: if the mass of an electron is defined to be 1, then the mass of a proton has to be $m_p/m_e \approx 1836$. In a less trivial example, the fine-structure constant, α ≈ 1/137, cannot be set to 1, because it is a dimensionless number. The fine-structure constant is related to other fundamental constants through a well-known equation:
$\alpha = \dfrac{k_Ce^2}{\hbar c}$
where $k_C$ is the Coulomb constant, e is the positron electric charge (elementary charge), ℏ is the reduced Planck constant, and c is again the speed of light in vacuum. Because of this relation, it is not possible to simultaneously normalize all four of the constants c, ℏ, e, and $k_C$.
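The dimensionless character of α is easy to verify numerically; a minimal Python sketch:

```python
k_C  = 8.9875517873681764e9  # Coulomb constant, N*m^2/C^2
e    = 1.602176634e-19       # elementary charge, C
hbar = 1.054571817e-34       # reduced Planck constant, J*s
c    = 299792458.0           # speed of light, m/s

alpha = k_C * e**2 / (hbar * c)   # all the units cancel: a pure number
print(1.0 / alpha)                # ~137.036, in any system of units
```

Whatever values you assign to c, ℏ, e and $k_C$, this combination stays fixed: that is exactly why all four cannot be normalized at once.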
Fritzsch and Xing have developed a very beautiful plot of the fundamental constants in Nature (those coming from gravitation and the Standard Model). I cannot avoid including it here in the 2 versions I have seen. The first one is “serious”, with 29 “fundamental constants”:
However, I prefer the “fun version” of this plot. This second version is very cool and it includes 28 “fundamental constants”:
The Okun Cube
Long ago, L.B. Okun provided a very interesting way to think about the Planck units and their meaning, at least from the current knowledge of physics! He imagined a cube in 3d in which we have 3 different axes. Planck units are defined, as we have seen above, by 3 constants plus the Boltzmann constant. Imagine we arrange one axis for (1/c)-units, one axis for $\hbar$-units and one more for $G_N$-units. The result is a wonderful cube:
Or equivalently, it is sometimes drawn as an equivalent sketch (note the Planck constant is NOT rationalized in the next cube, but it does not matter for this graphical representation):
Classical physics (CP) corresponds to the vanishing of the 3 constants, i.e., to the origin $(0,0,0)$.
Newtonian mechanics (NM), or more precisely newtonian gravity plus classical mechanics, corresponds to the “point” $(0,0,G_N)$.
Special relativity (SR) corresponds to the point $(1/c,0,0)$, i.e., to “points” where relativistic effects are important due to velocities close to the speed of light.
Quantum mechanics (QM) corresponds to the point $(0,\hbar,0)$, i.e., to “points” where the action/angular momentum fundamental unit is important, like the photoelectric effect or the blackbody radiation.
Quantum Field Theory (QFT) corresponds to the point $(1/c,\hbar,0)$, i.e., to “points” where both SR and QM are important, that is, to situations where you can create/annihilate pairs, the “particle” number is not conserved (but the particle-antiparticle number IS), and subatomic particles manifest themselves simultaneously with quantum and relativistic features.
Quantum Gravity (QG) would correspond to the point $(0,\hbar,G_N)$, where gravity is quantum itself. We have no theory of quantum gravity yet, but some speculative trials are effective versions of (super)-string theory/M-theory, loop quantum gravity (LQG) and some others.
Finally, the Theory Of Everything (TOE) would be the theory in the last free corner, arising at the vertex $(1/c,\hbar,G_N)$. Superstring theories/M-theory are the only serious candidates for a TOE so far. LQG does not generally introduce matter fields (some recent trials are pushing in that direction, though), so it is not a TOE candidate right now.
Some final remarks and questions
1) Are fundamental “constants” really constant? Do they vary with energy or time?
2) How many fundamental constants are there? This question has provided lots of discussions. One of the most famous was this one:
The trialogue (or dialogue, if you are precise with words) above discussed the opinions of 3 eminent physicists about the number of fundamental constants: Michael Duff suggested zero, Gabriel Veneziano argued that there are only 2 fundamental constants, while L.B. Okun defended that there are 3 fundamental constants.
3) Should the cosmological constant be included as a new fundamental constant? The cosmological constant behaves as a constant according to current cosmological measurements and cosmological data fits, but is it truly constant? It seems to be… But we are not sure. Quintessence models (some of them related to inflationary Universes) suggest that it could vary on cosmological scales very slowly. However, the data strongly suggest that the dark energy equation of state is very close to $w = p/\rho = -1$, that of a pure cosmological constant.
It is simple, but the ultimate nature of such a “fluid” is not understood, because we don’t know what kind of “stuff” (either particles or fields) can make the cosmological constant be so tiny and yet so abundant (about 72% of the Universe is “dark energy”/cosmological constant), as it seems to be. We do know it cannot be made of “known particles”. Dark energy behaves as a repulsive force, some kind of pressure/antigravitation on cosmological scales. We suspect it could be some kind of scalar field, but there are many other alternatives that “mimic” a cosmological constant. If we identify the cosmological constant with the vacuum energy, we obtain about 122 orders of magnitude of mismatch between theory and observations. A really bad “prediction”, one of the worst predictions in the history of physics!
Be natural and stay tuned!
Today we are going to study a relatively new effect (new experimentally speaking, because it was first detected when I was an undergraduate student, in 2000), though it is not so new from the theoretical side (theoretically, it was predicted in 1962). This effect is closely related to the Cherenkov effect. It is named the Askaryan effect or Askaryan radiation; see below, after the brief recapitulation of the Cherenkov effect from the last post that we are going to do in the next lines.
We do know that charged particles moving faster than light through a medium emit Cherenkov radiation. How can a particle move faster than light? The speed of a charged particle can exceed the speed of light in a medium, where light propagates slower than in vacuum. That is all. About some speculations on the so-called tachyonic gamma ray emissions, let me say that the existence of superluminal energy transfer has not been established so far, and one may ask why. There are two options:
1) The simplest solution is that superluminal quanta just do not exist, the vacuum speed of light being the definitive upper bound.
2) The second solution is that the interaction of superluminal radiation with matter is very small, the ratio of the tachyonic to the electric fine-structure constant being tiny. Therefore superluminal quanta and their substratum are hard to detect.
A related and very interesting question can be asked now, related to the Cherenkov radiation we have studied here. What about neutral particles? Is there some analogue of Cherenkov radiation valid for chargeless or neutral particles? Because neutrinos are electrically neutral, conventional Cherenkov radiation of superluminal neutrinos does not arise, or it is otherwise weakened. However, neutrinos do carry electroweak charge and may emit a certain Cherenkov-like radiation via weak interactions when traveling at superluminal speeds. The Askaryan effect/radiation is this Cherenkov-like effect for neutrinos, and we are going to enlighten your knowledge of this effect with this entry.
We are being bombarded by cosmic rays, and even more, we are being bombarded by neutrinos. Indeed, we expect that ultra-high-energy (UHE) neutrinos, or extremely-high-energy (EHE) neutrinos, will hit us too. When neutrinos interact with matter, they create showers, specifically in dense media. Thus, the electrons and positrons in those showers which travel faster than the speed of light in these media, or even in the air, should emit (coherent) Cherenkov-like radiation.
Who was Gurgen Askaryan?
Let me quote what Wikipedia says about him: Gurgen Askaryan (December 14, 1928 – 1997) was a prominent Soviet (Armenian) physicist, famous for his discovery of the self-focusing of light, pioneering studies of light-matter interactions, and the discovery and investigation of the interaction of high-energy particles with condensed matter. He published more than 200 papers on different topics in high-energy physics.
Other interesting ideas by Askaryan: the bubble chamber (he conceived the idea independently of Glaser, but he did not publish it, so he did not win the Nobel Prize), laser self-focusing (one of his main contributions to non-linear optics), and the proposal of acoustic UHECR detection. Askaryan was also the first to note that the outer few metres of the Moon’s surface, known as the regolith, would be a sufficiently transparent medium for detecting microwaves from the charge excess in particle showers. The radio transparency of the regolith has since been confirmed by the Apollo missions.
If you want to learn more about Askaryan ideas and his biography, you can read them here: http://en.wikipedia.org/wiki/Gurgen_Askaryan
What is the Askaryan effect?
The next figure shows the Askaryan radiation detected by the ANITA experiment:
The Askaryan effect is the phenomenon whereby a particle traveling faster than the phase velocity of light in a dense dielectric medium (such as salt, ice or the lunar regolith) produces a shower of secondary charged particles which contains a charge anisotropy, and thus emits a cone of coherent radiation in the radio or microwave part of the electromagnetic spectrum. It is similar to, or more precisely it is based on, the Cherenkov effect.
High-energy processes such as Compton, Bhabha and Møller scattering, along with positron annihilation, rapidly lead to about a 20%-30% negative charge asymmetry in the electron-photon part of a cascade. Such cascades can be initiated, for instance, by UHE (higher than, e.g., 100 PeV) neutrinos.
In 1962, Askaryan first hypothesized this effect and suggested that it should lead to strong coherent radio and microwave Cherenkov emission for showers propagating within the dielectric. Since the dimensions of the clump of charged particles are small compared to the wavelength of the radio waves, the shower radiates coherent radio Cherenkov radiation whose power is proportional to the square of the net charge in the shower. The net charge in the shower is proportional to the primary energy, so the radiated power scales quadratically with the shower energy, $P \propto E^2$.
Indeed, these radio and coherent radiations are originated by the Cherenkov effect. We do know that Cherenkov radiation (CR) requires
$$v > \frac{c}{n}$$
for a charged particle moving in a dense (refractive) medium of index $n$. Every charge emits a field $E_i$. Then, the total power is proportional to $\left|\sum_i E_i\right|^2$. In a dense medium:
We have two different experimental and interesting cases:
A) The optical case, with wavelengths much smaller than the size of the charge clump ($\lambda \ll L$). Then, we expect random phases and an incoherent power $P \propto N$ for $N$ emitting charges.
B) The microwave case, with $\lambda \gg L$. In this situation, we expect coherent radiation/waves with $P \propto N^2$.
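The two scalings above can be checked with a toy numerical experiment (a sketch in Python; the number of charges and the unit amplitude are made-up illustrative values): sum the fields of $N$ charges as phasors, once with random phases (optical case) and once with aligned phases (microwave case), and compare the resulting powers.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000          # number of emitting charges in the shower (illustrative)
E0 = 1.0            # field amplitude of a single charge (arbitrary units)

# Optical case: wavelength << clump size -> random phases, incoherent sum.
phases = rng.uniform(0.0, 2.0 * np.pi, N)
P_incoherent = np.abs(np.sum(E0 * np.exp(1j * phases))) ** 2

# Microwave case: wavelength >> clump size -> aligned phases, coherent sum.
P_coherent = np.abs(np.sum(E0 * np.ones(N))) ** 2

print(P_incoherent / N)   # O(1): incoherent power scales like N
print(P_coherent / N**2)  # exactly 1: coherent power scales like N^2
```

The coherent case wins by a factor of order $N$, which is why the radio/microwave band is so attractive for large showers.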
We can exploit this effect in large natural volumes transparent to radio (dry media): pure ice, salt formations, the lunar regolith, … The peak of this coherent radiation occurs at different frequencies for different media: it is not the same for sand as for ice.
The first experimental confirmations of the Askaryan effect were the next two experiments:
1) 2000, Saltzberg et al., SLAC. They used silica sand as target. The paper is this one: http://arxiv.org/abs/hep-ex/0011001
2) 2002, Gorham et al., SLAC. They used a synthetic salt target. The paper appeared in this place: http://arxiv.org/abs/hep-ex/0108027
Indeed, in 1965, Askaryan himself proposed ice and salt as possible target media. The reasons are easy to understand:
1st. They provide high densities, which means a higher probability of neutrino interaction.
2nd. They have a high refractive index. Therefore, the Cherenkov emission becomes important.
3rd. Salt and ice are radio-transparent and, of course, they are available in large volumes throughout the world.
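As a quick numerical illustration of the second point, here is a minimal sketch (in Python) of the Cherenkov condition $v > c/n$ for an electron. The radio refractive index of deep ice, $n \approx 1.78$, is an assumed representative value:

```python
import math

n_ice = 1.78          # assumed radio refractive index of deep ice
m_e = 0.511           # electron rest energy in MeV

beta_th = 1.0 / n_ice                          # threshold speed: v/c = 1/n
gamma_th = 1.0 / math.sqrt(1.0 - beta_th**2)   # Lorentz factor at threshold
E_th = gamma_th * m_e                          # total threshold energy (MeV)

print(f"beta_th = {beta_th:.3f}, E_th = {E_th:.3f} MeV")  # ~0.562, ~0.62 MeV
```

The larger $n$ is, the lower the threshold energy, so a dense, high-index medium turns more of the shower particles into Cherenkov emitters.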
The advantages of radio detection of UHE neutrinos provided by the Askaryan effect are very interesting:
1) Low attenuation: clear signals from large detection volumes.
2) We can observe distant and inclined events.
3) It has a high duty cycle: good statistics in less time.
4) It has a relatively low cost: large areas can be covered.
5) It is available for neutrinos and/or any other chargeless/neutral particle!
Problems with this Askaryan-effect detection technique are, though: radio interference, the correlation with shower parameters (still unclear), and the fact that it is limited to particles with very large energies.
Askaryan effect = coherent Cherenkov radiation from a charge excess induced by (likely) neutral/chargeless particles, like (especially highly energetic) neutrinos, passing through a dense medium.
Why does the Askaryan effect matter?
It matters since it allows for the detection of UHE neutrinos, and it is “universal” for chargeless/neutral particles like neutrinos, in the same way that the Cherenkov effect is universal for charged particles. And tracking UHE neutrinos is important because they point back towards their sources, and it is suspected they can help us solve the riddle of the origin and composition of cosmic rays, the acceleration mechanism of cosmic radiation, the nuclear interactions of astrophysical objects, and the tracking of the highest-energy emissions of the Universe we can observe at the current time.
Is it real? Has it been detected? Yes: after 38 years, it has been detected. This effect was first demonstrated in sand (2000), rock salt (2004) and ice (2006), all in laboratory experiments at SLAC, and it has later been checked in several independent experiments around the world. Indeed, I remember having heard about this effect during my darker years as an undergraduate student. Fortunately or not, I forgot about it until now. In spite of its beauty!
Moreover, it has extra applications to neutrino detection using the Moon as target: GLUE (the detectors are the Goldstone radio telescopes), NuMoon (Westerbork array; LOFAR), RESUN (EVLA), and the LUNASKA project. Using ice as target, there have been other experiments checking the reality of this effect: FORTE (a satellite observing the Greenland ice sheet), RICE (co-deployed on AMANDA strings, viewing Antarctic ice), and the celebrated ANITA experiment (balloon-borne over Antarctica, viewing Antarctic ice).
Furthermore, some experiments have even used the Moon (and it is likely some others will be built in the near future) as a neutrino detector using the Askaryan radiation (the analogue for neutral particles of the Cherenkov effect, don’t forget the spot!).
Askaryan effect and the mysterious cosmic rays.
Askaryan radiation is important because it is one of the portals to the observation of UHE neutrinos coming from cosmic rays. The mysteries of cosmic rays continue today. We have indeed detected extremely energetic cosmic rays beyond the GZK scale. Their origin is yet unsolved. We hope that by tracking neutrinos we will discover the sources of those rays and their nature/composition. We don’t understand or know any mechanism able to accelerate particles up to those incredible energies. At the current time, IceCube has not detected UHE neutrinos, and it is a serious issue for current theories and models. It is a challenge if we don’t observe as many UHE neutrinos as the Standard Model would predict. Would it mean that cosmic rays are exclusively composed of heavy nuclei or protons? Are we modelling badly the spectrum of the sources and the nuclear models of stars, as happened before neutrino oscillations were detected at Super-Kamiokande and Kamiokande (e.g. SN1987A)? Is there some kind of new Physics living at those scales and avoiding the GZK limit we would naively expect from our current theories?
The Doppler effect is a very important phenomenon both in classical wave motion and in relativistic physics. For instance, nowadays it is used to detect exoplanets, and it has lots of applications in Astrophysics and Cosmology.
Firstly, we remember the main definitions we are going to need here today.
Sometimes, we will be using the symbol $\nu$ for the frequency $f$. We also have:
$$\omega = 2\pi f = \frac{2\pi}{T}, \qquad k = \frac{2\pi}{\lambda}$$
A plane (sometimes electromagnetic) wave is defined by the oscillation:
$$\psi(\vec{x}, t) = A\, e^{i\left(\omega t - \vec{k}\cdot\vec{x}\right)}$$
If $\omega = c|\vec{k}|$, then the wave number four-vector $k^\mu = \left(\omega/c, \vec{k}\right)$ is lightlike (null or isotropic) and then $k^\mu k_\mu = 0$.
Using the Lorentz transformations for the wave number spacetime vector, with the primed frame S' moving at speed $V$ along the x-axis of the source frame S, we get:
$$\omega' = \gamma\left(\omega - V k_x\right)$$
i.e., if the angle of $\vec{k}$ with the direction of motion (measured in S) is $\theta$, so that $k_x = \dfrac{\omega}{c}\cos\theta$,
we deduce that
$$\omega' = \gamma\omega\left(1 - \beta\cos\theta\right)$$
or, for the normalized frequency shift,
$$\frac{\Delta\omega}{\omega} = \frac{\omega' - \omega}{\omega} = \gamma\left(1 - \beta\cos\theta\right) - 1$$
This is the usual formula for the relativistic Doppler effect when we define $\beta = V/c$, and thus the angular frequency (also the frequency itself, since there is only a factor $2\pi$ of difference) changes with the motion of the source. When the velocity is “low”, i.e., $V \ll c$, we obtain the classical Doppler shift formula:
$$\omega' \approx \omega\left(1 - \beta\cos\theta\right)$$
We then calculate the normalized frequency shift from it:
$$\frac{\Delta\omega}{\omega} \approx -\beta\cos\theta$$
The classical Doppler shift states that when the source approaches the receiver ($\beta\cos\theta < 0$), the frequency increases, and when the source moves away from the receiver ($\beta\cos\theta > 0$), the frequency decreases. Interestingly, in the relativistic case, we also get a transversal Doppler shift which is absent in classical physics. That is, in the relativistic Doppler shift, for $\theta = \pi/2$, we obtain
$$\omega' = \gamma\omega$$
and the difference in frequency would become
$$\Delta\omega = \left(\gamma - 1\right)\omega$$
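A minimal numerical sketch (in Python; the frequencies and speeds are arbitrary illustrative values) of the relativistic Doppler formula $\omega' = \gamma\omega(1 - \beta\cos\theta)$, with $\theta$ the angle between the wave vector and the boost direction measured in the source frame:

```python
import math

def doppler_omega(omega, beta, theta):
    """Relativistic Doppler: frequency measured in a frame moving at speed
    beta (in units of c); theta is the angle between the wave vector and
    the boost direction, measured in the source frame."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma * omega * (1.0 - beta * math.cos(theta))

omega, beta = 1.0, 0.5
print(doppler_omega(omega, beta, math.pi))      # head-on: blueshift, ~1.732
print(doppler_omega(omega, beta, 0.0))          # recession: redshift, ~0.577
print(doppler_omega(omega, beta, math.pi / 2))  # transverse: gamma*omega, ~1.155
```

The transverse case returns a nonzero shift, exactly the purely relativistic effect noted above.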
There is an alternative deduction of these formulae. The time that an electromagnetic wave needs to travel a distance equal to one wavelength, measured in the inertial frame S' of the source (moving with relative speed V with respect to a frame S at rest), is equal to:
$$T' = \frac{\lambda'}{c} = \frac{1}{f'}$$
where $f'$ is the frequency of the source. Due to the time dilation of special relativity,
$$T = \gamma T'$$
so, for a source receding along the line of sight (each successive wave crest must travel an extra distance $VT$), we get
$$f = \frac{c}{(c+V)T} = \frac{f'}{\gamma\left(1+\beta\right)} = f'\sqrt{\frac{1-\beta}{1+\beta}}$$
The redshift (or Doppler displacement) is generally defined as:
$$z = \frac{\lambda_{\mathrm{obs}} - \lambda_{\mathrm{em}}}{\lambda_{\mathrm{em}}} = \frac{f'}{f} - 1 = \sqrt{\frac{1+\beta}{1-\beta}} - 1$$
When $V \ll c$, i.e., $\beta \ll 1$, we get the classical result
$$z \approx \beta = \frac{V}{c}$$
The generalization for a general non-parallel motion of the source/observer is given by
$$f = \frac{f'}{\gamma\left(1 + \beta\cos\theta\right)}$$
where $\theta$ is the angle, measured in the observer frame, between the source velocity and the line of sight ($\theta = 0$ for pure recession). If we use the stellar aberration formula:
$$\cos\theta' = \frac{\cos\theta + \beta}{1 + \beta\cos\theta}$$
the last equation can be recast in terms of $\theta'$ instead of $\theta$ as follows:
$$f = \gamma f'\left(1 - \beta\cos\theta'\right)$$
Again, for a transversal motion, $\theta = \pi/2$, we get a transversal Doppler effect:
$$f = \frac{f'}{\gamma}$$
Remark: remember that the Doppler shift formulae above are only valid if the relative motion (of both source and observer/receiver) is slower than the speed of the (electromagnetic) wave, i.e., if $V < c$.
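The longitudinal redshift formula $z = \sqrt{(1+\beta)/(1-\beta)} - 1$ is easy to play with numerically. A small sketch (in Python; the sample values of $\beta$ and $z$ are arbitrary):

```python
import math

def redshift(beta):
    """Longitudinal relativistic redshift of a receding source."""
    return math.sqrt((1.0 + beta) / (1.0 - beta)) - 1.0

def beta_from_z(z):
    """Invert the redshift: beta = ((1+z)^2 - 1) / ((1+z)^2 + 1)."""
    s = (1.0 + z) ** 2
    return (s - 1.0) / (s + 1.0)

print(redshift(0.1))     # ~0.106, close to the classical estimate z ~ beta
print(beta_from_z(1.0))  # a z = 1 source recedes at 0.6 c
```

Note how quickly the classical approximation $z \approx \beta$ degrades: already at $z = 1$ the naive reading $\beta = 1$ is badly wrong, while the exact inversion gives $\beta = 0.6$.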
In the final part of this entry, we are going to derive the most general formula for the Doppler effect, given an arbitrary motion of source and observer, in both classical and relativistic Physics. Recall that the Doppler shift in Classical Physics for an arbitrary motion is given by a nice equation:
$$f_{\mathrm{obs}} = f_{\mathrm{src}}\,\frac{v_w - v_{\mathrm{obs}}\cos\theta_{\mathrm{obs}}}{v_w - v_{\mathrm{src}}\cos\theta_{\mathrm{src}}}$$
Here, $v_w$ is the velocity of the (electromagnetic) wave in a certain medium, $v_{\mathrm{obs}}$ is the velocity of the observer in a certain direction forming an angle $\theta_{\mathrm{obs}}$ with the line of sight, while $v_{\mathrm{src}}$ is the velocity of the source, forming an angle $\theta_{\mathrm{src}}$ with the line of sight/observation. If we write
$$\beta_{\mathrm{obs}} = \frac{v_{\mathrm{obs}}}{v_w}, \qquad \beta_{\mathrm{src}} = \frac{v_{\mathrm{src}}}{v_w}$$
we can rewrite this last Doppler formula in the following way:
$$f_{\mathrm{obs}} = f_{\mathrm{src}}\,\frac{1 - \beta_{\mathrm{obs}}\cos\theta_{\mathrm{obs}}}{1 - \beta_{\mathrm{src}}\cos\theta_{\mathrm{src}}}$$
or, with $\hat{n}$ the unit vector along the wave propagation and $\vec{\beta} = \vec{v}/v_w$,
$$f_{\mathrm{obs}} = f_{\mathrm{src}}\,\frac{1 - \hat{n}\cdot\vec{\beta}_{\mathrm{obs}}}{1 - \hat{n}\cdot\vec{\beta}_{\mathrm{src}}}$$
The most general Doppler shift formula, in the relativistic case, reads:
$$f_{\mathrm{obs}} = f_{\mathrm{src}}\,\frac{\gamma_{\mathrm{obs}}\left(1 - \beta_{\mathrm{obs}}\cos\theta_{\mathrm{obs}}\right)}{\gamma_{\mathrm{src}}\left(1 - \beta_{\mathrm{src}}\cos\theta_{\mathrm{src}}\right)}$$
where $\vec{v}_{\mathrm{src}}, \vec{v}_{\mathrm{obs}}$ are the velocities of the source and the observer at the time of emission and reception, respectively, $\beta_{\mathrm{src}} = v_{\mathrm{src}}/c$ and $\beta_{\mathrm{obs}} = v_{\mathrm{obs}}/c$ are the corresponding beta boost parameters (with $\gamma_{\mathrm{src}}, \gamma_{\mathrm{obs}}$ their Lorentz factors), $\vec{c}$ is the “light” or “wave” velocity vector, and we have defined the angles $\theta_{\mathrm{src}}, \theta_{\mathrm{obs}}$ to be the angles formed at the time of emission and at the time of reception/observation between the wave velocity and, respectively, the source velocity and the observer velocity. Two simple cases of this formula:
1st. Parallel motion, with $\theta_{\mathrm{src}} = \theta_{\mathrm{obs}} = 0$. Then,
$$f_{\mathrm{obs}} = f_{\mathrm{src}}\,\frac{\gamma_{\mathrm{obs}}\left(1 - \beta_{\mathrm{obs}}\right)}{\gamma_{\mathrm{src}}\left(1 - \beta_{\mathrm{src}}\right)}$$
2nd. Antiparallel motion, with $\vec{v}_{\mathrm{obs}}$ going in the contrary sense to that of $\vec{v}_{\mathrm{src}}$ (say $\theta_{\mathrm{src}} = 0$, $\theta_{\mathrm{obs}} = \pi$). Then,
$$f_{\mathrm{obs}} = f_{\mathrm{src}}\,\frac{\gamma_{\mathrm{obs}}\left(1 + \beta_{\mathrm{obs}}\right)}{\gamma_{\mathrm{src}}\left(1 - \beta_{\mathrm{src}}\right)}$$
The deduction of this general Doppler shift formula can be sketched in a simple fashion. For a signal using some propagating wave of speed $v_w$, emitted at time $t_{\mathrm{em}}$ and received at time $t_{\mathrm{obs}}$, we deduce:
$$t_{\mathrm{obs}} = t_{\mathrm{em}} + \frac{\left|\vec{r}_{\mathrm{obs}}(t_{\mathrm{obs}}) - \vec{r}_{\mathrm{src}}(t_{\mathrm{em}})\right|}{v_w}$$
Differentiating with respect to $t_{\mathrm{em}}$ carefully (with $\hat{n}$ the unit vector from the emission point to the reception point), it provides
$$\frac{dt_{\mathrm{obs}}}{dt_{\mathrm{em}}} = 1 + \frac{1}{v_w}\left(\hat{n}\cdot\vec{v}_{\mathrm{obs}}\,\frac{dt_{\mathrm{obs}}}{dt_{\mathrm{em}}} - \hat{n}\cdot\vec{v}_{\mathrm{src}}\right)$$
Solving for $dt_{\mathrm{obs}}/dt_{\mathrm{em}}$ we get
$$\frac{dt_{\mathrm{obs}}}{dt_{\mathrm{em}}} = \frac{1 - \beta_{\mathrm{src}}\cos\theta_{\mathrm{src}}}{1 - \beta_{\mathrm{obs}}\cos\theta_{\mathrm{obs}}}$$
using the known formula $\hat{n}\cdot\vec{v} = v\cos\theta$, and we obtain the result in a simple way. Finally, using the fact that the number of wave crests emitted equals the number received, the similar counting relation $f_{\mathrm{src}}\,d\tau_{\mathrm{src}} = f_{\mathrm{obs}}\,d\tau_{\mathrm{obs}}$, and that the proper time induces an extra gamma factor due to time dilation,
$$dt_{\mathrm{em}} = \gamma_{\mathrm{src}}\,d\tau_{\mathrm{src}}, \qquad dt_{\mathrm{obs}} = \gamma_{\mathrm{obs}}\,d\tau_{\mathrm{obs}}$$
we calculate for the frequency:
$$f_{\mathrm{obs}} = f_{\mathrm{src}}\,\frac{d\tau_{\mathrm{src}}}{d\tau_{\mathrm{obs}}} = f_{\mathrm{src}}\,\frac{\gamma_{\mathrm{obs}}}{\gamma_{\mathrm{src}}}\times\mathrm{PREFACTOR}^{-1}$$
where the PREFACTOR denotes the previously calculated ratio between differential times. Finally, elementary algebra lets us derive the expression:
$$f_{\mathrm{obs}} = f_{\mathrm{src}}\,\frac{\gamma_{\mathrm{obs}}\left(1 - \beta_{\mathrm{obs}}\cos\theta_{\mathrm{obs}}\right)}{\gamma_{\mathrm{src}}\left(1 - \beta_{\mathrm{src}}\cos\theta_{\mathrm{src}}\right)}$$
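The general formula $f_{\mathrm{obs}} = f_{\mathrm{src}}\,\gamma_{\mathrm{obs}}(1-\beta_{\mathrm{obs}}\cos\theta_{\mathrm{obs}})/[\gamma_{\mathrm{src}}(1-\beta_{\mathrm{src}}\cos\theta_{\mathrm{src}})]$ can be coded in a few lines (a Python sketch; the sample speed $\beta = 0.6$ is an arbitrary illustrative value):

```python
import math

def doppler_general(f_src, beta_src, theta_src, beta_obs, theta_obs):
    """General relativistic Doppler: both source and observer may move;
    each angle is measured between that body's velocity and the wave
    propagation direction (source -> observer)."""
    g_src = 1.0 / math.sqrt(1.0 - beta_src**2)
    g_obs = 1.0 / math.sqrt(1.0 - beta_obs**2)
    return f_src * g_obs * (1.0 - beta_obs * math.cos(theta_obs)) \
                 / (g_src * (1.0 - beta_src * math.cos(theta_src)))

# Source rushing toward a static observer at 0.6c (theta_src = 0):
print(doppler_general(1.0, 0.6, 0.0, 0.0, 0.0))       # blueshift factor 2
# Static source, observer rushing toward it (theta_obs = pi):
print(doppler_general(1.0, 0.0, 0.0, 0.6, math.pi))   # also factor 2
```

The two printed cases agree, as they must: only the relative motion matters for light in vacuum.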
An interesting remark about the Doppler effect in relativity: the Doppler effect allows us to “derive” Planck’s relation for quanta of light. Suppose that a photon in the S'-frame has an energy $E'$ and momentum $p' = E'/c$ and is being emitted along the negative x'-axis toward the origin of the S-frame, so that $p'_x = -E'/c$. The inverse Lorentz transformation provides:
$$E = \gamma\left(E' + V p'_x\right) = \gamma E'\left(1 - \beta\right)$$
On the other hand, by the relativistic Doppler effect we have seen that the frequency $f'$ in the S'-frame is transformed into the frequency $f$ in the S-frame via the following equation:
$$f = \gamma f'\left(1 - \beta\right)$$
If we divide the last two equations, we get:
$$\frac{E}{f} = \frac{E'}{f'}$$
Then, if we write $E' = h f'$, it follows that $E = h f$ with the very same constant $h$: Planck’s relation holds in every inertial frame.
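The frame-independence of the ratio $E/f$ can be verified numerically (a Python sketch; the boost $\beta = 0.6$ and the optical frequency $5\times 10^{14}$ Hz are arbitrary illustrative values):

```python
import math

beta = 0.6
gamma = 1.0 / math.sqrt(1.0 - beta**2)

h = 6.62607015e-34      # Planck constant (J s)
f_prime = 5.0e14        # photon frequency in S' (Hz), illustrative
E_prime = h * f_prime   # assume Planck's relation holds in S'

# Photon emitted along the negative x'-axis: p'_x = -E'/c.
# Energy transforms by gamma*(1 - beta); so does the Doppler-shifted frequency.
E = gamma * E_prime * (1.0 - beta)
f = gamma * f_prime * (1.0 - beta)

print(E / f)  # equals h: the ratio E/f is the same in both frames
```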
AN ALTERNATIVE HEURISTIC DEDUCTION OF THE RELATIVISTIC DOPPLER EFFECT
In a certain rest frame S, there is an observer receiving light beams/signals. The moving frame is the S'-frame, and it is the emitter of light. The source of light approaches at velocity V, and it sends pulses with frequency $f' = 1/T'$. What is the frequency $f$ that the observer at S observes? Due to time dilation, the observer at S observes a longer period
$$T = \gamma T'$$
The distance between two consecutive light pulses seen by the observer at S will be:
$$\lambda = cT - VT = \left(c - V\right)T$$
Therefore, the observed frequency in the S-frame is:
$$f = \frac{c}{\lambda} = \frac{f'}{\gamma\left(1 - \beta\right)} = f'\sqrt{\frac{1+\beta}{1-\beta}}$$
If the source approaches the observer, then $f > f'$. If the source moves away from the observer, then $f < f'$ (change $V \to -V$ above). In the case where the velocity of the source forms a certain angle $\theta$ with the direction of observation, the same argument produces:
$$f = \frac{f'}{\gamma\left(1 - \beta\cos\theta\right)}$$
In the case of the transversal Doppler effect, we get $\cos\theta = 0$, and so:
$$f = \frac{f'}{\gamma}$$
Final remark (I): If $V = 0$, or if $\theta = \pi/2$ AND $\beta \ll 1$ (the classical transverse case), then there is no Doppler effect at all.
Final remark (II): If $V \neq 0$, there is no Doppler effect in certain observation directions. Those directions can be deduced from the above relativistic Doppler effect formula with the condition $f = f'$, i.e., $\gamma\left(1 - \beta\cos\theta\right) = 1$, and solving for $\theta$. This gives the next angular direction in which the Doppler effect cannot be detected:
$$\cos\theta_0 = \frac{1 - \sqrt{1 - \beta^2}}{\beta} = \frac{\gamma - 1}{\gamma\beta}$$
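The null-Doppler direction from the condition $\gamma(1-\beta\cos\theta_0) = 1$ is easy to compute and check (a Python sketch; $\beta = 0.5$ is an arbitrary sample speed):

```python
import math

def null_doppler_angle(beta):
    """Direction theta_0 (degrees) with no net Doppler shift: the transverse
    relativistic blueshift exactly cancels the longitudinal redshift."""
    cos_theta = (1.0 - math.sqrt(1.0 - beta**2)) / beta
    return math.degrees(math.acos(cos_theta))

print(null_doppler_angle(0.5))  # ~74.5 degrees for beta = 0.5
```

On one side of this direction the observer sees a blueshift, on the other a redshift, even though the source speed is the same.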
In this entry, we are going to study a relativistic effect known as “stellar aberration”.
From the known Lorentz transformations of velocities (inverse case), we get:
$$u_x = \frac{u'_x + V}{1 + \dfrac{V u'_x}{c^2}}, \qquad u_y = \frac{u'_y}{\gamma\left(1 + \dfrac{V u'_x}{c^2}\right)}$$
The classical result (Galilean addition of velocities) is recovered in the limit of low velocities, or sending the light speed to “infinity” ($c \to \infty$). Then,
$$u_x = u'_x + V, \qquad u_y = u'_y$$
Let us consider a light ray ($|\vec{u}| = |\vec{u}'| = c$) and define $\theta$ and $\theta'$ as the angles it forms with the x and x' axes, respectively. Thus, we get the component decomposition into the xy and x'y' planes:
$$u_x = c\cos\theta, \quad u_y = c\sin\theta, \qquad u'_x = c\cos\theta', \quad u'_y = c\sin\theta'$$
From these equations, we get
$$\cos\theta = \frac{\cos\theta' + \beta}{1 + \beta\cos\theta'}$$
From the last equation, we get
$$\tan\theta = \frac{\sin\theta'}{\gamma\left(\cos\theta' + \beta\right)}$$
From this equation, if $\beta \ll 1$, i.e., if $V \ll c$, and writing $\theta = \theta' + \Delta\theta$, we obtain the result
$$\Delta\theta \approx -\beta\sin\theta'$$
By these formulae, the angle of a light beam propagating in space depends on the velocity of the source with respect to the observer. We can observe this relativistic effect every night (supposing, in good approximation, that Earth’s velocity is non-relativistic, as it indeed is). The physical interpretation of the above aberration formulae (for the stars we watch during a night sky) is as follows: due to the Earth’s motion, a star at the zenith is seen under an angle $\alpha \approx \beta = V/c$ off its “true” position (about 20.5 arcseconds for the Earth’s orbital speed).
Another important consequence of stellar aberration arises when we track ultra-relativistic particles ($\beta \to 1$, $\gamma \gg 1$). Then, the observer moves at nearly the speed of light relative to the sources. In this case, almost every star (excepting those directly behind, with $\theta' \approx \pi$) is seen “in front of” the observer. If the source moves with almost the speed of light, the light is “observed” as if it were concentrated in a little cone with an aperture
$$\Delta\theta \sim \frac{1}{\gamma}$$
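The relativistic beaming into a $\sim 1/\gamma$ cone follows directly from the aberration formula $\cos\theta = (\cos\theta' + \beta)/(1 + \beta\cos\theta')$. A minimal numerical sketch (in Python; $\beta = 0.999$ is an arbitrary ultra-relativistic sample speed):

```python
import math

def aberration_angle(theta_prime, beta):
    """Observed angle theta (radians) of a light ray that makes angle
    theta_prime with the boost axis in the source frame."""
    cos_t = (math.cos(theta_prime) + beta) / (1.0 + beta * math.cos(theta_prime))
    return math.acos(cos_t)

# A ray emitted sideways (theta' = pi/2) is dragged forward into ~1/gamma:
beta = 0.999
gamma = 1.0 / math.sqrt(1.0 - beta**2)
print(aberration_angle(math.pi / 2, beta), 1.0 / gamma)  # both ~0.0447 rad
```

Half the source-frame sky (everything with $\theta' \le \pi/2$) collapses into this narrow forward cone, which is the headlight effect seen in synchrotron radiation and relativistic jets.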