Dedicated to Niels Bohr
and his atomic model
1st part: A centenary model
This is a blog entry devoted to the memory of a great scientist, N. Bohr, one of the greatest minds of the 20th century and one of the fathers of the current quantum model of atoms and molecules.
One century ago, Bohr pioneered the introduction of "quantization" rules into the atomic realm, 8 years after the epic Annus Mirabilis of A. Einstein (1905). Please, don't forget that Einstein himself was the first physicist to take Planck's hypothesis into "serious" physics problems, explaining the photoelectric effect in a simple way with the aid of "quanta of light" (a.k.a. photons!). Therefore, it is not correct to assert that N. Bohr was the "first" quantum physicist. Indeed, Einstein and Planck were the first. That said, Bohr was the first to apply the quantum hypothesis to the atomic domain, changing forever the naive picture of atoms coming from "classical" physics. I decided that this year I would write something to honour the centenary of his atomic model (for the hydrogen atom).
I hope you will enjoy the next (short) thread…
When I was young and I was first shown the Periodic Table (the ordered list or catalogue of elements), I wondered how many elements there could be in Nature. Are there 103? 118? Maybe 212? 1000? Or infinitely many?
We must remember what an atom is… Atom is a Greek word meaning "with no parts". That is, an atom is (at least in its original sense) something that cannot be broken into smaller parts. Nice concept, isn't it?
Greek philosophers wondered millennia ago whether there is a limit to the divisibility of matter, and whether there is an "ultimate principle" or "arche" ruling the whole Universe (remarkably, not very different from the questions that theoretical physicists are trying to solve even now and in the future!). Different schools and ideas arose. I am not very interested today in discussing Philosophy (even though it is interesting in its own way), so let me simplify the general mainstream ideas of several thousand years ago (!!!!):
1st. There is a well-defined ultimate “element”/”substance” and an ultimate “principle”. Matter is infinitely divisible. There are deep laws that govern the Universe and the physical Universe, in a cosmic harmony.
2nd. There is a well-defined ultimate “element”/”substance” and an ultimate “principle”. Matter is FINITELY divisible. There are deep laws that govern the Universe and the physical Universe, in a cosmic harmony.
3rd. There is no well-defined ultimate "element"/"substance" or ultimate principle. Chaos rules the Universe. Matter is infinitely divisible.
4th. There is no well-defined ultimate "element"/"substance" or ultimate principle. Chaos rules the Universe. Matter is finitely divisible.
Remark: Please, note the striking "similarity" with some of the current open problems of Physics. The existence of a Theory Of Everything (TOE) is the analogue of the first-principle/fundamental-element quest of the ancient Greek philosophers, or of any other philosophy all over the world. S.W. Hawking himself provided in his A Brief History of Time the following (3!) alternative approaches:
1st. There is no TOE. There is only a chaotic pattern of regularities we call "physical laws". Nature itself is ultimately chaotic, and the finite human mind cannot understand its ultimate description.
2nd. There is no TOE. There is only an increasing number of theories, more and more precise and/or more and more accurate, without any limit. As we are finite beings, we can only try to guess better and better approximations to the ultimate reality (out of our imagination), and the TOE cannot be reached in our whole lifetime, or even in our whole species'/civilization's lifetime.
3rd. There is a well-defined TOE, with its own principles and consequences. We will find it if we are persistent and clever enough. All physical events could be derived from this theory. If we don't find the "ultimate theory and its principles", it is not because it is non-existent; it is only that we are not smart enough. Try harder (if you can…)!
If I added other (non-Greek) philosophies, I could create some other combinations, but, as I told you above, I am not going to discuss Philosophy here, at least not more than necessary.
As you probably know, the atomic idea was mainly defended by Leucippus and Democritus, based on previous ideas by Anaxagoras. It is quite likely that Anaxagoras himself learned them from India (or even from China), but that is quite speculative… Well, the key point of the atomic idea is that you cannot keep smashing smaller and smaller bits of matter into still smaller pieces forever. Somewhere, the process of breaking down the fundamental constituents of matter must end… But where? And, above all, how can we find an atom or "see" what an atom looks like? Obviously, the ancient Greeks had no idea of how to do that; even knowing the "ground idea" of what an atom is, they had no experimental device to search for them. Thus, the atomic idea was put into the freezer until the 18th and 19th centuries, when the advances in experimental (and theoretical) Chemistry revived the concept and the whole theory. But Nature had many surprises ready for us… Let me continue this a bit later…
In the 19th century, with the discovery of the ponderal laws of Chemistry, Dalton and other chemists were stunned. Finally, Dalton was the man who brought atomism back into "real" theoretical Science, although the existence of atoms remained controversial until the 20th century. Dalton concluded that there was a unique atom for each element, using Lavoisier's definition of an element as a substance that could not be analyzed into something simpler. Thus, Dalton arrived at an important conclusion:
- “(…)Chemical analysis and synthesis go no farther than to the separation of particles one from another, and to their reunion. No new creation or destruction of matter is within the reach of chemical agency. We might as well attempt to introduce a new planet into the solar system, or to annihilate one already in existence, as to create or destroy a particle of hydrogen. All the changes we can produce, consist in separating particles that are in a state of cohesion or combination, and joining those that were previously at a distance(…)”.
The reality of atoms was a highly debated topic during the whole 19th century. It is worth remarking that it was Einstein himself (yes, he… again) who went further and, with his studies of Brownian motion, established their physical existence. It was a brilliant contribution to this area, even though, in time, he turned against the (interpretation of) Quantum Mechanics… But that is a different story, not to be told today.
Dalton's atoms, or the Dalton atomic model, were very simple.
Atoms had no parts and thus they were truly indivisible particles. However, the electrical studies of matter and the electromagnetic theory put this naive atomic model in doubt. After the study of cathode rays and the discovery of the electron by J.J. Thomson in 1897 (no, not J.J. Abrams), it became clear that atoms were NOT indivisible after all! Surprising, isn't it? It is! Chemical atoms are NOT indivisible. They do have PARTS.
Thomson's model, or "plum pudding" model, came to the rescue… Dalton believed that atoms were solid spheres, but J.J. Thomson was forced (by the existence of the electron) to elaborate a "more complex" atomic model. He suggested that atoms were a spherical "fluid" mass of positive charge, with the electrons embedded in that sphere like plums in a pudding. I have to admit that I was impressed by this model when I was 14… It seemed too ugly to be true, but anyway it has its virtues (it can explain the cathode-ray experiment!).
The next big step was the Rutherford experiment! Thomson KNEW that electrons were smaller pieces inside the atom, but despite his efforts to find the positive particles (and he had pursued his own path there, after his studies of the canal rays), he could not find them, even though they had to be there, since atoms are electrically neutral. However, clever people were already investigating radioactivity and atomic structure with other ideas… In 1911, E. Rutherford, with the aid of his assistants Geiger and Marsden, performed the celebrated gold-foil experiment.
To Rutherford's surprise, his assistants and collaborators produced a shocking set of results. To explain all the observations, the Rutherford experiment led to the following set of hypotheses:
1st. Atoms are mostly vacuum space.
2nd. Atoms have a dense zone of positive charge, much smaller than the whole atom. It is the atomic nucleus!
3rd. Nuclei had positive charge, and electrons negative charge.
Rutherford did not know from the beginning how the charge was arranged and distributed inside the atom. He had to improve the analysis and perform additional experiments in order to propose his "solar system" atomic model and to get an estimate of the nuclear size (about 1 fm, i.e., about $10^{-15}$ m). In fact, years before him, the Japanese physicist Nagaoka had proposed a "Saturnian" atomic model with a similar look. It was unstable, though, due to the electric repulsion of the electronic "rings" (previously there had even been a "cubic" model of the atom, but it was also unable to explain every atomic experiment), and it had been abandoned.
And this is the point where theory became "hard" again. Rutherford supposed that the electron orbits around the nucleus were circular (or almost circular), so electrons experienced centripetal forces due to the electrical attraction of the nucleus. Classical electromagnetic theory says that any accelerated charged particle (and you do have acceleration under a centripetal force) should emit electromagnetic waves, losing energy; then electrons should spiral down onto the nucleus (indeed, the time of the fall is ridiculously small). We do not observe that, so something is wrong with our "classical" picture of atoms and radiation (this was also hinted at by the photoelectric effect and blackbody physics, so it was not too surprising, but it was challenging to find the rules and the "new mechanics" explaining the atomic stability of matter). Moreover, atomic spectra were known to be discrete (not continuous) since the 19th century as well. To find the new dynamics and its principles became one of the outstanding issues in the theoretical (and experimental) community. The first scientist to determine a semiclassical but almost "quantum" and realistic atomic spectrum (for the simplest atom, hydrogen) was Niels Bohr. The Bohr model of the hydrogen atom is still explained at schools, not only for its historical interest, but for the no less important fact that it provides right answers for the simplest atom and that its equations are useful and valid from a quantitative viewpoint (indeed, Quantum Mechanics reproduces Bohr's formulae). Of course, the Bohr model does not explain the Stark effect, the Zeeman effect, the hyperfine structure of the hydrogen atom or some other important "quantum/relativistic" effects, but it is a really useful toy model and analytical machine to think about the challenges and limits of the Quantum Mechanics of atoms and molecules.
The Bohr model cannot be applied to helium or to other elements in the Periodic Table (their structure is described by Quantum Mechanics), so it may seem boring, but, as we will see, it has many secrets and unexpected surprises in its core…
Bohr model for the hydrogen atom
Bohr model hypotheses/postulates:
1st. Electrons describe circular orbits around the proton (in the hydrogen atom). The centripetal force is provided by the electrostatic force of the proton.
2nd. Electrons, while in "stationary" orbits with a fixed energy, do NOT radiate electromagnetic waves (note that this postulate goes against the classical theory of electromagnetism as it was known in the 19th century).
3rd. When a single electron passes from one energy level to another, the energy transitions/energy differences satisfy the Planck law. That is, during level transitions, $\Delta E = E_{n_i} - E_{n_f} = h\nu$.
In summary, we have:
Firstly, we begin with the equality between the electron-proton electrostatic force and the centripetal force in the atom. Mathematically speaking, this first postulate/ansatz requires that

$\dfrac{1}{4\pi\varepsilon_0}\dfrac{e^2}{r^2} = \dfrac{m_e v^2}{r}$

where $e$ is the elementary electric charge of the electron (equal in absolute value to the proton charge) and $m_e$ is the electron mass, and this implies that

(1) $v^2 = \dfrac{e^2}{4\pi\varepsilon_0 m_e r}$
Remark: Instead of the electron mass, it would be more precise to use the "reduced" mass of this two-body problem. The reduced mass is, by definition,

$\mu = \dfrac{m_e m_p}{m_e + m_p}$

However, it is easy to realize that the reduced mass is essentially the electron mass (since $m_p \gg m_e$, indeed $m_p \approx 1836\,m_e$).
The second of Bohr's great ideas was to quantize the angular momentum. Classically, angular momentum can take ANY value; Bohr's great intuition suggested that it could only take multiples of some fundamental constant, Planck's constant. In fact, assuming stationary circular orbits, the quantization rule provides

(2) $L = m_e v r = n\hbar = \dfrac{nh}{2\pi}$

with $n = 1, 2, 3, \ldots$ a positive integer.

Remark: $h$ and $\hbar = h/2\pi$ are the Planck constant and the reduced Planck constant, respectively.
From this quantization rule (2), we can easily get

$v = \dfrac{n\hbar}{m_e r}$

Thus, we have

(3) $v^2 = \dfrac{n^2\hbar^2}{m_e^2 r^2}$
Using the result we got in (1) for the squared velocity of the electron in the circular orbit, we deduce the quantization rule for the orbits in the hydrogen atom according to Bohr's hypotheses:

(4) $r_n = n^2 a_0$

where again $n = 1, 2, 3, \ldots$ and the Bohr radius is defined to be

$a_0 = \dfrac{4\pi\varepsilon_0\hbar^2}{m_e e^2}$

Inserting values into (4), we obtain the celebrated value of the Bohr radius

$a_0 \approx 5.29\times 10^{-11}\,\text{m} \approx 0.529\,\text{Å}$
The third important consequence is the spectrum of energy levels in the hydrogen atom. To obtain the energy spectrum, there are two equivalent paths (in fact, they are the same): use the virial theorem, or use (1) in the total energy of the electron-proton system. The total energy of the hydrogen atom can be written

$E = \dfrac{1}{2}m_e v^2 - \dfrac{e^2}{4\pi\varepsilon_0 r}$

Substituting (1) into this, we get exactly the expected expression from the virial theorem for a potential $V \propto \dfrac{1}{r}$ (i.e. $E = \dfrac{V}{2} = -T$):

(5) $E = -\dfrac{e^2}{8\pi\varepsilon_0 r}$
Inserting into (5) the quantized values of the orbit radius, we deduce the famous and well-known formula for the spectrum of the hydrogen atom (known to Balmer and the spectroscopists at the end of the 19th century and the beginning of the 20th century):

(6) $E_n = -\dfrac{E_R}{n^2}$

and where we have defined the Rydberg (constant) as

$E_R = \dfrac{m_e e^4}{8\varepsilon_0^2 h^2} = \dfrac{1}{2}m_e c^2\alpha^2$

Its value is $E_R \approx 13.6\,\text{eV}$. Here, the electromagnetic fine structure constant (alpha) is

$\alpha = \dfrac{e^2}{4\pi\varepsilon_0\hbar c} \approx \dfrac{1}{137}$
and $c$ is the speed of light. In fact, using the quantum relation

$E = h\nu = \dfrac{hc}{\lambda} = hc\tilde{\nu}$

we can deduce that the Rydberg corresponds to a wavenumber

$\tilde{\nu}_R = \dfrac{E_R}{hc} \approx 1.097\times 10^{7}\,\text{m}^{-1}$

or a frequency

$\nu_R = \dfrac{E_R}{h} \approx 3.29\times 10^{15}\,\text{Hz}$

and a wavelength

$\lambda_R = \dfrac{hc}{E_R} \approx 91.1\,\text{nm}$
Please, check it yourself! :D.
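If you want to check the numbers with a computer instead of by hand, here is a minimal sketch that evaluates the Bohr-model constants from the formulae above, using rounded CODATA values for the physical constants:

```python
# Numerical check of the Bohr-model constants derived above.
# Physical constants in SI units (CODATA values, rounded).
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
h    = 6.62607015e-34    # Planck constant, J*s
me   = 9.1093837015e-31  # electron mass, kg
e    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
c    = 2.99792458e8      # speed of light, m/s

# Bohr radius a0 = 4*pi*eps0*hbar^2 / (me*e^2)
a0 = 4 * math.pi * eps0 * hbar**2 / (me * e**2)

# Rydberg energy E_R = me*e^4 / (8*eps0^2*h^2), converted to eV
E_R = me * e**4 / (8 * eps0**2 * h**2)
E_R_eV = E_R / e

# Associated wavenumber, frequency and wavelength
wavenumber = E_R / (h * c)
nu_R  = E_R / h
lam_R = h * c / E_R

print(f"a0         = {a0:.4e} m")            # ~5.29e-11 m
print(f"E_R        = {E_R_eV:.3f} eV")       # ~13.6 eV
print(f"wavenumber = {wavenumber:.4e} 1/m")  # ~1.097e7 1/m
print(f"lam_R      = {lam_R*1e9:.2f} nm")    # ~91.1 nm
```

Running it reproduces the Bohr radius, the Rydberg energy and the Lyman-limit wavelength quoted above.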
The above results allowed Bohr to explain the spectral series of the hydrogen atom. He won the Nobel Prize for this wonderful achievement…
Hydrogenic atoms (and positronium, muonium, …)
In fact, it is straightforward to extend all these results to "hydrogenic" ("hydrogenoid") atoms, i.e., to atoms with only a single electron BUT a nucleus with charge equal to $+Ze$, where $Z$ is an integer (atomic) number greater than one! The easiest way to obtain the results is not to repeat the deduction but to rescale the proton charge, i.e., you plug $e^2 \to Ze^2$ (being careful to make the right rescaling in each formula). The final results for the radius and the energy spectrum are as follows:
A) From $r_n = n^2 a_0$, with $e^2 \to Ze^2$, you get

(7) $r_n(Z) = \dfrac{n^2 a_0}{Z}$

B) From $E_n = -\dfrac{E_R}{n^2}$, with the rescaling $e^4 \to Z^2 e^4$, you get

(8) $E_n(Z) = -\dfrac{Z^2 E_R}{n^2}$
Therefore, the consequence of the rescaling of the nuclear charge is that the energy levels are "enlarged" by a factor $Z^2$ and the orbits are "squeezed" or "contracted" by a factor $Z$.
Exercise: Can you obtain the energy levels and the radii for positronium (a bound electron-positron system, instead of an electron and a proton)? What happens with muonium (an exotic "atom" formed by an electron orbiting an antimuon)? And the muonic atom (a muon orbiting a proton)? And a muon orbiting an antimuon? And a tau particle orbiting an antitau, or an electron orbiting an antitau, or a tau orbiting a proton (supposing that it were possible, of course, since the tau particle is unstable)? Calculate the "Bohr radius" and the "Rydberg" constant for positronium, muonium, the muonic atom (or the muon-antimuon atom) and tauonium (or the tau-antitau atom). Hint: think about the reduced mass for positronium and muonium, then make a good mass/energy or radius rescaling.
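If you want to check your answers, here is a small sketch of the reduced-mass rescaling the hint suggests: for a two-body Coulomb system, replace the electron mass by the reduced mass in the Bohr formulae, so the radius scales as $m_e/\mu$ and the Rydberg as $\mu/m_e$. The helper names below (`reduced_mass`, `bohr_scaled`) are my own, and the particle masses are approximate values in electron-mass units:

```python
# Reduced-mass rescaling of the Bohr formulae for exotic two-body "atoms":
#   mu  = m1*m2/(m1+m2)
#   a0' = a0 * (me/mu),   E_R' = E_R * (mu/me)
# Masses in units of the electron mass (approximate values).
ME_U, MMU_U, MP_U = 1.0, 206.768, 1836.153

A0_NM = 0.0529   # hydrogen Bohr radius, nm
ER_EV = 13.606   # hydrogen Rydberg energy, eV

def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

def bohr_scaled(m1, m2):
    """Return (Bohr radius in nm, Rydberg in eV) for a two-body Coulomb system."""
    mu = reduced_mass(m1, m2)
    return A0_NM * ME_U / mu, ER_EV * mu / ME_U

systems = {
    "positronium (e- e+)":     (ME_U, ME_U),
    "muonium (e- mu+)":        (ME_U, MMU_U),
    "muonic hydrogen (mu- p)": (MMU_U, MP_U),
}
for name, (m1, m2) in systems.items():
    a, ry = bohr_scaled(m1, m2)
    print(f"{name}: a0 = {a:.4f} nm, Rydberg = {ry:.2f} eV")
```

For positronium the reduced mass is exactly $m_e/2$, so the orbit is twice as large and the Rydberg is halved (about 6.8 eV), while muonium is almost hydrogen-like.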
Now, we can also calculate the velocity of an electron in the quantized orbits for the Bohr atom and the hydrogenic atoms. Using the quantization rule (2),

$v = \dfrac{n\hbar}{m_e r}$

Inserting the quantized values of the orbit radius, $r_n = n^2 a_0$, we get, for the Bohr atom (hydrogen),

$v_n = \dfrac{\hbar}{m_e a_0 n} = \dfrac{\alpha c}{n}$

In the case of hydrogenic atoms, the rescaling of the electric charge yields

$v_n(Z) = \dfrac{Z\alpha c}{n}$

so the hydrogenic atoms have an "enlarged" electron velocity in the orbits, by a factor of $Z$.
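As a quick numerical aside, the hydrogenic orbital speed $v_n = Z\alpha c/n$ can be evaluated in a couple of lines (the function name `beta_orbit` is mine):

```python
# Quick check of the Bohr-model orbital speed v_n = Z*alpha*c/n in units of c.
ALPHA = 1 / 137.035999  # fine structure constant (approximate)

def beta_orbit(Z, n=1):
    """Electron speed v/c in a hydrogenic Bohr orbit."""
    return Z * ALPHA / n

print(beta_orbit(1))    # hydrogen ground state: ~0.0073, safely non-relativistic
print(beta_orbit(137))  # "feynmanium": ~0.9997, right on the edge
print(beta_orbit(138))  # > 1: formally faster than light, the model breaks down
```

The third line already anticipates the superluminal puzzle discussed next.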
This result for velocities is very interesting. Suppose we consider the fundamental level (or the 1s orbital in Quantum Mechanics, since, magically or not, Quantum Mechanics reproduces the results for the Bohr atom and the hydrogenic atoms we have seen here, plus other effects we will not discuss today, related to spin and some energy splittings for perturbed atoms). Then, the last formula yields, in the hydrogenic case ($n=1$),
$v_1 = Z\alpha c$

Furthermore, suppose now in addition that we have some "superheavy" (hydrogenic) atom with $Z > 137$ (note that $\alpha \approx 1/137$ at ordinary energies). Then, the electron moves faster than the speed of light!!!!! That is, for hydrogenic atoms with $Z > 137$, and considering the fundamental level, the electron would move with $v > c$. This fact is "surprising". The element with $Z = 137$ is called untriseptium (Uts) by the IUPAC rules, but it is often called feynmanium (Fy), since R.P. Feynman often remarked on the importance of this result and mystery. Of course, Special Relativity forbids this option. Therefore, either something is wrong or $Z = 137$ is the last element allowed by the Quantum Rules (and/or the Bohr atom). Obviously, we could claim that this result is "wrong" since we have not considered the relativistic quantum corrections, or we have not made a proper relativistic treatment of this system. It is not as simple as you may think or imagine: using a "naive" relativistic treatment, e.g., using the Dirac equation, we obtain for the fundamental level of the hydrogenic atom the spectrum
(12) $E_{1s} = m_e c^2\sqrt{1-(Z\alpha)^2}$

This result can be obtained from the Dirac equation spectrum for the hydrogen atom (in a Coulomb potential):

$E_{n,j} = \dfrac{m_e c^2}{\sqrt{1+\left(\dfrac{Z\alpha}{n-j-\frac{1}{2}+\sqrt{\left(j+\frac{1}{2}\right)^2-(Z\alpha)^2}}\right)^2}}$

where $n$ is a positive integer and $j$ is the total angular momentum quantum number. Putting this into numbers for the fundamental level ($n=1$, $j=1/2$) at $Z=137$, the energy $E_{1s} = m_e c^2\sqrt{1-(137\alpha)^2}$ is still real, but barely.
If you plug $Z = 138$ or more into the above equation from the Dirac spectrum, you obtain an imaginary value of the energy, and thus an oscillating (unbound) system! Therefore, the problem for atoms with high $Z$ persists even when the relativistic corrections are taken into account! What is the solution? Nobody is sure. Greiner et al. suggest that, taking into account the finite (extended) size of the nuclei, the problem is "solved" up to $Z \approx 172$. Beyond that, i.e., with $Z > 172$, you cannot rule out that quantum fluctuations of strong fields introduce vacuum pair-creation effects that make the nuclei, and thus the atoms, unstable at those high values of $Z$. Some people believe that the issues arise even before, around $Z = 150$, or even that strong-field effects can make atoms below $Z = 137$ non-existent. That is why the search for superheavy elements (SHE) is interesting not only from the chemical viewpoint but also from the fundamental-physics viewpoint: it challenges our understanding of Quantum Mechanics and Special Relativity (and their combination!!!!).
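You can watch the ground-state energy go imaginary yourself with a hedged little script (the function name `dirac_ground_state` is mine; the formula is the point-nucleus Dirac result quoted above, which ignores the finite nuclear size that Greiner et al. invoke):

```python
# The Dirac ground-state energy for a point-like Coulomb potential:
#   E_1s = me*c^2 * sqrt(1 - (Z*alpha)^2)
# Using cmath on purpose, so the square root is allowed to go complex.
import cmath

ALPHA = 1 / 137.035999  # fine structure constant (approximate)

def dirac_ground_state(Z):
    """Ground-state energy in units of me*c^2 (complex when Z*alpha > 1)."""
    return cmath.sqrt(1 - (Z * ALPHA) ** 2)

for Z in (1, 100, 137, 138):
    E = dirac_ground_state(Z)
    tag = "real" if abs(E.imag) < 1e-12 else "IMAGINARY -> unbound/oscillating"
    print(f"Z = {Z:3d}: E/mc^2 = {E:.5f} ({tag})")
```

At $Z=137$ the energy is tiny but still real; at $Z=138$ the square root turns imaginary, which is exactly the breakdown discussed in the text.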
Is feynmanium ($Z=137$) the last element? This hypothetical element and other superheavy elements (SHE) seem to hint at an end of the Periodic Table. Is it true? Options:
1st. The feynmanium (Fy) or Untriseptrium (Uts) is the last element of the Periodic Table.
2nd. Greiner et al. limit around Z=172. References:
(i) B Fricke, W Greiner and J T Waber,Theor. Chim. Acta, 1971, 21, 235.
(ii)W Greiner and J Reinhardt, Quantum Electrodynamics, 4th edn (Springer, Berlin, 2009).
3rd. Other predictions of an end to the periodic table include Z = 128 (John Emsley) and Z = 155 (Albert Khazan). Even Seaborg, from his knowledge and prediction of an island of stability, left this question open to interpretation and experimental search!
4th. There is no end of the Periodic Table. In fact, according to Greiner et al., even when superheavy nuclei pose a challenge to Quantum Mechanics and Special Relativity, since there are always electrons in the orbitals (a condition for an element to be a well-defined object), there is no end of the Periodic Table (even when there is a probability for an electron-positron pair to be produced by a superheavy nucleus, the presence of electrons does not allow for it; but strong-field effects are important there, and it would be great to produce these elements and to learn their properties, both quantum and relativistic!). Therefore, it would be very, very interesting to probe the superheavy-element "zone" of the Periodic Table, since it is a place where (strong) quantum effects and (non-negligible) relativistic effects both matter. Then, if both theories are right, superheavy elements are a beautiful and wonderful arena in which to understand how to combine the two greatest theories and (unfinished?) revolutions of the 20th century. What an awesome role for the "elementary" and "fundamental" superheavy (composite) elements!
Probably, there is no limit to the number of (chemical) elements in our Universe… But we DO NOT KNOW!
In conclusion: what will happen for superheavy elements with $Z > 173$ (or $Z > 126, 128, 137$, etc.) remains unresolved with our current knowledge. And it is one of the greatest remaining mysteries in theoretical Chemistry!
More about the fine structure constant, the Sommerfeld corrections and the Dirac equation+QED (Quantum ElectroDynamics) corrections to the hydrogen spectrum, in slides (think it yourself!):
Final remarks (for experts only): Some comments about the self-adjointness of the Dirac equation for high values of $Z$ in Coulombian potentials. It is a well-known fact that the Dirac operator for the hydrogen problem is essentially self-adjoint if $Z < 119$. Therefore, it is valid for all the currently known elements (circa June 2013, every element of the 7th period of the Periodic Table has been created; thus, we know that chemical elements exist at least up to $Z = 118$, and searches for superheavy elements beyond that $Z$ have given negative results so far). However, any "self-adjoint extension" requires a precise physical meaning. A good criterion could be that the expectation value of every component of the Hamiltonian is finite in the selected basis. Indeed, the solution of the Coulombian potential for the hydrogenic atom using the Dirac equation makes use of hypergeometric functions that are well-posed for $Z\alpha < 1$. If $Z$ is greater than that critical value, we face the oscillating-energy problem we discussed above, so we have to consider the effect of the finite size of the nucleus and/or handle the relativistic corrections more carefully. The main idea is this: the s states start to be destroyed above $Z = 137$, and the p states begin to be destroyed above $Z = 274$. Note that this differs from the result of the Klein-Gordon equation, which predicts s states being destroyed above $Z = 68$ and p states destroyed above $Z = 82$. In summary, the superheavy elements are interesting because they challenge our knowledge of both Quantum Mechanics and Special Relativity. What a wonderful (final) fate for the chemical elements: the superheavy elements will test whether the "marriage" between Quantum Mechanics and Special Relativity goes further or ends in divorce!
Epilogue: What do you think about the following questions? This is a test for you, eager readers…
1) Is there an ultimate element?
2) Is there a theory of everything (TOE)?
3) Is there an ultimate chemical element?
4) Is there a single “ultimate” principle?
5) How many elements does the Periodic Table have?
6) Is the feynmanium the last element?
7) Are Quantum Mechanics and Special Relativity consistent with each other?
8) Is Quantum Mechanics a fundamental and “ultimate” theory for atoms and molecules?
9) Is Special Relativity a fundamental and “ultimate” theory for “quick” particles?
10) Are the atomic shells and atomic structure completely explained by QM and SR?
11) Are the nuclei and their shell structure completely explained by QM and SR?
12) Do you think all this stuff is somehow important and relevant for Physics or Chemistry (or even for Mathematics)?
13) Will we find superheavy elements the next decade?
14) Will we find superheavy elements this century?
15) Will we find that there are some superheavy elements stable in the island of stability (Seaborg) with amazing properties and interesting applications?
16) Did you like/enjoy this post?
17) When you were a teenager, how many chemical elements did you know? How many chemical elements were known?
18) Did you learn/memorize the whole Periodic Table? In the case you did not, would you?
19) What is your favourite chemical element?
20) Did you know that every element in the 7th period of the Periodic Table has been established to exist, but the elements E113, E115, E117 and E118 are not named yet (circa 30th June, 2013) and keep their systematic (IUPAC) names ununtrium, ununpentium, ununseptium and ununoctium? By the way, the last named elements were copernicium (E112, Cn), flerovium (Fl, E114) and livermorium (Lv, E116)…
In the next group theory threads we are going to study the relationship between Special Relativity, electromagnetic fields and the complex group $SO(3,\mathbb{C})$.
There is a close interdependence of the following three concepts:
The classical electromagnetic fields $\mathbf{E}$ and $\mathbf{B}$ can in fact be combined into a complex six-dimensional (6D) vector, sometimes called the SIXTOR or Riemann-Silberstein vector:

$\mathbf{F} = \dfrac{1}{\sqrt{2}}\left(\mathbf{E} + i\mathbf{B}\right)$

where the numerical prefactor is conventional (you can drop it for almost every practical purpose).
Moreover, we have

$\mathbf{F}^2 = \mathbf{F}\cdot\mathbf{F} = \dfrac{1}{2}\left(\mathbf{E}+i\mathbf{B}\right)\cdot\left(\mathbf{E}+i\mathbf{B}\right)$

where $\mathbf{E}\cdot\mathbf{B} = \mathbf{B}\cdot\mathbf{E}$ and so

$\mathbf{F}^2 = \dfrac{1}{2}\left(E^2 - B^2\right) + i\,\mathbf{E}\cdot\mathbf{B}$

and where we have used natural units ($c=1$) for simplicity.
The Maxwell-Faraday equation reads:

$\nabla\times\mathbf{E} = -\dfrac{\partial\mathbf{B}}{\partial t}$

The Ampère circuital law in vacuum reads:

$\nabla\times\mathbf{B} = \dfrac{\partial\mathbf{E}}{\partial t}$

These two equations can be combined into a single equation using the Riemann-Silberstein vector or sixtor $\mathbf{F}$:

A) $i\dfrac{\partial\mathbf{F}}{\partial t} = \nabla\times\mathbf{F}$
B) Comparing real and imaginary parts on both sides of A), we easily get $\nabla\times\mathbf{E} = -\dfrac{\partial\mathbf{B}}{\partial t}$ and $\nabla\times\mathbf{B} = \dfrac{\partial\mathbf{E}}{\partial t}$
We can take the divergence of the time derivative of the sixtor:

$\nabla\cdot\left(\dfrac{\partial\mathbf{F}}{\partial t}\right) = -i\,\nabla\cdot\left(\nabla\times\mathbf{F}\right) = 0$

Therefore, $\nabla\cdot\mathbf{E} = 0$ and $\nabla\cdot\mathbf{B} = 0$ hold in the absence of electric and magnetic charges on any section of Minkowski spacetime, and everywhere! The presence of electric charges and the absence of magnetic charges, the so-called magnetic monopoles, breaks down the duality symmetry $\mathbf{F}\to e^{i\theta}\mathbf{F}$ of the free Maxwell equations.
Introducing 3 matrices $S_i$ with the aid of the 3D Levi-Civita tensor $\epsilon_{ijk}$, the completely antisymmetric tensor with 3 indices such that $\epsilon_{123} = +1$, we can write these matrices as follows:

$(S_i)_{jk} = -i\,\epsilon_{ijk}$ for $i,j,k = 1,2,3$

Then

$[S_i, S_j] = i\,\epsilon_{ijk}S_k$

so the $S_i$ satisfy the angular momentum (spin-1) algebra. Experts in Clifford/geometric algebras will note that these matrices are in fact "Dirac-like matrices" up to a conventional sign.
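These algebraic facts are easy to verify numerically; here is a small check with NumPy of the matrices $(S_i)_{jk} = -i\epsilon_{ijk}$, their commutators, and the spin-1 Casimir:

```python
# Check of the spin-1 matrices built from the Levi-Civita symbol:
#   (S_i)_{jk} = -i * eps_{ijk},  [S_1, S_2] = i S_3 (and cyclic),
#   S^2 = S_1^2 + S_2^2 + S_3^2 = s(s+1) * Id with s = 1.
import numpy as np

def levi_civita():
    eps = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[i, j, k] = 1.0   # even permutations
        eps[i, k, j] = -1.0  # odd permutations
    return eps

eps = levi_civita()
S = [-1j * eps[i] for i in range(3)]  # (S_i)_{jk} = -i eps_{ijk}

# Angular momentum algebra [S_1, S_2] = i S_3
comm = S[0] @ S[1] - S[1] @ S[0]
assert np.allclose(comm, 1j * S[2])

# Casimir: S^2 = 1*(1+1) * Identity for spin 1
S2 = sum(Si @ Si for Si in S)
assert np.allclose(S2, 2 * np.eye(3))
print("spin-1 algebra checks passed")
```

The Casimir value $s(s+1)=2$ confirms that the sixtor indeed carries spin 1.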
In fact, you can admire the remarkable similarity between the sixtor equation AND the Dirac equation as follows:

$i\dfrac{\partial\mathbf{F}}{\partial t} = \left(\mathbf{S}\cdot\mathbf{p}\right)\mathbf{F}$, with $\mathbf{p} = -i\nabla$, versus $i\dfrac{\partial\psi}{\partial t} = \left(\boldsymbol{\alpha}\cdot\mathbf{p} + \beta m\right)\psi$

In summary: the sixtor equation is a Dirac-like equation (but of course the electromagnetic field is not a fermion!).
The equation for the conjugated field $\mathbf{F}^* = \dfrac{1}{\sqrt{2}}\left(\mathbf{E} - i\mathbf{B}\right)$, since $\mathbf{S}^* = -\mathbf{S}$, will be the feynmanity

$\left(i\dfrac{\partial}{\partial t} + \mathbf{S}\cdot\mathbf{p}\right)\mathbf{F}^* = 0$
Let us define the formal adjoint field $\mathbf{F}^\dagger$ and the 4 components of a "density-like" quantity

$u = \mathbf{F}^\dagger\mathbf{F}$ and $\mathbf{P} = -i\,\mathbf{F}^*\times\mathbf{F}$

Then, we can recover the classical result that the energy density and the Poynting vector of the electromagnetic field are

$u = \dfrac{1}{2}\left(E^2 + B^2\right)$ and $\mathbf{P} = \mathbf{E}\times\mathbf{B}$
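A quick numerical sanity check of these two sixtor identities, for random real fields $\mathbf{E}$ and $\mathbf{B}$ (natural units):

```python
# Sanity check of the sixtor identities:
#   F = (E + iB)/sqrt(2)
#   F^dagger . F = (E^2 + B^2)/2   (energy density)
#   -i F* x F    = E x B           (Poynting vector, natural units)
import numpy as np

rng = np.random.default_rng(42)
E = rng.normal(size=3)
B = rng.normal(size=3)

F = (E + 1j * B) / np.sqrt(2)

u = np.vdot(F, F).real                      # F^dagger . F
P = (-1j * np.cross(np.conj(F), F)).real    # -i F* x F

assert np.isclose(u, 0.5 * (E @ E + B @ B))
assert np.allclose(P, np.cross(E, B))
print("sixtor energy density and Poynting vector identities verified")
```

Note that $-i\,\mathbf{F}^*\times\mathbf{F}$ comes out purely real, as a physical momentum density must.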
These equations highlight an important difference between the Dirac equation for a massive spin-1/2 (anti)particle and the electromagnetic, massless, spin-1 photon: in the former case you HAVE a mass term,

$i\dfrac{\partial\psi}{\partial t} = \left(\boldsymbol{\alpha}\cdot\mathbf{p} + \beta m\right)\psi \quad (m \neq 0)$

and you HAVE

$i\dfrac{\partial\mathbf{F}}{\partial t} = \left(\mathbf{S}\cdot\mathbf{p}\right)\mathbf{F}$

in the latter (the electromagnetic field has no mass term!). In fact, you also have that for a Dirac field the current is defined to be

$j^\mu = \bar{\psi}\gamma^\mu\psi$

and it transforms like a VECTOR field under Lorentz transformations, while the previous quantities $(u, \mathbf{P})$ are components of a stress-energy-momentum tensor $T^{\mu\nu}$!!!! They are NOT the same thing!
In fact, $\mathbf{F}$ and $\mathbf{F}^*$ transform under the $(1,0)$ and $(0,1)$ (complex conjugated) representations of the proper Lorentz group.
Remark: Belinfante coined the term “undor” when dealing with fields transforming according to some specific representations of the Lorentz group.
Imagine that an idealised bug of negligible dimensions is hiding at the end of a hole of length $L$. A rivet has a shaft of length $\ell$, with $\ell < L$.
Clearly the bug is "safe" when the rivet head is flush with the (very resilient) surface. The problem arises as follows. Consider what happens when the rivet slams into the surface at a speed $v = \beta c$, where $c$ is the speed of light and $0 < \beta < 1$. One of the essences of the special theory of relativity is that objects moving relative to our frame of reference are shortened in the direction of motion by a factor $1/\gamma$, where $\gamma = \left(1-\beta^2\right)^{-1/2}$ is the Lorentz factor, as readers of this blog already know. However, from the point of view (frame of reference) of the bug, the rivet shaft is even shorter, and therefore the bug should remain safe no matter how fast the rivet is moving.
Apparently, we have:

$\ell' = \dfrac{\ell}{\gamma} < \ell < L$
Remark: this idea assumes that both objects are ideally rigid! We will return to this “fact” later.
From the frame of reference of the rivet, the rivet is stationary and unchanged, but the hole is moving fast and is shortened by the Lorentz contraction to

$L' = \dfrac{L}{\gamma}$

If the approach speed is fast enough, so that $\dfrac{L}{\gamma} < \ell$, then the end of the hole slams into the tip of the rivet before the surface can reach the head of the rivet. The bug is squashed! This is the "paradox": is the bug squashed or not?
There are many good sources for this paradox (a relative of the pole-barn paradox). A nice animation can be found here: http://math.ucr.edu/~jdp/Relativity/Bug_Rivet.html
In this blog post we are going to solve this “paradox” in the framework of special relativity.
One of the consequences of special relativity is that two events that are simultaneous in one frame of reference are no longer simultaneous in other frames of reference. Perfectly rigid objects are impossible.
In the frame of reference of the bug, the entire rivet cannot come to a complete stop all at the same instant. Information cannot travel faster than the speed of light. It takes time for the knowledge that the rivet head has slammed into the surface to travel down the shaft of the rivet. Until each part of the shaft receives the information that the rivet head has stopped, that part keeps going at speed $v$. The information proceeds down the shaft at speed $c$ while the tip continues to move at speed $v$.
The tip cannot stop until a time

$t = \dfrac{\ell/\gamma}{c - v}$

after the head has stopped (the signal, moving at $c$, must catch up with the tip of the contracted shaft of length $\ell/\gamma$, which recedes at $v$). During that time the tip travels a distance $d = vt$. The bug will be squashed if

$\dfrac{\ell}{\gamma} + vt \ge L$

This implies that

$\dfrac{\ell}{\gamma}\left(1 + \dfrac{v}{c-v}\right) = \dfrac{\ell}{\gamma}\,\dfrac{c}{c-v} \ge L$

From $\dfrac{1}{\gamma} = \sqrt{1-\beta^2}$ we can calculate that

$\dfrac{1}{\gamma\left(1-\beta\right)} = \sqrt{\dfrac{1+\beta}{1-\beta}}$

The bug will be squashed if the following condition holds:

$\ell\sqrt{\dfrac{1+\beta}{1-\beta}} \ge L$

or equivalently, after some algebraic manipulations, the bug will be squashed if:

$\beta \ge \beta_{\min} = \dfrac{L^2-\ell^2}{L^2+\ell^2}$

Conclusion (in the bug's reference frame): the bug will definitively be squashed when $v \ge v_{\min}$ such that

$\dfrac{v_{\min}}{c} = \dfrac{L^2-\ell^2}{L^2+\ell^2}$

Check: it can be verified that the limits $\ell \to L$ (giving $\beta_{\min} \to 0$) and $\ell \to 0$ (giving $\beta_{\min} \to 1$) are valid and physically meaningful.

Note that, in this frame, the impact of the rivet head always happens before the bug is squashed.
In the frame of reference of the rivet, the bug is definitively squashed before any signal matters whenever

$\dfrac{L}{\gamma} \le \ell \quad\Longleftrightarrow\quad \beta \ge \sqrt{1-\dfrac{\ell^2}{L^2}}$

In this case the bug is squashed before the impact of the surface on the rivet head. Note that this threshold is a velocity higher than $v_{\min}$.
Conclusion (in the rivet's reference frame): the entire surface cannot come to an abrupt stop at the same instant. It takes time for the information about the impact of the rivet tip on the end of the hole to reach the surface that is rushing towards the rivet head. Let us now examine the case where the speed is not high enough for the Lorentz-contracted hole to be shorter than the rivet shaft in the frame of reference of the rivet. Now the observers agree that the impact of the rivet head happens first. When the surface slams into contact with the head of the rivet, it takes time for the information about that impact to travel down to the end of the hole. During this time the hole continues to move towards the tip of the rivet.
The time it takes for the propagating information to reach the tip of the stationary rivet is
during which time the bug moves a distance
In the rivet’s reference frame, therefore, the bug is squashed if the following condition holds
and from this equation, we get the same minimum speed that guarantees the squashing of the bug as in the frame of reference of the bug! That is:
Note that observers travelling with each of the two frames of reference (bug and rivet) agree that the bug is squashed IF , and that resolves the “paradox”. They also agree that the impact of rivet head on surface happens before the bug is squashed, provided that the following condition is satisfied:
i.e., they agree if the impact of rivet head on surface happens before the bug is squashed
Otherwise, they disagree on which event happens first. For instance, if
For speeds this high, the observer in the bug’s frame of reference still deduces that the rivet-head impact happens first, but the other observer deduces that the bug is squashed first. This is consistent with the relativity of simultaneity! At the critical speed, the two events are simultaneous in the frame of the rivet (the rivet fits perfectly in the shortened hole), but they are not simultaneous in the other frame of reference.
See you in the next blog post!
The Batmobile “fake paradox” helps us to understand Special Relativity a little better. This problem consists of the following thought experiment:
There are two observers: Alfred, the external observer, and Batman, moving with his Batmobile.
Now, we will suppose that the Batmobile is moving at a very fast constant speed with respect to the garage. Let us suppose that . Then, we have the following situation from the external observer:
The question is: who is right, Alfred or Batman? The surprising answer from Special Relativity is that both are correct. Alfred and Batman are right! Let’s see why. For Alfred, there is a time during which the Batmobile is completely inside the garage with both doors closed:
On the other hand, for Batman, the front and rear doors are not closed simultaneously! So there is never a time during which the Batmobile is completely inside the garage with both doors closed.
So, there is no paradox at all, if you are aware of the notion of simultaneity and its relativity!
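The door-closing bookkeeping can be checked with a two-line Lorentz transformation. Below is a minimal numeric sketch (Python, units with c = 1; the speed v = 0.8 and the garage proper length L = 10 are illustrative values not taken from the text): the two closings, simultaneous for Alfred, get different time coordinates in Batman’s frame.

```python
import math

def lorentz_t(t, x, v, c=1.0):
    """Time coordinate of the event (t, x) as seen from a frame moving at speed v."""
    g = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return g * (t - v * x / c ** 2)

v = 0.8   # garage-frame speed of the Batmobile (assumed illustrative value)
L = 10.0  # proper length of the garage (assumed illustrative value)

# In Alfred's (garage) frame both doors close simultaneously at t = 0:
t_front = lorentz_t(0.0, 0.0, v)  # front door closes at x = 0
t_rear  = lorentz_t(0.0, L,  v)   # rear door closes at x = L

print(t_front, t_rear)  # in Batman's frame the closings are NOT simultaneous
assert t_front != t_rear
```

The larger v L is, the larger the disagreement between the two frames about the closing times, exactly as the relativity of simultaneity demands.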
I found this fun (Spanish) exam about Special Relativity at a Spanish website:
3) t=13.6 months = 13 months and 18 days.
1) We use the relativistic addition of velocities rule. That is,
where u = Millennium Falcon velocity, v = imperial cruiser velocity = c/5, and V = relative speed = 4c/5.
Using units with c=1:
Then, reinserting units.
2) This part is solved with the length contraction formula and the velocity calculated in the previous part (1). Moreover, we obtain:
Using the result we got from (1), plugging in that velocity v and the fact that  is equal to one hour, we obtain
, and from this
Substituting the numerical values, we obtain the given solution easily.
3) Simple application of time dilation formula provides:
Inserting, in this case, our given velocity, we obtain the solution we wrote above:
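As a quick sanity check of part 1, the relativistic addition rule can be evaluated numerically with the speeds quoted in the statement (v = c/5, V = 4c/5). This is only a verification sketch, not part of the original solution:

```python
def add_velocities(v1, v2, c=1.0):
    """Relativistic addition of collinear velocities."""
    return (v1 + v2) / (1.0 + v1 * v2 / c ** 2)

v = 1 / 5  # imperial cruiser speed, in units of c (from the statement)
V = 4 / 5  # relative speed, in units of c (from the statement)

u = add_velocities(v, V)
print(u)  # 25/29 ≈ 0.862, i.e. u ≈ 0.862c
```

Note that the Galilean sum would give exactly c, while the relativistic rule keeps the result strictly below it.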
We are going to learn about the different notions of velocity that the special theory of relativity provides.
The special theory of relativity is a simple, wonderful theory, but it comes with many misconceptions due to bad teaching and bad science popularization. It is not easy to master the full theory of relativity without the proper mathematical background and physical insight. In the internet era, where knowledge is shared, a fundamental issue is to understand things properly. There are many people who think they understand the theory of relativity when they don’t. Even in academia.
Moreover, you can find many people in the blogosphere/websphere trying to sell false and wrong theories. It is the same as with so-called alternative medicine: it is not medicine at all. Bad science is not science; it is simply a lie. It is religion. Science can be criticized, but nobody can deny that the Earth revolves around the Sun; it is common knowledge and truth. So, we can criticize scientists, but not the scientific method and well-established theories. We can try to understand things better or in a novel way, but we cannot deny facts and experiments. Gerard ‘t Hooft, Nobel laureate, explains this on his web page www.phys.uu.nl/~thooft/.
It is important to remark that scientific revolutions come when we extend the theories we know to be correct, like special relativity, not with a full destruction of the current and well-tested theories. Newtonian gravity is a limit of General Relativity. Galilean relativity is a limit of Special Relativity. Quantum Mechanics is a limit of QFT, and so on. Said these words, I am quite sure that scientists, and particularly physicists, wish to overcome current theories with new ones. However, the process of creating a new theory is not easy, especially if you don’t understand the subtleties of the theories that have passed every known test so far.
What is velocity? Classically, the answer is short and very clear: velocity is the rate of change of position with respect to time. It is a vector magnitude. Mathematically speaking, it is the quotient between the displacement vector and the time interval or, in the infinitesimal limit, the derivative of the position vector with respect to time.
In the special theory of relativity, due to the fact that time is not universal but relative, we can build different notions of velocity. And it matters. There are some clear concepts from relativity you should have mastered by now:
a) You can attach a clock to any yardstick you could physically use for measurements of space and time.
b) You must distinguish the notion of coordinate velocity (map velocity is another commonly used name) from that of proper velocity. The latter is sometimes called hyperbolic (or imaginary) velocity. These two notions arise from the presence of two “natural” choices of time: the proper time and the coordinate time.
c) Due to the previous two facts, you must also distinguish between proper acceleration and geometric acceleration. Proper accelerations are caused by the tug of external forces, while geometric accelerations are caused by the choice of a reference frame that is not geodesic, i.e., a local coordinate system that is not “in free fall”. Proper accelerations are felt through their points of action, e.g., through forces on the bottom of your feet. On the other hand, geometric accelerations give rise to inertial forces that act on every ounce of an object’s being. They either vanish when seen from the vantage point of a local free-float frame, or give rise to non-local force effects on your mass distribution that cannot be made to disappear. Coordinate acceleration goes to zero whenever proper acceleration is exactly canceled by the connection term, and thus when physical and inertial forces add to zero.
People who are not aware of the previous comments don’t understand relativity and the physics behind it. They don’t even understand what experiments and their data say.
Let me review the main magnitudes, 3-vectors and 4-vectors which the special theory of relativity studies in the next tables:
The two notions of 3-velocity we do have from the special theory of relativity, i.e., from the 4-velocity , are:
1) Coordinate velocity, :
It is the common notion of 3-velocity, measured from an inertial observer with respect to the coordinate time t. Note that the coordinate time is not a true invariant in SR!
2) Proper velocity (or the hyperbolic velocity/imaginary angle velocity related to it):
where  is the proper time. This velocity, which can intuitively be defined as the distance traveled per unit traveler time, retains many of the properties that ordinary velocity loses at high speed. In addition to these two definitions, we also have:
1) Proper acceleration , the acceleration experienced relative to a locally co-moving free-float frame; it is the useful notion when we deal with accelerated, high-speed motion and with curved spacetime.
2) The fact that some of the space-like effect of sideways “felt” forces moves into the reference frame’s time domain at high speed, giving rise to the relatively unknown bound (from special relativity!)
With the above definitions, the relativistic momentum can be expressed in terms of coordinate velocity or proper velocity as follows:
is the Lorentz factor. The last equal sign in the previous equation can be easily derived from the relativistic relationship:
and the definition of above.
Thanks to the metric-equation’s assignment of a frame-invariant traveler or proper-time to the displacement between events in context of a single map-frame of comoving yardsticks and synchronized clocks, proper velocity becomes one of three related derivatives in special relativity (coordinate velocity , proper-velocity , and Lorentz factor ) that describe an object’s rate of travel. For unidirectional motion, in units of lightspeed c (i.e. c=1 if we want to) each of these is also simply related to a traveling object’s hyperbolic velocity angle or rapidity by the next set of equations:
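These relations are easy to verify numerically. The following sketch (Python, units with c = 1, using an arbitrary sample speed v = 0.6 chosen for illustration) checks that coordinate velocity, Lorentz factor, and proper velocity are the tanh, cosh, and sinh of one and the same rapidity:

```python
import math

v = 0.6                # coordinate velocity in units of c (assumed sample value)
eta = math.atanh(v)    # rapidity (hyperbolic velocity angle)
gamma = 1.0 / math.sqrt(1.0 - v ** 2)
w = gamma * v          # proper velocity (momentum per unit mass)

# The three rates of travel are tied together by hyperbolic functions of eta:
assert math.isclose(v,     math.tanh(eta))
assert math.isclose(gamma, math.cosh(eta))
assert math.isclose(w,     math.sinh(eta))

# gamma**2 - w**2 = 1: the hyperbolic analogue of cos**2 + sin**2 = 1
assert math.isclose(gamma ** 2 - w ** 2, 1.0)
```

For v = 0.6 one finds the textbook values γ = 1.25 and w = 0.75, and the hyperbolic identity closes exactly.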
The next table illustrates how the proper-velocity of , or “one map-lightyear per traveler-year”, is a natural benchmark for the transition from sub-relativistic to super-relativistic motion (in imaginary units of ). Note that the velocity angle (rapidity) and the proper-velocity run from 0 to infinity and track the physical coordinate-velocity when . On the other hand, when , the (hyperbolic or imaginary) proper-velocity tracks the Lorentz factor while the velocity angle is logarithmic and hence increases much more slowly:
LUDICROUS SPEED AND WARP SPEED
Hyperbolic velocities CAN exceed c! They can reach even the ludicrous speed of when the coordinate velocity approaches c! However, you must never forget the fact that the velocity-angle/hyperbolic velocity IS imaginary in value. It is quite clear from the above table. Indeed, being somehow “trekkie” or a Sci-Fi “romantic” person, you could “define” warp-speeds as “imaginary/hyperbolic” velocities, i.e., in terms of proper velocity. In that case, you could get the correspondence
In general, we can define the WARP speed as and so, the proper velocity can be expressed in terms of the warp speed W in a very simple way . Thus, the real or coordinate velocity would be connected with warp-speed through the relativistic equation:
Of course, the point is that, unlike in the Sci-Fi franchise, the real velocity has never exceeded c; only the hyperbolic velocity and the proper velocity have. (Note that, in SR, velocities approaching c imply highly boosted frames: we could travel to any point of the Universe within one human lifetime of traveler proper time by approaching c closely enough, but in the “Earth” (rest) reference frame millions of years would have passed away!)
When the coordinate speeds approach c, the respective coordinate velocities deviate from this simple addition rule: rapidities (hyperbolic velocity angles) add instead of velocities, i.e. . Coordinate velocities add non-linearly, and this is a well-tested consequence of the Special Theory of Relativity. For highly relativistic objects (i.e., those with momentum per unit mass much larger than lightspeed), the coordinate-velocity expression familiar from most textbooks is rather uninteresting, since the coordinate velocities all peak out at c, i.e., as everybody knows, in special relativity , because applying the relativistic addition of velocities rule, we get
And it is a fact from both theory and experiment! It will remain so as long as SR remains a valid theory. SR still holds with an astonishing degree of precision and accuracy. So, you cannot deny the data and experiments that confirm SR. That is complete nonsense, but there are some people and pseudo-scientists out there building their own theories AGAINST the achievements and explanations that SR provides for every experiment we have done up to the current time. I am sorry for all of them. They are totally wrong. Science is not what they say it is. Any theory going beyond SR HAS to explain every experiment and piece of data that SR does explain, and it is not easy to build such a theory or to say, e.g., why we have not (apparently) observed superluminal objects. I will discuss superluminality further in a forthcoming post, some posts after the special 50th log entry that is coming after this one! Stay tuned!
Coming back to our discussion… Why is all this stuff important? High Energy Physics is the natural domain of SR! And there, SR has not provided ANY wrong result so far; even though some research programs going beyond the Standard Model include modified dispersion relations that reduce to SR in the low-energy regime, we have not yet seen ANY deviation from SR.
For unidirectional motion, at low speeds the coordinate velocity of object 1 from the point of view of oncoming object 3 might be described as the sum of the velocity of object 1 with respect to lab frame 2 plus the velocity of the lab frame 2 with respect to object 3, that is:
Compare this expression with the previously obtained expression for rapidities! Rapidities always add; coordinate velocities add (linearly) only at low velocities. In conclusion, you must be careful about what you mean by velocity in a boosted system!
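A short numerical check of this point (Python, units with c = 1, sample speeds chosen only for illustration): rapidities add linearly, and taking tanh of their sum reproduces the non-linear composition of coordinate velocities:

```python
import math

def add_velocities(v1, v2):
    """Relativistic addition of collinear coordinate velocities (c = 1)."""
    return (v1 + v2) / (1.0 + v1 * v2)

v1, v2 = 0.5, 0.7  # assumed sample speeds in units of c

# Rapidities add linearly...
eta = math.atanh(v1) + math.atanh(v2)
# ...while coordinate velocities add non-linearly, and the two rules agree:
assert math.isclose(math.tanh(eta), add_velocities(v1, v2))

# At low speeds the relativistic rule reduces to the Galilean sum:
assert math.isclose(add_velocities(1e-6, 1e-6), 2e-6, rel_tol=1e-9)
```

This is just the hyperbolic identity tanh(a + b) = (tanh a + tanh b)/(1 + tanh a · tanh b) in kinematic disguise.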
On the other hand, for the relative proper-velocity, the result is:
This expression shows how the momentum per unit mass as well as the map-distance traveled per unit traveler time of object 1, as seen in the frame of oncoming particle 3, goes as the sum of the coordinate-velocities times the product of the gamma (energy) factors. The proper velocity equation is especially important in high energy physics, because colliders enable one to explore proper-speed and energy ranges much higher than accessible with fixed-target collisions. For instance each of two electrons (traveling with frames 1 and 3) in a head-on collision traveling in the lab frame (2) at
or equivalently  lightseconds per traveler second, would see the other coming toward them at coordinate velocity  and  lightseconds per traveler second, or . From the target’s view, that is an incredible increase in both energy and momentum per unit mass.
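Here is a hedged numeric sketch of the head-on kinematics (Python, units with c = 1, using an illustrative lab speed of 0.9 for each particle rather than the figures of the worked example above): the relative coordinate velocity saturates below c, while the relative proper velocity, γ₁γ₂(v₁ + v₂), does not:

```python
import math

def gamma(v):
    """Lorentz factor for speed v in units of c."""
    return 1.0 / math.sqrt(1.0 - v ** 2)

# Two particles in a head-on collision, each at v = 0.9 in the lab frame
# (illustrative value, not from the text).
v = 0.9

v_rel = (v + v) / (1.0 + v * v)            # relative coordinate velocity: < 1
w_rel = gamma(v) * gamma(v) * (v + v)      # relative PROPER velocity: g1*g2*(v1+v2)

print(v_rel, w_rel)
assert v_rel < 1.0   # coordinate velocity never exceeds c
assert w_rel > 1.0   # proper velocity happily does

# Consistency check: w_rel must equal gamma(v_rel) * v_rel
assert math.isclose(w_rel, gamma(v_rel) * v_rel)
```

The last assertion ties the two notions together: the proper velocity is just the coordinate velocity weighted by the relative Lorentz factor.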
Other magnitudes and their frame dependence in SR can be read from the following table:
CAUTION: These results don’t mean that the “real” energy is that. Energy is relative and it depends on the frame! The fact that in colliders, seen from the target reference frame, the energy can be greater than the center-of-mass energy is not an accident. It is a consequence of the formalism of special relativity. A similar observation can be made for velocities. Coordinate velocities, IN THE FRAMEWORK OF SPECIAL RELATIVITY, can never exceed the speed of light. As long as SR holds, there is no particle whose COORDINATE velocity can overcome the speed of light. However, we have seen that PROPER velocities are other monsters. They serve as a tool to handle rotations along the temporal axis, i.e., to handle boosts mixing space and time coordinates. Proper (or hyperbolic) velocities CAN be greater than the speed of light. But this does not contradict the special theory of relativity at all, since hyperbolic velocities ARE NOT REAL: they are imaginary quantities and they are not physical. We can only measure momentum and real quantities! Moreover, remember that, in fact, the group or phase velocities we have found before can ALSO be greater than c. So, you must be careful about what you mean by velocity in SR or in any theory. Furthermore, you must distinguish the notion of particle velocity from that of the relative velocity between two inertial frames, since particle velocities (coordinate or proper) always refer to some concrete frame! In summary, beware of people saying that there are superluminal particles in our colliders or astrophysical processes. It is simply not true. Superluminal objects have observable consequences, and they have failed to be observed (the last example was the superluminal neutrino affair of the OPERA collaboration, now in agreement with SR).
Remark (I): From the last table we observe that in SR, the rotation angle is imaginary. Therefore, we are forced to use this gadget of hyperbolic velocity in order to avoid “imaginary velocities”.
Remark (II): Hyperbolic velocities would become imaginary velocities if we used the imaginary formalism of SR, the infamous .
Remark (III): Hyperbolic velocities are not coordinate velocities, so they are not physical at all. They are just a tool to provide the right answers in terms of rapidities, or the hyperbolic angle, whose units are imaginary radians! Hyperbolic velocities are measured in imaginary units of velocity!
Remark (IV): About the imaginary issues you may have now. The spacetime separation formula means that the time t can often be treated mathematically as if it were an imaginary spatial dimension. That is, you can define  so , where  is the square root of -1, and  is a “fourth spatial coordinate”. Of course, it is not a spatial dimension at all; it is only a trick to treat the problem in a clever way. On the other hand, a Lorentz boost by a velocity can likewise be treated as a rotation by an imaginary angle. Consider a normal spatial rotation in which a primed frame is rotated in the -plane clockwise by an angle  about the origin, relative to the unprimed frame. The relation between the coordinates  and  of a point in the two frames is:
Now set  and , with  both real. In other words, take the spatial coordinate to be imaginary, and the rotation angle likewise to be imaginary. Then the rotation formula above becomes
This agrees with the usual Lorentz transformation formula if the boost velocity and boost angle are related by the known formula . We realize that if we identify the imaginary angle with the rapidity, we are back to Special Relativity. Indeed, it is only the rotations involving the time axis that cause confusion, because they are so different from our everyday experience. That is, we experience spatial rotations in daily life, so we are familiar with rotations and their (real) rotation angles. However, a rotation along the time axis, mixing space and time, is a weird creature. It uses imaginary numbers or, if we avoid them, we have to use hyperbolic (pseudo)rotations.
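The imaginary-angle trick can be carried out literally with complex arithmetic. In the sketch below (Python, units with c = 1, sign conventions chosen for definiteness, v = 0.6 an arbitrary sample boost), an ordinary rotation evaluated at the imaginary angle iη reproduces the standard Lorentz boost:

```python
import cmath
import math

v = 0.6               # boost velocity (assumed sample value, c = 1)
eta = math.atanh(v)   # rapidity: the boost "angle"
theta = 1j * eta      # ...treated as an imaginary rotation angle

t, x = 2.0, 1.0       # an arbitrary event
w = 1j * t            # imaginary "fourth spatial coordinate" w = i t

# Ordinary rotation formulas, evaluated at the imaginary angle theta:
x_new = x * cmath.cos(theta) + w * cmath.sin(theta)
w_new = -x * cmath.sin(theta) + w * cmath.cos(theta)
t_new = (w_new / 1j).real

gamma = 1.0 / math.sqrt(1.0 - v ** 2)
# The "rotated" coordinates reproduce the standard Lorentz boost:
assert math.isclose(x_new.real, gamma * (x - v * t))
assert math.isclose(t_new, gamma * (t - v * x))
```

Under the hood this is just cos(iη) = cosh η and sin(iη) = i sinh η, i.e., the hyperbolic pseudo-rotation in complex clothing.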
SUMMARY OF MAIN IDEAS
A) Lorentz factor
B) Proper-velocity or momentum per unit mass.
C) Coordinate velocity .
D) Hyperbolic velocity angle or rapidity.
or in terms of logarithms:
E) Warp speed (just for fun):
LORENTZ TRANSFORMATIONS IN NON-STANDARD FORM
Let me begin this post with an uncommon representation of Lorentz transformations in terms of “uncommon matrices”. A Lorentz transformation can be written symbolically, as we have seen before, as the set of linear transformations leaving invariant
Therefore, the Lorentz transformations are naively . Let  be 3-rowed column matrices, let  represent  matrices, and let  (unless stated otherwise) denote matrix transposition (the interchange of rows and columns of a matrix).
The invariance of implies the following results from the previous definitions:
Then, we can write the matrix for a Lorentz transformation (boost) in the following non-standard manner:
and the inverse transformation will be
Thus, we have , where we also have
Let us define, in addition to this stuff, the reference frames , corresponding to the coordinates  and . Then, the boost matrix can be recast, if the velocity reads , as
Remark: a Lorentz transformation will differ from boosts only by rotations in the general case. That is, with these conventions, the most general Lorentz transformations include both boosts and rotations.
For all , the above transformation is well-defined, but if , we will face transformations containing the reversal of time (the time-reversal operation T is a different thing from matrix transposition; do not confuse the two symbols here. I will denote time reversal by  in order to distinguish them, although in general there is no real danger of confusion). The time reversal can indeed be written as:
In that case, (), after the boost , we have to make the changes and . If these shifts are done, the reference frames and can be easily related
in such a way that
where the rotation matrix is given formally by the next equation:
R must be an orthogonal matrix, i.e., . Then , or . For we have the parity matrix
and it will transform right-handed frames to left-handed frames or . The rotation vector can be defined as well:
so . The rotation acting on 3-rowed matrices:
implies that , and it changes the frame S into . Passing from one frame to another, to , implies that we can define a boost with . In fact,
Remark(I): Without the time reversal, we would get
with and .
Remark (II): . If , then the uniqueness of provides that , i.e., that R is an orthogonal matrix. If R is an orthogonal matrix and a proper Lorentz transformation ( ), then we would get , and thus or , and so, or , with the unimodular vector , i.e., . That would be the case and . Otherwise, if , then would be an arbitrary vector.
ADDITION OF VELOCITIES REVISITED
The second step prior to our treatment of Thomas precession is to review (setting ) the addition of velocities in the special relativistic realm. Suppose a point particle moves with velocity  in the reference frame . With respect to the S-frame (at rest) we will write:
and with we can calculate the ratio :
where we have defined:
Comment: the composition law for 3-velocities in special relativity is both non-linear AND non-associative.
There are two special cases of motion we usually consider in (special) relativity with inertial frames:
1st. The case of parallel motion between frames. In this case , i.e., . Therefore,
This is the usual non-linear rule to add velocities in Special Relativity.
2nd. The case of orthogonal motion between frames, where . It means . Then,
This motion orthogonal to the direction of relative speed has an interesting phenomenology: the inertial motion is slowed down purely by time dilation, since the spatial distances orthogonal to  are equal in both reference frames.
Furthermore, we get also:
Indeed, the condition  implies that  or , and the latter condition is actually forbidden because of our interpretation of  as a relative velocity between different frames. Thus, this last equation shows that Lorentz invariance in Special Relativity does not allow for superluminal motion although, a priori, the formula could also be used for superluminal speeds, since no restriction applies to them beyond those imposed by the principle of relativity.
We are ready to study the Thomas precession and its meaning. Suppose an inertial frame  obtained from another inertial frame  by boosting with velocity . Therefore,  has the relative velocity  given by the addition rule we have seen in the previous section. Moreover, we have:
Then, we get
Here, we have defined:
Remark (I): The matrix L given by
is NOT symmetric as we would expect from a boost. According to our decomposition for the matrix it can be rewritten in the following way
This last equation is called the Thomas precession associated with the tridimensional 3-vectors . We observe that R is a proper-orthogonal matrix, either from the multiplicative property of determinants and the fact that all boosts have determinant one or, equivalently, from the condition  for any orthogonal matrix R, together with the continuous dependence of R on the velocities and the initial condition .
Remark (II): From the definitions of M, and the vectors , we deduce that  is an eigenvector of R with eigenvalue +1, and this gives the axis of rotation. The rotation angle as calculated from  is a complicated expression, and only after some clever manipulations, or with the use of the geometric algebra framework, does it simplify to
In order to understand what this equation means, we have to observe that the components  and  refer to different reference frames; therefore, the scalar product and the cross product must be given good analytic expressions before the geometric interpretation can be accomplished. Moreover, if we want to interpret the cross product as an axis in the reference frame , and correspondingly we want to split , by the definition we deduce that
and thus the Thomas rotation of the inertial frame S has its axis orthogonal to the relative velocity vectors of the reference frame  against S.
On the other hand, if we interpret the above equation as an axis in the reference frame , associated with the split , we would deduce that  implies the following consequence. The reference frame  is obtained by boosting a certain frame S’, itself obtained from a rotation of S by R. Then,  acquires (compared with S or S’) a velocity whose components are  in the inertial frame S’. Reciprocally, the components of the velocity of S or S’ against the frame  are provided, in , by . Therefore, from the Thomas precession formula for R, we observe that  differs from  only by linear combinations of the vectors  and . With all these results we easily derive:
i.e., the axis of the Thomas rotation matrix of  is orthogonal to the relative velocities of the inertial frames S,  against . Finally, to find the rotation matrix, it is enough to restrict the problem to the case where  is small, so that its squares may be neglected. In this simple case, R becomes:
and where the rotation angle is given by
In order to understand the Physics behind the Thomas precession, we will consider a single experiment. Imagine an inertial frame S in accelerated motion with respect to another inertial frame I. The spatial axes of S remain parallel at all times, in the sense that the instantaneous reference frames coinciding with S at times  are related by a pure boost in the limit . This may be arranged if we orient S with the aid of a very fast spinning, torque-free gyroscope. Then, from the inertial frame I, S seems rotated at each instant of time, and there is a continuous rotation of S against I, since the velocity of S varies and changes continuously. This gyroscopic rotation of S relative to I IS the Thomas precession. We can determine the angular velocity of this motion in a straightforward manner. During the small interval of time  measured from I, the instantaneous velocity of S changes by a certain quantity , measured from I. In that case,
for the rotation vector during a time interval . Thus, the angular velocity for the Thomas precession will be given by:
or reintroducing the speed of light we get
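Assuming the standard textbook form of this result, ω_T = (γ²/(γ + 1)) (a × v)/c², here is a small numeric sketch (Python, with arbitrary illustrative values of v and a, not taken from the text) showing, in particular, that the precession axis is orthogonal to the velocity:

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

c = 1.0
v = (0.5, 0.0, 0.0)   # instantaneous velocity (assumed illustrative value)
a = (0.0, 0.01, 0.0)  # instantaneous acceleration (assumed illustrative value)

g = 1.0 / math.sqrt(1.0 - (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) / c ** 2)

# Thomas precession angular velocity: w_T = (g**2 / (g + 1)) * (a x v) / c**2
w_T = tuple((g ** 2 / (g + 1)) * comp / c ** 2 for comp in cross(a, v))
print(w_T)

# The precession axis is orthogonal to the velocity (and to the acceleration):
assert math.isclose(sum(wi * vi for wi, vi in zip(w_T, v)), 0.0, abs_tol=1e-15)
# In the low-speed limit g -> 1 and w_T -> (a x v) / (2 c**2), the familiar factor 1/2.
```

Only the component of the acceleration transverse to the velocity contributes, since a × v kills the longitudinal part.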
Remark (I): The special relativistic effect given by the Thomas precession was used by Thomas himself to remove a discrepancy between the non-relativistic theory of the spinning electron and the experimental value of the fine structure. His observation was, in fact, that the gyromagnetic ratio of the electron calculated from the anomalous Zeeman effect led to a wrong value of the fine structure. The Thomas precession introduces a correction to the equation of motion of an electron in an external electromagnetic field, and such a correction induces a correction of the spin-orbit coupling, explaining the correct value of the fine structure.
Remark (II): In the framework of the relativistic quantum theory of the electron, Dirac realized that the effect of Thomas precession was automatically included!
Remark (III): In Thomas’ paper, we find these interesting words:
“(…) It seems that Abraham (1903) was the first to consider in any detail an electron with an axis. Many have since then considered spinning electrons, ring electrons, and the like. Compton (1921) in particular suggested a quantized spin for the electron. It remained for Uhlenbeck and Goudsmit (1925) to show how this idea can be used to explain the anomalous Zeeman effect. The assumptions they had to make seemed to lead to optical and relativity doublet separations twice as large as those we observe. The purpose of the following paper, which contains the results mentioned in my recent letter to Nature (1926), is to investigate the kinematics of an electron with an axis on the basis of the restricted theory of relativity. The main fact used is that the combination of two Lorentz transformations without rotation in general is not of the same form (…)”.
From the historical viewpoint, it should also be remarked that the precession effect was known by the end of 1912 to the mathematician E. Borel (C. R. Acad. Sci., 156, 215 (1913)). It was described by him (Borel, 1914), as well as by L. Silberstein (1914), in textbooks already in 1914. It seems that the effect was even known to A. Sommerfeld in 1909 and, before him, perhaps even to H. Poincaré. The importance of Thomas’ work and papers on this subject was thus not only the rediscovery but the relevant application to a pressing problem of that time, namely the structure of atomic spectra and the fine structure of the electron!
Remark (IV): Not every Lorentz transformation can be written as the product of two boosts due to the Thomas precession!
THE LORENTZ GROUP AS A QUASIDIRECT PRODUCT: QUASIGROUPS, LOOPS AND GYROGROUPS
Even though we have not studied group theory in this blog, I feel the need to explain some group theory stuff related to the Thomas precession here.
The kinematical differences between the Galilean and Einsteinian relativity theories are observed at many levels. The essential differences become apparent already at the level of the homogeneous groups without reversals (inverses). Let me first consider the Galileo group. It is generated by space rotations and galilean boosts in any number and order. Using the notation we have developed in this post, we could write it in this way:
The following relationships are deduced:
In the case of the Lorentz group, these equations are “generalized” into
where  is the Thomas precession and the circle denotes the nonlinear relativistic velocity addition. Be aware that the domain of velocities in special relativity is , in units with c set to unity.
Both groups (Galileo and Lorentz) contain as a subgroup the group of all spatial rotations . The sets of galilean and lorentzian boosts  and  are invariant under conjugation by , since
are boosts as well. In the case of the Galileo group, the set of (galilean) boost forms an (abelian) subgroup and then, it provides an invariant group. We can calculate the factor group with respect to it and we will obtain an isomorphic group to the subgroup of space rotations. Using the group law for the Galileo group:
with and . As a consequence, the homogenous Galileo group (without reversals) is called a semidirect product of the rotation group with the Abelian group of all boosts given by .
The case of Lorentz group is more complicated/complex. The reason is the Thomas precession. Indeed, the set of boost does NOT form a subgroup of the Lorentz group! We can define a product in this group:
but, in the contrary to the result we got with the Galileo group, this condition does NOT define a group structure. In fact, mathematicians call objects with this property groupoids. The domain of velocities of the this lorentzian grupoid becomes a groupoid under the multiplication . It has dramatic consequences. In particular, the associative does not hold for this multiplication and this groupoid structure! Anyway, a weaker form of it is true, involving the Thomas precession/rotation formula:
In an analogous way, the multiplication is not commutative in general either, but it satisfies a weaker form of commutativity. While general groupoids require us to distinguish between right and left unit elements (if any), we have indeed a "two-sided" unit element (zero velocity) for the velocity groupoid. In the same manner, while in general groupoids right and left inverses may differ (if any), in the case of the Lorentz group the groupoid associated with the Thomas precession has a unique two-sided inverse for any velocity relative to the groupoid multiplication law. It is NON-trivial (due to the non-associativeness), albeit true, that the equation given by
the groupoid multiplication may be solved uniquely for one unknown and, provided we plug the solution back, it may be solved uniquely for the other. A groupoid satisfying this property (i.e., a groupoid that allows such uniqueness in the solutions of its equations) is called a quasigroup.
In conclusion, we can say that the Lorentz group, in sharp contrast to the Galileo group, is in no way a semidirect product; it is what mathematicians and physicists call a simple group, i.e., a noncommutative group having no nontrivial invariant subgroup! This is due to the fact that the multiplication rule of the Lorentz group without reversals makes it, in the sense of our previous definitions, the quasidirect product of the rotation group (as a subgroup of the automorphism group of the velocity groupoid) with the so-called "weakly associative groupoid of velocities". Here, a weakly associative(-commutative) groupoid means the following: a groupoid with a left-sided unit and left-sided inverses with the following properties:
1. Weak associativeness:
2. Loop property (from Thomas precession formula):
and where the automorphism group of the velocity groupoid is defined by the following equations
Definition (Automorphism group of the velocity groupoid):
Note: an associative groupoid is called a semigroup, and a semigroup with a two-sided unit element is called a monoid.
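Before moving on to the history, we can make this failure of commutativity and associativity completely concrete. Here is a small numerical experiment (my own sketch in Python/NumPy, not part of any of the cited papers) using the standard Einstein velocity-addition law, in units where $c=1$:

```python
import numpy as np

def boost_add(u, v):
    """Einstein velocity addition u (+) v for 3-velocities, in units of c.

    v is the velocity of an object measured in a frame that itself moves
    with velocity u; the result is the object's velocity in the lab frame.
    """
    u, v = np.asarray(u, float), np.asarray(v, float)
    gamma = 1.0 / np.sqrt(1.0 - np.dot(u, u))
    return (u + v / gamma + (gamma / (1.0 + gamma)) * np.dot(u, v) * u) \
        / (1.0 + np.dot(u, v))

u, v, w = np.array([0.6, 0, 0]), np.array([0, 0.6, 0]), np.array([0.6, 0, 0])

# Not commutative: same speed, different directions.
print(boost_add(u, v))   # -> (0.6, 0.48, 0)
print(boost_add(v, u))   # -> (0.48, 0.6, 0)

# Not associative either: the two groupings give different velocities.
print(boost_add(boost_add(u, v), w))
print(boost_add(u, boost_add(v, w)))
```

The two groupings differ precisely by a Thomas gyration acting on the third velocity, which is the loop property at work; note also that every result stays strictly below the speed of light.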
This algebraic structure hidden in the Lorentz group has been rediscovered several times along the history of mathematical physics. A groupoid satisfying the loop property has been named in other ways. For instance, in 1988, A. A. Ungar derived the above composition laws and the automorphism group of the Thomas precession R. Independently, A. Nesterov and coworkers in the Soviet Union had studied the same problem and quasigroup structure since 1986. And we can track this structure even further back. 20 years before the Ungar "rediscovery", H. Karzel had postulated a version of the same abstract object, and it was integrated into a richer one with two composition laws. He called it a "near-domain", where the automorphisms R (the Thomas precessions) were to be realized by the (distributive) left multiplication with suitable elements of the near-domain (the reference is Abh. Math. Sem. Univ. Hamburg, 1968).
However, Ungar himself developed a more systematic treatment and description of the Thomas precession "groupoid" that is behind all this weird non-associative stuff in the Lorentz group in 3+1 dimensions. According to his new approach and terminology, the structure is called a "gyrocommutative gyrogroup", and it includes the Thomas precession as the "Thomas gyration" in this framework. If you want to learn more about gyrogroups and gyrovector spaces, read this article
Some other authors, like Wefelscheid and coworkers, called these gyrogroups K-loops. Even more, there are two extra sources of this nontrivial mathematical structure.
Firstly, in Japan, M. Kikkawa had studied certain loops with a compatible differentiable structure, called "homogeneous symmetric Lie groups" (Hiroshima Math. J. 5, 141 (1975)). Even though he did not discuss any concrete example, it is clear from his definitions that it was the same structure Karzel found. Being romantic, we can observe a certain justice in calling gyrogroups K-loops (since Kikkawa and Karzel discovered them first!). The second source can be tracked back in time, since the same ideas were already known by L. Sabinin et al. circa 1972 (Sov. Math. Dokl. 13, 970 (1972)). Their relation to symmetric homogeneous spaces of noncompact type was discussed some years ago by W. Krammer and H. K. Urbantke, e.g., in Res. Math. 33, 310 (1998).
Finally, a purely algebraic loop-theory approach (with motivations far away from geometry or physics) was introduced by D. A. Robinson in 1966. In 1995, A. Kreuzer showed that it was indeed identical to K-loops, again adding some extra nomenclature (Math. Proc. Camb. Philos. Soc. 123, 53 (1998)).
THOMAS PRECESSION: EASY DEDUCTION
We have seen that the composition of 2 Lorentz boosts, generally with 2 non-collinear velocities, results in a Lorentz transformation that IS NOT a pure boost but the composition of a single boost and a single spatial rotation. Indeed, this phenomenon is also called the Wigner-Thomas rotation. As a final consequence, any body moving on a curvilinear trajectory experiences a rotational precession, first noted by Thomas in the relativistic theory of the spinning electron.
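We can check this claim numerically before deriving anything. The following sketch (mine, in Python/NumPy, with $c=1$) composes two perpendicular boosts of $0.6c$, verifies that the result is not a pure boost (a pure boost matrix is symmetric), and extracts the leftover Wigner-Thomas rotation:

```python
import numpy as np

def boost(bx, by):
    """Pure Lorentz boost acting on (ct, x, y) for a velocity (bx, by) in units of c."""
    b2 = bx**2 + by**2
    g = 1.0 / np.sqrt(1.0 - b2)
    return np.array([
        [g,      g * bx,                     g * by],
        [g * bx, 1 + (g - 1) * bx**2 / b2,   (g - 1) * bx * by / b2],
        [g * by, (g - 1) * bx * by / b2,     1 + (g - 1) * by**2 / b2],
    ])

# Compose two perpendicular boosts of 0.6c each.
L = boost(0.6, 0.0) @ boost(0.0, 0.6)

# A pure boost matrix is symmetric; the composition is not, so it is not a pure boost.
print("is a pure boost?", np.allclose(L, L.T))    # -> False

# Peel off the pure boost with the composite velocity; a spatial rotation is left over.
bx, by = L[1, 0] / L[0, 0], L[2, 0] / L[0, 0]     # composite velocity: (0.6, 0.48)
R = np.linalg.inv(boost(bx, by)) @ L              # the Wigner-Thomas rotation

theta = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
print("Wigner rotation angle:", round(abs(theta), 2), "degrees")   # about 12.68
```

A pure boost would have given R equal to the identity; instead we find a genuine rotation of about 12.7 degrees in the plane of the two velocities.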
In this final section, I am going to review the really simple deduction of the Thomas precession formula given in the paper http://arxiv.org/abs/1211.1854
Imagine 3 different inertial observers, Anna, Bob and Charles, and their respective inertial frames A, B, and C attached to them. We choose A as a non-rotated frame with respect to B, and B as a non-rotated reference frame w.r.t. C. However, surprisingly, C is going to be rotated w.r.t. A, and it is inevitable! We are going to understand this better. Let Bob embrace Charles and let them move together with constant velocity w.r.t. Anna. At some point, Charles decides to run away from Bob with a tiny velocity w.r.t. Bob. Then, Bob is moving with some relative velocity w.r.t. C, and Anna is moving with some relative velocity w.r.t. B. We can show these events with the following diagram:
Now, we can write Charles' velocity in Anna's frame as the relativistic sum of those velocities. Since the frame C is rotated with respect to the A frame, his velocity in the C frame will be calculated step by step as follows. Firstly, we remark that
Secondly, the angle of an infinitesimal rotation is given by:
The precession rate in the A frame will be provided by the general nonlinear velocity composition rule in SR. If the relative motion is parallel to the x-axis with some velocity, we do know that
where the unprimed and primed quantities are the velocities of some object in the rest frame and the moving frame, respectively. For an arbitrary non-collinear, non-orthogonal, i.e., non-parallel velocity, we obtain the transformations
where the unprimed and primed frames are mutually non-rotated with respect to each other. Using this last equation, (2), we can easily describe the transition from frame A to frame B. It involves the substitutions:
Keeping only the first-order terms in the small velocity, we get the following expansion from eq. (2):
Using eq. (2) again to make the transition from the B frame to the C frame, i.e., making the substitutions:
and dropping the higher-order differentials, we obtain the next formula:
The final step is easy: we plug eq. (3) into eq. (4) and the resulting expression into eq. (1). Then, we divide by the time differential in the final formula to obtain the celebrated Thomas precession formula:
It can easily be shown that this formula is the same as the one given previously above, rewriting it in terms of the velocities and performing some elementary algebraic manipulations.
Aren’t you fascinated by how these wonderful mathematical structures emerge from the physical world? I can say it: Fascinating is not enough for my surprised mind!
Today we are going to study a relatively new effect (new experimentally speaking, because it was first detected when I was an undergraduate student, in 2000), though it is not so new from the theoretical side (theoretically, it was predicted in 1962). This effect is closely related to the Cherenkov effect. It is named the Askaryan effect or Askaryan radiation; see below, after the brief recapitulation of the Cherenkov effect from the last post that we are going to do in the next lines.
We do know that charged particles moving faster than the speed of light in a medium emit Cherenkov radiation. How can a particle move faster than light? The speed of a charged particle can exceed the phase speed of light in the medium. That is all. About some speculations on so-called tachyonic gamma-ray emissions, let me say that the existence of superluminal energy transfer has not been established so far, and one may ask why. There are two options:
1) The simplest solution is that superluminal quanta just do not exist, the vacuum speed of light being the definitive upper bound.
2) The second solution is that the interaction of superluminal radiation with matter is very small, the quotient of the tachyonic and electric fine-structure constants being tiny. Therefore superluminal quanta and their substratum are hard to detect.
A related and very interesting question can be asked now, connected to the Cherenkov radiation we have studied here. What about neutral particles? Is there some analogue of Cherenkov radiation valid for chargeless or neutral particles? Because neutrinos are electrically neutral, conventional Cherenkov radiation of superluminal neutrinos does not arise, or it is otherwise weakened. However, neutrinos do carry electroweak charge and may emit certain Cherenkov-like radiation via weak interactions when traveling at superluminal speeds. The Askaryan effect/radiation is this Cherenkov-like effect for neutrinos, and we are going to enlighten your knowledge of this effect with this entry.
We are being bombarded by cosmic rays, and even more, we are being bombarded by neutrinos. Indeed, we expect that ultra-high-energy (UHE) neutrinos or extreme ultra-high-energy (EHE) neutrinos will hit us too. When neutrinos interact with matter, they create showers, specifically in dense media. Thus, we expect that the electrons and positrons of these showers travel faster than the speed of light in those media (or even in the air), and they should emit (coherent) Cherenkov-like radiation.
Who was Gurgen Askaryan?
Let me quote what Wikipedia says about him: Gurgen Askaryan (December 14, 1928 - 1997) was a prominent Soviet (Armenian) physicist, famous for his discovery of the self-focusing of light, pioneering studies of light-matter interactions, and the discovery and investigation of the interaction of high-energy particles with condensed matter. He published more than 200 papers on different topics in high-energy physics.
Other interesting ideas by Askaryan: the bubble chamber (he conceived the idea independently of Glaser, but he did not publish it, so he did not win the Nobel Prize), laser self-focusing (one of the main contributions of Askaryan to non-linear optics), and the proposal of acoustic UHECR detection. Askaryan was the first to note that the outer few metres of the Moon's surface, known as the regolith, would be a sufficiently transparent medium for detecting microwaves from the charge excess in particle showers. The radio transparency of the regolith has since been confirmed by the Apollo missions.
If you want to learn more about Askaryan ideas and his biography, you can read them here: http://en.wikipedia.org/wiki/Gurgen_Askaryan
What is the Askaryan effect?
The next figure is from the Askaryan radiation detected by the ANITA experiment:
The Askaryan effect is the phenomenon whereby a particle traveling faster than the phase velocity of light in a dense dielectric medium (such as salt, ice or the lunar regolith) produces a shower of secondary charged particles which contains a charge anisotropy, and thus emits a cone of coherent radiation in the radio or microwave part of the electromagnetic spectrum. It is similar to, and more precisely it is based on, the Cherenkov effect.
High-energy processes such as Compton, Bhabha and Møller scattering, along with positron annihilation, rapidly lead to about a 20%-30% negative charge asymmetry in the electron-photon part of a cascade. For instance, such cascades can be initiated by UHE (higher than, e.g., 100 PeV) neutrinos.
In 1962, Askaryan first hypothesized this effect and suggested that it should lead to strong coherent radio and microwave Cherenkov emission for showers propagating within the dielectric. Since the dimensions of the clump of charged particles are small compared to the wavelength of the radio waves, the shower radiates coherent radio Cherenkov radiation whose power is proportional to the square of the net charge in the shower. The net charge in the shower is proportional to the primary energy, so the radiated power scales quadratically with the shower energy.
Indeed, these coherent radio emissions originate from the Cherenkov effect. We do know the Cherenkov condition
for a charged particle undergoing Cherenkov radiation (CR) in a dense (refractive) medium. Every charge emits a field, and the radiated power is proportional to the squared modulus of the summed fields. In a dense medium, we have two different and interesting experimental cases:
A) The optical case, with wavelengths much smaller than the size of the charge clump. Then, we expect random phases, and the total power only grows like the number of emitters N.
B) The microwave/radio case, with wavelengths much larger than the size of the charge clump. In this situation, we expect coherent radiation/waves, with a total power growing like N².
We can exploit this effect in large natural volumes transparent to radio: dry pure ice, salt formations, the lunar regolith, … The peaks of this coherent radiation for sand and for ice are produced at different characteristic frequencies.
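The N versus N² behaviour of the two cases above is easy to simulate with random phasors (a toy sketch of mine, not a shower simulation):

```python
import numpy as np

rng = np.random.default_rng(2013)
N = 10_000   # number of radiating charges in the excess "clump"

# Total radiated power goes as |sum of the unit field phasors|^2.
optical = abs(np.exp(1j * rng.uniform(0, 2 * np.pi, N)).sum()) ** 2  # random phases
radio = abs(np.exp(1j * np.zeros(N)).sum()) ** 2                     # aligned phases

print(f"optical (incoherent) power ~ N:   {optical:.3g}")
print(f"radio   (coherent)   power = N^2: {radio:.3g}")
```

With aligned phases the power is exactly N², four orders of magnitude above the incoherent case here: this is why the radio signal of a compact charge excess is detectable at all.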
The first experimental confirmations of the Askaryan effect came from the following two experiments:
1) 2000, Saltzberg et al., SLAC. They used silica sand as target. The paper is this one: http://arxiv.org/abs/hep-ex/0011001
2) 2002, Gorham et al., SLAC. They used a synthetic salt target. The paper appeared in this place: http://arxiv.org/abs/hep-ex/0108027
Indeed, already in 1965, Askaryan himself proposed ice and salt as possible target media. The reasons are easy to understand:
1st. They provide high densities, and this means a higher probability of neutrino interaction.
2nd. They have a high refractive index. Therefore, the Cherenkov emission becomes important.
3rd. Salt and ice are radio-transparent and, of course, they are available in large volumes throughout the world.
The advantages of radio detection of UHE neutrinos provided by the Askaryan effect are very interesting:
1) Low attenuation: clear signals from large detection volumes.
2) We can observe distant and inclined events.
3) It has a high duty cycle: good statistics in less time.
4) It has a relatively low cost: large areas covered.
5) It is available for neutrinos and/or any other chargeless/neutral particle!
However, Askaryan-effect detection has its problems too: radio interference, a correlation with the shower parameters that is still unclear, and the fact that it is limited to particles with very large energies.
Askaryan effect = coherent Cherenkov radiation from a charge excess induced by (likely) neutral/chargeless particles, especially highly energetic neutrinos, passing through a dense medium.
Why does the Askaryan effect matter?
It matters since it allows for the detection of UHE neutrinos, and it is "universal" for chargeless/neutral particles like neutrinos, just in the same way that the Cherenkov effect is universal for charged particles. And tracking UHE neutrinos is important because they point back towards their sources, and it is suspected they can help us solve the riddles of the origin and composition of cosmic rays, the acceleration mechanism of cosmic radiation, the nuclear interactions of astrophysical objects, and the highest-energy emissions of the Universe we can observe at the current time.
Is it real? Has it been detected? Yes, after 38 years, it has been detected. This effect was first demonstrated in sand (2000), rock salt (2004) and ice (2006), all in laboratory experiments at SLAC, and later it has been checked in several independent experiments around the world. Indeed, I remember having heard about this effect during my darker years as an undergraduate student. Fortunately or not, I forgot about it till now. Despite its beauty!
Moreover, it has extra applications to neutrino detection using the Moon as target: GLUE (with the Goldstone radio telescopes as detectors), NuMoon (the Westerbork array; LOFAR), RESUN (EVLA), or the LUNASKA project. Using ice as target, there have been other experiments checking the reality of this effect: FORTE (a satellite observing the Greenland ice sheet), RICE (co-deployed on the AMANDA strings, viewing the Antarctic ice), and the celebrated ANITA experiment (balloon-borne over Antarctica, viewing the Antarctic ice).
Furthermore, some experiments have even used the Moon as a neutrino detector via the Askaryan radiation (the analogue for neutral particles of the Cherenkov effect, don't forget this point!), and it is likely some others will be built in the near future.
Askaryan effect and the mysterious cosmic rays.
Askaryan radiation is important because it is one of the portals to the observation of the UHE neutrinos coming from cosmic rays. The mysteries of cosmic rays continue today. We have indeed detected extremely energetic cosmic rays beyond the GZK scale (about $5\times 10^{19}$ eV). Their origin is yet unsolved. We hope that, by tracking neutrinos, we will discover the sources of those rays and their nature/composition. We don't understand or know any mechanism able to accelerate particles up to those incredible energies. At the current time, IceCube has not detected UHE neutrinos, and it is a serious issue for current theories and models. It is a challenge if we don't observe as many UHE neutrinos as the Standard Model would predict. Would it mean that cosmic rays are exclusively composed of heavy nuclei or protons? Are we mis-modelling the spectrum of the sources and the nuclear models of stars, as happened before neutrino oscillations were detected at Kamiokande and Super-Kamiokande (e.g., with SN1987A)? Is there some kind of new Physics living at those scales and avoiding the GZK limit we would naively expect from our current theories?
The Cherenkov effect/Cherenkov radiation, sometimes also called Vavilov-Cherenkov radiation, is our topic here in this post.
In 1934, P. A. Cherenkov was a postgraduate student of S. I. Vavilov. He was investigating the luminescence of uranyl salts under the incidence of gamma rays from radium, and he discovered a new type of luminescence which could not be explained by the ordinary theory of fluorescence. It is well known that fluorescence arises as the result of transitions between excited states of atoms or molecules. The average duration of fluorescent emissions is about $10^{-8}$ s, and the transition probability is altered by the addition of "quenching agents", by some purification process of the material, by changes in the ambient temperature, etc. It was shown that none of these methods was able to quench totally the new radiation discovered by Cherenkov. A subsequent investigation of this new radiation (named Cherenkov radiation by other scientists after Cherenkov's discovery) revealed some interesting features:
1st. The polarization of the luminescence changes sharply when we apply a magnetic field. This luminescence is then caused by charged particles rather than by photons, the gamma-ray quanta! Cherenkov's experiment showed that these particles could be electrons produced by the interaction of gamma-photons with the medium, due to the photoelectric effect or the Compton effect itself.
2nd. The intensity of the Cherenkov radiation is independent of the charge Z of the medium. Therefore, it cannot be of radiative (bremsstrahlung) origin.
3rd. The radiation is observed at a certain angle (specifically, forming a cone) with respect to the direction of motion of the charged particles.
The Cherenkov radiation was explained in 1937 by Frank and Tamm on the foundations of classical electrodynamics. For the discovery and explanation of the Cherenkov effect, Cherenkov, Frank and Tamm were awarded the Nobel Prize in 1958. We will discuss the Frank-Tamm formula later, but let me first explain how classical electrodynamics handles the Vavilov-Cherenkov radiation.
The main conclusion that Frank and Tamm obtained comes from the following observation. The statement of classical electrodynamics concerning the impossibility of energy loss by radiation for a charged particle moving uniformly along a straight line in vacuum is no longer valid if we go over from the vacuum to a medium with a certain refractive index $n$. They went further with the aid of an easy argument based on the laws of conservation of energy and momentum, a principle that rests at the core of Physics, as everybody knows. Imagine a charged particle moving uniformly in a straight line, and suppose it can lose an energy $\Delta E$ and a momentum $\Delta p$ through radiation. In that case, the conservation laws require
$$\Delta E = v\,\Delta p\cos\theta,$$
where $\theta$ is the angle between the particle velocity and the direction of the emitted radiation. This equation cannot be satisfied in vacuum, but it MAY be valid in a medium with a refractive index greater than one, $n>1$. We will simplify our discussion by taking the refractive index to be constant (similar conclusions are obtained if the refractive index is some function of the frequency).
On the other hand, the total energy E of a particle having non-null mass and moving freely in vacuum with momentum p and velocity v is
$$E=\sqrt{m^2c^4+c^2p^2}.$$
Moreover, electromagnetic radiation in vacuum obeys the relativistic relationship
$$E_{\rm rad}=c\,p_{\rm rad}.$$
From this equation, we easily get that the radiated energy and momentum satisfy $\Delta E_{\rm rad}=c\,\Delta p_{\rm rad}$. Since the particle velocity is
$$v=\frac{dE}{dp}=\frac{c^2p}{E}<c,$$
we obtain that the particle can only give up energy at the rate $\Delta E=v\,\Delta p<c\,\Delta p$.
In conclusion: the laws of conservation of energy and momentum prevent a charged particle moving with rectilinear and uniform motion in vacuum from giving away its energy and momentum in the form of electromagnetic radiation! The electromagnetic radiation cannot accept the entire momentum given away by the charged particle.
Anyway, this restriction and constraint is removed and given up when the particle moves in a medium with a refractive index $n>1$. In this case, the velocity of light in the medium is
$$c_n=\frac{c}{n},$$
and the velocity v of the particle may not only become equal to the velocity of light in the medium, but even exceed it when the following condition is satisfied:
$$v>\frac{c}{n}.$$
It is obvious that, when the condition
$$v=\frac{c}{n}$$
is satisfied, the electromagnetic radiation is emitted strictly in the direction of motion of the particle, i.e., at the angle $\theta=0$. If $v>c/n$, the conservation laws are verified along some direction $\theta\neq 0$ such that $v\cos\theta=c/n$, where $v\cos\theta$
is the projection of the particle velocity v on the observation direction. Then, in a medium with $n>1$, the conservation laws of energy and momentum allow a charged particle in rectilinear and uniform motion to lose fractions of energy and momentum, $\Delta E$ and $\Delta p$, whenever that energy and momentum are carried away by electromagnetic radiation propagating in the medium at an angle/cone given by
$$\cos\theta=\frac{c}{nv}=\frac{1}{\beta n}$$
with respect to the direction of the particle motion.
These arguments, based on the conservation laws of momenergy, do not provide any idea about the real mechanism of the energy and momentum losses during the Cherenkov radiation. However, this mechanism must be associated with processes happening in the medium, since the losses cannot (apparently) occur in vacuum under normal circumstances (we will also discuss later the vacuum Cherenkov effect, and what it means in terms of Physics and symmetry breaking).
We have learned that Cherenkov radiation is of the same nature as certain other processes we do know and observe in various media when bodies move through them at a velocity exceeding that of wave propagation. This is a remarkable result! Have you ever seen a V-shaped wave in the wake of a ship? Have you ever seen the conical wave caused by the supersonic boom of a plane or missile? In these examples, the wave field of the superfast object is found to be strongly perturbed in comparison with the field of a "slow" object (in terms of the "velocity of sound" of the medium), and it begins to decelerate the object!
Question: What is, then, the mechanism by which the superfast motion of a charged particle in a medium with a refractive index produces the Cherenkov effect/radiation?
Answer: The mechanism under the Cherenkov effect/radiation is the coherent emission by the dipoles formed due to the polarization of the medium atoms by the charged moving particle!
The idea is as follows. Dipoles are formed under the action of the electric field of the particle, which displaces the electrons of the surrounding atoms relative to their nuclei. The return of the dipoles to the normal state (after the particle has left the given region) is accompanied by the emission of an electromagnetic signal. If a particle moves slowly, the resulting polarization will be distributed symmetrically with respect to the particle position, since the electric field of the particle manages to polarize all the atoms in the near neighbourhood, including those lying ahead in its path. In that case, the resultant field of all the dipoles away from the particle is equal to zero and their radiations cancel one another.
However, if the particle moves in the medium with a velocity exceeding the velocity of propagation of the electromagnetic field in that medium, i.e., whenever $v>c/n$, a delayed polarization of the medium is observed, and consequently the resulting dipoles will be preferentially oriented along the direction of motion of the particle. See the next figure:
It is evident that, if this occurs, there must be a direction along which coherent radiation from the dipoles emerges, since the waves emitted by the dipoles at different points along the path of the particle may turn out to be in phase. This direction can be easily found experimentally, and it can be easily obtained theoretically too. Let us imagine that a charged particle moves from the left to the right with velocity v in a medium with refractive index $n>1$. We can apply the Huygens principle to build the wave front of the emitted radiation. If, at the instant $t=0$, the particle is at the point $x=0$, the wave front is the surface enveloping the spherical waves emitted by the particle along its path from the origin to the point $x=vt$. The radius of the wave emitted at the origin is, at the instant t, equal to $(c/n)t$, while the wave radius at the point $x=vt$ is equal to zero. At any intermediate point $x'$, the wave radius at the instant t will be $(c/n)(t-x'/v)$: the radius decreases linearly with increasing $x'$. Thus, the enveloping surface is a cone with half-angle $\varphi$, where the angle satisfies in addition $\sin\varphi=\dfrac{c}{nv}$.
The normal to the enveloping surface fixes the direction of propagation of the Cherenkov radiation. The angle $\theta=90^\circ-\varphi$ between this normal and the direction of motion of the particle is defined by the condition
$$\cos\theta=\frac{c}{nv}=\frac{1}{\beta n}.$$
This is the result we anticipated before. Indeed, it is completely general, and Quantum Mechanics introduces only a slight and subtle correction to this classical result. From this last equation, we observe that the Cherenkov radiation propagates along the generators of a cone whose axis coincides with the direction of motion of the particle and whose cone angle is equal to $\theta$. This radiation can be registered on a colour film placed perpendicularly to the direction of motion of the particle. Radiation flowing from a radiator of this type leaves a blue ring on the photographic film. These blue rings are the archetypal fingerprints of the Vavilov-Cherenkov radiation!
The sharp directivity of the Cherenkov radiation makes it possible to determine the particle velocity from the value of the Cherenkov angle $\theta$. From the Cherenkov formula above, it follows that the range of measurable velocities is
$$\frac{1}{n}\leq\beta\leq 1.$$
For $\beta=1/n$, the radiation is observed at the angle $\theta=0$, while in the extreme ultrarelativistic limit, $\beta\to 1$, the angle reaches its maximum value
$$\theta_{\max}=\arccos\left(\frac{1}{n}\right).$$
For instance, in the case of water, $n=1.33$ and $1/n\approx 0.75$. Therefore, the Cherenkov radiation is observed in water whenever $v>0.75c$. For electrons as the charged particles passing through the water, this condition is satisfied if their kinetic energy exceeds about $0.26$ MeV.
As a consequence of this, the Cherenkov effect should be observed in water even for low-energy electrons (for instance, in the case of electrons produced by beta decay, Compton electrons, or photoelectrons resulting from the interaction between water and gamma rays from radioactive products, the above energy can easily be reached and surpassed!). The maximum angle at which the Cherenkov effect can be observed in water can be calculated from the condition previously seen:
This angle (for water) turns out to be equal to about $41^\circ$. In agreement with the so-called Frank-Tamm formula (please, see below what that formula is and means), the number of photons in the frequency interval $(\omega,\omega+d\omega)$ emitted by some particle with charge $Ze$ moving with velocity v in a medium with refractive index n is provided by the next equation:
This formula has some striking features:
1st. The spectrum is identical for particles with the same charge and velocity, i.e., the spectrum is exactly the same irrespective of the nature of the particle. For instance, it could be produced both by protons, electrons, pions, muons or their antiparticles!
2nd. As Z increases, the number of emitted photons increases as $Z^2$.
3rd. The photon yield increases with $\beta$, the particle velocity, from zero (at $\beta=1/n$) to its maximum value (at $\beta=1$).
4th. The number of photons per unit frequency interval is approximately independent of the frequency: we observe that $dN\propto d\omega$.
5th. As the spectrum is uniform in frequency, and each photon carries an energy $\hbar\omega$, the main energy of radiation is concentrated in the extreme short-wave region of the spectrum, i.e., $dE\propto\omega\,d\omega$.
And then, this feature explains the bluish-violet-like colour of the Cherenkov radiation!
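Going back to the water numbers discussed above, here is a quick numerical cross-check (a small sketch of mine, taking $n=1.33$ and the electron rest energy $0.511$ MeV):

```python
import math

M_E = 0.511      # electron rest energy, MeV
N_WATER = 1.33   # refractive index of water in the optical band

# Threshold: Cherenkov light requires v > c/n, i.e. beta > 1/n.
beta_th = 1.0 / N_WATER
gamma_th = 1.0 / math.sqrt(1.0 - beta_th**2)
kinetic_th = (gamma_th - 1.0) * M_E              # minimum electron kinetic energy

# Maximum Cherenkov angle, reached in the beta -> 1 limit: cos(theta_max) = 1/n.
theta_max = math.degrees(math.acos(1.0 / N_WATER))

print(f"electron threshold in water: {kinetic_th:.3f} MeV")  # ~0.26 MeV
print(f"maximum Cherenkov angle:     {theta_max:.1f} deg")   # ~41 deg
```

Both outputs match the values quoted in the text: a kinetic-energy threshold of about a quarter of an MeV, easily surpassed by beta-decay and Compton electrons, and a maximum cone angle of about 41 degrees.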
Indeed, this feature also indicates the necessity of choosing, for practical applications, materials that are "transparent" up to the highest frequencies (even the ultraviolet region). As a rule, it is known that $n<1$ in the X-ray region, and hence the Cherenkov condition cannot be satisfied there! However, it was also shown by clever experimentalists that in some narrow regions of the X-ray spectrum the refractive index is $n>1$ (the refractive index depends on the frequency in any reasonable material: practical Cherenkov materials are, thus, dispersive!), and the Cherenkov radiation is effectively observed in apparently forbidden regions.
The Cherenkov effect is currently widely used in diverse applications. For instance, it is useful to determine the velocity of fast charged particles (e.g., neutrino detectors obviously cannot detect neutrinos directly, but they can detect muons and other secondary particles produced in the interaction with some polarizable medium, even when they are produced by (electro)weak interactions like those involving chargeless neutrinos). The selection of the medium for generating the Cherenkov radiation depends on the range of velocities over which measurements have to be made with the aid of such a "Cherenkov counter". Cherenkov detectors/counters are filled with liquids or gases, and they are found, e.g., in Kamiokande, Super-Kamiokande and many other neutrino detectors and "telescopes". It is worth mentioning that velocities of ultrarelativistic particles are measured with Cherenkov detectors filled with some special gaseous medium whose refractive index is just slightly higher than unity. This value of the refractive index can be changed by regulating the gas pressure in the counter! So, Cherenkov detectors and counters are very flexible tools for particle physicists!
Remark: As I mentioned before, it is important to remember that (most of) the practical Cherenkov radiators/materials ARE dispersive. It means that if $\omega$ is the photon frequency and $k$ is the wavenumber, then the photons propagate with some group velocity $v_g$, i.e.,
$$v_g=\frac{d\omega}{dk}=\frac{c}{n(\omega)+\omega\,\dfrac{dn}{d\omega}}.$$
Note that if the medium is non-dispersive, this formula simplifies to the well-known $v_g=c/n$, and to $v_g=c$ in vacuum, as it should.
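A tiny numerical sketch of this remark (the linear dispersion model $n(\omega)=1.33+a\omega$ below is purely illustrative, not a real material):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def group_velocity(n, omega, domega=1.0e9):
    """Group velocity v_g = c / (n(omega) + omega * dn/domega).

    Follows from v_g = domega/dk with the dispersion relation k = omega*n(omega)/c.
    The derivative dn/domega is taken numerically (central difference).
    """
    dn = (n(omega + domega) - n(omega - domega)) / (2.0 * domega)
    return C / (n(omega) + omega * dn)

omega = 3.0e15  # rad/s, roughly the visible band

# Non-dispersive medium: recovers the phase velocity c/n.
print(group_velocity(lambda w: 1.33, omega) / C)            # -> 1/1.33 ~ 0.752

# Toy dispersive medium, hypothetical slope a (normal dispersion, dn/domega > 0):
a = 1.0e-17  # s/rad, purely illustrative
print(group_velocity(lambda w: 1.33 + a * w, omega) / C)    # slower than c/n
```

With normal dispersion the group velocity comes out below the phase velocity, which is exactly why Tamm's dispersive correction below matters.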
Accordingly, following the PDG, Tamm showed in a classical paper that for dispersive media the Cherenkov radiation is concentrated in a thin conical shell whose vertex is at the moving charge and whose opening half-angle is given by the expression
where $\theta_c$ is the critical Cherenkov angle seen before and $\omega_0$ is the central value of the small frequency range under consideration, under the Cherenkov condition. This cone has an opening half-angle $\eta$ (please, compare with the previous convention, where the cone half-angle was $90^\circ-\theta_c$, for consistency), and unless the medium is non-dispersive (i.e., $dn/d\omega=0$), we get $\eta\neq 90^\circ-\theta_c$. Typical Cherenkov radiation imaging produces blue rings.
THE CHERENKOV EFFECT: QUANTUM FORMULAE
When we considered the Cherenkov effect in the framework of QM, in particular the quantum theory of radiation, we can deduce the following formula for the Cherenkov effect that includes the quantum corrections due to the backreaction of the particle to the radiation:
where, like before, $\beta=v/c$, $n$ is the refraction index, $\lambda_B=h/p$ is the De Broglie wavelength of the moving particle and $\lambda$ is the wavelength of the emitted radiation.
Cherenkov radiation is observed whenever $\beta n>1$ (i.e., if $v>c/n$), and the limit of the emission is on the short-wave bands (explaining the typical blue radiation of this effect). Moreover, $\cos\theta=1$ corresponds to the shortest emitted wavelength $\lambda_{min}$.
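It is instructive to see how tiny the quantum correction term $(\lambda_B/\lambda)(n^2-1)/2n^2$ is in practice. A rough numerical sketch (the 1 GeV momentum and 400 nm photon wavelength are illustrative assumptions):

```python
# Size of the quantum correction term relative to the classical term 1/(n*beta),
# for an ultra-relativistic electron radiating a blue Cherenkov photon in water.
H_C = 1.2398e-6        # h*c in eV*m
n = 1.33               # refraction index of water
pc_eV = 1.0e9          # particle momentum: 1 GeV/c (illustrative)
lambda_B = H_C / pc_eV          # de Broglie wavelength of the particle, in m
lambda_photon = 400e-9          # emitted photon wavelength, in m (blue light)

quantum_term = (lambda_B / lambda_photon) * (n**2 - 1) / (2 * n**2)
classical_term = 1.0 / n        # 1/(n*beta) with beta ~ 1
ratio = quantum_term / classical_term
# ratio ~ 1e-9: the quantum backreaction is utterly negligible in practice
```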
On the other hand, the radiated energy per particle per unit of time is equal to:

$$-\dfrac{dE}{dt}=\dfrac{e^2v}{c^2}\displaystyle{\int}\omega\sin^2\theta(\omega)\,d\omega$$

with $\cos\theta(\omega)$ given by the quantum-corrected formula above.
where $\omega$ is the angular frequency of the radiation, with a maximum value $\omega_{max}$ fixed by the emission condition.
Remark: In the non-relativistic case, $\lambda_B=h/mv$, and the Cherenkov condition implies that $v>c/n$. Therefore, neglecting the quantum corrections (the charged particle self-interaction/backreaction to radiation), we can take the limit $\lambda_B/\lambda\rightarrow 0$ and the above previous equations will simplify into:

$$\cos\theta_c=\dfrac{1}{\beta n}\;\;\;\;\;\;-\dfrac{dE}{dt}=\dfrac{e^2v}{c^2}\displaystyle{\int_{\beta n(\omega)>1}}\omega\left(1-\dfrac{1}{\beta^2n^2(\omega)}\right)d\omega$$
Remember: $\omega_{max}$ is determined by the condition $\beta n(\omega_{max})=1$, where $n(\omega)$ represents the dispersive effect of the material/medium through the refraction index.
THE FRANK-TAMM FORMULA
The number of photons produced per unit path length and per unit of energy of a charged particle (charge equal to $ze$) is given by the celebrated Frank-Tamm formula:

$$\dfrac{d^2N}{dEdx}=\dfrac{\alpha z^2}{\hbar c}\sin^2\theta_c(E)=\dfrac{\alpha^2z^2}{r_em_ec^2}\left(1-\dfrac{1}{\beta^2n^2(E)}\right)$$
In terms of common values of fundamental constants, it takes the value:

$$\dfrac{d^2N}{dEdx}\approx 370\,z^2\sin^2\theta_c(E)\;\;\mbox{photons}\;\mbox{eV}^{-1}\mbox{cm}^{-1}$$
or equivalently it can be written as follows

$$\dfrac{d^2N}{dxd\lambda}=\dfrac{2\pi\alpha z^2}{\lambda^2}\left(1-\dfrac{1}{\beta^2n^2(\lambda)}\right)$$
The refraction index $n$ is a function of the photon energy $E=\hbar\omega$, and so is the sensitivity of the transducer used to detect the light in the Cherenkov effect! Therefore, for practical uses, the Frank-Tamm formula must be multiplied by the transducer response function and integrated over the region for which we have $\beta n(E)>1$.
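A quick numerical sketch of the Frank-Tamm yield, using the approximate normalization of 370 photons per eV per cm, and crudely assuming $n$ constant over the band (a real calculation would integrate $n(E)$ and the detector response):

```python
def photons_per_cm(n, beta, delta_E_eV, z=1):
    """Frank-Tamm estimate: d2N/(dE dx) ~ 370 * z^2 * sin^2(theta_c) photons/(eV cm),
    integrated over an energy band delta_E_eV where n is taken as constant."""
    sin2 = 1.0 - 1.0 / (n * beta) ** 2
    if sin2 <= 0:
        return 0.0  # below the Cherenkov threshold: no photons
    return 370.0 * z**2 * sin2 * delta_E_eV

# Visible band ~ 1.77-3.10 eV (700 nm down to 400 nm), water, beta ~ 1:
# roughly two hundred photons per cm of track.
N_vis = photons_per_cm(1.33, 1.0, 3.10 - 1.77)
```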
Remark: When two particles are close together (to be close here means to be separated by a distance of about one wavelength or less), the electromagnetic fields from the particles may add coherently and affect the Cherenkov radiation. The Cherenkov radiation from an electron-positron pair at close separation is suppressed compared with that of two independent leptons!
Remark (II): Coherent radio Cherenkov radiation from electromagnetic showers is significant, and it has been applied to the study of cosmic-ray air showers. In addition, it has been used to search for showers induced by cosmic-ray electron neutrinos.
CHERENKOV DETECTOR: MAIN FORMULA AND USES
The applications of Cherenkov detectors for particle identification (generally labelled as PID Cherenkov detectors) go well beyond high-energy Physics itself. Their uses include: A) Fast particle counters. B) Hadronic particle identification. C) Tracking detectors performing complete event reconstruction. The PDG gives some examples of each category: a) the polarization detector of SLD, b) the hadronic PID detectors at B factories like BABAR, or the aerogel threshold Cherenkov in Belle, c) large water Cherenkov counters like those in Superkamiokande and other neutrino detector facilities.
Cherenkov detectors contain two main elements: 1) a radiator/material through which the particle passes, and 2) a photodetector. As Cherenkov radiation is a weak source of photons, light collection and detection must be as efficient as possible. In general, the radiator is specifically chosen for the particles one wants to detect.
The number of photoelectrons detected in a given Cherenkov radiation detector device is provided by the following formula (derived from the Tamm-Frank formula simply taking into account the efficiency in a straightforward manner):

$$N_{pe}=L\dfrac{\alpha^2z^2}{r_em_ec^2}\displaystyle{\int}\varepsilon(E)\sin^2\theta_c(E)\,dE$$
where $L$ is the path length of the particle in the radiator/material, $\varepsilon(E)$ is the efficiency for collecting the Cherenkov light and transducing it into photoelectrons, and $\alpha^2/(r_em_ec^2)=370\;\mbox{cm}^{-1}\mbox{eV}^{-1}$.
Remark: The efficiencies and the Cherenkov critical angle are functions of the photon energy, generally speaking. However, since the typical energy-dependent variation of the refraction index is modest, a quantity sometimes called the Cherenkov detector quality factor $N_0$ can be defined as follows

$$N_0=\dfrac{\alpha^2z^2}{r_em_ec^2}\displaystyle{\int}\varepsilon(E)\,dE$$
In this case, we can write

$$N_{pe}\approx LN_0\langle\sin^2\theta_c\rangle$$
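A minimal sketch of this counting formula, assuming a typical quality factor $N_0\sim 90\;\mbox{cm}^{-1}$ (an order of magnitude often quoted for good counters; real detectors vary):

```python
# Photoelectron count N_pe ~ L * N0 * <sin^2 theta_c>, with an assumed
# typical quality factor N0 and a 1 cm water radiator as illustration.
N0 = 90.0          # photoelectrons per cm: assumed typical quality factor
L = 1.0            # radiator path length, in cm
n, beta = 1.33, 1.0
sin2_theta_c = 1.0 - 1.0 / (n * beta) ** 2
N_pe = L * N0 * sin2_theta_c   # a few tens of photoelectrons
```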
Remark (II): Cherenkov detectors are classified into imaging or threshold types, depending on their ability to make use of the Cherenkov angle information. Imaging counters may be used to track particles as well as to identify them.
Other main uses/applications of the Vavilov-Cherenkov effect are:
1st. Detection of labeled biomolecules. Cherenkov radiation is widely used to facilitate the detection of small amounts and low concentrations of biomolecules. For instance, radioactive atoms such as phosphorus-32 are readily introduced into biomolecules by enzymatic and synthetic means, and subsequently they may be easily detected in small quantities for the purpose of elucidating biological pathways and characterizing the interaction of biological molecules, such as their affinity constants and dissociation rates.
2nd. Nuclear reactors. Cherenkov radiation is used to detect high-energy charged particles. In pool-type nuclear reactors, the intensity of Cherenkov radiation is related to the frequency of the fission events that produce high-energy electrons, and hence it is a measure of the intensity of the reaction. Similarly, Cherenkov radiation is used to characterize the remaining radioactivity of spent fuel rods.
3rd. Astrophysical experiments. The Cherenkov radiation from these charged particles is used to determine the source and intensity of cosmic rays, which is exploited, for example, in the different classes of cosmic-ray detection experiments: IceCube, Pierre Auger, VERITAS, HESS, MAGIC, SNO, and many others. Cherenkov radiation can also be used to determine properties of high-energy astronomical objects that emit gamma rays, such as supernova remnants and blazars. In this last class of experiments we place STACEE, in New Mexico.
4th. High-energy experiments. We have already quoted this use, and there are many examples at the current LHC, for instance, in the ALICE experiment.
VACUUM CHERENKOV RADIATION
Vacuum Cherenkov radiation (VCR) is the alleged and conjectured phenomenon referring to the Cherenkov radiation/effect of a charged particle propagating in the physical vacuum. You can ask: why should it be possible? It is quite straightforward to understand the answer.
The classical (non-quantum) theory of relativity (both special and general) clearly forbids any superluminal phenomena/propagating degrees of freedom for material particles, including this one (the vacuum case), because a particle with non-zero rest mass can reach the speed of light only at infinite energy (besides, a nontrivial vacuum itself would create a preferred frame of reference, in violation of one of the relativistic postulates).
However, according to modern views coming from the quantum theory, especially our knowledge of Quantum Field Theory, the physical vacuum IS a nontrivial medium which affects the particles propagating through it, and the magnitude of the effect increases with the energies of the particles!
Then, a natural consequence follows: the actual speed of a photon becomes energy-dependent, and thus it can be less than the fundamental constant $c$, so that sufficiently fast particles can overcome it and start emitting Cherenkov radiation. In summary, any charged particle surpassing the speed of light in the physical vacuum should emit (Vacuum) Cherenkov radiation. Note that this is an inevitable consequence of the non-trivial nature of the physical vacuum in Quantum Field Theory. Indeed, some people claiming that superluminal particles arise in jets from supernovae, or in colliders like the LHC, fail to explain why those particles don't emit Cherenkov radiation. It is not true that real particles become superluminal in space or in collider rings. It is also wrong in the case of neutrino propagation because, in spite of being chargeless, neutrinos should experience an analogous effect to the Cherenkov radiation, called the Askaryan effect. Another (alternative) possibility or scenario arises in some Lorentz-violating theories (or even CPT-violating theories, which can be equivalent or not to such Lorentz violations) when the speed of a propagating particle becomes higher than $c$, which turns this particle into a tachyon. A tachyon with an electric charge would lose energy as Cherenkov radiation, just as ordinary charged particles do when they exceed the local speed of light in a medium. A charged tachyon traveling in a vacuum therefore undergoes a constant proper-time acceleration and, by necessity, its worldline would form a hyperbola in space-time. This last type of vacuum Cherenkov effect can arise in theories like the Standard Model Extension, where Lorentz-violating terms do appear.
One of the simplest kinematic frameworks for Lorentz-violating theories is to postulate some modified dispersion relations (MODRE) for particles, while keeping the usual energy-momentum conservation laws. In this way, we can provide and work out an effective field theory for the breaking of Lorentz invariance. There are several alternative definitions of MODRE, since there is no general guide yet to discriminate between the different theoretical models. Thus, we could consider a general expansion in integer powers of the momentum, in the next manner (we set units in which $\hbar=c=1$):

$$E^2=m^2+p^2+\displaystyle{\sum_{n\in\mathbb{Z}}}\eta_np^n$$
However, a softer expansion, depending only on the positive powers of the momentum, is generally used in the MODRE. In such a case,

$$E^2=m^2+p^2+\displaystyle{\sum_{n\geq 1}}\eta_np^n$$
and where the $\eta_n$ are small coefficients. If Lorentz violations are associated to the yet undiscovered quantum theory of gravity, we would get that the deviations from the dispersion relations of the special theory of relativity should appear at the natural scale of quantum gravity, say the Planck mass/energy. In units where $\hbar=c=1$, we obtain that the Planck mass/energy is:

$$M_P=G^{-1/2}=\sqrt{\dfrac{\hbar c^5}{G}}\approx 1.22\times 10^{19}\;\mbox{GeV}$$
Let's write and parametrize the Lorentz violations induced by the fundamental scale of quantum gravity (naively, this Planck mass scale) by:

$$\eta_n=\dfrac{\xi_n}{M_P^{n-2}}$$
Here, $\xi_n$ is a dimensionless quantity that can differ from one particle (type) to another (type). Considering, for instance, $n\geq 3$, since the terms with $n\leq 2$ seem to be ruled out by previous terrestrial experiments, at higher energies the lowest non-null term will dominate the expansion, with $n=3$. The MODRE reads:

$$E^2=m^2+p^2+\xi_i\dfrac{p^3}{M_P}$$
and where the label $i$ in the term $\xi_i$ is specific to the particle type. Such corrections might only become important at the Planck scale, but there are two exceptions:
1st. Particles that propagate over cosmological distances can show differences in their propagation speed.
2nd. Energy thresholds for particle reactions can be shifted, or even forbidden processes can become allowed, if the $\xi$-term is comparable to the $m^2$-term in the MODRE. Thus, threshold reactions can be significantly altered or shifted, because they are determined by the particle masses. So a threshold shift should appear at momentum scales where:

$$p\sim\left(\dfrac{m^2M_P}{\xi}\right)^{1/3}$$
Imposing/postulating that $\xi\approx 1$, the typical scales of the thresholds for some different kinds of particles can be calculated. Their values for some species are given in the next table:
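These threshold scales are easy to estimate from $p\sim(m^2M_P/\xi)^{1/3}$; a minimal numerical sketch (taking $\xi=1$, so the numbers are order-of-magnitude estimates only):

```python
# Momentum scale at which the n=3 Lorentz-violating term xi*p^3/M_P
# competes with the mass term m^2 in the modified dispersion relation.
M_P = 1.22e19  # Planck energy, in GeV

def threshold_GeV(mass_GeV, xi=1.0):
    return (mass_GeV**2 * M_P / xi) ** (1.0 / 3.0)

p_electron = threshold_GeV(0.511e-3)   # ~ 1e4 GeV: tens of TeV
p_proton   = threshold_GeV(0.938)      # ~ 2e6 GeV: a few PeV
```

So for electrons the interesting physics already starts around the multi-TeV energies observed in astrophysical sources.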
We can even study several different observational probes of modified dispersion relationships:
1. Measurements of time of flight.
2. Thresholds creation for: A) Vacuum Cherenkov effect, B) Photon decay in vacuum.
3. Shift in the so-called GZK cut-off.
4. Modified dispersion relationships induced by non-commutative theories of spacetime. Especially, there are time shifts/delays of photon signals induced by non-commutative spacetime theories.
We will analyse these four cases separately, in a very short and clear fashion. I hope!
Case 1. Time of flight. This is similar to the recently controversial OPERA experiment results. The OPERA experiment, and other similar set-ups, measure the neutrino time of flight. I dedicated a post to it earlier in this blog.
In fact, we can measure the time of flight of any particle, even photons. A modified dispersion relation, like the one we introduced here above, would lead to an energy-dependent speed of light. The idea of the time of flight (TOF) approach is to detect a shift in the arrival time of photons (or any other massless/ultra-relativistic particle, like neutrinos) with different energies, produced simultaneously in a distant object, where the distance amplifies the usually Planck-suppressed effect. In the following, we use the dispersion relation for $n=3$ only, as modifications at higher orders are far below the sensitivity of current or planned experiments. The modified group velocity becomes:

$$v_g=\dfrac{\partial E}{\partial p}\approx 1-\dfrac{m^2}{2p^2}+\xi\dfrac{p}{M_P}$$
and then, for photons,

$$v_\gamma\approx 1+\xi_\gamma\dfrac{E}{M_P}$$
The time difference in the photon shift detection time will be:

$$\Delta t\approx\xi_\gamma\dfrac{\Delta E}{M_P}D$$
where $D$ is the distance, corrected (if it were the case) by the appropriate redshift factor to relate the observed and emitted energies. In recent years, several measurements on different objects in various energy bands have led to constraints up to the order of $10^2$ for $\xi_\gamma$. They can be summarized in the next table (note that the best constraint comes from a short flare of the Active Galactic Nucleus (AGN) Mrk 421, detected in the TeV band by the Whipple Imaging Air Cherenkov telescope):
There is still room for improvement with current or planned experiments, although the distance for TeV observations is limited by the absorption of TeV photons in low-energy metagalactic radiation fields. Depending on the energy density of the target photon field, one gets an energy-dependent mean free path length, leading to an energy- and redshift-dependent cut-off energy (the cut-off energy is defined as the energy where the optical depth is one).
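As a toy estimate of the size of the TOF effect, here is a sketch with illustrative numbers (a 1 TeV photon from a source at an assumed 100 Mpc, with $\xi_\gamma=1$; real analyses use the measured redshift of each source):

```python
# Planck-suppressed time delay: Delta_t ~ xi * (E/M_P) * (D/c).
MPC_IN_S = 1.029e14   # light travel time across 1 Mpc, in seconds (~3.26 Myr)
M_P_eV = 1.22e28      # Planck energy, in eV

def delay_s(E_eV, D_Mpc, xi=1.0):
    return xi * (E_eV / M_P_eV) * D_Mpc * MPC_IN_S

dt = delay_s(1e12, 100.0)   # 1 TeV photon over an assumed 100 Mpc
# dt is of order one second: comparable to the shortest observed flare
# variability, which is why AGN flares give competitive bounds on xi.
```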
2. Threshold creation for: A) the vacuum Cherenkov effect, B) photon decay in vacuum. On the other hand, the interaction vertex in quantum electrodynamics (QED) couples one photon with two leptons. We assume for photons and leptons the following dispersion relations (for simplicity, we adopt units with $M_P=1$):

$$\omega^2=k^2+\xi_\gamma k^3\;\;\;\;\;\;E^2=m_e^2+p^2+\xi_ep^3$$
Let us write the photon four-momentum as $k^\mu=(\omega,\vec{k})$ and the lepton four-momenta as $p_1^\mu$ and $p_2^\mu$. It can be shown that the square of the transferred four-momentum, $q^2$, equals an expression built from the $\xi$-terms of the modified dispersion relations, whose r.h.s. is always positive. In the Lorentz-invariant case the parameters $\xi_\gamma,\xi_e$ are zero, so that this equation can't be solved and all processes of the single vertex are forbidden. If these parameters are non-zero, there can exist a solution, and so these processes can be allowed. We now consider two of these interactions to derive constraints on the parameters $\xi_\gamma,\xi_e$: the vacuum Cherenkov effect $e^\pm\rightarrow e^\pm\gamma$ and the spontaneous photon decay $\gamma\rightarrow e^+e^-$.
A) As we have studied here, the vacuum Cherenkov effect is the spontaneous emission of a photon by a charged particle, $e^\pm\rightarrow e^\pm\gamma$. This effect occurs if the particle moves faster than the slowest possible radiated photon in vacuum!
In the case of $\xi_e>0$, the maximal attainable speed of the particle is faster than $c$. This means that, above the threshold momentum

$$p_{th}\sim\left(\dfrac{m_e^2M_P}{\xi_e}\right)^{1/3}$$

the particle can always be faster than a zero-energy photon, and this is independent of $\xi_\gamma$.
and it is independent of . In the case of , i.e., decreases with energy, you need a photon with . This is only possible if .
Therefore, due to the radiation of photons, such an electron loses energy. The observation of highly energetic electrons allows us to derive constraints on $\xi_e$ and $\xi_\gamma$. In the case of $\xi_e>0$, with $n=3$, we have the bound

$$\xi_e\lesssim\dfrac{m_e^2M_P}{p^3}$$
Moreover, from the observation of 50 TeV photons in the Crab Nebula (and its pulsar), one can conclude the existence of 50 TeV electrons, due to the inverse Compton scattering of these electrons with those photons. This leads to a constraint on $\xi_e$ of about

$$\xi_e\lesssim\dfrac{m_e^2M_P}{E^3}\sim 10^{-2}$$

where we have used $E=50\;\mbox{TeV}$ in this case.
B) The decay of photons into positrons and electrons would be a very rapid, spontaneous decay process. However, gamma rays from the Crab Nebula with energies up to 50 TeV are observed on Earth. Thus, we can reason that this rapid decay doesn't occur at energies below 50 TeV. For the constraints on $\xi_\gamma$ and $\xi_e$, this condition means (again we impose $n=3$) a bound of order

$$\xi_\gamma\lesssim\dfrac{m_e^2M_P}{E_\gamma^3}$$
3. Shift in the GZK cut-off. As the energy of a proton increases, the pion photoproduction reaction

$$p+\gamma_{CMB}\rightarrow\Delta^+\rightarrow p+\pi^0\;\left(\mbox{or}\;n+\pi^+\right)$$

can happen with the low-energy photons of the Cosmic Microwave Background (CMB).
This leads to an energy-dependent mean free path length for the particles, resulting in a cut-off at energies around $5\times 10^{19}$ eV. This is the celebrated Greisen-Zatsepin-Kuzmin (GZK) cut-off. The threshold for the GZK pion photoproduction with the CMB background can be read from the next condition (I will derive this condition in a future post):

$$E_p\gtrsim\dfrac{m_\pi\left(2m_p+m_\pi\right)}{4E_\gamma}$$
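A rough numerical check of this head-on photoproduction threshold, from $s\geq(m_p+m_\pi)^2$ (the CMB photon energy used below is an illustrative "typical" value, not a careful thermal average):

```python
# Head-on threshold for p + gamma -> p + pi0 (or n + pi+):
# E_p >= m_pi * (2*m_p + m_pi) / (4 * E_gamma).
m_p = 0.938272e9    # proton mass, in eV
m_pi = 0.13498e9    # neutral pion mass, in eV
E_cmb = 6.0e-4      # an assumed typical CMB photon energy, in eV

E_GZK = m_pi * (2 * m_p + m_pi) / (4 * E_cmb)
# E_GZK comes out around 1e20 eV, in the right ballpark of the
# quoted GZK cut-off of ~5e19 eV.
```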
Thus, in a Lorentz-invariant world, the mean free path length of a particle of energy $5\times 10^{19}$ eV is about 50 Mpc, i.e., particles above this energy are readily absorbed due to the pion photoproduction reaction. But most of the sources of particles of ultra-high energy lie beyond 50 Mpc. So, one expects no trace of particles with energy above this value on Earth. From the experimental point of view, AGASA found a few particles having energies higher than the constraint given by the GZK cut-off limit, and claimed to be disproving the presence of the GZK cut-off (or at least pointing to a different threshold for the GZK cut-off), whereas HiRes is consistent with the GZK effect. So, there are two main questions, not yet completely solved:
i) How can one get a definite proof of the non-existence of the GZK cut-off?
ii) If the GZK cut-off doesn't exist, what is the reason?
The first question could be answered by the observation of a large sample of events at these energies, which is necessary for a final conclusion, since the GZK cut-off is a statistical phenomenon. The current AUGER experiment, still under construction, may clarify whether the GZK cut-off exists or not. The existence of the GZK cut-off would also yield new limits on Lorentz or CPT violation. For the second question, one explanation can be derived from Lorentz violation: if we redo the calculation of the GZK cut-off in a Lorentz-violating world, we get the modified proton dispersion relation as described in our previous equations with the MODRE.
4. Modified dispersion relationships induced by non-commutative theories of spacetime. As we said above, there are time shifts/delays of photon signals induced by non-commutative spacetime theories. Noncommutative spacetime theories introduce a new source of MODRE: the fuzzy nature of the discreteness of the fundamental quantum spacetime. Then, the general ansatz of this type of theories comes from:

$$\left[\hat{x}^\mu,\hat{x}^\nu\right]=i\theta^{\mu\nu}$$
where $\theta^{\mu\nu}$ are the components of an antisymmetric Lorentz-like tensor whose (dimensionless) components are of order one. The fundamental scale of non-commutativity is supposed to be of the order of the Planck length. However, there are models with large extra dimensions that induce non-commutative spacetime models with a scale near the TeV scale! This is interesting from the phenomenological side as well, not only from the theoretical viewpoint. Indeed, we can investigate in the following whether astrophysical observations are able to constrain certain classes of models with noncommutative spacetimes which are broken at the TeV scale or higher. However, due to the antisymmetric character of the noncommutative tensor, we need magnetic and electric background fields in order to study these kinds of models (generally speaking, we need some kind of field inducing/producing antisymmetric field backgrounds), and then the dispersion relation for photons remains the same as in a commutative spacetime. Furthermore, there is no photon-energy dependence of the dispersion relation. Consequently, time-of-flight experiments, which rely on an energy-dependent dispersion, are inappropriate here. Therefore, we suggest the next alternative scenario: suppose there exists a strong magnetic field (for instance, from a star or a cluster of stars) on the path of photons emitted at a light source (e.g., gamma-ray bursts). Then, analogously to gravitational lensing, the photons experience a deflection and/or a change in the time of arrival, compared to the same path without a magnetic background field. Some estimations for several known objects/examples are shown in this final table:
1st. Vacuum Cherenkov and related effects modifying the dispersion relations of special relativity are natural in many scenarios beyond Standard Relativity (BSR) and beyond the Standard Model (BSM).
2nd. Any theory allowing for superluminal propagation has to explain the null results from the observation of the vacuum Cherenkov effect. Otherwise, it is doomed.
3rd. There are strong bounds coming from astrophysical processes, and even from neutrino oscillation experiments, that severely constrain and kill many models. However, it is true that the current MODRE bounds are far from being the most general ones. We expect to improve these bounds with the next generation of experiments.
4th. Theories that cannot pass these tests (SR obviously does) have to be banned.
5th. Superluminality has observable consequences, both in classical and quantum physics, both in standard theories and in theories beyond the standard ones. So, if you build a theory allowing superluminal stuff, you must be very careful about what kind of predictions it can and cannot make. Otherwise, your theory is complete nonsense.
As a final closing, let me include some nice Cherenkov rings from the Superkamiokande and MiniBooNE experiments. True experimental physics in action. And a final challenge…
FINAL CHALLENGE: Are you able to identify the kind of particles producing those beautiful figures? Let me know your guesses (I do know the answer, of course).
Figure 1. Typical SuperKamiokande ring. I dedicate this picture to my admired Japanese scientists there. I really, really admire that country and its people, especially after disasters like the 2011 earthquake and the Fukushima accident. If you are a Japanese reader/follower, you must know we support you from abroad. You were not, you are not and you shall not be alone.
Figure 2. Typical MiniBooNE ring. History: I used this nice picture on the first page of my Master Thesis, as the cover/title page main picture!
Before becoming apparently superluminal readers, we are going to remember and review some elementary notation and concepts from the relativistic Doppler effect and the starlight aberration we have already studied in this blog.
Let us consider and imagine the next gedankenexperiment/thought experiment. Some moving object emits pulses of light separated by some time interval, denoted by $\Delta t_0$ in its own frame. Its distance $D$ from us is very large, say $D\gg c\Delta t_0$.
Question: Do the pulses arrive separated by the same time interval $\Delta t_0$? Suppose the object moves forming a certain angle $\theta$ with the line of sight, according to the following picture:
Time dilation means that the second pulse would be emitted with a time delay $\gamma\Delta t_0$, later of course than the previous pulse, and in that time the object would have travelled a radial distance $v\gamma\Delta t_0\cos\theta$ away from us, so it would take the light an additional time $\beta\gamma\Delta t_0\cos\theta$ to arrive at its destination. The reception time between pulses would be:

$$\Delta t=\gamma\Delta t_0\left(1+\beta\cos\theta\right)$$
Whenever $\gamma(1+\beta\cos\theta)>1$ (in particular, for any receding or transverse motion), the time interval between both pulses measured in the rest frame on Earth will be longer than in the rest frame of the moving object. This analysis remains valid even if the 2 events are not light beams/pulses but successive packets or "maxima" of electromagnetic waves (electromagnetic radiation).
Astronomers define the dimensionless redshift

$$1+z=\dfrac{\Delta t}{\Delta t_0}=\dfrac{\lambda_{obs}}{\lambda_{em}}=\dfrac{\nu_{em}}{\nu_{obs}}=\gamma\left(1+\beta\cos\theta\right)$$
where, as is common in special relativity, $\beta=v/c$ and $\gamma=1/\sqrt{1-\beta^2}$.
The 3 interesting limits of the above expression are:
1st. Receding emitter case. The moving object moves away from the receiver. Then, supposing a completely radial motion along the line of sight, we have $\theta=0$ and

$$1+z=\gamma(1+\beta)=\sqrt{\dfrac{1+\beta}{1-\beta}}>1$$

and then a literal "redshift" (lower frequencies than the proper frequencies).
2nd. Approaching emitter case. The moving object approaches and gets closer to the observer. Then, we get $\theta=\pi$, or motion inward along the radial direction, and

$$1+z=\gamma(1-\beta)=\sqrt{\dfrac{1-\beta}{1+\beta}}<1$$

and then a "blueshift" (higher frequencies than the proper frequencies).
3rd. Tangential or transversal motion of the source. Then $\theta=\pi/2$ and $1+z=\gamma\approx 1+\beta^2/2$. This is also called the second-order redshift. It has been observed in extremely precise velocity measurements of pulsars in our Galaxy.
Furthermore, these redshifts have all been observed in different astrophysical observations and, in addition, they have to be taken into account for tracking positions via GPS, geolocating satellites and/or following their relative positions with respect to time, or calculating their revolution periods around our planet.
Remark: Quantum Mechanics and Special Relativity would be mutually inconsistent IF we did not find the same formula for the ratio of energies as for the ratio of frequencies between different reference frames (since $E=h\nu$ must hold in every frame).
EXAMPLE: The emission line of oxygen (II), [O(II)], is located, in its rest frame, at $\lambda_{em}=372.7\;\mbox{nm}$. It is observed in a distant galaxy to be at some longer wavelength $\lambda_{obs}$. What are the redshift $z$ and the recession velocity of this galaxy?
Solution. From the definition of wavelength in electromagnetism, $\lambda=c/\nu$, and the definition of redshift, $1+z=\lambda_{obs}/\lambda_{em}$. Then,

$$z=\dfrac{\lambda_{obs}}{\lambda_{em}}-1$$

From the radial velocity hypothesis, we get

$$1+z=\sqrt{\dfrac{1+\beta}{1-\beta}}$$

and thus

$$\beta=\dfrac{(1+z)^2-1}{(1+z)^2+1}$$

or $v=\beta c$.
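A small computational sketch of this solution (the observed wavelength used below is a made-up illustrative value, since only the rest wavelength of [OII] is standard):

```python
def redshift_and_beta(lam_obs_nm, lam_rest_nm):
    """Redshift z from observed/rest wavelengths, and the radial recession
    speed beta from the relativistic Doppler relation 1+z = sqrt((1+b)/(1-b))."""
    z = lam_obs_nm / lam_rest_nm - 1.0
    r = (1.0 + z) ** 2
    beta = (r - 1.0) / (r + 1.0)
    return z, beta

# [OII] rest wavelength 372.7 nm; the observed 450 nm is a hypothetical example:
z, beta = redshift_and_beta(450.0, 372.7)
# z ~ 0.21, beta ~ 0.19: a recession speed of roughly a fifth of c
```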
Note that this result follows from the hypothesis of the expansion of the Universe, and it holds in the relativistic theory of gravity, General Relativity; it should also hold, somehow, in extensions of it, even in Quantum Gravity!
Remember: Stellar aberration causes the positions on the sky of the celestial objects to change as the Earth moves around the Sun. As the Earth's orbital velocity is about $v\approx 30\;\mbox{km/s}$, and then $\beta\approx 10^{-4}$, it implies an angular displacement of about $20.5''$. Anyway, it is worth mentioning that the astronomer Bradley observed this starlight aberration already in 1729! A moving observer sees the light from stars at different positions with respect to an observer at rest, and the new position does not depend on the distance to the star. Thus, as the relative velocity increases, stars are "displaced" further and further towards the direction of motion.
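A quick numerical check of that aberration angle, using the small-angle approximation $\Delta\theta\approx\beta$ radians:

```python
# Aberration of starlight: maximum angular displacement ~ beta = v/c radians.
V_EARTH = 29.8e3       # Earth's mean orbital speed, in m/s
C = 2.998e8            # speed of light, in m/s
RAD_TO_ARCSEC = 206264.8

beta = V_EARTH / C
aberration_arcsec = beta * RAD_TO_ARCSEC
# about 20.5 arcseconds: Bradley's classical constant of aberration
```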
Now, we are going to the main subject of this post. I decided to review these two important effects because it is useful to remember them and to understand that they are measured and they are real effects. They are not mere artifacts of the special theory of relativity masking some unknown reality; they are the reality, in the sense that they are measured. Alternative theories trying to understand these effects exist, but they are more complicated, and they remind me of those people trying to defend the geocentric model of the Universe with those weird constructions known as epicycles, in order to defend what cannot be defended from the experimental viewpoint.
In order to make our discussion visual and phenomenological, I am going to consider a practical example. A certain radio galaxy, denoted by 3C 273, moves across the sky with some measured angular velocity.
Knowing the expansion rate of the Universe and the redshift of the radio galaxy, its distance $D$ can be calculated. To obtain the relative tangential velocity, we simply multiply the angular velocity by the distance, i.e., $v_t=\mu D$.
From the above data, we get that the apparent tangential velocity of our radio galaxy comes out larger than the speed of light! Indeed, this observation is not isolated. There are even jets of matter flowing from some stars at apparent superluminal velocities. Of course, this is only an apparent issue for SR. How can we explain it? How is it possible in the SR framework to obtain a superluminal velocity? It turns out that there is no contradiction with SR: the (fake and apparent) superluminal effect CAN BE EXPLAINED naturally in the SR framework in a very elegant way. Look at the following picture:
-A moving object with velocity $v$ with respect to Earth, approaching the Earth.
-There is some angle $\theta$ with respect to the direction of observation. As it moves towards Earth, with our conventions, $\theta\approx\pi=180^\circ$.
-The moving object emits flashes of light at two different points, A and B, separated by some time interval $\Delta t$ in the Earth reference frame.
-The distance between those two points, A and B, is very small compared with the distance object-Earth, i.e., $\overline{AB}\ll D$.
Question: What is the time separation between the receptions of the pulses at the Earth's surface?
The solution is very cool and intelligent. Calling $d_A$ and $d_B$ the distances from A and B to the Earth, we get:

A: arrival time $t_A=t_1+\dfrac{d_A}{c}$

B: arrival time $t_B=t_1+\Delta t+\dfrac{d_B}{c}$

Note that $d_B\approx d_A+v\Delta t\cos\theta$ (and $\cos\theta<0$ for our approaching source)!

From these equations, we get a combined equation for the time separation of the pulses on Earth:

$$\Delta t_{obs}=t_B-t_A=\Delta t\left(1+\beta\cos\theta\right)$$
The tangential separation between A and B is defined to be

$$\Delta x_t=v\Delta t\sin\theta$$
so the apparent velocity of the source, seen from the Earth frame, is shown to be:

$$v_{app}=\dfrac{\Delta x_t}{\Delta t_{obs}}=\dfrac{v\sin\theta}{1+\beta\cos\theta}$$
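A minimal numerical sketch of the apparent-velocity formula, written in terms of the angle $\varphi=\pi-\theta$ between the velocity and the line of sight toward Earth (so an approaching source has small $\varphi$); the speed and angle below are illustrative values:

```python
import math

def v_apparent(beta, phi_deg):
    """Apparent transverse speed (in units of c) of a source moving with
    speed beta at angle phi to the line of sight toward the observer."""
    phi = math.radians(phi_deg)
    return beta * math.sin(phi) / (1.0 - beta * math.cos(phi))

# A fast jet almost aligned with the line of sight looks superluminal:
v_app = v_apparent(0.95, 10.0)   # comes out larger than 1 (i.e., faster than c)

# The maximum apparent speed, reached at cos(phi) = beta, equals gamma*beta:
v_max = v_apparent(0.95, math.degrees(math.acos(0.95)))
```

The maximum value $\gamma\beta$ exceeds 1 exactly when $\beta>1/\sqrt{2}$, which is the content of Remark (I) below.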
Remark (I): $v_{app}>c$ IFF $\beta\gamma>1$ (i.e., $\beta>1/\sqrt{2}$) AND the motion is sufficiently aligned with the line of sight ($\theta$ close to $\pi$ in our convention)!
Remark (II): There are some other sources of fake superluminality in special relativity or general relativity (the relativistic theory of gravity). One example is that the phase velocity or the group velocity can indeed exceed the speed of light: from the equation $v_pv_g=c^2$, it is obvious that whenever one of those two velocities (group or phase velocity) is lower than the speed of light in vacuum, the other has to exceed the speed of light. That is not observable, but it plays an important rôle in the de Broglie wave-particle portrait of the atom. Another important example of apparent and fake superluminal motion is caused by gravitational (micro)lensing in General Relativity. Due to the effect of intense gravitational fields (i.e., big concentrations of mass-energy), light beams from slow-moving objects can be magnified so as to make them, apparently, superluminal. In this sense, gravity acts in a way analogous to a lens, i.e., as if there were a refraction index modifying the propagation of the light emitted by the sources.
Remark (III): In spite of the appearance, I am not opposed to the idea of superluminal entities, provided they don't break established knowledge that we know works. Tachyons have problems not completely solved, and many physicists think (for good reasons) that they are "unphysical". However, my own experience working with theories beyond special/general relativity that allow superluminal stuff (again, we should be careful about what we mean by superluminality, and by "velocity" in general) has shown me that if superluminal objects do exist, they have observable consequences. And as has been shown here, not every apparent superluminal motion is superluminal! Indeed, it can be handled in the SR framework. So, beware of crackpots claiming that there are superluminal jets of matter out there, that neutrinos are effectively superluminal entities (again, an observation refuted by OPERA, MINOS and ICARUS, and in complete disagreement with the theory of neutrino oscillations and the real masses that neutrinos do have!), or even that there are superluminal protons and particles in the LHC, or passing through the atmosphere, without any effect that should be visible with current technology. It is simply not true, as every good astronomer, astrophysicist or theoretical physicist knows! Superluminality, if it exists, is a very subtle thing, and it has observable consequences that we have not observed until now. As far as I know, there is no (accepted) observation of any superluminal particle. I have discussed the issue of the neutrino time of flight here before:
Final challenge: With the data given above, what would the minimal value of $\beta$ be in order to account for the observed motion and apparent (fake) superluminal velocity of the radio galaxy 3C 273?