LOG#122. Basic Neutrinology(VII).

The observed masses and mixings, both in the neutrino and in the quark case, could be evidence for some interfamily hierarchy hinting that the lepton and quark sectors are, indeed, a result of the existence of a new quantum number related to “family”. We could name this family symmetry U(1)_F. It was speculated about by people like Froggatt long ago. The actual intrafamily hierarchy, i.e., the fact that m_u>>m_d in the quark sector, seems to require that one of these symmetries be anomalous.

A simple model with one family-dependent anomalous U(1) beyond the SM was first proposed long ago to reproduce the observed Yukawa couplings and their hierarchies, with the anomalies canceled by the Green-Schwarz mechanism, which as a by-product is able to fix the Weinberg angle as well. Several developments include models inspired by E_6 GUTs or by the E_8\times E_8 heterotic superstring theory. The gauge structure of the model is that of the SM, enlarged by 3 abelian U(1) symmetries and their respective fields, sometimes denoted by X,Y^{1,2}. The first one is anomalous and family independent. The two non-anomalous fields have specific dependencies on the 3 chiral families, designed to reproduce the Yukawa hierarchies. There are right-handed neutrinos which “trigger” neutrino masses through special types of seesaw mechanisms.

The 3 symmetries and their fields X,Y^{1,2} are usually spontaneously broken at some high energy scale M_X by stringy effects. It is assumed that 3 fields \theta_i, with i=1,2,3, develop a non-null vev. These \theta_i fields are singlets under the SM gauge group but not under the abelian symmetries carried by X,Y^{1,2}. Thus, the Yukawa couplings appear as effective operators after the U(1)_F spontaneous symmetry breaking. In the case of neutrinos, we have the effective mass lagrangian:

\mathcal{L}_m\sim h_{ij}L_iH_uN_j^c\lambda^{q_i+n_j}+M_N\xi_{ij}N_i^cN_j^c\lambda^{n_i+n_j}

and where h_{ij},\xi_{ij}\sim \mathcal{O}(1). The parameter \lambda determines the mass and mixing hierarchies with the aid of a simple relationship:

\lambda=\dfrac{\langle \theta\rangle}{M_X}\sim\sin\theta_c

and where \theta_c is the Cabibbo angle. The q_i,n_i are the U(1)_F charges assigned to the left-handed lepton doublets L and the right-handed neutrinos N. These couplings generate the following mass matrices for neutrinos:

m_\nu^D=\mbox{diag}(\lambda^{q_1},\lambda^{q_2},\lambda^{q_3})\hat{h}\mbox{diag}(\lambda^{n_1},\lambda^{n_2},\lambda^{n_3})\langle H_u\rangle

M_M=M_N\mbox{diag}(\lambda^{n_1},\lambda^{n_2},\lambda^{n_3})\hat{\xi}\mbox{diag}(\lambda^{n_1},\lambda^{n_2},\lambda^{n_3})
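As a quick numerical illustration of this structure, here is a sketch that builds a Dirac mass matrix of the above form. The charge assignments, the O(1) matrix \hat{h} (set to all ones) and the vev value are hypothetical choices for illustration, not taken from any fit:

```python
import numpy as np

# Sketch of the Dirac mass matrix m^D = diag(lam^q) h diag(lam^n) <H_u>.
# All inputs below are illustrative assumptions, not fitted values.
lam = 0.22                   # lambda ~ sin(theta_Cabibbo)
q = np.array([3, 0, 0])      # hypothetical U(1)_F charges of the doublets L_i
n = np.array([2, 1, 0])      # hypothetical U(1)_F charges of the RH neutrinos
h = np.ones((3, 3))          # O(1) couplings, all set to exactly 1 here
vev = 174.0                  # assumed <H_u> in GeV

mD = np.diag(lam**q) @ h @ np.diag(lam**n) * vev

# With h_ij = 1, each entry is exactly lam^(q_i + n_j) * vev
print(np.isclose(mD[0, 1], lam**(q[0] + n[1]) * vev))  # True
```

With non-trivial O(1) entries in h the powers of \lambda still control the order of magnitude of each entry.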


From these matrices, the associated seesaw mechanism gives the formula for the light neutrinos (taking the heavy Majorana scale M_N\sim M_X):

m_\nu\approx \dfrac{\langle H_u\rangle^2}{M_X}\mbox{diag}(\lambda^{q_1},\lambda^{q_2},\lambda^{q_3})\hat{h}\hat{\xi}^{-1}\hat{h}^T\mbox{diag}(\lambda^{q_1},\lambda^{q_2},\lambda^{q_3})
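To get a feel for the overall scale of this formula, here is a one-line order-of-magnitude estimate; the values of \langle H_u\rangle and M_X are illustrative assumptions:

```python
# Order-of-magnitude seesaw estimate m_nu ~ <H_u>^2 / M_X.
# Both input values are assumptions for illustration only.
v_GeV = 174.0        # assumed electroweak vev <H_u>
M_X_GeV = 1.0e14     # assumed heavy scale
m_nu_eV = v_GeV**2 / M_X_GeV * 1.0e9   # convert GeV to eV
print(m_nu_eV)  # ~ 0.3 eV
```

A heavy scale around 10^{14} GeV thus lands the light masses naturally in the sub-eV range suggested by oscillation data.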

The neutrino mixing matrix depends only on the charges we assign to the LH neutrinos, due to the cancellation of the RH neutrino charges in the seesaw mechanism. There is freedom in the assignment of the charges q_i. If the charges of the second and the third generation of leptons are equal (i.e., if q_2=q_3), then one is led to a mass matrix with the following structure (or “texture”):

m_\nu\sim \begin{pmatrix}\lambda^6 & \lambda^3 & \lambda^3\\ \lambda^3 & a & b\\ \lambda^3 & b & c\end{pmatrix}

and where a,b,c\sim \mathcal{O}(1). This matrix can be diagonalized in a straightforward fashion by a large \nu_2-\nu_3 rotation. It is (more or less) consistent with a large \mu-\tau mixing. In this model, the explanation of the large neutrino mixing angles is reduced to a theory of the prefactors in front of the powers of the parameter \lambda, related to the vevs after the family-group spontaneous symmetry breaking!
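The large 2-3 rotation can be exhibited numerically. The following sketch diagonalizes the texture above for one hypothetical choice of the O(1) prefactors a, b, c (values chosen only for illustration):

```python
import numpy as np

# Diagonalizing the neutrino texture; a, b, c are hypothetical O(1) numbers
lam = 0.22
a, b, c = 1.0, 0.8, 1.2

m = np.array([[lam**6, lam**3, lam**3],
              [lam**3, a,      b     ],
              [lam**3, b,      c     ]])

# Real symmetric matrix: diagonalize with an orthogonal transformation
eigenvalues, U = np.linalg.eigh(m)

# Mixing angle between nu_2 and nu_3, read off from the heaviest eigenvector
theta23 = np.degrees(np.arctan2(abs(U[1, -1]), abs(U[2, -1])))
print(round(theta23))  # ~ 41 degrees here: a large nu_2-nu_3 mixing
```

Any O(1) choice of a, b, c with b not small gives a similarly large angle, which is the point of the texture.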

LOG#117. Basic Neutrinology(II).


The current Standard Model of elementary particles and interactions supposes the existence of 3 neutrino species or flavors. They are the neutral, upper components of the “doublets” L_i with respect to SU(2)_L, the weak-interaction gauge group, and we have:

L_i\equiv \begin{pmatrix}\nu_i\\ l_i\end{pmatrix} \forall i=(e,\mu,\tau)

These doublets have third component of weak isospin I_{3W}=1/2 and they are assigned one unit of the corresponding global lepton number: electron, muon or tau number. The 3 right-handed charged leptons have, however, no counterparts in the neutrino sector, and they transform as singlets with respect to the weak interaction. That is, there are no right-handed neutrinos in the SM; we have only left-handed neutrinos. Neutrinos are “vampires”: at least at low energies (those we have explored till now), they show only one “mirror” face, the left-handed helicity. No observed neutrino has been shown to be right-handed.


Beyond mass and charge assignments and their oddities, in any other respect neutrinos are very well-behaved particles within the SM framework, and some figures and facts are unambiguously known about them. The LEP Z boson line-shape measurements imply that there are only 3 ordinary/light (weakly interacting) neutrinos.

Big Bang Nucleosynthesis (BBN) constrains the parameters of possible additional “sterile” neutrinos: those that do not interact weakly, or that interact and are produced only by mixing. All the existing data on the weak interaction processes and reactions in which neutrinos take part are perfectly described by the SM charged-current (CC) and neutral-current (NC) lagrangians:

\displaystyle{\mathcal{L}_{I}(CC)=-\dfrac{1}{\sqrt{2}}\sum_{i=e,\mu,\tau}\bar{\nu}_{L,i}\gamma_\alpha l_{Li}W^\alpha+h.c.}

\displaystyle{\mathcal{L}_{I}(NC)=-\dfrac{1}{2\cos\theta_W}\sum_{i=e,\mu,\tau}\bar{\nu}_{L,i}\gamma_\alpha \nu_{L,i}Z^\alpha+h.c.}

and where W^\alpha, Z^\alpha are the charged and neutral massive vector bosons of the weak interaction, respectively. The CC and NC interaction lagrangians conserve 3 total additive quantum numbers, the lepton numbers L_{e}, L_\mu, L_\tau, while the structure of the CC interactions is what determines the notion of the flavor neutrinos \nu_e, \nu_\mu, \nu_\tau.

There are no hints (yet) in favor of the violation of the conservation of these (global) lepton numbers in weak interactions, and this fact provides very strong bounds on the branching ratios of rare, lepton-number-violating reactions. For instance (even though the following data are not completely up to date), we generally have (at 90% confidence level, C.L.):

1. R(\mu\longrightarrow e\gamma)<4.9\cdot 10^{-11}

2. R(\mu\longrightarrow 3e)<1.0\cdot 10^{-12}

3. R(\mu\longrightarrow e\gamma\gamma)<7.2\cdot 10^{-11}

4. R(\tau\longrightarrow e\gamma)<2.7\cdot 10^{-6}

5. R(\tau\longrightarrow \mu\gamma)<3.0\cdot 10^{-6}

6. R(\tau\longrightarrow 3\mu)< 2.9\cdot 10^{-6}

As we can observe, these lepton-number-violating reactions, if they exist at all, are extremely rare. From the theoretical viewpoint, in the minimal extension of the SM where right-handed neutrinos are introduced and the neutrino gets a mass, the branching ratio of the \mu\longrightarrow e\gamma decay is given by (assuming 2-flavor mixing only):

R(\mu\longrightarrow e\gamma)=\dfrac{3\alpha}{32\pi}\left(\dfrac{\sin 2\theta \Delta m_{12}^2}{2M_W^2}\right)^2

and where m_{1,2} are the neutrino masses, \Delta m_{12}^2 is their squared-mass difference, M_W is the W boson mass and \theta is the mixing angle of the respective neutrino flavors in the lepton sector. Using the experimental upper bound on the mass of the heaviest neutrino (believed to be \nu_\tau, without loss of generality), we obtain

R^{theo}\sim 10^{-18}
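This order of magnitude is easy to reproduce. The sketch below evaluates the standard two-flavor expression R=(3\alpha/32\pi)(\sin 2\theta\,\Delta m_{12}^2/2M_W^2)^2; the inputs (the old laboratory bound m(\nu_\tau)\lesssim 18 MeV and maximal mixing) are illustrative assumptions:

```python
import math

# Estimate of R(mu -> e gamma) from the two-flavor formula
# R = (3 alpha / 32 pi) * (sin(2 theta) * dm^2 / (2 M_W^2))^2.
# Inputs below are illustrative assumptions (old nu_tau mass bound).
alpha = 1.0 / 137.0
M_W = 80.4                 # W mass in GeV
dm2 = (18.0e-3) ** 2       # (18 MeV)^2 in GeV^2, old nu_tau bound
sin_2theta = 1.0           # maximal mixing, most optimistic case

R = (3.0 * alpha / (32.0 * math.pi)) * (sin_2theta * dm2 / (2.0 * M_W**2)) ** 2
print(R)  # ~ 1e-19, consistent with the quoted 1e-18 order of magnitude
```

With the actual sub-eV neutrino masses instead of the old bound, R drops by dozens of orders of magnitude more.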

Thus, we get a value far from being measurable at the present time, as we can see by direct comparison with the above experimental results!

In fact, the transition \mu\longrightarrow e\gamma and similar reactions are very sensitive to new physics, and particularly to new particles NOT contained in the current description of the Standard Model. However, the value of R is quite model-dependent, and it could change by several orders of magnitude if we modify the neutrino sector by introducing some number of extra “heavy”/“superheavy” neutrinos.

See you in another Neutrinology post! May the neutrinos be with you until then!

LOG#100. Crystalline relativity.



CENTENARY BLOG POST! And dedicatories…

1. Serendipitous thoughts about my 100th blog post

2. The search for unification and higher dimensional theories

3. Final relativity

4. Kalitzin’s metric: multitemporal relativity

5. Spacetime crystals and crystalline relativity: concepts and results

6. Enhanced galilean relativity

7. Conformal two-time relativity and gravitation

8. Hyperspherical electromagnetism and multitemporal relativity

9. Conclusions

Centenary blog post and dedicatories

My blog is 100 posts “old”. I decided that I wanted a special topic and subject for it, so I have been thinking for several days about whether I should talk about Physmatics, tropical mathematics or polylogarithms; but these topics deserve longer entries, or a full thread to discuss them with the details I consider very important, so finally I changed my original plan and took a different path.

This blog entry is dedicated specially to my friends out there. They are everywhere in the world. And especially to Carlos Castro and M. Pavsic (inventors of C-space/M-space relativity in Clifford spaces and the brane M-space approach to relativity with Clifford algebras, respectively), my dear friend S. Lukic (now working hard in biomathematics and mathematical approaches to genetics), A. Zinos (a promising Sci-Fi writer), J. Naranja (my best friend, photographer and eclectic man) and all my (small) group of Spanish friends (yes, you know who you are, don’t you?). I dedicate this special blog entry to my family (even if they don’t know what I am doing with this stuff; likely they have no idea at all…) and to those special people who keep me up and make me feel alive (from time to time, when they write me, in Russian words), even when the thunder sounds and the storm arises and I switch off from almost all the real world. And finally, it is also dedicated to all my unbiased followers around the world… Wherever you are… It is also dedicated to all of you…

Well, firstly I should eat a virtual cake, don’t you think so?


1. Serendipitous thoughts about my 100th blog post


Here, in my 100th post, I am going to write about some old-fashioned idea/s, likely “crackpot” by some current standards, but one that also shares interesting features with Sci-Fi and with real scientific topics like the recently introduced “time crystals” of Wilczek. The topic today is: a forgotten (likely wrong) multitemporal theory of relativity!

Why did I choose such a crazy topic? Firstly, it is an uncommon topic. Multitemporal theories, or theories with extra time-like dimensions, are generally given up or neglected by the physics community. The reasons are broad: causality issues (closed time-like curves “are bad”), the lack of experimental evidence (time seems to be 1D, doesn’t it?), vacuum instabilities induced/triggered by QM with extra time-like dimensions, and many others (some of them based on philosophical prejudices, I would say). From the pure mathematical viewpoint, extra time-like dimensions are possible and we can handle them in a similar way to space-like dimensions, but some differences arise. Let me remark that there is a complete branch of mathematics (sometimes called semi-Riemannian geometry) that deals with spaces with multiple temporal dimensions, i.e., metrics with more than one minus sign (or plus sign, depending on your sign convention).

The second reason is that I am very interested in any theory beyond the Standard Model, and particularly in any extension of Special Relativity that has been invented, and in any extension that could be built from first principles. Extended theories of relativity beyond Special Relativity do exist. The first theory beyond standard Special Relativity, to my knowledge, was metarelativity, namely: extended special relativity allowing “tachyons”. It was pioneered by Recami, Sudarshan, Pavsic and some other people, to quote only some of the people I have in mind right now. Perhaps the next (known) trial was Snyder’s non-commutative spacetime. It extends relativity beyond the realm of commutative spacetime coordinates. After these “common” extended relativities, we also have (today): deformed special relativities like Doubly or Triply Special Relativity, q-deformed versions like the kappa-Minkowski spacetime, and some other models like de Sitter (dS) relativity. These theories are “non-mainstream” today, but they certainly have some followers (I am one of them) and there are clever people involved in their development. Let me note that Special Relativity seems to hold in every high-energy experiment so far, so extended relativities have to explain the data in such a way that their deformation parameters approach the Minkowskian geometry in certain limits. Even the Kaluza-Klein approach to extra space-like dimensions is “a deformed” version of Special Relativity, somehow!

Some more modern versions of extended relativities are the theory of relativity in Clifford spaces (pioneered by Carlos Castro Perelman and Matej Pavsic, and some other relatively unknown researchers), theories based on relativity in (generalized) phase spaces with a (generalized) Finsler geometry, and very special relativity. In fact, Finsler geometries triggered another extension of special relativity long ago. Some people call this extension VERY SPECIAL relativity (or Born reciprocal relativity in phase space, a Finsler spacetime), and other people call it anisotropic spacetime relativity (especially some people from Russia and Eastern Europe). Perhaps there are some subtle differences, but they share similar principles, and I consider very special relativity and Finslerian relativity “equivalent” models (some precision should be added here from a mathematical perspective, but I prefer an intuitive approach in this post). Remember: all these extensions are out there; irrespective of whether you believe in them or not, such theories do exist. A different issue is whether Nature obeys them. They can be built, and either you neglect them due to some conservative tastes you have (Occam’s razor: you keep Minkowskian/General Relativity since they can fit every observation at a minimum “theoretical cost”), or you find some experimental fact that can falsify them (note that they can fix their deformation parameters so as to avoid the experimental bounds we have at the current time).

My third reason to write this weird and zenzizenzizenzic post is to keep an open mind. String theory and loop quantum gravity have not been “proved” right in experiments. However, they are great mathematical and theoretical frameworks. Nobody denies that, not me at least. But no evidence for the alleged predictions of string theory/Loop Quantum Gravity has been confirmed so far. Therefore, we should consider new ideas, or reconsider old-fashioned ideas, in order to be unbiased. Feynman used to say that the most dangerous thing in physics was everyone working on the same ideas/theories. Of course, we can coincide in some general ideas or principles, but theory and experiment advances are both necessary. With only one theory or idea in the city, everything is boring. Then again, the ultimate theory, if it exists, could be a boring theory, something like the SM plus gravity (asymptotically safe) up to and even beyond the Planck scale, but some people think otherwise. There are many “dark” and unglued pieces yet in Physmatics…

The final reason I will give you is that… I like strange ideas! Does that make me a crackpot? I hope you think otherwise! I wouldn’t be who I am if I enjoyed only dogmatic ideas. I usually distinguish crackpottery from “non-standard” models, so maybe a more precise definition or rule should be provided to tell the difference between them (crackpottery and non-standardness), but I believe it is quite “frame dependent” in the end. So… Let me begin now with a historical overview!

2. The search for unification and higher dimensional theories

The unification of the fundamental forces in a single theory, or unified field theory, was Einstein’s biggest dream. After the discovery that there was a pseudoeuclidean 4D geometry and a hidden symmetry in Maxwell’s equations, Einstein’s quest was to express gravity in a way that was consistent with the Minkowskian geometry in a certain limit. Maxwell’s equations in 4D can be written as follows in tensor form:

\partial^\mu F_{\mu\nu}=\mbox{Div} F_{\mu\nu}=J_\nu


(\mbox{Rot}\, F)_\mu=\dfrac{1}{2}\epsilon_{\mu\nu\sigma\tau}\partial^\nu F^{\sigma\tau}=0

where J_\nu=(-c\rho,\vec{j}) is the electromagnetic four-current. The symmetry group of these classical electromagnetic equations is the Poincaré group, or to be more precise, the conformal group, since we are neglecting the quantum corrections that break down that classical symmetry. I have not talked about the conformal group in my group theory thread, but nobody is perfect! Einstein’s field equations for gravity are the following equations (they are “common knowledge” in general relativity courses):

G_{\mu\nu}=\kappa T_{\mu\nu}

The invariance group of (classical or standard) general relativity is the diffeomorphism group (due to general covariance). Diffeomorphism-group invariance tells us that every (inertial or not) frame is a valid reference frame for all physical laws. Gravity can be “locally given away” if you use a “free fall” reference frame. The fact that you can “locally” forget about gravity is the content of Einstein’s equivalence principle. I will discuss the different classes of existing equivalence principles in a forthcoming thread on General Relativity, but this issue is not important today.
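Coming back to Maxwell’s equations for a moment: the homogeneous (“Rot”) equation holds identically once F_{\mu\nu} is built from a four-potential, since it is just the statement d^2=0. A small symbolic sketch checks this; the four-potential A below is an arbitrary illustrative choice, and the check is done with all indices down, where the identity is metric-independent:

```python
import sympy as sp

# Check that (1/2) eps_{mu nu sig tau} d_nu F_{sig tau} = 0 identically
# when F_{mu nu} = d_mu A_nu - d_nu A_mu (Bianchi identity, d^2 = 0).
# The four-potential below is an arbitrary illustrative choice.
t, x, y, z = sp.symbols('t x y z', real=True)
X = (t, x, y, z)
A = [t * x, x * y, y * z, z * t]   # hypothetical smooth potential

F = sp.Matrix(4, 4, lambda m, n: sp.diff(A[n], X[m]) - sp.diff(A[m], X[n]))

ok = True
for mu in range(4):
    total = sp.Integer(0)
    for nu in range(4):
        for sig in range(4):
            for tau in range(4):
                total += sp.LeviCivita(mu, nu, sig, tau) * sp.diff(F[sig, tau], X[nu])
    ok = ok and sp.simplify(total) == 0
print(ok)  # True
```

The inhomogeneous equation, by contrast, is a genuine dynamical statement and does not follow from the potential alone.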

What else? Well, 4D theories seem not to be good enough to explain everything! Einstein himself devoted the last years of his life to finding the unified theory of electromagnetism and gravity, ignoring the nuclear (quantum) interactions. It was his most famous failure, beyond his struggles against the probabilistic interpretation of the “new” Quantum Mechanics. Einstein’s unification dream was pursued by many others: Weyl, Kaluza, Klein, Eddington, Dirac himself, Heisenberg,… Remember that Faraday himself tried to find a relation between gravity and electromagnetism! And those dreams continue alive today. In fact, quantum field theory “unifies” electromagnetism and the weak nuclear force in the electroweak theory inside the Standard Model. It is believed that a Grand Unified Theory (GUT) should unify the electroweak force and the strong (nuclear) interaction at a certain energy scale E_X. X is called the GUT scale, and it is generally believed that it arises at about 10^{15} GeV. Unification with gravity is thought to become “relevant” at the Planck scale E_P, or about 10^{19} GeV. Therefore, we can observe that there are two main “approaches” to the complete unification of the known “fundamental interactions”:

1st. The Particle Physics path. It began with the unification of electricity and magnetism. Then we discovered the nuclear interactions. Electromagnetism and the weak interactions were unified in the 70s of the past 20th century. Then, it was conjectured that GUT unification would happen at high energy with Quantum Chromodynamics (the gauge theory of the strong nuclear force), and finally, unification with gravity at the Planck energy. Diagrammatically speaking:

\mbox{EM}\longrightarrow \mbox{Nuclear Forces}\longrightarrow \mbox{EW theory}+\mbox{QCD}\longrightarrow \mbox{EW+QCD}+\mbox{Gravity}

2nd. The Faraday-Einstein unification path. It begins with the unification of gravity and electromagnetism first! Today, it can be said that the entropic gravity/force approach of Verlinde is a revival of this second path. It is also the classical road followed by Kaluza-Klein theories: gauge fields are higher-dimensional components of a “big metric tensor” which becomes “quantized” somehow. Diagrammatically:

\mbox{EM}\longrightarrow \mbox{Gravity}\longrightarrow \mbox{EM theory}+\mbox{Gravity}\longrightarrow \mbox{EM+Gravity}+\mbox{nuclear forces}

An interesting question is whether these two paths are related, and how we might bring together the best ideas of both. For purely historical reasons, the first path has been favoured, and it has succeeded somehow. The classical “second” path is believed to be “wrong”, since it neglects Quantum Mechanics and generally struggles to explain what Quantum Field Theories do explain. Is that a proof? Of course it is NOT, but Physics and Physmatics have experimental foundations we cannot avoid. It is not only a question of “pure thought” to invent a “good theory”. You have to test it. It has to explain everything you know so far. That is how Occam’s razor works in Science. You have experiments to do and observations to explain… You cannot come up with a new theory if it is in contradiction with well-tested theories. The new theory has to include the previous theories in some limit. Otherwise, you have a completely nonsensical theory.

The second path to unification has lots of “hidden” stories and “strange theories”. Einstein’s works on teleparallelism and non-symmetric metric tensor theories were inspired by this road to unification. Has someone else followed this path?

3. Final relativity

Answer to the last question: yes! I am going to tell you about the generally unknown theory of projective relativity! It was originally created by the Italian physicist Fantappié, and it was studied and extended to multiple time-like dimensions by a Bulgarian physicist, N. Kalitzin, and an Italian physicist, G. Arcidiacono. Perhaps it shares some points with the current five-dimensional theory advocated by P. Wesson, but it is a completely different (likely parallel) history.

Fantappié (1901-1956) built a “projective” version of special relativity that he called “final relativity”. Today, it is known as de Sitter relativity or de Sitter projective relativity and, according to Lévy-Leblond, it is one of the maximal deformations of kinematical groups available in classical physics! In fact, we can “see” Fantappié’s final (projective) relativity as an anticipation of the cosmological constant as a physical reality. The cosmological constant IS a physical parameter in final relativity, associated with the radius of the Universe. If you take this statement as “true”, you are driven to think that the cosmological constant is out there as a true “thing”. Setting aside the mismatch with our current QFT calculations of the vacuum energy, de Sitter relativity/final projective relativity does imply the existence of the cosmological constant! Of course, you should then explain why our QFT calculations are wrong in the way they are… But that is a different story. At the current time, WMAP/Planck have given strong evidence that Dark Energy, a.k.a. the cosmological constant, is real. So we should rethink the way in which it enters physics. Should we include a new symmetry in QFT (de Sitter symmetry) in order to solve the cosmological constant problem? It is a challenge! Usually, QFTs are formulated in Minkowski space, but QFT calculations in Minkowski spacetime give no explanation of its cosmological value. Maybe we should formulate QFT taking into account the cosmological constant value. As far as I know, QFTs defined on de Sitter spaces are much less developed than those on anti-de Sitter spaces, the latter being popular because of the AdS/CFT correspondence. There are some interesting works about QFT in dS spaces in the arXiv. There are issues, though: e.g., the vacuum definition and QFT calculations in a dS background are “harder” than their Minkowskian counterparts! But I believe it is a path to be explored further!

Fantappié also had a hierarchical “vision” of higher-dimensional spaces. He defined “hyperspherical” universes S_n contained in rotation groups R_{n+1} with (n+1) euclidean dimensions and n(n+1)/2 group parameters. He conjectured that the hierarchy of hyperspherical universes S_3, S_4, \ldots, S_n provided a generalization of the Maxwell equations and, with the known connection between S_n and R_{n+1}, Fantappié tried the construction of a unified theory with extra dimensions (a cosmological theory, indeed) with the aid of his projective relativity principle. He claimed to be able to generalize Einstein’s gravitational field equations to electromagnetism, following the second path to unification that I explained above. I don’t know why Fantappié’s final projective relativity (or de Sitter relativity) is not better known. I am no expert in the History of Physics, but some people and ideas remain buried or get new names (de Sitter relativity is “equivalent” to final relativity) without an apparent reason at first sight. Was Fantappié a crackpot? Something tells me that Fantappié was an unusual Italian scientist, like Majorana, but not as brilliant. After all, Fermi, Pauli and other contemporary physicists did not cite his works.

From projective relativity to multitemporal relativity

What about “projective relativity”? It is based on projective geometry. And today we know that projective geometry is related to, and used in, Quantum Mechanics! In fact, if we take the r=R\longrightarrow \infty limit of “projective” geometry, we end up with “classical” geometry, and then S_n becomes E_n, the euclidean space, when the projective radius tends to “infinity”. Curiously, this idea of projective geometry and projective relativity remained hidden for several decades after Fantappié’s death (or so it seems). Only G. Arcidiacono and N. Kalitzin, from a completely different multitemporal approach, worked on such an “absolutely crazy” idea. My next exposition is a personal revision of the Arcidiacono-Kalitzin multitemporal projective relativity. Suppose you are given, beyond the 3 standard spatial dimensions, (n-3) new parameters. They are ALL time-like, i.e., you have an (n-3)-component time vector

\vec{t}=\left( t_1,t_2,\ldots,t_{n-3}\right)

We have (n-3) time-like coordinates and (n-3) “proper times” \tau_s, with s=1,2,\ldots,n-3. Therefore, we will also have (n-3) different notions or “directions” of “velocity”, which we can choose mutually orthogonal and normalized. Multitemporal (projective) relativity arises in this n-dimensional setting. Moreover, we can introduce (n-3) a priori “different” universal constants/speeds of light c_s and a projective radius of the Universe, R. Kalitzin himself worked with complex temporal dimensions, and he even took the limit of \infty temporal dimensions, but we will not follow this path here, for simplicity. Furthermore, Kalitzin gave no physical interpretation of those extra time-like dimensions/parameters/numbers. On the other hand, G. Arcidiacono suggested the following “extension” of the Galilean transformations:

\displaystyle{\overline{X}=f(X)=\sum_{n=0}^\infty \dfrac{X^{(n)}(0)t^n}{n!}}



These transformations are nonlinear, but they can be linearized in a standard way. Introduce (n-3) normalized “times” as follows:

t_1=t, t_2=t^2/2,\ldots, t_s=t^{s}/s!

Remark: To be dimensionally correct, one should introduce here some kind of “elementary unit of time” to match the different powers of time.

Remark (II): Arcidiacono claimed that with 2 temporal dimensions (t,t'), and n=5, one gets “conformal relativity” with 3 universal constants (R,c,c'). In 1946, Corben introduced gravity in such a way that he related the two speeds of light (and the temporal dimensions), so you get R=c^2/c' when you consider gravity. Corben speculated that R=c^2/c' could be related to the Planck length L_p. Corben’s article is titled A classical theory of electromagnetism and gravity, Phys. Rev. 69, 225 (1946).

Arcidiacono’s interpretation of Fantappié’s hyperspherical universes is as follows: Fantappié’s hyperspheres represent spherical surfaces in n dimensions, and these surfaces are embedded in a certain euclidean space with (n+1) dimensions. Thus, we can introduce (n+1) parameters or coordinates

\xi_A=(\xi_0,\xi_1,\ldots,\xi_n)

and the hypersphere

\xi_0^2+\xi_1^2+\cdots+\xi_n^2=r^2

Define the transformations

\xi'_A=\alpha_{AB}\xi_B with A,B=0,1,2,\ldots,n

where \alpha_{AB} are orthogonal (n+1)\times (n+1) matrices with \det \alpha_{AB}=+1 for proper rotations. Then R_{n+1}\supset R_n, and rotations in the (\xi_A,\xi_B) plane are determined by n(n+1)/2 rotation angles. Moreover, you can introduce (n+1) projective coordinates (\overline{x}_0,\overline{x}_1,\ldots,\overline{x}_n) such that the euclidean coordinates (x_1,x_2,\ldots,x_n) are related to the projective coordinates in the following way:

\boxed{x_i=\dfrac{r\overline{x}_i}{\overline{x}_0}}\;\; \forall i=1,2,\ldots,n

Projective coordinates are generally visualized with the aid of the Beltrami-Riemann sphere, sometimes referred to as the Bloch or Poincaré sphere in Optics. The Riemann sphere is used in complex analysis.


This sphere is also used in Quantum Mechanics! In fact, projective geometry is the natural geometry for states in Quantum Physics. It is also useful in the Majorana representation of spin, also called the star representation of spin by some authors, and Riemann spheres are also the fundamental complex projective objects in Penrose’s twistor theory! To illustrate these statements, let me use some nice slides I found here http://users.ox.ac.uk/~tweb/00006/



Note: I am not going to explain twistor theory or Clifford algebra today, but I urge you to read the 2 wonderful books by Penrose on Spinors and Spacetime, or, in the case you are mathematically traumatized, you can always read his popular books or his legacy for everyone: The Road to Reality.

Projective coordinates are “normalized” in the sense

\overline{x}_0^2+\ldots+\overline{x}_n^2=r^2, i.e., \overline{x}_A\overline{x}_A=r^2 (with an implicit sum over A=0,1,\ldots,n)

This suggests that we introduce a pythagorean (“euclidean-like”) projective “metric”:

ds^2=\dfrac{r^2\left[(r^2+x_sx_s)dx_idx_i-(x_idx_i)^2\right]}{(r^2+x_sx_s)^2}
It is sometimes called the Beltrami metric. You can rewrite this metric in the following equivalent notation

A^4ds^2=A^2(dx^idx^i)-(\alpha_i dx^i)^2


where A^2=1+\alpha_s\alpha_s and \alpha_s=x_s/r.

Some algebraic manipulations provide the fundamental tensor of projective relativity:

\boxed{A^4 g_{ik}=A^2\delta_{ik}-\dfrac{x_ix_k}{r^2}}


Its determinant is \vert g_{ik}\vert =g=A^{-2(n+1)}, so

\boxed{g^{ik}=(g_{ik})^{-1}=A^2\left( \delta_{ik}+\dfrac{x_ix_k}{r^2}\right)}
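These two boxed expressions can be checked to be mutual inverses numerically. A quick sketch (the dimension, the radius and the point x are arbitrary illustrative choices):

```python
import numpy as np

# Numerical sanity check: g_ik and g^ik as given are mutual inverses.
# The dimension, radius and the point x are arbitrary illustrative choices.
rng = np.random.default_rng(0)
n, r = 4, 2.0
x = rng.normal(size=n)

A2 = 1.0 + np.dot(x, x) / r**2                        # A^2 = 1 + alpha_s alpha_s
g = (A2 * np.eye(n) - np.outer(x, x) / r**2) / A2**2  # g_ik = (A^2 d_ik - x_i x_k/r^2)/A^4
g_inv = A2 * (np.eye(n) + np.outer(x, x) / r**2)      # g^ik = A^2 (d_ik + x_i x_k/r^2)

print(np.allclose(g @ g_inv, np.eye(n)))  # True
```

The key algebraic fact is that (x x^T)(x x^T)=\vert x\vert^2\, x x^T, which is what makes the cross terms cancel.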

In this projective space, the D’Alembertian operator and the associated wave equation are defined by

\boxed{\square^2 \varphi =\dfrac{1}{\sqrt{g}}\partial_i\left(\sqrt{g}g^{ik}\partial_k \varphi\right)=0}

Using projective “natural” coordinates with r=1 to simplify the analysis, and recalling that \sqrt{g}=A^{-(n+1)}, we get

\square^2\varphi=A^{n+1}\partial_i\left(A^{1-n}\left(\delta_{ik}+x_ix_k\right)\partial_k\varphi\right)
But we know that

\partial_iA^{1-n}=(1-n)A^{-1-n}x_i
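This derivative identity is easy to verify symbolically; a minimal sympy sketch, where the value n=5 and the use of 3 coordinates are illustrative choices:

```python
import sympy as sp

# Symbolic check of d/dx_i A^(1-n) = (1-n) A^(-1-n) x_i with r = 1.
# The value n = 5 and the use of 3 coordinates are illustrative choices.
n = 5
x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
A = sp.sqrt(1 + x1**2 + x2**2 + x3**2)   # A^2 = 1 + x_s x_s (natural units r = 1)

lhs = sp.diff(A**(1 - n), x1)
rhs = (1 - n) * A**(-1 - n) * x1
print(sp.simplify(lhs - rhs) == 0)  # True
```

The identity is just the chain rule together with \partial_i A = x_i/A.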
Expanding the derivative with this identity (still with r=1), one finds

\square^2\varphi=A^2\left(\partial_i\partial_i\varphi+x_ix_k\partial_i\partial_k\varphi+2x_k\partial_k\varphi\right)=0
And then, if r\neq 1, we have the projective D’Alembertian operator

\boxed{r^2\square^2\varphi=A^2\left(r^2\partial_i\partial_i\varphi +x_ix_k\partial_i\partial_k\varphi+2x_k\partial_k\varphi\right)=0}

Here, R_{n+1} is the tangent space (a projective space) with \overline{x}'_A=\alpha_{AB}\overline{x}_B, and where A,B=0,1,\ldots,n. We can return to the “normal” unprojective relativistic framework choosing

\overline{x}_i=\dfrac{x_i}{A}\;\;\mbox{and}\;\;\overline{x}_0=\dfrac{r}{A}

with x_i=0 and A=1, so that \overline{x}_A=(r,0,\ldots,0). That is, in summary: in projective relativity, using a proper relativistic reference frame, the position vector has NULL components except the 0th component x_0=r=R! Thus, \overline{x}_A=(r,0,\ldots,0) is a “special” reference frame in projective relativity. This phenomenon does not happen in euclidean or pseudoeuclidean relativity, but there is a “similar” phenomenon in group theory when you reduce the de Sitter group to the Poincaré group using a tool named Inönü-Wigner group contraction. I will not discuss this topic here!

4. Kalitzin’s metric: multitemporal relativity

It should be clear enough by now that, from (x_1,\ldots,x_n), via \overline{x}_i=x_i/A and \overline{x}_0=r/A, in the limit of infinite radius R\longrightarrow \infty everything reduces to the cartesian euclidean spaces E_3,E_4,\ldots,E_n. Nicola Kalitzin (1918-1970) was, to my knowledge, one of the few (crackpot?) physicists who studied multitemporal theories during the 20th century. He argued/claimed that the world is truly higher-dimensional, but that ALL the extra dimensions are TIME-like! It is quite a claim, especially from a phenomenological viewpoint! As far as I know, he wrote a book/thesis, see here http://www.getcited.org/pub/101913498 but I have not been able to read a copy. I learned about his works thanks to some papers in the arXiv and a Bulgarian author (Z. Andonov) who writes about him in his blog, e.g., here http://www.space.bas.bg/SENS2008/6-A.pdf

Arcidiacono has a nice review of Kalitzin’s multitemporal relativity (in the case of a finite number n of temporal dimensions), but I will modify it a little bit to adapt the introduction to modern times. I define the Kalitzin metric as the following semiriemannian metric

\boxed{\displaystyle{ds^2_{KAL}=dx_1^2+dx_2^2+dx_3^2-c_1^2dt_1^2-c_2^2dt_2^2-\ldots -c_{n-3}^2dt_{n-3}^2=\sum_{i=1}^3dx_i^2-\sum_{j=1}^{n-3}c_j^2dt_j^2}}

Remark (I): It is evident that the above metric reduces to the classical euclidean metric or to the Minkowski spacetime metric in the limits where we set c_j=0\;\forall j (euclidean case), or c_1=c and c_j=0\;\forall j=2,\ldots,n-3 (Minkowski case). There is ANOTHER way to recover these limits, but it involves some trickery I am not going to discuss today. After all, new mathematics requires a suitable presentation! And for all practical purposes, the previous reduction does the job (at least today).
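As a small numerical sketch (the function name is mine, not Kalitzin’s), the line element above can be evaluated directly; with a single time direction and c_1=1 it reproduces the usual Minkowski interval:

```python
def kalitzin_interval2(dx, dt, c):
    """Squared Kalitzin interval: ds^2 = sum_i dx_i^2 - sum_j c_j^2 dt_j^2.

    dx: spatial displacements (dx_1, dx_2, dx_3)
    dt: temporal displacements (dt_1, ..., dt_{n-3})
    c:  one limit speed per time direction (c_1, ..., c_{n-3})
    """
    assert len(dt) == len(c), "one lightspeed per temporal dimension"
    return sum(x * x for x in dx) - sum((cj * tj) ** 2 for cj, tj in zip(c, dt))

# A single time direction with c_1 = 1 recovers the Minkowski interval:
print(kalitzin_interval2((3.0, 0.0, 0.0), (5.0,), (1.0,)))  # 9 - 25 = -16.0
```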

Remark (II): Just an interesting crazy connection with algebraic “stuff” ( I am sure John C. Baez can enjoy this if he reads it)…

i) If n-3=0, then we have n=3+0 or 3D “real” (euclidean) space, with 0 temporal dimensions in the metric.

ii) If n-3=1, then we have n=3+1 or 4D pseudoeuclidean (semiriemannian?) spacetime or, equivalently, the (old-fashioned?) x_4=ict relativity with ONE imaginary time, i.e., with 1 temporal dimension and 1 “imaginary unit” related to time!

iii) If n-3=2, then we have n=3+2=5 or 5D semiriemannian spacetime, a theory with 2 temporal imaginary dimensions, or 1 complex number (after complexification, we can take one real plus one imaginary unit), maybe related to projective dS/adS relativity in 5D, with -i_0^2=-1=i_1^2?

iv) If n-3=3, then we have n=3+3=6 or 6D semiriemannian spacetime, a theory with 3 temporal dimensions and 3 “imaginary units” related to …Imaginary quaternions i^2=j^2=k^2=-1?

v) If n-3=7, then we have n=3+7=10 or 10D semiriemannian spacetime, a theory with 7 temporal dimensions and 7 “imaginary units” related to… imaginary octonions i_1^2=i_2^2=\ldots =i_7^2=-1?

vi) If n-3=8, then we have n=3+8=11 or 11D semiriemannian spacetime, a theory with 8 temporal dimensions and 8 “units” related to… octonions -i_0^2=i_1^2=i_2^2=\ldots =i_7^2=-1?

Remark (III): The hidden division algebra connection  with the temporal dimensions of higher dimensional relativities and, in general, multitemporal relativities can be “seen” from the following algebraic facts

n-3=0\leftrightarrow n=3=3+0\leftrightarrow t\in\mathbb{R}

n-3=1\leftrightarrow n=4=3+1\leftrightarrow t\in\mbox{Im}\mathbb{C}

n-3=2\leftrightarrow n=5=3+2\leftrightarrow t\in\mathbb{C}

n-3=3\leftrightarrow n=6=3+3\leftrightarrow t\in\mbox{Im}\mathbb{H}

n-3=4\leftrightarrow n=7=3+4\leftrightarrow t\in\mathbb{H}

n-3=7\leftrightarrow n=10=3+7\leftrightarrow t\in \mbox{Im}\mathbb{O}

n-3=8\leftrightarrow n=11=3+8\leftrightarrow t\in\mathbb{O}
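The correspondence above just matches the number of time-like dimensions to the real dimension of each division algebra (1, 2, 4, 8) or of its imaginary part (0, 1, 3, 7); a tiny script spells it out:

```python
# Real dimensions of the normed division algebras and of their imaginary parts.
algebras = {"R": 1, "C": 2, "H": 4, "O": 8}

for name, dim in algebras.items():
    im = dim - 1  # imaginary units: 0, 1, 3, 7
    print(f"t in Im {name}: n = 3 + {im} = {3 + im};  t in {name}: n = 3 + {dim} = {3 + dim}")
```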

Remark (IV): Was the last remark suggestive? I think it was, but the main problem is how we should understand “additional temporal dimensions”. Are they real? Do they exist? Are they a joke, as Feynman suggested when he derived electromagnetism from a non-associative “octonionic-like” multitemporal argument? I know, all this is absolutely crazy!

Remark (V): What about (n-3)\longrightarrow \infty temporal dimensions? In fact, Kalitzin’s multitemporal relativity and his works speculate about having \infty temporal dimensions! I know, it sounds absolutely crazy and ridiculous! Especially due to the constants, it would seem that there are convergence issues and some other weird stuff, but they can be avoided if you are “clever and sophisticated enough”.

The Kalitzin metric introduces (n-3) (a priori) “different” lightspeed species! If you faced problems understanding “light” in 4D Minkowskian relativity, how do you feel about \vec{C}=(c_1,\ldots,c_{n-3})? Therefore, we can introduce (n-3) proper times (note that, as far as I know, N. Kalitzin introduced only a single proper time; I can not be sure, since I have no access to his papers at the moment, but I wish to get it in the future!):

\boxed{-c_s^2d\tau_s^2=dx_1^2+dx_2^2+dx_3^2-c_1^2dt_1^2-\ldots-c_{n-3}^2dt_{n-3}^2}\;\forall s=1,\ldots,n-3

Therefore, we can define the generalized \beta_s and \Gamma_s parameters, the multitemporal analogues of \beta and \gamma, in the following way. Fix some s, with its c_s and \tau_s. Then, we have





\displaystyle{\dfrac{d\tau_s^2}{dt_s^2}=1-\dfrac{(d\vec{x})^2}{c_s^2(dt_s)^2}+\sum_{k\neq s}\dfrac{c_k^2dt_k^2}{c_s^2dt_s^2}}

Define \beta_s= v_{(s)}/c_s and B_s= 1/\Gamma_s (be aware of that last notation), where \Gamma_s, B_s are defined via the next equation:

\boxed{\displaystyle{B_s= \dfrac{1}{\Gamma_s}=\sqrt{1-\beta_s^2+\sum_{k\neq s}\left(\dfrac{c_kdt_k}{c_sdt_s}\right)^2}=\sqrt{1-\dfrac{v_{(s)}^2}{c_s^2}+\sum_{k\neq s}\left(\dfrac{c_kdt_k}{c_sdt_s}\right)^2}}}

and where

\overrightarrow{V}_{(s)}=\vec v_{(s)}=\dfrac{d\vec{x}}{dt_s},\;\;\mbox{with components}\;\;v_\alpha^{(s)}=\dfrac{dx_\alpha}{dt_s}\;\;\forall \alpha=1,2,3


\boxed{d\tau_s=B_sdt_s} or \boxed{dt_s=\Gamma_s d\tau_s}

Therefore, we can define (n-3) different notions of “proper” velocity:

\boxed{u_i^{(s)}=V^{(s)}=\dfrac{dx_i}{d\tau_s}=\dfrac{1}{B_s}\dfrac{dx_i}{dt_s}=\Gamma_s\dfrac{dx_i}{dt_s}=\Gamma_s \vec v_s}
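To make the bookkeeping concrete, here is a tiny numerical sketch (the function name and unit choices are mine) of the \Gamma_s=1/B_s factor built from the coordinate-time ratios above; with a single time direction the cross terms vanish and \Gamma_s collapses to the ordinary Lorentz factor:

```python
import math

def gamma_s(v, s, c, dt):
    """Multitemporal Gamma_s = 1/B_s for a chosen time label s (0-based here).

    v:  ordinary speed |dx|/dt_s measured with the clock t_s
    c:  limit speeds (c_1, ..., c_{n-3})
    dt: coordinate-time increments (dt_1, ..., dt_{n-3})
    """
    beta2 = (v / c[s]) ** 2
    cross = sum((c[k] * dt[k] / (c[s] * dt[s])) ** 2
                for k in range(len(c)) if k != s)
    return 1.0 / math.sqrt(1.0 - beta2 + cross)

# With a single time direction the cross terms vanish and we recover
# the usual Lorentz factor:
g = gamma_s(v=0.6, s=0, c=[1.0], dt=[1.0])
print(g)  # 1.25
```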

5. Spacetime crystals and crystalline relativity: concepts and results

In the reference frame where x_i=0 AND B_s=1, then u_i=0 for all i=1,2,3, BUT there are (n-3) “imaginary” components! That is, in that particular frame we have

\boxed{u_{s+3}^{s}=ic_s} \;\;\forall s

and thus


This (very important) last equation is strikingly similar to the relationship of reciprocal vectors in solid state physics, but extended to the whole spacetime (in the temporal dimensions!). This is what I call “spacetime crystals” or “crystalline (multitemporal) relativity”. Relativity with extra temporal dimensions allows us to define some kind of “relativity” in which the different proper velocities define a sort of (relativistic) lattice. Wilczek came to the idea of the “time crystal” in order to search for “periodicities” in the time dimension. With only one timelike dimension, the possible “lattices” are quite trivial. Perhaps the only way to avoid that would be to consider 1D quasicrystals coming from “projections” of higher dimensional “crystals” (quasicrystals in lower dimensions can be thought of as crystals in higher dimensions). However, if we extend the notion of unidimensional time and study several time-like dimensions, new possibilities arise to build “time crystals”. Of course, the detection of extra timelike dimensions is both an experimental and a theoretical challenge, but, if we give up or solve the problems associated with multiple temporal dimensions, it becomes clear that “time crystals” in D>1 are interesting objects in their own right! Could elementary particles be “phonons” in a space-time (quasi)crystal? Is crystalline (multitemporal) relativity realized in Nature? Our common experience would suggest the contrary, but it could be interesting to pursue this research line a little bit! What would be the experimental consequences of the existence of spacetime crystals/crystalline relativity? If you have followed the previous discussion: spacetime crystals are related to different notions of proper velocity (the analogue of reciprocal vectors in solid state physics) and to the existence of “new” limit velocities or “speeds of light”.
We only understand 5% of the universe, according to WMAP/Planck, so I believe this idea could be interesting in the near future, but at the moment I can not imagine any kind of experiment to search for these “crystals”. Where are they?

Remark: In Kalitzinian metrics, “hyperphotons” or “photons” are defined in the usual way, i.e., ds_{KAL}^2=0, so

\mbox{Hyperphotons}: ds_{KAL}^2=0\leftrightarrow dx_1^2+dx_2^2+dx_3^2=c_1^2dt_1^2+\ldots+c_{n-3}^2dt_{n-3}^2

Remark(II): In multitemporal or crystalline relativities, we have to be careful with the notion of “point” at the local level, since we have different notions of “velocity” and “proper velocity”. Somehow, at every point we have a “fuzzy” fluctuation along certain directions of time (of course, we can neglect them if we take the limit of zero/infinite lightspeed along some temporal direction/time vectors). Then, past, present and future are “fuzzy” notions in any spacetime where we take a multitemporal approach! In the theory of relativity in Clifford spaces, something similar happens when you consider every possible “grade” and multivector component of a suitable cliffor/polyvector. The notion of “point” becomes meaningless, since you attach new “degrees of freedom” to the point. In fact, relativity in Clifford spaces is “more crystalline” than multitemporal relativity, since it includes not only vectors but bivectors, trivectors,… See this paper for a nice review: http://vixra.org/pdf/0908.0084v1.pdf

Remark (III):  Define the “big lightspeeds” in the following way

\boxed{C_s^2=v_s^2=\dfrac{(dx_i)^2}{(dt_s)^2}}\;\;\forall s=1,2,\ldots,n-3


\boxed{C_s=v_s=\dfrac{dx_i}{dt_s}}\;\;\forall s=1,2,\ldots,n-3

Then, we have




\displaystyle{C_s^2=c_s^2\left(1+\sum_{k\neq s}^{n-3}\left(\dfrac{c_kdt_k}{c_sdt_s}\right)^2\right)}

where we note that

\boxed{\displaystyle{C_s^2=c_s^2\left(1+\sum_{k\neq s}^{n-3}\left(\dfrac{c_kdt_k}{c_sdt_s}\right)^2\right)}\geq c_s^2}


\boxed{\displaystyle{C_s=c_s\sqrt{\left(1+\sum_{k\neq s}^{n-3}\left(\dfrac{c_kdt_k}{c_sdt_s}\right)^2\right)}}\geq c_s}

The bound is saturated whenever c_s\longrightarrow\infty or c_k=0\;\forall k\neq s. Such conditions, or the hypothesis of unidimensional time, leave us with the speed of light barrier, but IT IS NO LONGER A BARRIER IN A MULTITEMPORAL SET-UP!
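A quick numerical check of the bound, with made-up limit speeds and time increments (the function name is mine):

```python
import math

def big_lightspeed(s, c, dt):
    """C_s = c_s * sqrt(1 + sum_{k != s} (c_k dt_k / (c_s dt_s))^2)."""
    cross = sum((c[k] * dt[k] / (c[s] * dt[s])) ** 2
                for k in range(len(c)) if k != s)
    return c[s] * math.sqrt(1.0 + cross)

# Arbitrary (made-up) speeds and increments; the bound C_s >= c_s always holds:
c = [1.0, 2.0, 0.5]
dt = [1.0, 0.3, 0.7]
for s in range(len(c)):
    Cs = big_lightspeed(s, c, dt)
    assert Cs >= c[s]
    print(f"s = {s}: C_s = {Cs:.3f} >= c_s = {c[s]}")
```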

Remark (I): Just for fun… Sci-Fi writers are wrong when they use “hyperspace” to skip the lightspeed barrier. What allows us to evade such a barrier is MULTITEMPORAL TIME, or hypertime. Of course, if they mean “hyperspacetime”, it would not be so wrong. It is trivial to observe that if you include extra SPACE-LIKE dimensions, and you keep Lorentz invariance in higher dimensions, you can NOT escape the speed of light limit in a classical “way”. Of course, you could use wormholes, Alcubierre drives or quantum “engines”, but they belong to a different theoretical domain I am not going to explain here. Not now.

Remark (II): If we suppose that every speed of light is constant (homogeneity in extradimensional time) and, in addition, that they are all equal to the same number, say the known c, i.e., if we write

c_1=c_2=\ldots=c_{n-3}=c

then (taking equal coordinate-time increments dt_k=dt_s along every temporal direction) we can easily obtain that

\boxed{C_s=\sqrt{n-3}\;c}

And then, we have

1) n=3 (0 timelike dimensions) implies that C_s=c_s=0

2) n=4 (1 timelike dimension) implies that C_s=c_s=c

3) n=5 (2 timelike dimensions) implies that C_s=\sqrt{2}c\approx 1.4c

4) n=6 (3 timelike dimensions) implies that C_s=\sqrt{3}c\approx 1.7c

5) n=7 (4 timelike dimensions) implies that C_s=\sqrt{4}c=2c

6) n=8 (5 timelike dimensions) implies that C_s=\sqrt{5}c\approx 2.2c

7) n=9 (6 timelike dimensions) implies that C_s=\sqrt{6}c\approx 2.4c

8) n=10 (7 timelike dimensions) implies that C_s=\sqrt{7}c\approx 2.6c

9) n=11 (8 timelike dimensions) implies that C_s=\sqrt{8}c\approx 2.8c

10) n=12 (9 timelike dimensions) implies that C_s=\sqrt{9}c=3c

11) n=\infty (\infty-3=\infty timelike dimensions) implies that C_s=\infty, and you can travel at virtually any velocity!!! But of course, it seems this is not real; infinite timelike dimensions sound like completely crazy stuff!!! I should go to the doctor…
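The table above is just C_s=\sqrt{n-3}\,c evaluated for successive n; a two-line script reproduces it:

```python
import math

# If every limit speed equals c (and the dt_k are equal), then C_s = sqrt(n-3) c.
c = 1.0
table = {n: math.sqrt(n - 3) * c for n in range(4, 13)}
for n, Cs in table.items():
    print(f"n = {n:2d} ({n - 3} timelike dims): C_s = sqrt({n - 3}) c ≈ {Cs:.1f} c")
```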

Remark(III): The main lesson you should learn from this is that spacelike dimensions can not change the speed of light barrier. On the contrary, the true power of extra timelike dimensions is understood when you realize that “higher dimensional” excitations of the “temporal dimensions” provide a way to surpass the speed of light. I have no idea how to manage this; I am only explaining to you the consequences of the previous stuff.

Remark (IV): Just for fun (or not). I am a big fan of Asimov’s books, especially the Foundation series and the Robot stories. When I discovered these facts, long ago, I wondered whether Isaac Asimov had met Kalitzin or Arcidiacono (I think he could not have met Fantappie or Fantappie’s works about projective relativity, but I am sure he knew a little bit about hyperspace and hypertime, even if, like many others even today, he confused the ideas of hyperspace and hypertime; sometimes he seemed to know more than he was explaining). I seem to remember a quote from one of his books in which a character said something like “(…)One of the biggest mistakes of theoretical physicists is to confuse the hyperspace unlimited C with the bounded velocity c in usual relativity(…)”. Those are surely not the exact words, but I remember reading something like that in one of his books. I can not remember which one, and I have no time to search for it right now, so I leave that activity to you: to find out where Asimov wrote something very close to it. Remember, my words are not quite exact, I presume… I have not read a “normal” Sci-Fi book in years!

6. Enhanced galilean relativity

Arcidiacono worked out a simple example of a multitemporal theory. He formulated the enhanced galilean group in the following way

x'=x+V_1t+V_2\dfrac{t^2}{2!}+\ldots+V_{n-3}\dfrac{t^{n-3}}{(n-3)!}

with V_1 the velocity, V_2 the acceleration, V_3 the jerk,…V_{n-3} the (n-3)th order velocity. He linearized that nonlinear group using the transformations

t_s=t^s/s! \forall s=1,2,\ldots,n-3

and it gives

x'=x+V_1t_1+V_2t_2+\ldots+V_{n-3}t_{n-3}

So we have a group matrix

G=\begin{pmatrix}1 & V_1 & \cdots & V_{n-3}\\ 0 & 1 & \cdots & 0\\ \cdots & \cdots & \cdots & \cdots\\ \cdots & \cdots & \cdots & 1\end{pmatrix}
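A quick way to see that these matrices really form an (abelian) group is to multiply two of them: the velocities of every order simply add. A small sketch (helper names are mine):

```python
def G(vels):
    """Linearized group element: identity with (V_1, ..., V_{n-3}) in the first row."""
    n = len(vels) + 1
    m = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for j, v in enumerate(vels, start=1):
        m[0][j] = v
    return m

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# The composition law: velocities of every order simply add (an abelian group).
assert matmul(G([1.0, 2.0]), G([3.0, 4.0])) == G([4.0, 6.0])
print("G(V) G(W) = G(V + W)")
```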

The simplest case is usual galilean relativity

x'=x+Vt,\;\;\; t'=t

The second simplest example is two-time enhanced galilean relativity:

x'=x+V_1t_1+V_2t_2

t'_1=t_1,\;\;\; t'_2=t_2

If we use that V_1=V and t_s=t^s/s!, then we have


and then


With 2 times, we have V_2=V/t, and moreover, the free point particle referred to t_s satisfies (according to Arcidiacono)

\dfrac{d^2x}{dt_s^2}=0\leftrightarrow \dfrac{d^2x}{dt^2}-\left(\dfrac{s-1}{t}\right)\dfrac{dx}{dt}=0
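One can check numerically that x\propto t^s solves the equation above, as expected: x proportional to t_s=t^s/s! is a straight worldline with respect to the clock t_s. A quick finite-difference sketch:

```python
# Numerical check that x(t) = t**s solves x'' - ((s-1)/t) x' = 0.
def residual(s, t, h=1e-4):
    x = lambda u: u ** s
    xp = (x(t + h) - x(t - h)) / (2 * h)             # first derivative
    xpp = (x(t + h) - 2 * x(t) + x(t - h)) / h ** 2  # second derivative
    return xpp - ((s - 1) / t) * xp

for s in (1, 2, 3, 4):
    assert abs(residual(s, t=2.0)) < 1e-3
print("x = t**s is a free worldline for every order s")
```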

Let us work out this case in more detail



where we have 3 spatial coordinates (x,y,z) and two times (t,t’). Performing the above transformations


T=t,\;\;\; T'=t'

with velocities

V=\dfrac{dx}{dt} and V'=\dfrac{dx}{dt'}, and with V'=V/t. If V=At, then V'=A, so a second order velocity becomes the constant acceleration in that frame. Furthermore


implies that

\dfrac{dV}{dt}=\dfrac{V}{t} and x=At^2/2

That is, invariant mechanics under uniformly accelerated motion with “multiple” velocities is possible! In fact, in this framework, uniformly accelerated motion seems to be “purely inertial”, or equivalently, it seems to be “fully machian”!!!!

If a uniformly accelerated gravitational field is applied to the point particle then, in this framework, it seems to suggest that it “changes” the time scale by a quantity


and it becomes a uniform motion! If a body moves uniformly, changing the scale of time, in multitemporal relativity, it becomes uniformly accelerated! I don’t understand this claim well enough, but it seems totally crazy or completely… suggestive of a purely machian relativity? Wilczek called it “total relativity” long ago…

7. Conformal two-time relativity and gravitation

A conformal relativity with two time dimensions was also studied by Arcidiacono (quite naively, I believe). He also studied a metric


with a conformal time


Note that c\longrightarrow \infty implies that t'=t^2/2. This yields some kind of hyperbolic motion




Remark: Ax^2+2c^2x-Ac^2t^2=0\leftrightarrow x=\dfrac{A}{2c^2}\left(c^2t^2-x^2\right). Introducing a second time t' via x=At', then V'=A, where


and again V'=A produces the “classical relativity”.

Remark(II): Projective special relativity should produce some kind of “projective general relativity” (Arcidiacono claimed). This is quite a statement, since the diffeomorphism group in general relativity contains “general coordinate transformations”. I am not sure what he meant by that. Anyway, a projective version of “general relativity” is provided by twistor theory and similar theories, due to the use of complex projective spaces and their generalizations. Conformal special relativity should imply some class of conformal general relativity. However, physical laws are not (apparently) invariant under conformal transformations in general. What about de Sitter/anti de Sitter spaces? I have to learn more about that and will tell you about it in the future. Classical electromagnetism, and even pure Yang-Mills theories at the classical level, can be made invariant under conformal transformations only with special care. Quantum Mechanics seems to break that symmetry due to the presence of mass terms that spoil the gauge invariance of the theory, not only the conformal symmetry. Only the Higgs mechanism and “topological” terms allow us to introduce “mass terms” in a gauge invariant way! Anyway, remember that Classical Mechanics is based on symplectic geometry, very similar to projective geometry in some circumstances, that Classical Field Theories contain fiber bundles, and that some special classes of field theories, like Conformal Field Theories or even String Theory, have elements of projective geometry in their own manner. Moreover, conformal symmetries are also an alternative approach to new physics. For instance, Georgi created the notion of a “hidden conformal sector” BSM theory, something that he called “unparticles”. People generalized the concept, and you can read about “ungravity” as well. Unparticles, ungravity, unforces… Really weird stuff!!! Did you think multiple temporal dimensions were the only uncommon “ugly ducks” in the city?
No, they weren’t… Crazy ideas are everywhere in theoretical physics. The real point is to find applications for them and/or to find them in real experiments! It happened with this Higgs-like particle at about 127 GeV/c². And I think Higgs et al. will deserve a Nobel Prize this year because of it.

Remark (III): Final relativity, in the sense of Fantappie’s ideas, has to have a different type of Cosmology… In fact it does: a dS relativity Cosmology! The Standard Cosmological Model fits the vacuum energy (more precisely, we “fit” \Omega_\Lambda). It is important to understand what \Lambda is; the Standard Cosmological Model does not explain it at all. We should explore the kinematical and cosmological models induced by the de Sitter group and its associated QFT. However, QFT on dS spaces is not fully developed, so that is an important research line for the future.

8. Hyperspherical electromagnetism and multitemporal relativity

Arcidiacono generalized electromagnetism to multitemporal dimensions (he naively, and likely wrongly, thought he had unified electromagnetism and hydrodynamics) with the following equations



where A,B=0,1,\ldots, n. The tensor H_{AB} has n(n+1)/2 components. The integrability conditions are




We can build some potentials U_A and V_{ABC}, such that



with H_{AB}=\mbox{Div}V_{ABC}+\mbox{Rot}U_A

we have

\square^2V_{ABC}=J_{ABC} and \square^2 U_A=I_A

A generalized electromagnetic force is introduced


If f_A=\mbox{Div}T_{AB}, then the energy-momentum tensor will be


For position vectors \overline{x}_A, we have (n-3) projective velocities \overline{u}_A^s, such that



where \overline{x}_A\overline{x}_A=r^2 and \overline{x}_A\overline{u}_A^s=0. From H_{AB} we get

(1) a hydrodynamical vector c_A plus (n-3) magnetic vectors h_A^s, such that



and where

c_Ax_A=0 and h_A^su^s_A=0.

(2) Fluid indices for




(n-3)+\begin{pmatrix}n-3\\ 2\end{pmatrix}=\begin{pmatrix}n-2\\ 2\end{pmatrix}=\dfrac{(n-2)(n-3)}{2} total components. Note that if you introduce n=4 you get only 1 single independent component.
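The counting above is a simple binomial identity, easy to verify mechanically (math.comb requires Python 3.8+):

```python
from math import comb  # Python 3.8+

# (n-3) "magnetic" vectors plus C(n-3, 2) fluid components total C(n-2, 2).
for n in range(4, 12):
    assert (n - 3) + comb(n - 3, 2) == comb(n - 2, 2) == (n - 2) * (n - 3) // 2

print(comb(4 - 2, 2))  # n = 4 leaves a single independent component: 1
```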

(3) The dual tensor \star H_{ABC\ldots D} of H_{AB} has (n-1) indices, so we can build

K_{AB}=\star H_{ABC\ldots D}u_A^1u_B^2\ldots u_C^{n-3} and then K_{AB}u_B^s=0. The generalized electric field reads


so e_Ax_A=e_Au_A^s=0

Note that in this last equation, projective relativity implies a total equivalence under a transformation exchanging position and multitemporal velocities, i.e., invariance under x_A\leftrightarrow u_A^s is present in the last equation for electric fields in a multitemporal setting.

9. Conclusions

1) Multitemporal theories of relativity do exist. In fact, Dirac himself and De Donder studied these types of theories. However, they did not publish many papers about this crazy subject.

2) Fantappie’s final relativity is an old idea that today can be seen as de Sitter relativity. The Inönü-Wigner contraction of the de Sitter group provides the Poincaré group. Final relativity/de Sitter relativity is based on “projective geometry” somehow.

3) Kalitzin’s and Arcidiacono’s ideas, likely quite naive and perhaps wrong, do not mean that multitemporal dimensions don’t exist. The only problem is to explain why the world looks 3+1 if they do exist or, equivalently, just as with extra space-like dimensions, the perception of multiple temporal dimensions is an experimental issue.

4) The main issues for extra timelike dimensions are: closed time-like curves, causality, and vacuum instabilities (“impossible” processes) when Quantum Mechanics is taken into account in a multi-time setting.

5) Beyond multi-time theories, there are interesting extensions of special relativity, e.g., C-space relativity.

6) Multiple temporal dimensions make the notion of point and event a little “fuzzy”.

7) Multiple time-like dimensions are what make it possible to surpass the invariant speed of light. I am not going to prove it here; in the case c_k=c\;\forall k, the maximum invariant velocity equals \sqrt{n-3}c. When the speeds of light are “different”, the invariant velocity is given by a more complicated formula, but it does exist. From this viewpoint, it is hypertime dimensions, and not hyperspace dimensions, that make faster-than-light travel possible (setting aside CTCs, causality issues and vacuum instabilities triggered by quantum theories).

8) Hyperphotons are the equivalent concept of photons in multitemporal relativities and they are not tachyons, but they have a different invariant speed.

9) Philosophers have discussed the role of multitemporal dimensions. For instance, long ago I read about Bennett’s 3d time, with 3 components he called time, hyparxis and eternity; see here http://en.wikipedia.org/wiki/John_G._Bennett.

10) Isaac Asimov’s stories, beyond the imagination and intuition Asimov had, match the theory of relativity with extra time-like and space-like dimensions. I don’t know if he met Kalitzin, Dirac or some other physicist working on this field, but it is quite remarkable for a purely layman approach!

11) Theories with extra temporal dimensions have been studied by both mathematicians and physicists. At the current time, I can point out that F-theory has two timelike dimensions, Itzhak Bars has papers about two-time physics, semiriemannian (multitemporal) metrics are being studied by the Balkan and Russian schools, and there are likely many others.

12) The so-called problem of time is even more radical when you deal with multi-time theories because the relation of multitemporal coordinates with the physical time is obscure. We don’t understand time.

13) We can formulate theories in a multi-time setting, but it requires a harder framework than in normal relativity: velocity becomes “a matrix”, there are different notions of accelerations, energy becomes a vector, “mass” is a “tensor”, multi-time electrodynamics becomes more difficult and many other issues arise with a multi-time setting. You have to study: jet theory, Finsler spaces, nonlinear connections, and some more sophisticated machinery in order to understand it.

14) Are multi-time theories important? Maybe… The answer is that we don’t know for sure, despite the fact that they are “controversial” and “problematic”. However, if you think multi-time theories are “dark”, maybe you should think about that “dark stuff” forming 95% of the Universe. Irina Aref’eva and other authors have studied the physical consequences of multi-time theories. Aref’eva herself, in collaboration with other Russian physicists, proved that an additional timelike dimension can solve the cosmological constant problem (setting aside the issues that an additional time dimension produces).

15) The idea of “time crystals” is boring in 1d time. It becomes more interesting when you think about multi-time crystals as one of the ingredients of a certain “crystalline relativity”. In fact, a similar idea has been coined by P. Jizba et al., and it is known as the “World Crystal”.

16) Final questions:

i) Can multi-time relativity be used by Nature? That question can only be answered from an experimental viewpoint!

ii) Do we live in an anisotropic spacetime (quasi)crystal? I have no idea! But particles themselves could be seen as (quantum) excitations of the spacetime crystal. In fact, I wonder if the strange spectrum of the Standard Model could be some kind of 3d+1 time quasicrystal. If so, it could be that in certain higher dimensions the spectrum of the SM would be “simpler”. Of course, this is just the idea of extra dimensions, but I have not read any paper or article studying the SM particle spectrum from a quasicrystal viewpoint. It could be an interesting project to make some investigations of this idea.

iii) How many lightspeeds are there in the Universe? We can put in by hand that every “lightspeed” species is equal to the common speed of light, but is that right? Could new lightspeed species exist out there? Note that if those “higher lightspeeds” were very large numbers, they could go unnoticed by us if the “electromagnetism” in the extra temporal dimensions were far different from the known electromagnetism. That is, it could be that c=c_1<<c_2<<c_3<<\ldots, or that some of them were very small constants… In both cases, normal relativity could be some kind of “group” reduction.

iv) Could the time be secretly infinite-dimensional? Experiments show that the only invariant speed is c, but could it be an illusion?

v) Can we avoid the main problems of multi-time theories? I mean causality, Closed Timelike Curves (CTC), and vacuum instabilities as the most important of all of them.

vi) Is the problem of time related to the multitemporality of the world?

LOG#058. LHC: last 2012 data/bounds.

Today, 12/12/12, the following paper appeared on the arXiv: http://arxiv.org/abs/1212.2339

This interesting paper reviews the latest bounds on Beyond Standard Model particles (both fermions and bosons) for a large class of models up to the end of this year, 2012. Particle hunters, some theoretical physicists are! The fundamental conclusions of this paper are encoded in a really beautiful table:


There, we have:

1. Extra gauge bosons W', Z'. They are excluded below 1-2 TeV, depending on the channel/decay mode.

2. Heavy neutrinos N. They are excluded with softer lower bounds.

3. Fourth generation quarks t', b' and B, T vector-like quarks are also excluded with \sim 0.5 TeV bounds.

4. Exotic quarks with charge Q= 5/3 are also excluded below 0.6 TeV.

We continue desperately searching for deviations from the Standard Model (SM). SUSY, a 4th family, heavy (likely right-handed) neutrinos, technifermions, techniquarks, new gauge bosons, Kaluza-Klein resonances (KK particles), and much more are not appearing yet, but we keep probing deep into the core of matter and the deepest structure of the quantum vacuum. We know we have to find “something” beyond the Higgs boson/particle, but what and where is not clear to any of us.

Probably, we hope, some further study of the total data in the next months will clarify the whole landscape, but these data are both “bad news” and “good news” for many reasons. They are bad, since they point to no new physics beyond the Higgs up to 1 TeV (more or less). They are good, since we are collecting much data and, hopefully, we will complement the collider data with cosmological searches next year; then some path toward the Standard Model extension and the upcoming quantum theory of gravity should be enlightened or, at least, some critical models and theories will be ruled out! Of course, I am being globally pessimistic, but some experimental hint beyond the Higgs (beyond collider physics) is necessary in order to approach the true theory of this Universe.

And if it is not low energy SUSY (it could be, if one superparticle is found, but we have not found any superparticle yet), what stabilizes the Higgs potential and provides a M_H\sim 127 GeV Higgs mass, i.e., what does that “job”/role? What is forbidding the Higgs mass from receiving Planck mass quantum corrections? For me, as a theoretical physicist, this question is mandatory! If SUSY fails to be the answer, we really need some good theoretical explanation for the “light” mass the Higgs boson seems to have!

Stay tuned!

LOG#046. The Cherenkov effect.

The Cherenkov effect/Cherenkov radiation, sometimes also called Vavilov-Cherenkov radiation, is our topic here in this post.

In 1934, P.A. Cherenkov was a postgraduate student of S.I. Vavilov. He was investigating the luminescence of uranyl salts under the incidence of gamma rays from radium, and he discovered a new type of luminescence which could not be explained by the ordinary theory of fluorescence. It is well known that fluorescence arises as the result of transitions between excited states of atoms or molecules. The average duration of fluorescent emission is about \tau>10^{-9}s, and the transition probability is altered by the addition of “quenching agents”, by some purification process of the material, by a change in the ambient temperature, etc. It was found that none of these methods is able to quench the new radiation discovered by Cherenkov, unlike an ordinary fluorescent emission. A subsequent investigation of the new radiation (named Cherenkov radiation by other scientists after Cherenkov’s discovery) revealed some interesting features of its characteristics:

1st. The polarization of the luminescence changes sharply when we apply a magnetic field. The Cherenkov luminescence is then caused by charged particles rather than by photons, the \gamma-ray quanta! Cherenkov’s experiments showed that these particles could be electrons produced by the interaction of \gamma-photons with the medium through the photoelectric effect or the Compton effect.

2nd. The intensity of the Cherenkov’s radiation is independent of the charge Z of the medium. Therefore, it can not be of radiative origin.

3rd. The radiation is observed at certain angle (specifically forming a cone) to the direction of motion of charged particles.

The Cherenkov radiation was explained in 1937 by Frank and Tamm on the foundations of classical electrodynamics. For the discovery and explanation of the Cherenkov effect, Cherenkov, Frank and Tamm were awarded the Nobel Prize in 1958. We will discuss the Frank-Tamm formula later, but let me first explain how classical electrodynamics handles the Vavilov-Cherenkov radiation.

The main conclusion that Frank and Tamm obtained comes from the following observation. They observed that the statement of classical electrodynamics concerning the impossibility of energy loss by radiation for a charged particle moving uniformly along a straight line in vacuum is no longer valid if we go over from the vacuum to a medium with a certain refractive index n>1. They went further with the aid of an easy argument based on the laws of conservation of momentum and energy, a principle that rests at the core of Physics, as everybody knows. Imagine a charged particle moving uniformly in a straight line, and suppose it can lose energy and momentum through radiation. In that case, the next equation must hold:

\left(\dfrac{dE}{dp}\right)_{particle}=\left(\dfrac{dE}{dp}\right)_{radiation}

This equation can not be satisfied in vacuum, but it MAY be valid in a medium with a refractive index greater than one, n>1. We will simplify our discussion by considering a constant refractive index (similar conclusions would be obtained if the refractive index were some function of the frequency).

On the other hand, the total energy E of a particle having a non-null mass m\neq 0 and moving freely in vacuum with some momentum p and velocity v will be:

E=\sqrt{m^2c^4+p^2c^2}
and then

\left(\dfrac{dE}{dp}\right)_{particle}=\dfrac{pc^2}{E}=\beta c=v

Moreover, the electromagnetic radiation in vacuum obeys the relativistic relationship

E=pc
From this equation, we easily get that

\left(\dfrac{dE}{dp}\right)_{radiation}=c
Since the particle velocity is v<c, we obtain that

\left(\dfrac{dE}{dp}\right)_{particle}=v<c=\left(\dfrac{dE}{dp}\right)_{radiation}
In conclusion: the laws of conservation of energy and momentum prevent a charged particle moving with rectilinear and uniform motion in vacuum from giving away its energy and momentum in the form of electromagnetic radiation! The electromagnetic radiation cannot accept the entire momentum given away by the charged particle.

Anyway, we realize that this restriction is removed when the particle moves in a medium with a refractive index n>1. In this case, the velocity of light in the medium would be

c'=\dfrac{c}{n}
and the velocity v of the particle may not only become equal to the velocity of light c' in the medium, but even exceed it when the following phenomenological condition is satisfied:

\boxed{v\geq c'=c/n}

It is obvious that, when v=c', the condition

v\cos\theta=c'

will be satisfied for electromagnetic radiation emitted strictly in the direction of motion of the particle, i.e., in the direction of the angle \theta=0\textdegree. If v>c', this equation is verified for the direction \theta for which v_\theta=c', where

v_\theta=v\cos\theta
is the projection of the particle velocity v on the observation direction. Then, in a medium with n>1, the conservation laws of energy and momentum allow a charged particle in rectilinear and uniform motion with v\geq c'=c/n to lose fractions of energy and momentum dE and dp, whenever the lost energy and momentum are carried away by electromagnetic radiation propagating in the medium at an angle (forming a cone) given by:

\cos\theta=\dfrac{c'}{v}=\dfrac{1}{\beta n}

with respect to the direction of the particle motion.

These arguments, based on the conservation laws of momenergy, do not provide any idea about the real mechanism by which the energy and momentum are lost during the Cherenkov radiation. However, this mechanism must be associated with processes happening in the medium, since the losses can not occur (apparently) in vacuum under normal circumstances (we will also discuss later the vacuum Cherenkov effect, and what it means in terms of Physics and symmetry breaking).

We have learned that Cherenkov radiation is of the same nature as certain other processes we know and observe, for instance, in various media when bodies move in these media at a velocity exceeding that of the wave propagation. This is a remarkable result! Have you ever seen a V-shaped wave in the wake of a ship? Have you ever seen a conical wave caused by the supersonic boom of a plane or missile? In these examples, the wave field of the superfast object is found to be strongly perturbed in comparison with the field of a “slow” object (in terms of the “velocity of sound” of the medium). It begins to decelerate the object!

Question: What is, then, the mechanism by which the superfast motion of a charged particle in a medium with a refractive index n>1 produces the Cherenkov effect/radiation?

Answer: The mechanism behind the Cherenkov effect/radiation is the coherent emission by the dipoles formed due to the polarization of the atoms of the medium by the moving charged particle!

The idea is as follows. Dipoles are formed under the action of the electric field of the particle, which displaces the electrons of the surrounding atoms relative to their nuclei. The return of the dipoles to the normal state (after the particle has left the given region) is accompanied by the emission of an electromagnetic signal. If a particle moves slowly, the resulting polarization will be distributed symmetrically with respect to the particle position, since the electric field of the particle manages to polarize all the atoms in the near neighbourhood, including those lying ahead in its path. In that case, the resultant field of all the dipoles away from the particle is equal to zero and their radiations cancel one another.

Then, if the particle moves in a medium with a velocity exceeding the velocity of propagation of the electromagnetic field in that medium, i.e., whenever v>c'=c/n, a delayed polarization of the medium is observed, and consequently the resulting dipoles will be preferentially oriented along the direction of motion of the particle. See the next figure:

It is evident that, if this occurs, there must be a direction along which a coherent radiation from the dipoles emerges, since the waves emitted by the dipoles at different points along the path of the particle may turn out to be in the same phase. This direction can be easily found experimentally, and it can be easily obtained theoretically too. Let us imagine that a charged particle moves from the left to the right with some velocity v in a medium with a refractive index n>1, with c'=c/n. We can apply the Huygens principle to build the wave front of the emitted radiation. If, at instant t, the particle is at the point x=vt, the wave front is the surface enveloping the spherical waves emitted by the same particle along its path from the origin at x=0 to the point x. The radius of the wave emitted at x=0 is, at such an instant t, equal to R_0=c't. At the same moment, the wave radius at the point x is equal to R_x=c'(t-(x/v))=0. At any intermediate point x', the wave radius at instant t will be R_{x'}=c'(t-(x'/v)). Then, the radius decreases linearly with increasing x'. Thus, the enveloping surface is a cone with angle 2\varphi, where the angle satisfies in addition

\sin\varphi=\dfrac{R_0}{x}=\dfrac{c't}{vt}=\dfrac{c'}{v}=\dfrac{c}{vn}=\dfrac{1}{\beta n}

The normal to the enveloping surface fixes the direction of propagation of the Cherenkov radiation. The angle \theta between the normal and the x-axis is equal to \pi/2-\varphi, and it is defined by the condition

\boxed{\cos\theta=\dfrac{1}{\beta n}}

or equivalently

\cos\theta=\dfrac{c'}{v}=\dfrac{c}{vn}
This is the result we anticipated before. Indeed, it is completely general, and Quantum Mechanics introduces only a slight and subtle correction to this classical result. From this last equation, we observe that the Cherenkov radiation propagates along the generators of a cone whose axis coincides with the direction of motion of the particle and whose cone angle is equal to 2\theta. This radiation can be registered on a colour film placed perpendicularly to the direction of motion of the particle. Radiation flowing from a radiator of this type leaves a blue ring on the photographic film. These blue rings are the archetypical fingerprints of the Vavilov-Cherenkov radiation!

The sharp directivity of the Cherenkov radiation makes it possible to determine the particle velocity \beta from the value of the Cherenkov angle \theta. From the Cherenkov formula above, it follows that the range of measurement of \beta is equal to

\dfrac{1}{n}\leq\beta\leq 1
For \beta=1/n, the radiation is observed at an angle \theta=0\textdegree, while for the extreme with \beta=1, the angle \theta reaches a maximum value

\theta_{max}=\cos^{-1}\left(\dfrac{1}{n}\right)=\arccos\left(\dfrac{1}{n}\right)

For instance, in the case of water, n=1.33 and \beta_{min}=1/1.33\approx 0.75. Therefore, the Cherenkov radiation is observed in water whenever \beta\geq 0.75. For electrons being the charged particles passing through the water, this condition is satisfied if

T_e=m_ec^2\left(\dfrac{1}{\sqrt{1-\beta^2}}-1\right)\approx 0.511\left( \dfrac{1}{\sqrt{1-0.75^2}}-1\right)\approx 0.26\; MeV
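A quick numerical cross-check of these numbers (a minimal Python sketch; the only inputs are n=1.33 and m_ec^2=0.511 MeV):

```python
import math

# Sketch: Cherenkov threshold for electrons in water (n = 1.33).
n = 1.33
me_c2 = 0.511                            # electron rest energy in MeV

beta_min = 1.0 / n                       # threshold velocity beta = 1/n
gamma_min = 1.0 / math.sqrt(1.0 - beta_min**2)
T_threshold = me_c2 * (gamma_min - 1.0)  # threshold kinetic energy (MeV)

theta_max = math.degrees(math.acos(1.0 / n))  # maximum Cherenkov angle (beta = 1)

print(f"beta_min    = {beta_min:.3f}")        # ~0.75
print(f"T_threshold = {T_threshold:.2f} MeV") # ~0.26 MeV
print(f"theta_max   = {theta_max:.1f} deg")   # ~41 deg
```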

As a consequence of this, the Cherenkov effect should be observed in water even for low-energy electrons (for instance, in the case of electrons produced by beta decay, Compton electrons, or photoelectrons resulting from the interaction between water and gamma rays from radioactive products, the above energy can easily be reached and surpassed!). The maximum angle at which the Cherenkov effect can be observed in water can be calculated from the condition previously seen:

\cos\theta_{max}=\dfrac{1}{n}
This angle (for water) turns out to be equal to about \theta\approx 41.5\textdegree=41\textdegree 30'. In agreement with the so-called Frank-Tamm formula (please, see below what that formula is and means), the number of photons in the frequency interval between \nu and \nu+d\nu emitted by some particle with charge Zq moving with a velocity \beta in a medium with a refractive index n is provided by the next equation:

\boxed{N(\nu) d\nu=4\pi^2\dfrac{(Zq)^2}{hc^2}\left(1-\dfrac{1}{n^2\beta^2}\right) d\nu}

This formula has some striking features:

1st. The spectrum is identical for particles with Z=constant, i.e., the spectrum is exactly the same irrespective of the nature of the particle. For instance, it could be produced both by protons, electrons, pions, muons or their antiparticles!

2nd. As Z increases, the number of emitted photons increases as Z^2.

3rd. N(\nu) increases with \beta, the particle velocity, from zero (at \beta=1/n) to

N_{max}(\nu) d\nu=4\pi^2\dfrac{(Zq)^2}{hc^2}\left(1-\dfrac{1}{n^2}\right) d\nu

at \beta\approx 1.

4th. N(\nu) is approximately independent of \nu. We observe that dN(\nu)\propto d\nu.

5th. As the spectrum is uniform in frequency and E=h\nu, the main energy of the radiation is concentrated in the extreme short-wave region of the spectrum, i.e.,

\boxed{dE_{Cherenkov}\propto \nu d\nu}

And then, this feature explains the bluish-violet-like colour of the Cherenkov radiation!

Indeed, this feature also indicates the necessity of choosing, for practical applications, materials that are “transparent” up to the highest frequencies (even the ultraviolet region). As a rule, it is known that n<1 in the X-ray region and hence the Cherenkov condition cannot be satisfied! However, it was also shown by clever experimentalists that in some narrow regions of the X-ray spectrum the refractive index is n>1 (the refractive index depends on the frequency in any reasonable material; practical Cherenkov materials are, thus, dispersive!) and the Cherenkov radiation is effectively observed in apparently forbidden regions.

The Cherenkov effect is currently widely used in diverse applications. For instance, it is useful to determine the velocity of fast charged particles (e.g., neutrino detectors obviously cannot detect neutrinos directly, but they can detect muons and other secondary particles produced in the interaction with some polarizable medium, even when they are produced by (electro)weak interactions like those happening in the presence of chargeless neutrinos). The selection of the medium for generating the Cherenkov radiation depends on the range of velocities \beta over which measurements have to be produced with the aid of such a “Cherenkov counter”. Cherenkov detectors/counters are filled with liquids and gases and they are found, e.g., in Kamiokande, Superkamiokande and many other neutrino detectors and “telescopes”. It is worth mentioning that velocities of ultrarelativistic particles are measured with Cherenkov detectors whenever they are filled with some special gaseous medium with a refractive index just slightly higher than unity. This value of the refractive index can be changed by regulating the gas pressure in the counter! So, Cherenkov detectors and counters are very flexible tools for particle physicists!

Remark: As I mentioned before, it is important to remember that most of the practical Cherenkov radiators/materials ARE dispersive. It means that if \omega is the photon frequency and k=2\pi/\lambda is the wavenumber, then the photons propagate with some group velocity v_g=d\omega/dk, i.e.,

\boxed{v_g=\dfrac{d\omega}{dk}=\dfrac{c}{\left[n(\omega)+\omega \frac{dn}{d\omega}\right]}}

Note that if the medium is non-dispersive, this formula simplifies to the well-known result v_g=c/n, which reduces to v_g=c in vacuum (n=1).
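To see how the dispersive correction works in practice, here is a small sketch that checks the group-velocity formula against a numerical derivative of \omega(k), using a purely hypothetical linear dispersion n(\omega)=n_0+a\omega (not a real material):

```python
# Sketch: group velocity in a dispersive medium, v_g = c / (n + omega * dn/domega).
# The linear model n(omega) = n0 + a*omega is illustrative only; we verify the
# closed formula against a numerical derivative, using k(omega) = omega*n(omega)/c.
c = 299792458.0
n0, a = 1.33, 1.0e-17          # hypothetical dispersion coefficients (a in s)

def n_of(omega):
    return n0 + a * omega

def k_of(omega):
    return omega * n_of(omega) / c

omega = 3.0e15                 # visible-light angular frequency (rad/s)

# closed form: dn/domega = a for this linear model
v_formula = c / (n_of(omega) + omega * a)

# numerical v_g = domega/dk via a central difference
d = 1.0e9
v_numeric = (2 * d) / (k_of(omega + d) - k_of(omega - d))

print(v_formula, v_numeric)    # both well below c, and equal to each other
```

Both values agree, and both stay below c, as they must for a transparent medium with n>1.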

Accordingly, following the PDG, Tamm showed in a classical paper that for dispersive media the Cherenkov radiation is concentrated in a thin conical shell region whose vertex is at the moving charge and whose opening half-angle \eta is given by the expression

\boxed{\cot \eta=\left[\dfrac{d}{d\omega}\left(\omega\tan\theta_c\right)\right]_{\omega_0}=\left(\tan\theta_c+\beta^2\omega n(\omega) \dfrac{dn}{d\omega} \cot \theta_c\right)\bigg|_{\omega_0}}

where \theta_c is the critical Cherenkov angle seen before and \omega_0 is the central value of the small frequency range under consideration, under the Cherenkov condition. This cone has an opening half-angle \eta (please, compare with the previous convention with \varphi for consistency), and unless the medium is non-dispersive (i.e., dn/d\omega=0, n=constant), we get \theta_c+\eta\neq 90\textdegree. Typical Cherenkov radiation imaging produces blue rings.


When we consider the Cherenkov effect in the framework of QM, in particular the quantum theory of radiation, we can deduce the following formula for the Cherenkov effect that includes the quantum corrections due to the backreaction of the particle to the radiation:

\boxed{\cos\theta=\dfrac{1}{\beta n}+\dfrac{\Lambda}{2\lambda}\left(1-\dfrac{1}{n^2}\right)}

where, like before, \beta=v/c, n is the refraction index, \Lambda=\dfrac{h}{p}=\dfrac{h}{mv} is the De Broglie wavelength of the moving particle and \lambda is the wavelength of the emitted radiation.
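Just to get a feeling for the size of this quantum correction, here is a short sketch (the 1 MeV electron and the 400 nm emission wavelength are illustrative choices, and the relativistic momentum is used in \Lambda=h/p):

```python
import math

# Sketch: size of the quantum correction (Lambda/(2*lambda))*(1 - 1/n^2) to
# cos(theta) for an electron of 1 MeV kinetic energy in water, radiating at 400 nm.
# hc = 1239.84 eV*nm gives the de Broglie wavelength Lambda = hc/(pc).
hc = 1239.84                   # eV * nm
me_c2 = 0.511e6                # electron rest energy, eV
T = 1.0e6                      # kinetic energy, eV
n = 1.33
lam = 400.0                    # emitted wavelength, nm

E = T + me_c2
pc = math.sqrt(E**2 - me_c2**2)        # momentum times c, in eV
Lambda = hc / pc                       # de Broglie wavelength, nm
correction = (Lambda / (2 * lam)) * (1 - 1 / n**2)

# The correction is ~1e-7 or so, utterly negligible next to 1/(beta*n) ~ 1,
# which is why the classical Tamm-Frank angle works so well.
print(f"Lambda = {Lambda:.2e} nm, correction = {correction:.1e}")
```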

Cherenkov radiation is observed whenever \beta n>1 (i.e., if v>c/n), and the limit of the emission is on the short-wave bands (explaining the typical blue radiation of this effect). Moreover, \lambda_{min} corresponds to \cos\theta\approx 1.

On the other hand, the radiated energy per particle per unit of time is equal to:


where \omega=2\pi c/n\lambda is the angular frequency of the radiation, with a maximum value of \omega_{max}=2\pi c/n\lambda_{min}.
Remark: In the non-relativistic case, v\ll c, and the condition \beta n>1 implies that n\gg 1. Therefore, neglecting the quantum corrections (the charged particle self-interaction/backreaction to radiation), we can take the limit \Lambda/\lambda\rightarrow 0 and the above previous equations will simplify into:


\boxed{-\dfrac{dE}{dt}=\dfrac{e^2 v}{c^2}\int_0^{\omega_{max}}\omega\left(1-\dfrac{c^2}{n^2v^2}\right)d\omega}

Remember: \omega_{max} is determined with the condition \beta n(\omega_{max})=1, where n(\omega_{max}) represents the dispersive effect of the material/medium through the refraction index.


The number of photons produced per unit path length and per unit of energy of a charged particle (charge Zq) is given by the celebrated Frank-Tamm formula:

\boxed{\dfrac{d^2N}{dEdx}=\dfrac{\alpha Z^2}{\hbar c}\sin^2\theta_c=\dfrac{\alpha^2 Z^2}{r_em_ec^2}\left(1-\dfrac{1}{\beta^2n^2(E)}\right)}

In terms of common values of fundamental constants, it takes the value:

\boxed{\dfrac{d^2N}{dEdx}\approx 370Z^2\sin^2\theta_c(E)eV^{-1}\cdot cm^{-1}}

or equivalently it can be written as follows

\boxed{\dfrac{d^2N}{dEdx}=\dfrac{2\pi \alpha Z^2}{\lambda^2}\left(1-\dfrac{1}{\beta^2n^2(\lambda)}\right)}
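As a sanity check of the photon yield, we can integrate this wavelength form of the Frank-Tamm formula over the visible band for water with \beta\approx 1 (the 400-700 nm window is a conventional choice):

```python
import math

# Sketch: Cherenkov photons per cm of track in the visible band, integrating
# the Frank-Tamm formula in its wavelength form for a constant n:
#   dN/dx = 2*pi*alpha*Z^2 * (1 - 1/(beta^2 n^2)) * (1/lambda1 - 1/lambda2).
alpha = 1.0 / 137.036
Z, beta, n = 1, 1.0, 1.33      # unit charge, ultrarelativistic, water
lam1, lam2 = 400e-7, 700e-7    # visible band edges, in cm

sin2_theta = 1.0 - 1.0 / (beta**2 * n**2)
N_per_cm = 2 * math.pi * alpha * Z**2 * sin2_theta * (1 / lam1 - 1 / lam2)

print(f"{N_per_cm:.0f} photons/cm")   # a few hundred photons per cm
```

A few hundred photons per centimetre: a weak source indeed, which is why efficient light collection matters so much (see below).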

The refraction index is a function of the photon energy E=\hbar \omega, and so is the sensitivity of the transducer used to detect the light of the Cherenkov effect! Therefore, for practical uses, the Frank-Tamm formula must be multiplied by the transducer response function and integrated over the region for which we have \beta n(\omega)>1.

Remark: When two particles are close together (to be close here means to be separated by a distance d<1 wavelength), the electromagnetic fields from the particles may add coherently and affect the Cherenkov radiation. The Cherenkov radiation for an electron-positron pair at close separation is suppressed compared to that of two independent leptons!

Remark (II): Coherent radio Cherenkov radiation from electromagnetic showers is significant and it has been applied to the study of cosmic-ray air showers. In addition, it has been used to search for electron-neutrino-induced showers produced by cosmic rays.


The applications of Cherenkov detectors for particle identification (generally labelled as PID Cherenkov detectors) are well beyond the range of high-energy Physics. Their uses include: A) Fast particle counters. B) Hadronic particle identification. C) Tracking detectors performing complete event reconstruction. The PDG gives some examples of each category: a) the polarization detector of SLD, b) the hadronic PID detectors at B factories like BABAR or the aerogel threshold Cherenkov counter in Belle, c) large water Cherenkov counters like those in Superkamiokande and other neutrino detector facilities.

Cherenkov detectors contain two main elements: 1) a radiator/material through which the particle passes, and 2) a photodetector. As Cherenkov radiation is a weak source of photons, light collection and detection must be as efficient as possible. A radiator material specifically chosen for the particles one wants to detect is generally required.

The number of photoelectrons detected in a given Cherenkov radiation detector device is provided by the following formula (derived from the Frank-Tamm formula simply taking into account the efficiency in a straightforward manner):

\boxed{N=L\dfrac{\alpha^2 Z^2}{r_em_ec^2}\int \epsilon (E)\sin^2\theta_c(E)dE}

where L is the path length of the particle in the radiator/material, \epsilon (E) is the efficiency for collecting the Cherenkov light and transducing it into photoelectrons, and

\sin^2\theta_c(E)=1-\dfrac{1}{\beta^2n^2(E)}
Remark: The efficiencies and the Cherenkov critical angle are functions of the photon energy, generally speaking. However, since the typical energy-dependent variation of the refraction index is modest, a quantity sometimes called the Cherenkov detector quality factor N_0 can be defined as follows

\boxed{N_0=\dfrac{\alpha^2Z^2}{r_em_ec^2}\int \epsilon dE}

In this case, we can write

\boxed{N\approx LN_0<\sin^2\theta_c>}
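A toy example of this estimate (all the detector numbers below are hypothetical: a flat 20% efficiency over a 2 eV acceptance and a 10 cm water radiator):

```python
# Sketch: detected photoelectrons N ~ L * N0 * <sin^2 theta_c>, with a toy
# quality factor N0 = 370 * <eps> * Delta_E built from the ~370/(eV cm)
# coefficient of the Frank-Tamm formula. All detector numbers are hypothetical.
eps, dE = 0.20, 2.0                # flat efficiency, energy acceptance in eV
N0 = 370.0 * eps * dE              # quality factor, photoelectrons per cm
L = 10.0                           # radiator length, cm
sin2_theta = 1.0 - 1.0 / 1.33**2   # water, beta ~ 1

N_pe = L * N0 * sin2_theta
print(f"N0 = {N0:.0f} /cm, N_pe = {N_pe:.0f} photoelectrons")
```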

Remark (II): Cherenkov detectors are classified into imaging or threshold types, depending on their ability to make use of the Cherenkov angle information. Imaging counters may be used to track particles as well as to identify them.

Other main uses/applications of the Vavilov-Cherenkov effect are:

1st. Detection of labeled biomolecules. Cherenkov radiation is widely used to facilitate the detection of small amounts and low concentrations of biomolecules. For instance, radioactive atoms such as phosphorus-32 are readily introduced into biomolecules by enzymatic and synthetic means and subsequently may be easily detected in small quantities for the purpose of elucidating biological pathways and characterizing the interactions of biological molecules, e.g., measuring affinity constants and dissociation rates.

2nd. Nuclear reactors. Cherenkov radiation is used to detect high-energy charged particles. In pool-type nuclear reactors, the intensity of Cherenkov radiation is related to the frequency of the fission events that produce high-energy electrons, and hence is a measure of the intensity of the reaction. Similarly, Cherenkov radiation is used to characterize the remaining radioactivity of spent fuel rods.

3rd. Astrophysical experiments. The Cherenkov radiation from these charged particles is used to determine the source and intensity of cosmic rays, which is exploited, for example, in the different classes of cosmic-ray detection experiments: IceCube, Pierre Auger, VERITAS, HESS, MAGIC, SNO, and many others. Cherenkov radiation can also be used to determine properties of high-energy astronomical objects that emit gamma rays, such as supernova remnants and blazars. In this last class of experiments we find STACEE, in New Mexico.

4th. High-energy experiments. We have quoted this already, and there are many examples at the current LHC, for instance, in the ALICE experiment.


Vacuum Cherenkov radiation (VCR) is the alleged and conjectured phenomenon which refers to the Cherenkov radiation/effect of a charged particle propagating in the physical vacuum. You can ask: why should it be possible? It is quite straightforward to understand the answer.

The classical (non-quantum) theory of relativity (both special and general) clearly forbids any superluminal phenomena/propagating degrees of freedom for material particles, including this one (the vacuum case), because a particle with non-zero rest mass can reach the speed of light only at infinite energy (besides, a nontrivial vacuum itself would create a preferred frame of reference, in violation of one of the relativistic postulates).

However, according to modern views coming from the quantum theory, especially our knowledge of Quantum Field Theory, the physical vacuum IS a nontrivial medium which affects the particles propagating through it, and the magnitude of the effect increases with the energies of the particles!

Then, a natural consequence follows: the actual speed of a photon becomes energy-dependent and thus can be less than the fundamental constant c=299792458\;m/s of the speed of light, such that sufficiently fast particles can overcome it and start emitting Cherenkov radiation. In summary, any charged particle surpassing the speed of light in the physical vacuum should emit (vacuum) Cherenkov radiation. Note that it is an inevitable consequence of the non-trivial nature of the physical vacuum in Quantum Field Theory. Indeed, some crazy people claiming that superluminal particles arise in jets from supernovae, or in colliders like the LHC, fail to explain why those particles don’t emit Cherenkov radiation. It is not true that real particles become superluminal in space or in collider rings. It is also wrong in the case of neutrino propagation, because in spite of being chargeless, neutrinos should experience an effect analogous to the Cherenkov radiation, called the Askaryan effect. Another (alternative) possibility or scenario arises in some Lorentz-violating theories (or even CPT-violating theories, which can be equivalent or not to such Lorentz violations) when the speed of a propagating particle becomes higher than c, which turns this particle into a tachyon. A tachyon with an electric charge would lose energy as Cherenkov radiation just as ordinary charged particles do when they exceed the local speed of light in a medium. A charged tachyon traveling in a vacuum therefore undergoes a constant proper-time acceleration and, by necessity, its worldline would form a hyperbola in space-time. This last type of vacuum Cherenkov effect can arise in theories like the Standard Model Extension, where Lorentz-violating terms do appear.

One of the simplest kinematic frameworks for Lorentz-violating theories is to postulate some modified dispersion relations (MODRE) for particles, while keeping the usual energy-momentum conservation laws. In this way, we can provide and work out an effective field theory for breaking the Lorentz invariance. There are several alternative definitions of MODRE, since there is no general guide yet to discriminate between the different theoretical models. Thus, we could consider a general expansion in integer powers of the momentum, in the next manner (we set units in which c=1):

\boxed{E^2=f(p,m,c_n)=p^2+m^2+\sum_{n=-\infty}^{\infty}c_n p^n}

However, a softer expansion depending only on positive powers of the momentum is generally used in the MODRE. In such a case,

\boxed{E^2=f(p,m,a_n)=p^2+m^2+\sum_{n=1}^{\infty}a_n p^n}

and where p=\vert \mathbf{p}\vert. If the Lorentz violations are associated with the yet undiscovered quantum theory of gravity, we would expect the deviations from the dispersion relations of the special theory of relativity to appear at the natural scale of quantum gravity, say the Planck mass/energy. In units where c=1, we obtain that the Planck mass/energy is:

\boxed{M_P=\sqrt{\hbar c^5/G_N}=1.22\cdot 10^{19}GeV=1.22\cdot 10^{16}TeV}

Let us write and parametrize the Lorentz violations induced by the fundamental scale of quantum gravity (naively, this Planck mass scale) by:

a_n=\dfrac{\Xi_n}{M_P^{n-2}}
Here, \Xi_n is a dimensionless quantity that can differ from one particle type to another. Considering, for instance, n=3,4, since the cases n<3 seem to be ruled out by previous terrestrial experiments, at higher energies the lowest non-null term with n\geq 3 will dominate the expansion. The MODRE reads:

E^2=p^2+m^2+\dfrac{\Xi_a p^n}{M_P^{n-2}}

and where the label a in the term \Xi_a is specific to the particle type. Such corrections might only become important at the Planck scale, but there are two exceptions:

1st. Particles that propagate over cosmological distances can show differences in their propagation speed.
2nd. Energy thresholds for particle reactions can be shifted, or even forbidden processes can be allowed, if the p^n-term is comparable to the m^2-term in the MODRE. Thus, threshold reactions can be significantly altered or shifted, because they are determined by the particle masses. So a threshold shift should appear at momentum scales where:

p_{th}\approx\left(\dfrac{m^2M_P^{n-2}}{\Xi_n}\right)^{1/n}
Imposing/postulating that \Xi\approx 1, the typical threshold scales for some different kinds of particles can be calculated. Their values for some species are given in the next table:
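As a cross-check of the orders of magnitude involved, here is a short sketch assuming \Xi\approx 1 and n=3, i.e., p_{th}\sim (m^2M_P)^{1/3}:

```python
# Sketch: momentum scale where the Planck-suppressed term Xi * p^3 / M_P
# becomes comparable to m^2, i.e. p_th ~ (m^2 * M_P / Xi)^(1/3) for n = 3.
# Masses in GeV; the results are order-of-magnitude estimates only.
M_P = 1.22e19                        # Planck energy, GeV
Xi = 1.0                             # dimensionless LV coefficient, set to 1
masses = {"electron": 0.511e-3, "proton": 0.938}

p_th = {name: (m**2 * M_P / Xi) ** (1.0 / 3.0) for name, m in masses.items()}

for name, p in p_th.items():
    print(f"{name}: p_th ~ {p:.1e} GeV")   # ~10 TeV (electron), ~1e6 GeV (proton)
```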

We can even study some different sources of modified dispersion relationships:

1. Measurements of time of flight.

2. Thresholds creation for: A) Vacuum Cherenkov effect, B) Photon decay in vacuum.

3. Shift in the so-called GZK cut-off.

4. Modified dispersion relationships induced by non-commutative theories of spacetime. Specially, there are time shifts/delays of photon signals induced by non-commutative spacetime theories.

We will analyse these four cases separately, in a very short and clear fashion. I wish!

Case 1. Time of flight. This is similar to the recently controversial OPERA experiment results. The OPERA experiment, and other similar set-ups, measure the neutrino time of flight. I dedicated a post to it earlier in this blog.


In fact, we can measure the time of flight of any particle, even photons. A modified dispersion relation, like the one we introduced here above, would lead to an energy-dependent speed of light. The idea of the time-of-flight (TOF) approach is to detect a shift in the arrival time of photons (or any other massless/ultra-relativistic particle, like neutrinos) with different energies, produced simultaneously in a distant object, where the long distance amplifies the usually Planck-suppressed effect. In the following, we use the dispersion relation for n=3 only, as the modifications at higher orders are far below the sensitivity of current or planned experiments. The modified group velocity becomes:

v=\dfrac{\partial E}{\partial p}

and then, for photons,

v\approx 1-\Xi_\gamma\dfrac{p}{M}

The time difference in the photon shift detection time will be:

\Delta t=\Xi_\gamma \dfrac{p}{M}D

where D is the distance multiplied (if it were the case) by the redshift factor (1+z) to correct the energy with the redshift. In recent years, several measurements on different objects in various energy bands have led to constraints up to the order of 100 for \Xi. They can be summarized in the next table (note that the best constraint comes from a short flare of the Active Galactic Nucleus (AGN) Mrk 421, detected in the TeV band by the Whipple Imaging Air Cherenkov telescope):
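The size of such a time delay is easy to estimate; the sketch below uses illustrative numbers (a 1 TeV photon from a source at roughly the distance of Mrk 421, with redshift corrections ignored):

```python
# Sketch: quantum-gravity time-of-flight delay Delta_t = Xi * (p / M_P) * (D / c)
# for a TeV photon over ~130 Mpc (roughly the distance of Mrk 421).
# All numbers are illustrative; redshift corrections are ignored.
c = 299792458.0                # m/s
Mpc = 3.086e22                 # metres per megaparsec
M_P = 1.22e19                  # Planck energy, GeV
Xi = 1.0                       # dimensionless LV coefficient, set to 1
p = 1.0e3                      # photon energy, GeV (1 TeV)
D = 130.0 * Mpc                # source distance, metres

delta_t = Xi * (p / M_P) * (D / c)
print(f"Delta_t ~ {delta_t:.2f} s")   # of order one second
```

A delay of order one second against a TeV flare lasting minutes is exactly why such AGN flares yield the competitive bounds on \Xi quoted above.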

There is still room for improvements with current or planned experiments, although the distance for TeV observations is limited by the absorption of TeV photons in low-energy metagalactic radiation fields. Depending on the energy density of the target photon field, one gets an energy-dependent mean free path length, leading to an energy- and redshift-dependent cutoff energy (the cutoff energy is defined as the energy where the optical depth is one).

2. Threshold creation for: A) Vacuum Cherenkov effect, B) Photon decay in vacuum. On the other hand, the interaction vertex in quantum electrodynamics (QED) couples one photon to two leptons. We assume for photons and leptons the following dispersion relations (for simplicity, we adopt units with M=1):

\omega_k^2=k^2+\xi k^n\qquad\qquad E^2_p=p^2+m^2+\Xi p^n

Let us write the photon four-momentum as \mathbb{K}=(\omega_k,\mathbf{k}) and the lepton four-momenta as \mathbb{P}=(E_p,\mathbf{p}) and \mathbb{Q}=(E_q,\mathbf{q}). It can be shown that the conservation of four-momentum at the vertex implies

\xi k^n+\Xi p^n-\Xi q^n=2(E_p\omega_k-\mathbf{p}\cdot\mathbf{k})

where the r.h.s. is always positive. In the Lorentz-invariant case, the parameters \xi, \Xi are zero, so that this equation can’t be solved and all processes of the single vertex are forbidden. If these parameters are non-zero, there can exist a solution and these processes can be allowed. We now consider two of these interactions to derive constraints on the parameters \Xi, \xi: the vacuum Cherenkov effect e^-\rightarrow \gamma e^- and the spontaneous photon decay \gamma\rightarrow e^+e^-.

A) As we have studied here, the vacuum Cherenkov effect is the spontaneous emission of a photon (with 0<E_\gamma<E_{par}) by a charged particle. This effect occurs if the particle moves faster than the slowest possible radiated photon in vacuum!

In the case of \Xi>0, the maximal attainable speed of the particle, c_{max}, is faster than c. This means that the particle can always be faster than a zero-energy photon with

\displaystyle{c_{\gamma_0}=c\lim_{k\rightarrow 0}\dfrac{\partial \omega}{\partial k}=c\lim_{k\rightarrow 0}\dfrac{2k+n\xi k^{n-1}}{2\sqrt{k^2+\xi k^n}}=c}

and it is independent of \xi. In the case of \Xi<0, i.e., when c_{par} decreases with energy, you need a photon with c_\gamma<c_{par}<c. This is only possible if \xi<\Xi.

Therefore, due to the radiation of photons, such an electron loses energy. The observation of highly energetic electrons allows one to derive constraints on \Xi and \xi. In the case of \Xi<0, for n=3, we have the bound


Moreover, from the observation of 50 TeV photons from the Crab Nebula (and its pulsar), one can conclude the existence of 50 TeV electrons due to the inverse Compton scattering of these electrons with those photons. This leads to a constraint on \Xi of about

\Xi<1.2\times 10^{-2}

where we have used \Xi>0 in this case.

B) The decay of photons into positrons and electrons, \gamma\rightarrow e^+e^-, would be a very rapid spontaneous decay process. However, gamma rays from the Crab Nebula are observed on Earth with energies up to E\sim 50 TeV. Thus, we can reason that this rapid decay does not occur at energies below 50 TeV. For the constraints on \Xi and \xi, this condition means (again, we impose n=3):

\xi<\dfrac{\Xi}{2}+0.08, \mbox{for}\; \xi\geq 0

\xi<\Xi+\sqrt{-0.16\Xi}, \mbox{for}\;\Xi<\xi<0.

3. Shift in the GZK cutoff. As the energy of a proton increases, the pion production reaction can happen with low-energy photons of the Cosmic Microwave Background (CMB).

This leads to an energy-dependent mean free path length of the particles, resulting in a cutoff at energies around E_{GZK}\approx 10^{20}eV. This is the celebrated Greisen-Zatsepin-Kuzmin (GZK) cutoff. The resonance condition for the GZK pion photoproduction with the CMB background can be read from the next equation (I will derive this condition in a future post):

\boxed{E_{GZK}\approx\dfrac{m_p m_\pi}{2E_\gamma}=3\times 10^{20}eV\left(\dfrac{2.7K}{E_\gamma}\right)}
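We can reproduce this order of magnitude with a two-line estimate, taking E_\gamma\approx k_BT for the CMB at T=2.7 K:

```python
# Sketch: order-of-magnitude GZK energy, E_GZK ~ m_p * m_pi / (2 * E_gamma),
# with E_gamma the typical CMB photon energy k_B * T at T = 2.7 K.
k_B = 8.617e-5                 # Boltzmann constant, eV/K
T = 2.7                        # CMB temperature, K
m_p = 938.27e6                 # proton mass, eV
m_pi = 134.98e6                # neutral pion mass, eV

E_gamma = k_B * T              # typical CMB photon energy, ~2.3e-4 eV
E_GZK = m_p * m_pi / (2.0 * E_gamma)
print(f"E_GZK ~ {E_GZK:.1e} eV")   # a few times 1e20 eV
```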

Thus, in a Lorentz-invariant world, the mean free path length of a particle of energy 5\times 10^{19}\;eV is about 50 Mpc, i.e., particles above this energy are readily absorbed due to the pion photoproduction reaction. But most of the sources of particles of ultra-high energy are beyond 50 Mpc. So, one expects no trace of particles of energy above 10^{20}eV on Earth. From the experimental point of view, AGASA has found a few particles having energy higher than the constraint given by the GZK cutoff limit, and it claimed to be disproving the presence of the GZK cutoff (or at least to point to a different GZK threshold), whereas HiRes is consistent with the GZK effect. So, there are two main questions, not yet completely solved:

i) How can one get a definite proof of the non-existence of the GZK cutoff?
ii) If the GZK cutoff doesn’t exist, what is the reason?

The first question could be answered by the observation of a large sample of events at these energies, which is necessary for a final conclusion, since the GZK cutoff is a statistical phenomenon. The current AUGER experiment, still under construction, may clarify whether the GZK cutoff exists or not. The existence of the GZK cutoff would also yield new limits on Lorentz or CPT violation. For the second question, one explanation can be derived from Lorentz violation: if we redo the GZK calculation in a Lorentz-violating world, we obtain the modified proton dispersion relation described by our previous equations with MODRE.
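To see why the statistics matter, note that absorption over a mean free path \lambda\sim 50 Mpc is exponential, so a handful of trans-GZK events from distant sources is not impossible, only unlikely. A toy sketch of the surviving fraction (the packaging and the illustrative distances are mine):

```python
import math

# Toy attenuation estimate: in a Lorentz-invariant world, a proton above the
# GZK threshold has a mean free path of ~50 Mpc against pion photoproduction,
# so a source at distance d contributes a surviving fraction ~ exp(-d / mfp).
MFP_MPC = 50.0

def surviving_fraction(d_mpc, mfp_mpc=MFP_MPC):
    """Fraction of super-GZK protons surviving a distance d_mpc (in Mpc)."""
    return math.exp(-d_mpc / mfp_mpc)

for d in (50, 100, 200):
    print(f"d = {d:3d} Mpc -> surviving fraction ~ {surviving_fraction(d):.3f}")
```

A source at 200 Mpc is suppressed by e^{-4}\approx 2\%, which is why a few AGASA-like events demand either a large exposure to judge, or new physics such as a shifted threshold.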

4. Modified dispersion relationships induced by non-commutative theories of spacetime. As we said above, noncommutative spacetime theories induce time shifts/delays of photon signals. They introduce a new source of MODRE: the fuzzy nature of the discreteness of the fundamental quantum spacetime. Then, the general ansatz of this type of theory comes from:

\left[\hat{x}^\mu,\hat{x}^\nu\right]=\dfrac{i\theta^{\mu\nu}}{\Lambda^2_{NC}}

where \theta^{\mu\nu} are the components of an antisymmetric Lorentz-like tensor whose components are of order one. The fundamental scale of non-commutativity \Lambda^2_{NC} is supposed to be of the order of the Planck scale. However, there are models with large extra dimensions that induce non-commutative spacetime models with a scale near the TeV scale! This is interesting from the phenomenological side as well, not only from the theoretical viewpoint. Indeed, we can investigate whether astrophysical observations are able to constrain certain classes of models with noncommutative spacetimes broken at the TeV scale or higher. However, due to the antisymmetric character of the noncommutative tensor, we need a magnetic or electric background field in order to study these kinds of models (generally speaking, we need some kind of field inducing/producing antisymmetric field backgrounds); without such a background, the dispersion relation for photons remains the same as in a commutative spacetime. Furthermore, there is no photon-energy dependence of the dispersion relation, so time-of-flight experiments are inappropriate here, since they rely on an energy-dependent dispersion. Therefore, we suggest the following alternative scenario: suppose there exists a strong magnetic field (for instance, from a star or a cluster of stars) on the path of photons emitted by a light source (e.g., gamma-ray bursts). Then, analogously to gravitational lensing, the photons experience a deflection and/or a change in time-of-arrival compared to the same path without a magnetic background field. Estimations for several known objects are shown in this final table:

In summary:

1st. Vacuum Cherenkov and related effects modifying the dispersion relations of special relativity are natural in many scenarios beyond the Standard Relativity (BSR) and beyond the Standard Model (BSM).

2nd. Any theory allowing for superluminal propagation has to explain the null results from searches for the vacuum Cherenkov effect. Otherwise, it is doomed.

3rd. There are strong bounds coming from astrophysical processes and even neutrino oscillation experiments that severely constrain and rule out many models. However, it is true that current MODRE bounds are far from being the most general bounds. We expect to improve these bounds with the next generation of experiments.

4th. Theories that cannot pass these tests (SR obviously does) have to be banned.

5th. Superluminality has observable consequences, both in classical and quantum physics, both in standard theories and in theories beyond them. So, if you build a theory allowing superluminal stuff, you must be very careful about what kinds of predictions it can and cannot make. Otherwise, your theory is complete nonsense.

As a final closing, let me include some nice Cherenkov rings from the Super-Kamiokande and MiniBooNE experiments. True experimental physics in action. And a final challenge…

FINAL CHALLENGE: Are you able to identify the kind of particles producing those beautiful figures? Let me know your guesses (I do know the answer, of course).

Figure 1. Typical Super-Kamiokande ring. I dedicate this picture to my admired Japanese scientists there. I really, really admire that country and its people, especially after disasters like the 2011 earthquake and the Fukushima accident. If you are a Japanese reader/follower, you must know we support you from abroad. You were not, you are not, and you shall not be alone.

Figure 2. Typical MiniBooNE ring. History: I used this nice picture on the first page of my Master's Thesis, as the cover/title-page main picture!