What's new in
Physics in Many-Sheeted Space-Time
Note: Newest contributions are at the top!
The famous Michelson-Morley experiment, carried out about a century ago, demonstrated that the velocity of light does not depend on the velocity of the source with respect to the receiver and killed the ether hypothesis. This could have led to the discovery of Special Relativity. Reality is not so logical, however: Einstein actually arrived at Special Relativity from the symmetries of Maxwell's equations. Amusingly, a hundred years later Sampo Pentikäinen told me about a Youtube video reporting a modern version of the Michelson-Morley experiment by Martin Grusenick in which highly non-trivial results are obtained. If I were a "real" scientist enjoying a monthly salary I would of course not pay the slightest attention to this kind of stuff. But I am not a "real" scientist, as many of my colleagues are happy to testify (without the quotation marks, of course), and have therefore nothing to lose. This gives me the luxury of thinking, and I can even try to understand what is involved, assuming that the discovery is real.
To my best knowledge there is no written document about Martin Grusenick's experiment on the web, but the Youtube video is excellent. The only detail that might give reason to suspect fraud is Grusenick's statement that the mirror used to magnify and reflect the interference pattern onto a plywood screen is planar: from the geometry of the arrangement it must be concave, and I have the strong impression that this is just a linguistic lapse. The reader willing to learn in more detail how a Michelson-Morley interferometer works can watch a very short video sketching how the interference pattern is created. A longer video describes the principles involved in more detail.
I do not bother to transform the LaTeX to HTML since a lot of formulas are involved and automatic translators do not work properly. Instead, I give a link to a pdf file presenting Grusenick's results and their analysis and interpretation in detail.
The results are as follows.
Addition: The change of the distance between the beam splitter and the mirror in the vertical position might explain the observations in terms of existing physics. A simple estimate however shows that this effect is too small by a factor of order 10^-3. I am grateful to Samppa for suggesting the estimate.
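One way such an estimate might go is to compare the elastic deformation of the vertical arm under its own weight to the laser wavelength. The numbers below (arm length, steel as material, He-Ne wavelength) are my own assumptions for illustration, since the video gives no exact specifications:

```python
# Back-of-envelope sketch: elastic length change of a vertical
# interferometer arm under its own weight, compared to one fringe.
# All numbers are assumed, not taken from Grusenick's setup.

g = 9.81             # gravitational acceleration (m/s^2)
rho = 7.8e3          # density of steel (kg/m^3), assumed arm material
E = 2.0e11           # Young's modulus of steel (Pa)
L = 0.3              # assumed arm length of a tabletop device (m)
wavelength = 633e-9  # assumed He-Ne laser wavelength (m)

# A uniform bar loaded by its own weight changes length by
# delta_L = rho * g * L^2 / (2 * E)
delta_L = rho * g * L**2 / (2 * E)

# One full fringe corresponds to a path difference of one wavelength;
# the beam traverses the arm twice, so the fringe shift is
fringe_shift = 2 * delta_L / wavelength

print(f"delta_L = {delta_L:.2e} m, fringe shift = {fringe_shift:.3f}")
```

With these assumed numbers the shift is a small fraction of a fringe, consistent with the conclusion that the elastic effect is far too small to explain the observed pattern.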
For details and background see the chapter TGD and GRT.
Mark Williams has made a habit of emailing me links to interesting articles. Last Sunday I realized that my mind was completely empty of thoughts and, for lack of anything better, decided to scan the emails. The link about the Snowball Earth model for pre-Cambrian climate brought to my mind the Expanding Earth model that I developed earlier to explain the Cambrian Explosion and the strange finding that the continents seem to fit nicely along their boundaries to form a single super-continent provided the radius of Earth is one half of its recent radius. I realized that this model forces a profound revision of the models of pre-Cambrian geology, climate, and biology. I glue below the abstract of the new chapter Expanding Earth Model and Pre-Cambrian Evolution of Continents, Climate, and Life of "Genes and Memes".
TGD inspired quantum cosmology predicts that astrophysical objects do not follow cosmic expansion except in jerk-wise quantum leaps increasing the gigantic value of the gravitational Planck constant characterizing the space-time sheets mediating gravitational interactions between two masses or gravitational self-interactions. This assumption provides an explanation for the apparent cosmological constant.
For details see the chapter Quantum Astrophysics.
I learned this morning about highly interesting new results challenging general relativity based cosmology. Sean Carroll and Lubos Motl commented on the article A weak lensing detection of a deviation from General Relativity on cosmic scales by Rachel Bean. The article Cosmological Perturbation Theory in the Synchronous and Conformal Newtonian Gauges by Chung-Pei Ma and Edmund Bertschinger allows one to understand the mathematics of the cosmological perturbation theory necessary for a deeper understanding of Bean's article.
The message of the article is that under reasonable assumptions General Relativity leads to a wrong prediction for cosmic density perturbations in the scenario involving cold dark matter and a cosmological constant to explain accelerated expansion. The following represents my first impressions after reading the article of Rachel Bean and the paper on cosmological perturbation theory.
"Reasonable" means at least the following assumptions about the perturbations of the metric and of the energy momentum tensor.
These assumptions boil down to a simple equation
2. The results
The prediction can be tested and Rachel Bean indeed did it.
From these two data sources Rachel Bean deduces that η differs significantly from the GRT value and concentrates around η=1/3, meaning that the scaling of the time component of the metric perturbation is roughly 3 times larger than that of the spatial component.
3. What could be the interpretation of the discrepancy?
What could η=1/3 mean physically and mathematically?
The result would not challenge General Relativity (if one accepts the notion of dark energy) but only the assumption about the character of the density perturbation. Instead of matter it would be the density of dark energy which is perturbed.
4. TGD point of view
What could TGD say about this?
Zero energy ontology has meant a real quantum leap in the understanding of the exact structure of the world of classical worlds (WCW). There are however still open questions and interpretational problems. The following comments are about a quantal interpretation of Robertson-Walker cosmology provided by zero energy ontology.
Consider now the possible cosmological implications of this picture. In the TGD framework Robertson-Walker cosmologies correspond to Lorentz invariant space-time surfaces in M^4_+ and the parameter a corresponds to cosmic time.
For a background see the chapter TGD and Cosmology.
There is an intense flood of exciting news from biology, neuroscience, cosmology, and particle physics which is very interesting from the TGD point of view. Unfortunately, I do not have the time and energy to comment on all of it. Special thanks to Mark Williams and Ulla for sending links: I try to find time to write comments.
One of the most radical parts of quantum TGD is the view of dark matter as a hierarchy of phases of matter with varying values of Planck constant, realized in terms of a generalization of the 8-D imbedding space to a book like structure. The latest blow against existing models of dark matter is the discovery of a new strange aspect of dark matter discussed in the popular article Galaxy study hints at cracks in dark matter theories in New Scientist. The original article in Nature is titled Universality of galactic surface densities within one dark halo scale-length. I glue here a short piece of the New Scientist article.
A galaxy is supposed to sit at the heart of a giant cloud of dark matter and interact with it through gravity alone. The dark matter originally provided enough attraction for the galaxy to form and now keeps it rotating. But observations are not bearing out this simple picture. Since dark matter does not radiate light, astronomers infer its distribution by looking at how a galaxy's gas and stars are moving. Previous studies have suggested that dark matter must be uniformly distributed within a galaxy's central region, a confounding result since the dark matter's gravity should make it progressively denser towards a galaxy's centre. Now, the tale has taken a deeper turn into the unknown, thanks to an analysis of the normal matter at the centres of 28 galaxies of all shapes and sizes. The study shows that there is always five times more dark matter than normal matter where the dark matter density has dropped to one-quarter of its central value.
In the TGD framework both dark energy and dark matter are assumed to correspond to dark matter but with widely different values of Planck constant. The point is that the very large value of Planck constant for dark matter implies that its density is to an excellent approximation constant, as is also the density of dark energy. Planck constant is indeed predicted to be gigantic at the space-time sheets mediating the gravitational interaction.
The appearance of the number five as a ratio of mass densities sounds mysterious. Why should the average mass in a large volume be proportional to hbar, at least if hbar is not too large? Intriguingly, the number five appears also in the Bohr model for planetary orbits. The value of the gravitational Planck constant GMm/v0 assignable to the space-time sheets mediating the gravitational interaction between planet and star is gigantic: v0/c ∼ 2^-11 holds true for the inner planets. For the outer planets v0/c is smaller by a factor 1/5 so that the corresponding gravitational Planck constant is 5 times larger. Do these two fives represent a mere coincidence?
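The factor of five from the Bohr orbit model can be spelled out as a one-line computation. The masses cancel in the ratio, so only v0 matters; the numerical value GMm is an arbitrary placeholder here:

```python
# Ratio of gravitational Planck constants hbar_gr = G*M*m/v0 for outer
# versus inner planets, as described in the text. Since G*M*m cancels,
# the ratio depends only on v0.

v0_inner = 2**-11        # v0/c for inner planets (in units of c)
v0_outer = v0_inner / 5  # v0/c for outer planets, smaller by factor 1/5

def hbar_gr(GMm, v0):
    """Gravitational Planck constant hbar_gr = G*M*m/v0 (schematic units)."""
    return GMm / v0

GMm = 1.0  # arbitrary placeholder: cancels in the ratio
ratio = hbar_gr(GMm, v0_outer) / hbar_gr(GMm, v0_inner)
print(ratio)  # the outer-planet hbar_gr is 5 times larger
```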
Some further observations about the number five are in order. The angle 2π/5 relates closely to the Golden Mean appearing almost everywhere in biology. n=5 makes itself manifest also in the geometry of DNA (the twist per single nucleotide is π/5 and aromatic 5-cycles appear in DNA nucleotides). Could it be that the electron pairs associated with aromatic rings correspond to hbar=5×hbar0, as I have proposed? Note that the DNA as topological quantum computer hypothesis plays a key role in TGD inspired quantum biology.
For the background see the chapter TGD and Astrophysics.
There have been continual claims that the speed of light in the solar system is decreasing. The latest paper about this is by Sanejouand and in my opinion must be taken seriously. The situation is summarized by an excerpt from the abstract of the article:
The empirical evidences in favor of the hypothesis that the speed of light decreases by a few centimeters per second each year are examined. Lunar laser ranging data are found to be consistent with this hypothesis, which also provides a straightforward explanation for the so-called Pioneer anomaly, that is, a time-dependent blue-shift observed when analyzing radio tracking data from distant spacecrafts, as well as an alternative explanation for both the apparent time-dilation of remote events and the apparent acceleration of the Universe.
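As a quick consistency check of my own (not taken from the abstract): if the standard value of the Pioneer anomalous acceleration, a_P ≈ 8.74×10^-10 m/s^2, is reinterpreted as a secular drift of c, the implied rate is indeed a few centimeters per second per year:

```python
# Reinterpret the Pioneer anomalous acceleration as dc/dt and express
# the result in cm/s per year, to compare with the abstract's claim.

a_P = 8.74e-10             # Pioneer anomalous acceleration (m/s^2)
seconds_per_year = 3.156e7 # approximate length of a year in seconds

dc_per_year = a_P * seconds_per_year  # m/s per year
print(f"dc/dt ~ {dc_per_year * 100:.1f} cm/s per year")
```

The result is of order 3 cm/s per year, matching the order of magnitude quoted in the abstract.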
Before one can speak seriously about a change of c, one must specify precisely what the measurement of the speed of light means. In the GRT framework the speed of light is by definition a constant in local Minkowski coordinates. It seems very difficult to make sense of a varying speed of light since c is a purely locally defined notion.
What does TGD then predict?
For background see for instance the chapter TGD and Astrophysics of "p-Adic length Scale Hypothesis and Dark Matter Hierarchy".
Lubos had an interesting posting about how Jacobson derived Einstein's equations from thermodynamical considerations as a kind of equation of state. This has actually been one of the basic ideas of quantum TGD, where Einstein's equations do not make sense as microscopic field equations. The argument involves approximate Poincare invariance, Equivalence Principle, and proportionality of entropy to area (dS = kdA), so the result is perhaps not a complete surprise.
One starts from an expression for the variation of the area element dA for certain kinds of variations in the direction of a light-like Killing vector field and ends up with Einstein's equations. The Ricci tensor creeps in via the variation of dA, expressible in terms of the analog of geodesic deviation, which involves the curvature tensor in its expression. Since the geodesic equation involves the first variation of the metric, the equation of geodesic deviation involves its second variation, expressible in terms of the curvature tensor.
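Schematically, and with details suppressed (this is my compressed transcription, not Jacobson's exact formulation), the thermodynamic argument can be summarized as:

```latex
% Clausius relation applied to a local Rindler horizon with
% entropy proportional to area and Unruh temperature:
\delta Q = T\,\delta S, \qquad
S = \frac{kA}{4G\hbar}, \qquad
T = \frac{\hbar\kappa}{2\pi}

% Heat flux and area variation along the null generator k^a:
\delta Q = \int T_{ab}\,k^a k^b\, d\Sigma, \qquad
\delta A \sim \int R_{ab}\,k^a k^b\, d\Sigma
\quad (\text{via the Raychaudhuri equation})

% Demanding the Clausius relation for all null directions k^a yields
% Einstein's equations with an undetermined cosmological constant:
R_{ab} - \tfrac{1}{2}R\,g_{ab} + \Lambda g_{ab} = 8\pi G\, T_{ab}
```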
The result raises the question whether it makes sense to quantize the Einstein-Hilbert action, and in the light of quantum TGD the worry is justified. In TGD (and also in string models) Einstein's equations result as a long length scale approximation, whereas in short length scales the stringy description provides the space-time correlate for the Equivalence Principle. In fact, in the TGD framework the Equivalence Principle at the fundamental level reduces to a coset construction for two super-conformal algebras: super-symplectic and super Kac-Moody. The four-momenta associated with these algebras correspond to inertial and gravitational four-momenta.
In the following I will consider a different, more than 10 year old, argument implying that the empty space vacuum equations state the vanishing of the first and second variations of the volume element in a freely falling coordinate system, and will show how the argument implies empty space vacuum equations in the "world of classical worlds". I also show that the empty space Einstein equations at the space-time level allow an interpretation in terms of criticality of the volume element, perhaps serving as a correlate for the vacuum criticality of the TGD Universe. I also demonstrate how one can derive non-empty space Einstein equations in the TGD Universe and consider the interpretation.
1. Vacuum Einstein's equations from the vanishing of the second variation of volume element in freely falling frame
The argument of Jacobson leads to interesting considerations related to the second variation of the metric given in terms of the Ricci tensor. In the TGD framework the challenge is to deduce a good argument for why Einstein's equations hold true in long length scales, and reading Lubos's posting led to an idea of how one might understand the content of these equations geometrically.
2. The world of classical worlds satisfies vacuum Einstein equations
In quantum TGD this observation about the second variation of the metric led two decades ago to Einstein's vacuum equations for the Kähler metric of the space of light-like 3-surfaces ("world of classical worlds"), which is deduced to be a union of constant curvature spaces labeled by zero modes of the metric. The argument is very simple. The functional integration over configuration space degrees of freedom (a union of constant curvature spaces a priori: R_ij = λg_ij) involves the second variation of the metric determinant. The functional integral over small deformations of the 3-surface involves also the second variation of the volume element √g. The propagator for small deformations around the 3-surface is the contravariant Kähler metric and is contracted with R_ij = λg_ij to give the infinite-dimensional trace g^ij R_ij = λD = λ×∞. The result is infinite unless R_ij = 0 holds. Vacuum Einstein's equations must therefore hold true in the world of classical worlds.
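In compact form, a schematic transcription of this trace argument reads:

```latex
% Constant curvature ansatz for the WCW Kähler metric:
R_{ij} = \lambda\, g_{ij}

% The Gaussian functional integral contracts the propagator
% (the contravariant metric) with the Ricci tensor:
g^{ij} R_{ij} = \lambda\, D, \qquad D = \dim(\text{WCW}) = \infty

% Finiteness of the functional integral therefore forces
\lambda = 0 \;\Longrightarrow\; R_{ij} = 0 .
```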
4. Non-vacuum Einstein's equations: light-like projection of the four-momentum is proportional to the second variation of the four-volume in that direction
An interesting question is whether Einstein's equations in non-empty space-time could be obtained by generalizing this argument. The question is what interpretation one should give to the quantity
at a given point of space-time.
That light-like vectors play a key role in these arguments is interesting from TGD point of view since light-like 3-surfaces are fundamental objects of TGD Universe.
5. The interpretation of non-vacuum Einstein's equations as breaking of maximal quantum criticality in TGD framework
What could be the interpretation of the result in the TGD framework?
The news of yesterday morning came in an email from Jack Sarfatti. The news was that the gravitational wave detectors in the GEO600 experiment have been plagued by unidentified noise in the frequency range 300-1500 Hz. Craig J. Hogan has proposed an explanation in terms of a holographic Universe. By reading the paper I learned that the assumptions needed are essentially those of quantum TGD. Light-like 3-surfaces as basic objects, holography, and effective 2-dimensionality are some of the terms appearing repeatedly in the article.
Maybe this means a new discovery giving support for TGD. I hope that it does not make my life even more difficult in Finland. Readers have perhaps noticed that the discovery of a new long-lived particle in CDF, predicted by TGD already around 1990, turned out to be one of the most fantastic breakthroughs of TGD since the reported findings could be explained at a quantitative level. The side effect was that Helsinki University did not allow me to use the computer for my homepage anymore, and they also refused to redirect visitors to my new homepage. The goal was achieved: I have more or less disappeared from the web. It seems that TGD is becoming really dangerous and the power holders of science are getting nervous.
In any case, I could not resist the temptation to spend the day with this problem although I had firmly decided to use all my available time for updating the basic chapters of quantum TGD.
1. The experiment
Consider first the graviton detector used in the GEO600 experiment. The detector consists of two long arms (the length is 600 meters), essentially rulers of equal length. The incoming gravitational wave causes a periodic stretching of the arms: the lengths of the rulers vary. The detection of gravitons means that a laser beam is used to keep a record of the varying length difference. This is achieved by splitting the laser beam into two pieces using a beam splitter. After this the beams travel through the arms and bounce back to interfere in the detector. The interference pattern tells whether the beams spent slightly different times in the arms due to the stretching caused by the incoming gravitational radiation. The problem of the experimenters has been the presence of an unidentified noise in the range 100-1500 Hz.
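The basic sensitivity of such a detector can be illustrated by a simple single-pass sketch (ignoring the power and signal recycling used in the real instrument; the strain value below is a hypothetical number chosen for illustration):

```python
# Sketch of how an arm-length change shows up as a phase shift in the
# interferometer output. The strain h is hypothetical; the arm length
# is GEO600's, the wavelength an assumed Nd:YAG value.

import math

L = 600.0             # GEO600 arm length (m)
wavelength = 1064e-9  # assumed laser wavelength (m)
h = 1e-21             # hypothetical gravitational-wave strain

delta_L = h * L / 2.0                           # length change of one arm
delta_phi = 4 * math.pi * delta_L / wavelength  # round-trip phase shift

print(f"delta_L = {delta_L:.1e} m, delta_phi = {delta_phi:.1e} rad")
```

The minuscule phase shift shows why any extra noise source in the relevant frequency band is such a serious problem for the experimenters.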
The prediction of Measurement of quantum fluctuations in geometry by Craig Hogan, published in Phys. Rev. D 77, 104031 (2008), is that the holographic geometry of space-time should induce fluctuations of the classical geometry with a spectrum which is completely fixed. Hogan's prediction is very general and - if I have understood correctly - the fluctuations depend only on the duration (or length) of the laser beam, using Planck length as a unit. Note that there is no dependence on the length of the arms and the fluctuations characterize only the laser beam. Although Planck length appears in the formula, the fluctuations need not have anything to do with gravitons but could be due to the failure of the classical description of laser beams. The great surprise was that Hogan's prediction for the noise is of the same order of magnitude as the unidentified noise bothering the experimenters in the range 100-700 Hz.
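One common paraphrase of the scaling involved (my reading of the prediction, not a quote from Hogan's paper) is that the position uncertainty accumulates like a random walk with Planck-length steps, so over a path of length L the rms fluctuation grows as sqrt(l_P × L):

```python
# Order-of-magnitude sketch of the holographic random-walk scaling:
# rms transverse fluctuation ~ sqrt(l_P * L) over a path of length L.
# The path length is assumed to be of order the GEO600 arm length.

import math

l_P = 1.6e-35  # Planck length (m)
L = 600.0      # assumed path length (m)

rms = math.sqrt(l_P * L)
print(f"rms fluctuation ~ {rms:.1e} m")
```

The striking feature is how a Planck-scale step size accumulates into a fluctuation some nineteen orders of magnitude larger over a laboratory-scale path.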
2. Hogan's theory
Let us try to understand Hogan's theory in more detail.
The model starts from an optics inspired heuristic argument.
This argument has some aspects which I find questionable.
2.2 Argument based on uncertainty principle for waves with Planck wave length
The second argument can do without diffraction but still uses waves with Planck wave length.
2.3 Description in terms of equivalent gravitonic wave packet
Hogan discusses also an effective description of holographic noise in terms of gravitational wave packet passing through the system.
3. TGD based model
In the TGD based model for the claimed noise one can avoid the assumption about waves with Planck wave length. Rather, Planck length corresponds to the size of the transversal cross section of so-called massless extremals (MEs), orthogonal to the direction of propagation. A further element is the notion of so-called number theoretic braids, leading to a discretization of quantum TGD at the fundamental level. The mechanism inducing the distribution of the travel times of the reflected photon relies on the transverse extension of MEs and the discretization in terms of number theoretic braids. Note that also in Hogan's model it is essential that one can speak about the position of a particle in the beam.
3.1 Some background
Consider first the general picture behind the TGD inspired model.
3.2 The model
Consider now the TGD inspired model for a laser beam of fixed duration T.
3.3 The relationship with hierarchy of Planck constants
It is interesting to combine this picture with the vision about the hierarchy of Planck constants (I am just now developing in detail the representation of the ideas involved from the perspective given by the intense work during the last five years).
For details and background see the updated chapter Quantum Astrophysics of "Physics in Many-Sheeted Space-time".