What's new in

Physics in Many-Sheeted Space-Time

Note: Newest contributions are at the top!



Year 2009



Michelson-Morley experiment revisited

The famous Michelson-Morley experiment carried out about a century ago demonstrated that the velocity of light does not depend on the velocity of the source with respect to the receiver and killed the ether hypothesis. This could have led to the discovery of Special Relativity. Reality is not so logical however: Einstein actually ended up with his Special Relativity from the symmetries of Maxwell's equations. Amusingly, about a hundred years later Sampo Pentikäinen told me about a Youtube video reporting a modern version of the Michelson-Morley experiment by Martin Grusenick in which highly non-trivial results are obtained. If I were a "real" scientist enjoying a monthly salary I would of course not pay the slightest attention to this kind of stuff. But I am not a "real" scientist as many of my colleagues are happy to testify (without the quotation marks of course) and have therefore nothing to lose. This gives me the luxury of thinking and I can even try to understand what is involved assuming that the discovery is real.

To my best knowledge there is no written document about the experiment of Martin Grusenick on the web but the Youtube video is excellent. The only detail which might give a reason to suspect fraud is Grusenick's statement that the mirror used to magnify and reflect the interference pattern to a plywood screen is planar: from the geometry of the arrangement it must be concave, and I have the strong impression that this is just a linguistic lapse. The reader willing to learn in more detail how a Michelson-Morley interferometer works can watch a very short video sketching how the interference pattern is created. A longer video describes the principles involved in more detail.

I do not bother to transform the LaTeX to HTML since a lot of formulas are involved and automatic translators do not work properly. Instead, I give a link to a pdf file representing the results of Grusenick and their analysis and interpretation in detail.

The results are the following.

  1. The findings of Grusenick can be understood if the radial component g_rr of the metric of Earth at the Earth's surface deviates from the Schwarzschild metric by a factor 1+Δ, where Δ is of order Δ ≈ 10^-4.

  2. If one requires that G_tt vanishes for the modification of the Schwarzschild metric, Δ(r) behaves as Δ(R) R/r outside Earth's surface in good approximation (a numerical sketch of this profile is given below). If the gravitational fields of stars, say the Sun, have a similar radial component g_rr, the predicted effects on planetary orbits are significant only for elliptic orbits sufficiently near to the surface of the star.

  3. In General Relativity the presence of non-vanishing "pressure" terms G_rr, G_θθ, G_φφ in the Einstein tensor together with a vanishing energy density is difficult to understand. In TGD framework these terms could be due to the sub-manifold constraint forcing the allowed space-time surfaces to be extremals of Kähler action, with Einstein equations satisfied for the energy momentum tensor of matter (not containing the contribution of Kähler action).

  4. The effect of the gravitational field of the Sun on the interference pattern measured at the Earth's surface can be visible (a fraction of order 10^-2 of the effect of Earth itself) and the experiments indeed demonstrate a diurnal variation of the interference pattern.

  5. The extended Michelson-Morley interferometer could provide a new high precision tool to measure the behavior of g_rr as a function of the distance from the Earth and to test the proposed model.

Addition: The change of the distance between the beam splitter and the mirror in the vertical position might explain the observations in terms of existing physics. A simple estimate however shows that this effect is by a factor of order 10^-3 too small. I am grateful to Samppa for suggesting the estimate.
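
The following is a minimal numerical sketch (my own illustration, not part of Grusenick's material or the linked pdf) of the modified radial metric component suggested by items 1 and 2 above: the Schwarzschild value of g_rr is multiplied by 1+Δ(r) with Δ(r) = Δ(R_E) R_E/r and Δ(R_E) ≈ 10^-4. The constants and function names are my own choices.

# Sketch of the modified radial metric component suggested by items 1-2 above:
# g_rr(r) = [1 - 2GM/(c^2 r)]^(-1) * (1 + Delta(r)),  Delta(r) = Delta(R_E)*R_E/r.
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_E = 5.972e24     # kg, mass of Earth
R_E = 6.371e6      # m, radius of Earth
Delta_RE = 1.0e-4  # order of magnitude quoted in the text

def g_rr(r):
    schwarzschild = 1.0 / (1.0 - 2.0 * G * M_E / (c**2 * r))
    return schwarzschild * (1.0 + Delta_RE * R_E / r)

for r in (1.0 * R_E, 2.0 * R_E, 10.0 * R_E):
    print(f"r = {r / R_E:5.1f} R_E:  g_rr = {g_rr(r):.8f}")

The Schwarzschild part deviates from unity only by about 10^-9 at the Earth's surface, so the Δ term clearly dominates the proposed effect.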

For details and background see the chapter TGD and GRT.



Expanding Earth Model and Pre-Cambrian Evolution of Continents, Climate, and Life

Mark Williams has been in the habit of emailing me links to interesting articles. Last Sunday I realized that my mind was completely empty of thoughts and, for lack of anything better, decided to scan the emails. The link about a Snowball Earth model for the pre-Cambrian climate brought to my mind the Expanding Earth model that I developed earlier to explain the Cambrian Explosion and the strange finding that continents seem to fit nicely along their boundaries to form a single super-continent provided that the radius of Earth is one half of the recent radius. I realized that this model forces a profound revision of models of pre-Cambrian geology, climate, and biology. I glue below the abstract of the new chapter Expanding Earth Model and Pre-Cambrian Evolution of Continents, Climate, and Life of "Genes and Memes".

TGD inspired quantum cosmology predicts that astrophysical objects do not follow cosmic expansion except in jerk-wise quantum leaps increasing the gigantic value of the gravitational Planck constant characterizing the space-time sheets mediating gravitational interactions between two masses or gravitational self interactions. This assumption provides an explanation for the apparent cosmological constant.

Also planets are predicted to expand in a stepwise manner. This provides a new version of the Expanding Earth theory originally postulated to explain the intriguing findings suggesting that continents once formed a connected continent covering almost the entire surface of Earth, but with a radius which was one half of the recent one.

This leads also to a rather fascinating vision about biology. The mysterious Cambrian Explosion in which a large number of new species emerged suddenly (recognized already by Darwin as the strongest objection against his theory) could be understood if life had gone to underground lakes and seas formed during the expansion period as fractures formed and the underground cavities expanded and were filled with water. This would have allowed life to escape cosmic radiation, meteoric bombardment, and the extremely cold climate during the Proterozoic period preceding the Cambrian Explosion, and to migrate back as highly developed life forms as the period of glaciations ended.

Before the Proterozoic era the radius of Earth would have been one half of its recent value and would then have started to grow at a gradually accelerating rate. This forces one to rewrite the entire geological and climate history of Earth during the Proterozoic period.

  1. The postulated, physically implausible cyclic appearance of a single connected super-continent containing all land mass can be given up and replaced with a single continent containing large inland seas. There is no need to postulate the existence of a series of super-oceans whose ocean floors would have been subducted totally so that no direct information about them would exist nowadays.

  2. The dominating model for the pre-Cambrian climate is the so-called Snowball Earth model inspired by the finding that signatures of glaciations have been found at regions of Earth which should have been near the Equator during the Proterozoic. The Snowball Earth model has several difficulties: in particular, there is a lot of evidence that a series of ordinary glaciations was in question. For the R/2 option the regions now located at the Equator would have actually been near the North Pole so that the glaciations would have indeed been ordinary glaciations proceeding from the poles. A killer prediction is the existence of non-glaciated regions at apparent southern latitudes of about 45 degrees, and evidence for these indeed exists (the article is in Finnish but contains a brief abstract in English)! The model makes also testable paleomagnetic killer predictions. In particular, during periods when the magnetic dipole is in the direction of the rotation axis, the directions of the magnetic fields for the R/2 model are predicted to be the same at the South Pole and the apparent Equator, and opposite for the standard option.

For details see the chapter Quantum Astrophysics.



A new cosmological finding challenging General Relativity

I learned this morning about highly interesting new results challenging General Relativity based cosmology. Sean Carroll and Lubos Motl commented on the article A weak lensing detection of a deviation from General Relativity on cosmic scales by Rachel Bean. The article Cosmological Perturbation Theory in the Synchronous and Conformal Newtonian Gauges by Chung-Pei Ma and Edmund Bertschinger allows one to understand the mathematics related to the cosmological perturbation theory necessary for a deeper understanding of the article of Bean.

The message of the article is that under reasonable assumptions General Relativity leads to a wrong prediction for cosmic density perturbations in the scenario involving cold dark matter and cosmological constant to explain accelerated expansion. The following represents my first impressions after reading the article of Rachel Bean and the paper about cosmological perturbation theory.

1. Assumptions

"Reasonable" means at least following assumptions about the perturbation of the metric and of energy momentum tensor.

  1. The perturbations to the Robertson-Walker metric contain only two local scalings parameterized as dτ² → (1+2Ψ)dτ² and dx_i dx^i → (1-2Φ)dx_i dx^i. Vector perturbations and tensor perturbations (gravitational radiation classically) are neglected.

  2. The traceless part (in the 3-D sense) of the perturbation of the energy momentum tensor vanishes. Geometrically this means that the perturbation does not contain a term for which the contribution to 3-curvature would vanish. In the hydrodynamical picture the vanishing of this term would mean that the mass current for the perturbation contains only a term representing incompressible flow. During the period when matter and radiation were coupled this assumption makes sense. The non-vanishing of this term would mean the presence of a flow component - say radiation of some kind - which couples only very weakly to the background matter. Neutrinos would represent one particular example of this kind of contribution.

  3. The model of cosmology used is the so-called ΛCDM model (cosmological constant and cold dark matter).

These assumptions boil down to a simple equation

η= Φ/Ψ=1.

2. The results

The prediction can be tested and Rachel Bean indeed did it.

  1. Ψ makes itself visible in the motion of massive objects such as galaxies since they couple to Newton's potential. This motion in turn makes itself visible as detected modifications of the microwave background from the ideal one. The so-called Integrated Sachs-Wolfe effect is due to the redshift of microwave photons between the last scattering surface and Earth, caused by the gravitational fields of massive objects. Ordinary matter does not contribute to this effect but dark energy does.

  2. Φ makes itself visible in the motion of light. The so-called weak lensing effect distorts the images of distant objects: the apparent size is larger than the real one and there is also a distortion of the shape of the object.

From these two data sources Rachel Bean deduces that η differs significantly from the GRT value and concentrates around η=1/3, meaning that the scaling of the time component of the metric perturbation is roughly 3 times larger than the spatial scaling.

3. What could be the interpretation of the discrepancy?

What could η=1/3 mean physically and mathematically?

  1. From Cosmological Perturbation Theory in the Synchronous and Conformal Newtonian Gauges one learns that for neutrinos causing shear stress one has Φ = (1+2R_ν/5)Ψ, where R_ν is the mass fraction of neutrinos: hence η should increase rather than decrease! If this formula generalizes, a negative mass fraction R = -5/3 would be present (a small numerical check follows after this list)! Something goes badly wrong if one tries to interpret the result in terms of the perturbations of the density of matter - irrespective of whether it is visible or dark!

  2. What about the perturbations of the density of dark energy? Geometrically η=1/3 would mean that the trace of the metric tensor defined in terms of the background metric is not affected. This means conservation of the metric determinant for the deformations so that small four-volumes are not affected. As a consequence, the interaction term T^αβ δg_αβ receives a contribution from G^αβ but not from the cosmological term Λg^αβ. This would suggest that the perturbation is not that of matter but of the vacuum energy density for which one would have

    Λg^αβ δg_αβ = 0 .

The result would not challenge General Relativity (if one accepts the notion of dark energy) but only the assumption about the character of the density perturbation. Instead of matter it would be the density of dark energy which is perturbed.
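
As a sanity check of the arithmetic in item 1 above, one can simply invert the shear stress formula Φ = (1+2R/5)Ψ for the mass fraction implied by a measured η. The few lines below are my own illustration.

# Inverting the shear stress formula of item 1: eta = Phi/Psi = 1 + 2R/5  =>  R = 5*(eta-1)/2.
def mass_fraction_from_eta(eta):
    return 5.0 * (eta - 1.0) / 2.0

print(mass_fraction_from_eta(1.0))        # GRT expectation eta = 1 gives R = 0
print(mass_fraction_from_eta(1.0 / 3.0))  # Bean's eta = 1/3 gives R = -5/3, an unphysical negative fraction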

4. TGD point of view

What could TGD say about this?

  1. In TGD framework one has many-sheeted space-time, a dark matter hierarchy represented by the book like structure of the generalized imbedding space, and dark energy is replaced with dark matter at pages of the book with gigantic Planck constant, so that the Compton lengths of ordinary particles are gigantic and the density of matter is constant in long length scales; one can therefore speak about a cosmological constant in the General Relativity framework. The periods with vanishing 3-curvature are replaced by phase transitions changing the value of Planck constant at some space-time sheets and inducing a lengthening of quantum scales: the cosmology during this kind of period is fixed apart from the parameter telling the maximal duration of the period. Also the early inflationary period would correspond to this kind of phase transition. Obviously, many new elements are involved so that it is difficult to say anything quantitative.

  2. Quantum criticality means the existence of deformations of the space-time surface for which the second variation of Kähler action vanishes. The first guess would be that cosmic perturbations correspond to this kind of deformations. In principle this would allow a quantitative modeling in TGD framework. Robertson-Walker metrics correspond to vacuum extremals of Kähler action with an infinite spectrum of this kind of deformations (this is expected to hold true quite generally although the deformations disappear as one deforms the vacuum extremal more and more).

  3. Why should the four-volumes defined by the Robertson-Walker metric remain invariant under these perturbations, as η=1/3 would suggest? Are the critical perturbations of the energy momentum tensor indeed those for the dominating part of dark matter with gigantic values of Planck constant, having an effective representation in terms of a cosmological constant in GRT, so that the above mentioned equations implying conservation of four-volume result as a consequence?

  4. The most natural interpretation for the space-time sheets mediating gravitation is as magnetic flux tubes connecting gravitationally interacting objects and thus string like objects of astrophysical size. For this kind of objects the effectively 2-dimensional energy momentum tensor is proportional to the induced metric. Could this mean - as I proposed many years ago when I still took seriously the notion of the cosmological constant as something fundamental in TGD framework - that in the GRT description, based on replacing string like objects with an energy momentum tensor, the resulting energy momentum tensor is proportional to the induced metric? String tension would explain the negative pressure preventing the identification of dark energy in terms of ordinary particles.

For a background see the chapters TGD and Cosmology and Cosmic Strings.



Zero energy ontology and quantum version of Robertson-Walker cosmology

Zero energy ontology has meant a real quantum leap in the understanding of the exact structure of the world of classical worlds (WCW). There are however still open questions and interpretational problems. The following comments are about a quantal interpretation of Robertson-Walker cosmology provided by zero energy ontology.

  1. The light-like 3-surfaces - or equivalently the corresponding space-time sheets - inside a particular causal diamond (CD) are the basic structural unit of the world of classical worlds (WCW). CD (or strictly speaking CD×CP2) is characterized by the positions of the tips of the intersection of the future and past directed light-cones defining it. The Lorentz invariant temporal distance a between the tips allows one to characterize the CDs related by Lorentz boosts, and SO(3) acts as the isotropy group of a given CD. CDs with a given value of a are parameterized by a Lobatchevski space - call it L(a) - identifiable as the a²=constant hyperboloid of the future light-cone and having an interpretation as a constant time slice in TGD inspired cosmology.

  2. The moduli space for CDs characterized by a given value of a is M4×L(a). If one poses no restrictions on the values of a, the union of all CDs corresponds to M4×M4+, where M4+ corresponds to the interior of the future light-cone. An F-theorist might get excited about dimension 12 for M4×M4+×CP2: this is of course just a numerical coincidence.

  3. The p-adic length scale hypothesis follows if it is assumed that a comes as octaves of the CP2 time scale: a_n = 2^n T_CP2. For this option the moduli space would be a discrete union of spaces M4×L(a_n). A weaker condition would be that a comes as prime multiples of T_CP2. In this case the preferred p-adic primes p ≈ 2^n correspond to a = a_n and would be natural winners in the fight for survival. If a continuum is allowed, the p-adic length scale hypothesis must be a result of dynamics alone. Algebraic physics favors quantization at the level of moduli spaces.

  4. Also unions of CDs are possible. The proposal has been that CDs form a fractal hierarchy in the sense that there are CDs within CDs but that CDs do not intersect. A more general option would allow also intersecting CDs.

Consider now the possible cosmological implications of this picture. In TGD framework Robertson-Walker cosmologies correspond to Lorentz invariant space-time surfaces in M4+ and the parameter a corresponds to cosmic time.

  1. First some questions. Could Robertson-Walker coordinates label CDs rather than points of the space-time surface at a deeper level? Does the parameter a labeling CDs really correspond to cosmic time? Do astrophysical objects correspond to sub-CDs?

  2. An affirmative answer to these questions is consistent with classical causality since the observer, identified as - say - the upper boundary of a CD, receives classical positive/negative energy signals from sub-CDs arriving with a velocity not exceeding light-velocity. The M4×L(a) decomposition provides also a more precise articulation of the answer to the question of how the non-conservation of energy in cosmological scales can be consistent with Poincare invariance. Note also that the empirically favored sub-critical Robertson-Walker cosmologies are unavoidable in this framework whereas the understanding of sub-criticality is one of the fundamental open problems in General Relativity inspired cosmology.

  3. What objections against this interpretation can one imagine?

    1. Robertson-Walker cosmology reduces to the future light-cone only at the limit of vanishing density of gravitational mass. One could however argue that the scaling factor of the metric of L(a) need not be the a² corresponding to M4+ but can be a more general function of a. This would allow all Robertson-Walker cosmologies with sub-critical mass density. This argument makes sense also for the a = a_n option.

    2. Lorentz invariant space-time surfaces in CD provide an elegant and highly predictive model for cosmology. Should one give up this model in favor of the proposed one? This need not be the case. Quantum classical correspondence requires that also the quantum cosmology has a representation at the space-time level.

  4. What is then the physical interpretation for the density of gravitational mass in Robertson-Walker cosmology in the new framework? A given CD, characterized by a point of M4×L(a), certainly has a finite gravitational mass identified as the mass assignable to the positive/negative energy state at either the upper or the lower light-like boundary of the CD. In zero energy ontology this mass is actually an average over a superposition of pairs of positive and negative energy states with varying energies. Since quantum TGD can be seen as a square root of thermodynamics, the resulting mass has only a statistical meaning. One can assign to a CD a probability amplitude as a wave function in M4×L(a) depending on various quantum numbers. The cosmological density of gravitational mass would correspond to the quantum average of the mass density determined by this amplitude. Hence the quantum view about cosmology would be statistical, as is also the view provided by standard cosmology.

  5. Could cosmological time really be quantized as a = a_n = 2^n T_CP2? Note that other values of a are possible at the pages of the book like structure representing the generalized imbedding space since a scales as r = hbar/hbar_0 at these pages. All rational multiples of a_n are possible for the most general option. The quantization of a does not lead to any obvious contradiction since M4 time would correspond to the time measured in the laboratory and there is no clock keeping count of the flow of a and telling whether it is really discrete or continuous. It might be however possible to deduce experimental tests for this prediction since it holds true in all scales. Even for elementary particles the time scale a is macroscopic. For electron it is 0.1 seconds, which defines the fundamental bio-rhythm (a rough numerical check is sketched after this list).

  6. The quantization for a encourages one to consider also the quantization for the space of Lorentz boosts characterized by L(a), obtained by restricting the boosts to a subgroup of the Lorentz group. A more concrete picture is obtained from the representation of SL(2,C) as Möbius transformations of the plane.

    1. The restriction to a discrete subgroup of the Lorentz group SL(2,C) is possible. This would allow an extremely rich structure. The most general discrete subgroup would be a subgroup of SL(2,QC), where QC could be any algebraic extension of the complex rational numbers. In particular, discrete subgroups of the rotation group and powers L^n of a basic Lorentz boost L = exp(η) corresponding to a motion with a fixed velocity v_0 = tanh(η) define lattice like structures in L(a). This would effectively mean a cosmology in a 4-D lattice. Note that everything is fully consistent with the basic symmetries.

    2. The alternative possibility is that all points of L(a) are possible but that the probability amplitude is invariant under some discrete subgroup of SL(2,QC). The first option could be seen as a special case of this.

    3. One can consider also the restriction to a discrete subgroup of SL(2,R), known as a Fuchsian group. This would mean a spontaneous breaking of Lorentz symmetry since only boosts in one particular direction would be allowed. The modular group SL(2,Z) and its subgroups known as congruence subgroups define an especially interesting hierarchy of groups of this kind: the tessellations of the hyperbolic plane provide a concrete representation for the resulting hyperbolic geometries.

    4. Is there any experimental support for these ideas? There are indeed claims for the quantization of cosmic recession velocities of quasars (see Fang, L. Z. and Sato, H. (1985): Is the Periodicity in the Distribution of Quasar Red Shifts an Evidence of Multiple Connectedness of the Universe?, Gen. Rel. and Grav., Vol. 17, No. 11). For non-relativistic velocities this means that in a given direction there are objects for which the corresponding Lorentz boosts are powers of a basic boost exp(η). The effect could be due to a restriction of the allowed Lorentz boosts to a discrete subgroup or to the invariance of the cosmic wave function under this kind of subgroup. These effects should take place in all scales: in particle physics they could manifest themselves as a periodicity of production rates as a function of η, closely related to the so-called rapidity variable y.

  7. The possibility of intersecting CDs would mean violent collisions of sub-cosmologies. One could consider a generalized form of the Pauli exclusion principle denying the intersections.
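
The 0.1 second time scale assigned to the electron in item 5 can be checked at the order-of-magnitude level. The sketch below is my own illustration and assumes a_n = 2^n T_CP2 with n = 127 (the Mersenne prime M_127 = 2^127 - 1 assigned to the electron in p-adic mass calculations) and a CP2 time scale of order 10^4 Planck times; both inputs should be understood as assumptions of the sketch.

# Order-of-magnitude check of the 0.1 second time scale assigned to the electron in item 5.
# Assumptions: a_n = 2^n * T_CP2, n = 127 for the electron, T_CP2 ~ 1e4 Planck times.
t_Planck = 5.39e-44           # s
T_CP2 = 1.0e4 * t_Planck      # assumed CP2 time scale
n = 127
a_n = 2.0**n * T_CP2
print(f"a_127 ~ {a_n:.2e} s")  # about 0.09 s, i.e. the claimed 0.1 second bio-rhythm scale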

For a background see the chapter TGD and Cosmology.



A new dark matter anomaly

There is an intense flood of exciting news from biology, neuroscience, cosmology, and particle physics which is very interesting from the TGD point of view. Unfortunately, I do not have the time and energy to comment on all of it. Special thanks to Mark Williams and Ulla for sending links: I try to find time to write comments.

One of the most radical parts of quantum TGD is the view about dark matter as a hierarchy of phases of matter with varying values of Planck constant realized in terms of a generalization of the 8-D imbedding space to a book like structure. The latest blow against existing models of dark matter is the discovery of a new strange aspect of dark matter discussed in the popular article Galaxy study hints at cracks in dark matter theories in New Scientist. The original article in Nature is titled Universality of galactic surface densities within one dark halo scale-length. I glue here a short piece of the New Scientist article.

A galaxy is supposed to sit at the heart of a giant cloud of dark matter and interact with it through gravity alone. The dark matter originally provided enough attraction for the galaxy to form and now keeps it rotating. But observations are not bearing out this simple picture. Since dark matter does not radiate light, astronomers infer its distribution by looking at how a galaxy's gas and stars are moving. Previous studies have suggested that dark matter must be uniformly distributed within a galaxy's central region - a confounding result since the dark matter's gravity should make it progressively denser towards a galaxy's centre. Now, the tale has taken a deeper turn into the unknown, thanks to an analysis of the normal matter at the centres of 28 galaxies of all shapes and sizes. The study shows that there is always five times more dark matter than normal matter where the dark matter density has dropped to one-quarter of its central value.

In TGD framework both dark energy and dark matter are assumed to correspond to dark matter in the TGD sense, but with widely different values of Planck constant. The point is that a very large value of Planck constant for dark matter implies that its density is in an excellent approximation constant, as is also the density of dark energy. Planck constant is indeed predicted to be gigantic at the space-time sheets mediating gravitational interaction.

The appearance of the number five as a ratio of mass densities sounds mysterious. Why should the average mass in a large volume be proportional to hbar, at least if hbar is not too large? Intriguingly, the number five appears also in the Bohr model for planetary orbits. The value of the gravitational Planck constant GMm/v_0 assignable to the space-time sheets mediating gravitational interaction between planet and star is gigantic: v_0/c ∼ 2^-11 holds true for the inner planets. For the outer planets v_0/c is by a factor 1/5 smaller so that the corresponding gravitational Planck constant is 5 times larger. Do these two fives represent a mere coincidence?

  1. In accordance with TGD inspired cosmology, suppose that visible matter and also the matter which is conventionally called dark matter have emerged from the decay and widening of cosmic strings into magnetic flux tubes. Assume that the string tension can be written as k×hbar/G, with k a numerical constant.

  2. Suppose that the values of hbar come as pairs hbar = n×hbar_0 and 5×hbar. Suppose also that for a given value of hbar the length of the cosmic string (if present at all) inside a sphere of radius R is given by L = x(n)R, with x(n) a numerical constant which can depend on the pair but is the same for the members of the pair (hbar, 5×hbar). This assumption is supported by the velocity curves of distant stars around galaxies.

  3. These assumptions imply that the masses of matter for a pair (hbar, 5×hbar) corresponding to a given value of hbar in a volume of size R are given by M(hbar) = k×x(hbar)×hbar×R/G and M(5×hbar) = 5×M(hbar). This would explain the finding if visible matter corresponds to hbar_0, and x(n) is much smaller for pairs (n>1, 5×n) than for the pair (1,5).

  4. One can explain the pairing in TGD framework. Let us accept the earlier hypothesis that the preferred values of hbar correspond to the number theoretically maximally simple quantum phases q = exp(i2π/n) emerging first in the number theoretical evolution, which has a nice formulation in terms of algebraic extensions of rationals and p-adics and the gradual migration of matter to the pages of the book like structure labelled by large values of Planck constant. These number theoretically simple quantum phases correspond to n-polygons drawable by ruler and compass construction. This predicts that the preferred values of hbar correspond to a power of 2 multiplied by a product of distinct Fermat primes F_k = 2^(2^k)+1. The list of known Fermat primes is short and given by F_k, k=0,1,2,3,4, giving the Fermat primes 3, 5, 17, 257, 2^16+1. This hypothesis indeed predicts that Planck constants hbar and 5×hbar appear as pairs.

  5. Why should the pair (1, F_1=5) then be favored? Could the reason be that n=5 is also the smallest integer making possible a universal topological quantum computer: the quantum phase q = exp(i2π/5) characterizes the braiding coding for the topological quantum computer program. Or is the reason simply that this pair corresponds to the number theoretically simplest pair, which must have emerged first in the number theoretic evolution?

  6. This picture supports the view that ordinary matter and what is usually called dark matter are characterized by Planck constants hbar_0 and 5×hbar_0, and that the space-time sheets mediating gravitational interaction correspond to dark energy because the density of matter at these space-time sheets must be constant in an excellent approximation since the Compton lengths are so gigantic.

  7. Using the fact that 4 per cent of matter is visible, this means that n=5 corresponds to 20 per cent of dark matter in the standard sense. Pairs (n>1, 5×n) should contribute the remaining 2 per cent of dark matter. The fractal scaling law

    x(n) ∝ 1/n^r

    allowing pairs defined by all Fermat integers not divisible by 5 would give for the mass fraction of conventional dark matter with n>1 the expression

    p = 6 × ∑_k 2^(-k(r-1)) × [2^(-(r-1)) + ∑ n_F^(-(r-1))] × (4/100) = (24/100) × (1-2^(-(r-1)))^(-1) × [2^(-(r-1)) + ∑ n_F^(-(r-1))] .

    Here n_F denotes a Fermat integer which is a product of some of the Fermat primes in the set {3, 17, 257, 2^16+1}; the exponent is r-1 rather than r because the mass of a pair is proportional to n×x(n) ∝ n^(1-r). The contribution from n=2^k, k>0, gives the term not included in the sum over n_F. r=4.945 predicts p=2.0035 per cent and that the mass density of dark matter should scale down as 1/hbar^(r-1) = 1/hbar^3.945. A numerical check of this formula is sketched after this list.

  8. The prediction brings to mind the scaling 1/a^(r-1) for the cosmological mass density. The a^-4 scaling for the radiation dominated cosmology is very near to this scaling. r=5 would predict p=1.9164, which is of course consistent with the data. This inspires the working hypothesis that the density of dark matter as a function of hbar scales just like the density of matter as a function of cosmic time at a particular epoch. In matter dominated cosmology with mass density behaving as 1/a^3 one would have r=4 and p=4.45. In asymptotic cosmology with mass density behaving as 1/a^2 (according to TGD) one would have r=3 and p=11.68.

  9. Living systems would represent a deviation from the "fractal thermodynamics" for hbar since for the typical values of hbar associated with the magnetic bodies in living systems (say hbar = 2^44 hbar_0 for EEG to guarantee that the energies of EEG photons are above the thermal threshold) the density of the dark matter would be extremely small. Bio-rhythms are assumed to come as powers of 2 in the simplest model for the bio-system: the above considerations raise the question whether these rhythms could be accompanied by 5-multiples and perhaps also by Fermat integer multiples. For instance, the fundamental 10 Hz alpha frequency could be accompanied by a 2 Hz frequency and the 40 Hz thalamocortical resonance frequency by an 8 Hz frequency.

This model is an oversimplification obtained by assuming only singular coverings of CD. In principle both coverings and factor spaces of both CD and CP2 are possible. If a singular covering of both CP2 and CD is involved and one has n=5 for both, then the ratio of mass densities is 1/25, or about 4 per cent. This is not far from the experimental ratio of about 4 per cent of the density of visible matter to the total density of ordinary matter, dark matter, and dark energy. I interpret this as an accident: dark energy can correspond to dark matter only if the Planck constant is very large, and a natural place for dark energy is at the space-time sheets mediating gravitational interaction.
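
The scaling law above is easy to check numerically. The sketch below is my own check, not taken from the chapter: it assumes that the pair (n, 5n) contributes in proportion to n×x(n) ∝ n^(1-r), that the pair (1,5) carries 24 per cent of the total density, and that the Fermat integers n_F are products of distinct Fermat primes from {3, 17, 257, 65537}. It reproduces the values p ≈ 11.68, 4.45, 2.00 and 1.92 per cent quoted above for r = 3, 4, 4.945 and 5.

from itertools import combinations
from math import prod

# Mass fraction of conventional dark matter with n > 1, assuming the pair (n, 5n)
# contributes proportionally to n^(1-r) and the pair (1,5) carries 24 per cent.
def dark_fraction(r):
    s = r - 1.0
    fermat = (3, 17, 257, 65537)                 # known Fermat primes other than 5
    S = sum(prod(combo)**(-s)                    # Fermat integers n_F > 1
            for k in range(1, len(fermat) + 1)
            for combo in combinations(fermat, k))
    q = 2.0**(-s)                                # 2^(-(r-1)); 1/(1-q) resums the powers of 2
    return 0.24 * (q + S) / (1.0 - q)            # = 0.24 * sum over n>1 of n^(1-r), n = 2^k * n_F

for r in (3.0, 4.0, 4.945, 5.0):
    print(f"r = {r}:  p = {100.0 * dark_fraction(r):.4f} per cent")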

Some further observations about the number five are in order. The angle 2π/5 relates closely to the Golden Mean appearing almost everywhere in biology. n=5 makes itself manifest also in the geometry of DNA (the twist per single nucleotide is π/5 and aromatic 5-cycles appear in DNA nucleotides). Could it be that electron pairs associated with aromatic rings correspond to hbar = 5×hbar_0, as I have proposed? Note that the DNA as topological quantum computer hypothesis plays a key role in TGD inspired quantum biology.

For the background see the chapter TGD and Astrophysics.



In what sense speed of light could be changing in solar system?

There have been continual claims that the speed of light in the solar system is decreasing. The latest paper about this is by Sanejouand and in my opinion must be taken seriously. The situation is summarized by an excerpt from the abstract of the article:

The empirical evidences in favor of the hypothesis that the speed of light decreases by a few centimeters per second each year are examined. Lunar laser ranging data are found to be consistent with this hypothesis, which also provides a straightforward explanation for the so-called Pioneer anomaly, that is, a time-dependent blue-shift observed when analyzing radio tracking data from distant spacecrafts, as well as an alternative explanation for both the apparent time-dilation of remote events and the apparent acceleration of the Universe.

Before one can speak seriously about a change of c, one must specify precisely what the measurement of the speed of light means. In the GRT framework the speed of light is by definition a constant in local Minkowski coordinates. It seems very difficult to make sense of a varying speed of light since c is a purely locally defined notion.

  1. In TGD framework space-time as an abstract manifold is replaced by a 4-D surface in H=M4×CP2 (forgetting complications due to the hierarchy of Planck constants). This brings in something new: the sub-manifold geometry, allowing one to look at space-time surfaces "from outside", from the H-perspective. The shape of the space-time surface appears as new degrees of freedom. This leads to the explanation of standard model symmetries, elementary particle quantum numbers and a geometrization of classical fields, the dream of Einstein. Furthermore, the CP2 length scale provides a universal unit of length and the p-adic length scale hypothesis brings in an entire hierarchy of fixed meter sticks defined by p-adic length scales. The presence of the imbedding space M4×CP2 brings in the light-like geodesics of M4 for which c is maximal and could by a suitable choice of units be taken to be c=1.

  2. In TGD framework the operational definition for the speed of light at a given space-time sheet is in terms of the time taken for light to propagate from point A to B along the space-time sheet. In TGD framework this can occur via several routes because of the many-sheeted structure, and each sheet gives its own value for c. Even if the space-time surface is only warped (no curvature), this time is longer than along a light-like geodesic of M4(×CP2) and the speed of light measured in this manner is reduced from its maximal value (a toy illustration follows below). The light-like geodesics of M4 serve as universal comparison standards when one measures the speed of light - something which GRT does not provide.
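
As a toy illustration of item 2 (my own parametrization, not taken from any chapter), consider a flat but warped space-time sheet for which one internal angle coordinate depends linearly on M4 time, Φ = ωt, so that the induced metric has g_tt = 1 - R²ω² with R the radius of the geodesic circle. Light propagating along such a sheet covers a given M4 distance more slowly than light along an M4 light-like geodesic:

import math

# Effective light velocity along a warped but flat sheet with induced g_tt = 1 - (R*omega)^2,
# in units of the maximal signal velocity defined by M4 light-like geodesics.
def effective_light_speed(R_omega):
    return math.sqrt(1.0 - R_omega**2)   # requires |R*omega| < 1

for R_omega in (0.0, 0.1, 0.5):
    print(f"R*omega = {R_omega}:  c_eff/c = {effective_light_speed(R_omega):.4f}")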

What does TGD then predict?

  1. TGD inspired cosmology predicts that c measured in this manner increases in cosmological scales, just the opposite of what Louise Riofrio claims. The reason is that strong gravitation makes the space-time surface strongly curved and it takes more time to travel from A to B during the early cosmology. This means that the TGD based explanation has different cosmological consequences than that of Riofrio. For instance, the Hubble constant depends on the space-time sheet in TGD framework.

  2. The paradox however disappears since local systems like the solar system do not normally participate in cosmic expansion, as predicted by TGD. This is known also experimentally. In TGD Universe local systems could however participate in cosmic expansion in an average sense via phase transitions increasing the Planck constant of the appropriate space-time sheet and thus increasing its size. The transition would occur in relatively short time scales: this provides new support for the expanding Earth hypothesis needed to explain the fact that continents fit nicely together to form a single super-continent covering the entire Earth if the radius of Earth is by a factor 1/2 smaller than its recent radius (see this).

  3. If one measures the speed of light in a local system and uses its cosmic value, taken constant by definition (fixing a particular coordinate time), then one indeed finds that the speed of light is decreasing locally and the decrease should be expressible in terms of the Hubble constant (a rough numerical check follows after this list).

  4. The TGD based explanation of the Pioneer anomaly can be based on completely analogous reasoning.
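
A rough numerical check of item 3 (my own estimate): if the locally measured speed of light drifts at a rate of order c×H_0, the drift amounts to a couple of centimeters per second per year, which is just the magnitude quoted in Sanejouand's abstract.

# Order-of-magnitude check: a drift of order c*H_0 expressed in cm/s per year.
c = 2.998e10               # cm/s
H0 = 70.0                  # km/s/Mpc, a typical value (assumption)
Mpc_in_km = 3.0857e19      # km
H0_per_s = H0 / Mpc_in_km  # 1/s
seconds_per_year = 3.156e7
print(f"c*H_0 ~ {c * H0_per_s * seconds_per_year:.1f} cm/s per year")   # about 2 cm/s per year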

For background see for instance the chapter TGD and Astrophysics of "p-Adic length Scale Hypothesis and Dark Matter Hierarchy".



Einstein's equations and second variation of volume element

Lubos had an interesting posting about how Jacobson has derived Einstein's equations from thermodynamical considerations as a kind of equation of state. This has actually been one of the basic ideas of quantum TGD, where Einstein's equations do not make sense as microscopic field equations. The argument involves approximate Poincare invariance, Equivalence Principle, and the proportionality of entropy to area (dS = kdA), so that the result is perhaps not a complete surprise.

One starts from an expression for the variation of the area element dA for a certain kind of variation in the direction of a light-like Killing vector field and ends up with Einstein's equations. The Ricci tensor creeps in via the variation of dA, expressible in terms of the analog of geodesic deviation, which involves the curvature tensor in its expression. Since the geodesic equation involves the first variation of the metric, the equation of geodesic deviation involves its second variation, expressible in terms of the curvature tensor.

The result raises the question whether it makes sense to quantize the Einstein-Hilbert action, and in the light of quantum TGD the worry is justified. In TGD (and also in string models) Einstein's equations result in a long length scale approximation whereas in short length scales the stringy description provides the space-time correlate for Equivalence Principle. In fact, in TGD framework Equivalence Principle at the fundamental level reduces to a coset construction for two super-conformal algebras: super-symplectic and super Kac-Moody. The four-momenta associated with these algebras correspond to inertial and gravitational four-momenta.

In the following I will consider a different - more than 10 year old - argument implying that the empty space vacuum equations state the vanishing of the first and second variation of the volume element in a freely falling coordinate system, and I will show how the argument implies empty space vacuum equations in the "world of classical worlds". I also show that empty space Einstein equations at the space-time level allow an interpretation in terms of criticality of the volume element - perhaps serving as a correlate for the vacuum criticality of TGD Universe. I also demonstrate how one can derive non-empty space Einstein equations in TGD Universe and consider the interpretation.

1. Vacuum Einstein's equations from the vanishing of the second variation of volume element in freely falling frame

The argument of Jacobson leads to interesting considerations related to the second variation of the metric given in terms of the Ricci tensor. In TGD framework the challenge is to deduce a good argument for why Einstein's equations hold true in long length scales, and reading the posting of Lubos led to an idea of how one might understand the content of these equations geometrically.

  1. The first variation of the metric determinant gives rise to

    δg^(1/2) = ∂_μ g^(1/2) dx^μ ∝ g^(1/2) C^ρ_ρμ dx^μ .

    Here C^ρ_μν denotes the Christoffel symbol.

    The possibility to find coordinates for which this variation vanishes at a given point of space-time realizes Equivalence Principle locally.

  2. The second variation of the metric determinant gives rise to the quantity

    δ²g^(1/2) = ∂_μ∂_ν g^(1/2) dx^μ dx^ν ∝ g^(1/2) R_μν dx^μ dx^ν

    in the freely falling frame. The vanishing of the second variation gives Einstein's equations in empty space. Einstein's empty space equations state that the second variation of the metric determinant vanishes in a freely moving frame. The 4-volume element is critical in this frame.
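
The proportionality used in item 2 can be spelled out with the standard Riemann normal coordinate expansion; the following lines (a textbook result, written in LaTeX and added only to make the missing step explicit - the factor -1/3 plays no role in the argument) show where the Ricci tensor comes from:

g_{\mu\nu}(x) = \eta_{\mu\nu} - \tfrac{1}{3} R_{\mu\alpha\nu\beta}(0)\, x^{\alpha} x^{\beta} + O(|x|^{3}),
\qquad
g^{1/2}(x) = 1 - \tfrac{1}{6} R_{\alpha\beta}(0)\, x^{\alpha} x^{\beta} + O(|x|^{3}),

so that

\partial_{\alpha}\partial_{\beta}\, g^{1/2}\,\big|_{x=0} = -\tfrac{1}{3} R_{\alpha\beta}(0),

and the second variation of the volume element in a freely falling frame vanishes for all displacement directions exactly when R_{\mu\nu} = 0.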

2. The world of classical worlds satisfies vacuum Einstein equations

In quantum TGD this observation about the second variation of the metric led two decades ago to Einstein's vacuum equations for the Kähler metric for the space of light-like 3-surfaces ("world of classical worlds"), which is deduced to be a union of constant curvature spaces labeled by zero modes of the metric. The argument is very simple. The functional integration over configuration space degrees of freedom (a union of constant curvature spaces a priori: R_ij = λ g_ij) involves the second variation of the metric determinant. The functional integral over small deformations of the 3-surface involves also the second variation of the volume element. The propagator for small deformations around the 3-surface is the contravariant Kähler metric and is contracted with R_ij = λ g_ij to give the infinite-dimensional trace g^ij R_ij = λD = λ×∞. The result is infinite unless R_ij = 0 holds. Vacuum Einstein's equations must therefore hold true in the world of classical worlds.

3. Non-vacuum Einstein's equations: the light-like projection of four-momentum is proportional to the second variation of the four-volume in that direction

An interesting question is whether Einstein's equations in non-empty space-time could be obtained by generalizing this argument. The question is what interpretation one should give to the quantity

g_4^(1/2) T_μν dx^μ dx^ν

at a given point of space-time.

  1. If one restricts the consideration to variations for which dx^μ is of the form εk^μ, where k is a light-like vector, one obtains a situation similar to that used by Jacobson in his argument. In this case one can consider the component dP_k of four-momentum in the direction of k associated with the 3-dimensional coordinate volume element dV_3 = d^3x. It is given by dP_k = g_4^(1/2) T_μν k^μ k^ν dV_3 .

  2. Assume that dP_k is proportional to the second variation of the volume element in the deformation dx^μ = εk^μ, which means pushing the volume element in the direction of k in second order approximation:

    (d²g_4^(1/2)/dε²) dV_3 = (∂²g_4^(1/2)/∂x^μ∂x^ν) k^μ k^ν dV_3 = R_μν k^μ k^ν g_4^(1/2) dV_3 .

    By the light-likeness of k^μ one can replace R_μν by G_μν and also add a term proportional to g_μν, which does not contribute for a light-like vector k^μ, so as to obtain covariant conservation of four-momentum. Einstein's equations with a cosmological term are obtained.

That light-like vectors play a key role in these arguments is interesting from TGD point of view since light-like 3-surfaces are fundamental objects of TGD Universe.
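
For readability, here is a compact LaTeX restatement of the two items above; the manipulations are standard and the overall constant is left unfixed (it would be fixed by the Newtonian limit):

dP_{k} = g_{4}^{1/2}\, T_{\mu\nu} k^{\mu} k^{\nu}\, dV_{3}
\;\propto\; \frac{d^{2} g_{4}^{1/2}}{d\varepsilon^{2}}\, dV_{3}
= R_{\mu\nu} k^{\mu} k^{\nu}\, g_{4}^{1/2}\, dV_{3},

and since k^{\mu} k_{\mu} = 0,

R_{\mu\nu} k^{\mu} k^{\nu}
= \big(R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda g_{\mu\nu}\big) k^{\mu} k^{\nu}
= \big(G_{\mu\nu} + \Lambda g_{\mu\nu}\big) k^{\mu} k^{\nu}.

Requiring T_{\mu\nu} k^{\mu} k^{\nu} \propto (G_{\mu\nu} + \Lambda g_{\mu\nu}) k^{\mu} k^{\nu} for all light-like k, together with covariant conservation of T_{\mu\nu}, gives Einstein's equations with a cosmological term.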

4. The interpretation of non-vacuum Einstein's equations as breaking of maximal quantum criticality in TGD framework

What could be the interpretation of the result in TGD framework?

  1. In TGD one assigns to the small deformations of vacuum extremals average four-momentum densities (over an ensemble of small deformations), which satisfy Einstein's equations. It looks rather natural to assume that these statistical quantities are expressible in terms of the purely geometric gravitational energy momentum tensor of the vacuum extremal (which as such is not physical). The question is why the projections of four-momentum onto light-like directions should be proportional to the second variation of the 4-D metric determinant.

  2. A possible explanation is the quantum criticality of quantum TGD. For induced spinor fields the modified Dirac equation gives rise to conserved Noether currents only if the second variation of Kähler action vanishes. The reason is that the modified gamma matrices are contractions of the first variation of Kähler action with ordinary gamma matrices.

  3. A weaker condition is that the vanishing occurs only for a subset of deformations representing dynamical symmetries. This would give rise to an infinite hierarchy of increasingly critical systems and a generalization of Thom's catastrophe theory would result. The simplest system would live at the V-shaped graph of the cusp catastrophe: just at the verge of a phase transition between the two phases.

  4. Vacuum extremals are maximally quantum critical since both the first and the second variation of Kähler action vanish identically. For the small deformations the second variation could be non-vanishing and probably is. Could it be that vacuum Einstein equations would give a gravitational correlate of quantum criticality as the criticality of the four-volume element in the local freely falling frame? Non-vacuum Einstein equations would characterize the reduction of the criticality due to the presence of matter, implying also the breaking of dynamical symmetries (symplectic transformations of CP2 and diffeomorphisms of M4 for vacuum extremals).

For the recent updated view about the relationship between General Relativity and TGD see the chapter TGD and GRT.



Quantum fluctuations in geometry as a new kind of noise?

The news of yesterday morning came in an email from Jack Sarfatti. The news was that the gravitational wave detector in the GEO600 experiment has been plagued by unidentified noise in the frequency range 300-1500 Hz. Craig J. Hogan has proposed an explanation in terms of a holographic Universe. By reading the paper I learned that the assumptions needed are essentially those of quantum TGD. Light-like 3-surfaces as basic objects, holography, effective 2-dimensionality, are some of the terms appearing repeatedly in the article.

Maybe this means a new discovery giving support for TGD. I hope that it does not make my life even more difficult in Finland. Readers have perhaps noticed that the discovery of a new long-lived particle in CDF, predicted by TGD already around 1990, turned out to be one of the most fantastic breakthroughs of TGD since the reported findings could be explained at a quantitative level. The side effect was that Helsinki University did not allow me to use the computer for my homepage anymore and they also refused to redirect visitors to my new homepage. The goal was achieved: I have more or less disappeared from the web. It seems that TGD is becoming really dangerous and the power holders of science are getting nervous.

In any case, I could not resist the temptation to spend the day with this problem although I had firmly decided to use all my available time for the updating of the basic chapters of quantum TGD.

1. The experiment

Consider first the graviton detector used in the GEO600 experiment. The detector consists of two long arms (the length is 600 meters) - essentially rulers of equal length. An incoming gravitational wave causes a periodic stretching of the arms: the lengths of the rulers vary. The detection of gravitons means that a laser beam is used to keep a record of the varying length difference. This is achieved by splitting the laser beam into two pieces using a beam splitter. After this the beams travel through the arms and bounce back to interfere in the detector. The interference pattern tells whether the beams spent slightly different times in the arms due to the stretching of the arms caused by the incoming gravitational radiation. The problem of the experimenters has been the presence of an unidentified noise in the range 100-1500 Hz.

The prediction of Measurement of quantum fluctuations in geometry by Craig Hogan, published in Phys. Rev. D 77, 104031 (2008), is that the holographic geometry of space-time should induce fluctuations of the classical geometry with a spectrum which is completely fixed. Hogan's prediction is very general and - if I have understood correctly - the fluctuations depend only on the duration (or length) of the laser beam, using Planck length as a unit. Note that there is no dependence on the length of the arms and the fluctuations characterize only the laser beam. Although Planck length appears in the formula, the fluctuations need not have anything to do with gravitons but could be due to the failure of the classical description of laser beams. The great surprise was that the prediction of Hogan for the noise is of the same order of magnitude as the unidentified noise bothering the experimenters in the range 100-700 Hz.

2. Hogan's theory

Let us try to understand Hogan's theory in more detail.

  1. The basic quantitative prediction of the theory is very simple. The spectral density of the noise at high frequencies is given by h_H = t_P^(1/2), where t_P = (hbar G)^(1/2) is the Planck time. For low frequencies h_H is proportional to 1/f just like 1/f noise. The power density of the noise is given by t_P and a connection with the poorly understood 1/f noise appearing in electronic and other systems is suggestive. The prediction depends only on the Planck scale so that it should be very easy to kill the model if one is able to reduce the noise from other sources below the critical level t_P^(1/2). The model predicts also the distribution characterizing the uncertainty in the direction of arrival for a photon in terms of the ratio l_P/L. Here L is the length of the beam or equivalently its duration. A further prediction is that the minimal uncertainty in the arrival time of photons is given by Δt = (t_P t)^(1/2) and increases with the duration of the beam (rough magnitudes are sketched after this list).

  2. Both quantum and classical mechanisms are discussed as an explanation of the noise. Gravitational holography is the key assumption behind both models. Gravitational holography states that space-time geometry has two space dimensions instead of three at the fundamental level and that the third dimension emerges via holography. A further assumption is that light-like (null) 3-surfaces are the fundamental objects. Sounds familiar!
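
The magnitudes involved in item 1 are easy to write down. The sketch below uses my own numbers (the beam durations are illustrative choices) and simply evaluates h_H = sqrt(t_P) and Δt = sqrt(t_P t):

import math

# Magnitudes of Hogan's high-frequency prediction h_H = sqrt(t_P) and of the minimal
# arrival-time uncertainty Delta_t = sqrt(t_P * t) for two illustrative beam durations.
t_P = 5.39e-44                     # Planck time in seconds
print(f"h_H = sqrt(t_P) ~ {math.sqrt(t_P):.2e} per sqrt(Hz)")
for t in (1.0e-3, 1.0):            # 1 ms and 1 s, illustrative durations
    print(f"t = {t:7.1e} s:  Delta_t ~ {math.sqrt(t_P * t):.2e} s")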

2.1 Heuristic argument

The model starts from an optics inspired heuristic argument.

  1. Consider a light ray with length L, which ends at an aperture of size D. This gives rise to a diffraction spot of size λL/D. The resulting uncertainty in the transverse position of the source is minimized when the size of the diffraction spot is the same as the aperture size. This gives for the transverse uncertainty of the position of the source Δx = (λL)^(1/2). The orientation of the ray can be determined with a precision Δθ = (λ/L)^(1/2). The shorter the wavelength, the better the precision. Planck length is believed to pose a fundamental limit to the precision. The conjecture is that the transverse indeterminacy of Planck wave length quantum paths corresponds to the quantum indeterminacy of the metric itself. What this means is not quite clear to me.

  2. The basic outcome of the model is that the uncertainty in the arrival times of the photons after reflection is given by

    Δt = t_P^(1/2) × (sin(θ))^(1/2) × sin(2θ) ,

    where θ denotes the angle of incidence on the beam splitter. In the normal direction Δt vanishes. The proposed interpretation is in terms of Brownian motion of the distance between the beam splitter and the detector, the interpretation being that each reflection from the beam splitter adds uncertainty. This is essentially due to the replacement of a light-like surface with a new one orthogonal to it, inducing a measurement of the distance between the detector and the beam splitter.

This argument has some aspects which I find questionable.

  1. The assumption of Planck wave length waves is certainly questionable. The underlying idea is that it leads to the classical formula involving the aperture size, which is eliminated from the basic formula by requiring optimal angular resolution. One might argue that a special status for waves with Planck wave length breaks Lorentz invariance, but since the experimental apparatus defines a preferred coordinate system this need not be a problem.

  2. Unless one is ready to forget the argument leading to the formula for Δθ, one can argue that the description of the holographic interaction between distant points induced by these Planck wave length waves in terms of an aperture with size D = (l_P L)^(1/2) should have some more abstract physical counterpart. Could elementary particles as extended 2-D objects (as in TGD) play the role of ideal apertures to which radiation with Planck wave length arrives? If one gives up the assumption about Planck wave length radiation the uncertainty increases as λ. In my opinion one should be able to deduce the basic formula without this kind of argument.

2.2 Argument based on uncertainty principle for waves with Planck wave length

The second argument can do without diffraction but still uses Planck wave length waves.

  1. The interactions of Planck wave length radiation at a null surface at two different times, corresponding to normal coordinates z_1 and z_2 at these times, are considered. From the standard uncertainty relation between the momentum and position of the incoming particle one deduces an uncertainty relation for the transverse position operators x(z_i), i=1,2. The uncertainty comes from the uncertainty of x(z_2) induced by the uncertainty of the transverse momentum p_x(z_1). The uncertainty relation is deduced by assuming that (x(z_2)-x(z_1))/(z_2-z_1) is the ratio of the transversal and longitudinal wave vectors. This relates x(z_2) to p_x(z_1) and the uncertainty relation can be deduced. The uncertainty increases linearly with z_2-z_1. Geometric optics is used to describe the propagation between the two points and this should certainly work for a situation in which the wavelength is Planck wavelength, if the notion of a Planck wave length wave makes sense. From this formula the basic predictions follow.

  2. Hogan emphasizes that the basic result is obtained also classically by assuming that the light-like surfaces describing the propagation of light between the end points of the arm describe a Brownian-like random walk in directions transverse to the direction of propagation. I understand that this means that a Planck wave length wave is not absolutely necessary for this approach.

2.3 Description in terms of equivalent gravitonic wave packet

Hogan discusses also an effective description of holographic noise in terms of gravitational wave packet passing through the system.

  1. The holographic noise at frequency f has an equivalent description in terms of a gravitational wave packet of frequency f and duration T=1/f passing through the system. In this description the variance for the length difference of the arms is given by the standard formula for a gravitational wave packet

    Δl²/l² = h²f ,

    where h characterizes the spectral density of the gravitational wave.

  2. For high frequencies one obtains

    h = h_P = (t_P)^(1/2) .

  3. For low frequencies the model predicts

    h = (f_res/f)(t_P)^(1/2) .

    Here f_res characterizes the inverse residence time in the detector and is estimated to be about 700 Hz in the GEO600 experiment.

  4. The predictions of the theory are compared to the unidentified noise in the frequency range 100-600 Hz, which introduces an amplifying factor varying from 7 to about 1. The orders of magnitude are the same (a rough numerical sketch follows this list).
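
A rough sketch of the comparison in item 4 (my own numbers): with f_res ≈ 700 Hz the low-frequency formula gives amplification factors between 7 and roughly 1 over the 100-600 Hz band and spectral densities between a few times 10^-22 and about 10^-21 per sqrt(Hz).

import math

# Low-frequency prediction h(f) = (f_res/f)*sqrt(t_P) over the band where the
# unidentified GEO600 noise was reported, with f_res ~ 700 Hz as quoted above.
t_P = 5.39e-44        # Planck time in seconds
f_res = 700.0         # Hz
for f in (100.0, 300.0, 600.0):
    amplification = f_res / f
    h = amplification * math.sqrt(t_P)
    print(f"f = {f:5.0f} Hz:  amplification {amplification:4.1f}  h ~ {h:.2e} per sqrt(Hz)")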

3. TGD based model

In the TGD based model for the claimed noise one can avoid the assumption about waves with Planck wave length. Rather, Planck length corresponds to the transversal cross section of so-called massless extremals (MEs), orthogonal to the direction of propagation. Further elements are the so-called number theoretic braids leading to a discretization of quantum TGD at the fundamental level. The mechanism inducing the distribution for the travel times of the reflected photon is due to the transverse extension of MEs and the discretization in terms of number theoretic braids. Note that also in Hogan's model it is essential that one can speak about the position of the particle in the beam.

3.1 Some background

Consider first the general picture behind the TGD inspired model.

  1. What the authors emphasize can be condensed to the following statement: the transverse indeterminacy of Planck wave length paths seems likely to be a feature of 3+1-D space-time emerging as a dual of a quantum theory on a 2+1-D null surface. In TGD light-like 3-surfaces indeed are the fundamental objects and the 4-D space-time surface is in a holographic relation to these light-like 3-surfaces. The analog of conformal invariance in the light-like radial direction implies that partonic 2-surfaces are actually the basic objects in short scales in the sense that one has 3-dimensionality only in a discretized sense.

  2. The interpretation as an almost topological quantum field theory, the notion of finite measurement resolution, number theoretical universality making possible the p-adicization of quantum TGD, and the notion of quantum criticality all lead to a fundamental description in terms of discrete point sets. These are defined as intersections of what I call number theoretic braids with partonic 2-surfaces X2 at the boundaries of causal diamonds, identified as intersections of future and past directed light-cones forming a fractal hierarchy. These 2-surfaces X2 correspond to the ends of light-like 3-surfaces. Only the data from this discrete point set is used in the definition of the M-matrix: there is however a continuum of choices of this data set corresponding to different directions of the light-like ray at the boundary of the light-cone, and in detection one of these directions is selected, corresponding to the direction of the beam in the present case.

  3. Fermions correspond to a piece of CP2 type vacuum extremal with Euclidian signature of the induced metric condensed to a space-time sheet with Minkowskian signature: the light-like wormhole throat, at which the 4-metric is degenerate, carries the quantum numbers. Bosons correspond to wormhole contacts consisting of a piece of CP2 type vacuum extremal connecting two space-time sheets with Minkowskian signature of the induced metric. The strands of number theoretic braids carry fermionic quantum numbers, and the discretization is interpreted as a space-time correlate for the finite measurement resolution implying the effectively grainy nature of the 2-surfaces.

3.2 The model

Consider now the TGD inspired model for a laser beam of fixed duration T.

  1. In the TGD framework the beams of photons, and perhaps also photons themselves, would have so-called massless extremals as space-time correlates. The identification of gauge bosons as wormhole contacts means that there is a pair of MEs connected by a piece of CP2 type vacuum extremal and carrying a fermion and an antifermion at the wormhole throats defining light-like 3-surfaces. The intersection of the ME with the light-cone boundary would represent a partonic 2-surface, and any transverse cross section of the M4 projection of the ME is possible.

  2. The reflection of an ME has a description in terms of generalized Feynman diagrams for which the incoming lines correspond to the light-like 3-surfaces and the vertices to partonic 2-surfaces at which the MEs are glued together. In the simplest model this surface defines a transverse cross section of both the incoming and the outgoing ME. The incoming and outgoing braid strands end at different points of the cross section, because if two points coincide the N-point correlation function vanishes. This means that in the reflection the distribution for the positions of the braid points, representing the exact positions of the photon, changes in a non-deterministic manner. This induces a quantum distribution of the transverse coordinates associated with the braid strands, and in the detection a state function reduction occurs, fixing the positions of the braid strands.

  3. The transverse cross section has maximum area when it is parallel to the ME. In this case the area is, apart from a numerical constant, equal to d×L, where L is the length of the ME defined by the duration of the laser beam and d is the diameter of the orthogonal cross section of the ME. This makes natural the assumption that the distribution for the positions of the points in the cross section is Gaussian with variance equal to d×L. The distribution proposed by Hogan is obtained if d is given by the Planck length. This would mean that the minimum area for a cross section of an ME is very small, about S=hbar×G. This might make sense if the ME represents a laser beam (see the numerical sketch after this list).

  4. The assumption susceptible to criticism is that for the primordial ME representing the photon the area of the cross section orthogonal to the direction of propagation is always given by the Planck length squared. This assumption of course replaces Hogan's Planck wave. Note that the classical four-momentum of an ME is massless. One could however argue that in the quantum situation the transverse momentum squared is a well defined quantum number of the order of Planck mass squared.

  5. In the TGD Universe a single photon would differ from an infinitely narrow ray by having a thickness defined by the Planck length. There would be just a single braid strand, and its position would change in the reflection. The most natural interpretation indeed is that the pair of space-time sheets associated with the photon consists of MEs with different transverse size scales: the larger ME could represent the laser beam. The noise would come from the lowest level in the hierarchy. One could argue that the natural size for the M4 projection of the wormhole throat is of the order of the CP2 size R and therefore roughly 104 Planck lengths. If the cross section has an area of order R2, where R is the CP2 size, the spectral density would be roughly a factor of 100 larger than for the Planck length (see the sketch after this list), and this might predict a too large holographic noise in the GEO600 experiment if the value of fres is correct. The assumption that the Gaussian characterizing the position distribution of the wormhole throat is very strongly concentrated near the center of the ME with transverse size given by R looks unnatural.

  6. It is important to notice that a single reflection of the primordial ME corresponds to a minimum spectral noise. Repeated reflections of the ME in different directions gradually increase the transverse size of the ME, so that the outcome is a cylindrical ME with radius of order L=cT, where T is the duration of the ME. At this limit the spectral density of the noise would be T1/2, meaning that the uncertainty in the frequency assignable to the arrival time of the photons would be of the same order as the frequency f=1/T assignable to the original ME. The interpretation is that the repeated reflections gradually generate noise and destroy the coherence of the laser beam. This would however happen at the single particle level rather than for a member of a fictive ensemble. Quite literally, the photon would get old! This interpretation conforms with the fact that in the TGD framework thermodynamics becomes part of quantum theory and the thermodynamical ensemble is represented at the single particle level, in the sense that the time-like entanglement coefficients between the positive and negative energy parts of the zero energy state define the M-matrix as a product of the square root of a diagonal density matrix and of an S-matrix.

  7. The notion of number theoretic braid is essential for the interpretation of what happens in the detection. In the detection the positions of the ends of the number theoretic braid are measured, and this measurement fixes the exact time spent by the photons during their travel. A similar position measurement appears also in Hogan's argument. Thus the overall picture is more or less the same as in the popular representation, where also the grainy nature of space-time is emphasized.

  8. I already mentioned the possible connection with poorly understood 1/f noise appearing in very many systems. The natural interpretation would be in terms of MEs.
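
The following back-of-the-envelope sketch (my own numbers, with a hypothetical beam duration of 1 ms) illustrates points 3 and 5 above: the Gaussian variance d×L with d equal to the Planck length, the minimum cross-section area S = hbar×G (the Planck area in units with c=1), and the factor of roughly 100 by which the spectral density grows if d is replaced by the CP2 size R ≈ 104 lP.

import math

hbar = 1.055e-34      # J s
G    = 6.674e-11      # m^3 kg^-1 s^-2
c    = 2.998e8        # m/s

l_P = math.sqrt(hbar * G / c**3)     # Planck length

T = 1.0e-3                           # s, assumed (hypothetical) duration of the laser beam
L = c * T                            # corresponding length of the ME

S_min    = l_P**2                    # minimum cross-section area, i.e. S = hbar*G in units with c = 1
variance = l_P * L                   # Gaussian variance d*L with d = l_P

print("ME length L            : %.3e m"   % L)
print("minimum cross section  : %.3e m^2" % S_min)
print("variance d*L           : %.3e m^2" % variance)
print("rms transverse spread  : %.3e m"   % math.sqrt(variance))

# Replacing d = l_P by the CP2 size R ~ 1e4*l_P scales the spectral density
# by sqrt(R/l_P), i.e. roughly a factor of 100, as stated in point 5.
R = 1.0e4 * l_P
print("spectral density ratio : %.0f" % math.sqrt(R / l_P))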

3.3 The relationship with hierarchy of Planck constants

It is interesting to combine this picture with the vision about the hierarchy of Planck constants (I am just now developing in detail the representation of the ideas involved from the perspective given by the intense work during the last five years).

  1. If one accepts that dark matter corresponds to a hierarchy of phases of matter labeled by a hierarchy of Planck constants with arbitrarily large values, one must conclude that the Planck length lP, proportional to hbar1/2, also has a spectrum. Primordial photons would have a transverse size scaling as hbar1/2. One can consider the possibility that for large values of hbar the transverse size saturates to the CP2 length R ≈ 104× lP. The spectral density of the noise would scale as hbar1/4 at least up to the critical value hbarcr=R2/G, which is in the range [2.3683, 2.5262]× 107. The preferred values of hbar are number theoretically simple integers expressible as a product of distinct Fermat primes and a power of 2. hbarcrit/hbar0=3× 223 is an integer of this kind and belongs to the allowed range of critical values (see the arithmetic check after this list).

  2. The order of magnitude for the gravitational Planck constant assignable to the space-time sheets mediating the gravitational interaction is gigantic, of order hbargr ≈ GM2, so that the noise assignable to gravitons would be gigantic on astrophysical scales unless R serves as an upper bound for the transverse size of both primordial gauge bosons and gravitons.

  3. If ordinary photonic space-time sheets are in question, hbar has its standard value. For dark photons, which I have proposed to play a key role in living matter, the situation changes and Δl2/l2 would scale like hbar1/2 at least up to the critical value of the Planck constant. Above this value of the Planck constant the spectral density would be given by R, and Δl2/l2 would scale like R/l and Δθ like (R/l)1/2.
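
A small arithmetic check (mine, not from the text) of point 1 above: the preferred value hbarcrit/hbar0 = 3× 223 indeed falls into the quoted range for hbarcr = R2/G, and the spectral density scales as hbar1/4 because the Planck time scales as hbar1/2.

# Verify that 3*2^23 lies in the range [2.3683, 2.5262]*10^7.
ratio = 3 * 2**23                    # preferred value hbar_crit/hbar_0
low, high = 2.3683e7, 2.5262e7
print("3*2^23 = %d = %.4fe7" % (ratio, ratio / 1e7))
print("inside the allowed range:", low <= ratio <= high)

# The Planck time scales as hbar^(1/2), so the high-frequency spectral
# density h = sqrt(t_P) scales as hbar^(1/4).
for scale in (1, 16, 256):
    print("hbar scaled by %4d -> h scaled by %.2f" % (scale, scale**0.25))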

For details and background see the updated chapter Quantum Astrophysics of "Physics in Many-Sheeted Space-time".


