What's new in

Physics in Many-Sheeted Space-Time

Note: Newest contributions are at the top!

Year 2007

Does TGD allow description of accelerated expansion in terms of cosmological constant?

The introduction of a cosmological constant seems to be the only manner to explain accelerated expansion and related effects in the framework of General Relativity. As summarized in the previous posting, TGD allows a different explanation of these effects. I will not repeat it here but instead comment on the notion of vacuum energy and on the possibility of describing accelerated expansion in terms of a cosmological constant in TGD framework.

The term vacuum energy density is bad use of language since De Sitter space, which is a solution of field equations with cosmological constant at the limit of vanishing energy momentum tensor, carries vacuum curvature rather than vacuum energy. Thus theories with a non-vanishing cosmological constant represent a family of gravitational theories for which the vacuum solution is not flat, so that Einstein's basic identification matter = curvature is given up. No wonder Einstein regarded the introduction of the cosmological constant as the biggest blunder of his life.

De Sitter space is representable as the hyperboloid a² - u² = -R², where one has a² = t² - r² and r² = x² + y² + z². The symmetries of De Sitter space are maximal, but the Poincare group is replaced with the Lorentz group of 5-D Minkowski space and translations are not symmetries. The value of the cosmological constant is Λ = 3/R². From the point of view of conformal invariance, the presence of a non-vanishing dimensional constant is a feature raising strong suspicions about the correctness of the underlying physics.

1. Imbedding of De Sitter space as a vacuum extremal

De Sitter space is possible as a vacuum extremal in TGD framework. There exists an infinite number of imbeddings as a vacuum extremal into M4×CP2. Take any infinitely long curve X in CP2 not intersecting itself (one might argue that an infinitely long curve is somewhat pathological) and introduce a coordinate u for it such that its induced metric is ds² = du². De Sitter space allows the standard imbedding to M4×X as a vacuum extremal. The imbedding can be written as u = ±[a² + R²]^(1/2) so that one has r² < t² + R². The curve in question must fill at least a 2-D submanifold of CP2 densely. An example is a torus filled densely by the curve φ = αψ, where α is an irrational number. Note that even the slightest local deformation of this object induces an infinite number of self-intersections. The space-time sheet fills densely a 5-D set in this case. One can ask whether this kind of objects might be analogs of D > 4 branes in TGD framework. As a matter of fact, CP2 projections of 1-D vacuum extremals could give rise to both the analogs of branes and strings connecting them if the space-time surface contains both regular and "brany" pieces.
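The dense winding of the curve φ = αψ on the torus can be illustrated numerically. The following is a rough sketch (the choice α = √2, the sampling, and the helper name are mine, not from the text): the largest gap between visited points on the φ-circle shrinks as more of the curve is traversed, so the curve never closes and fills the circle densely.

```python
import math

def max_gap_on_circle(alpha, n_points):
    """Largest gap (radians) between consecutive visited points on the phi-circle."""
    points = sorted((alpha * 2 * math.pi * k) % (2 * math.pi) for k in range(n_points))
    gaps = [b - a for a, b in zip(points, points[1:])]
    gaps.append(2 * math.pi - points[-1] + points[0])  # wrap-around gap
    return max(gaps)

alpha = math.sqrt(2)   # an irrational winding ratio (illustrative choice)
coarse = max_gap_on_circle(alpha, 100)
fine = max_gap_on_circle(alpha, 10_000)
# the largest gap keeps shrinking: the points fill the circle ever more densely
```

For a rational α the points would repeat after finitely many windings and the gap would stop shrinking, which is the distinction used later when finite pieces of De Sitter space are discussed.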

It might be that the 2-D Lagrangian manifolds representing the CP2 projection of the most general vacuum extremal can fill densely a D > 3-dimensional sub-manifold of CP2. One can imagine the construction of very complex Lagrangian manifolds by gluing together pieces of 2-D Lagrangian sub-manifolds along arbitrary 1-D curves. One could also rotate a 2-D Lagrangian manifold along a 2-torus - just as one rotates a point along the 2-torus in the above example - to get a dense filling of a 4-D volume of CP2.

The M4 projection of the imbedding corresponds to the region a² > -R² containing the future and past lightcones. If u varies only in the range [0, u_0], only hyperboloids with a² in the range [-R², -R² + u_0²] are present in the foliation. In zero energy ontology the space-like boundaries of this piece of De Sitter space, which must have u_0² > R², would be carriers of positive and negative energy states. The boundary corresponding to u_0 = 0 is space-like and analogous to the orbit of partonic 2-surface. For u_0² < R² there are no space-like boundaries and the interpretation as a zero energy state is not possible. Note that the restriction u_0² > R² plus the choice of the branch of the imbedding corresponding to future or past directed lightcone is natural in TGD framework.

2. Could negative cosmological constant make sense in TGD framework?

The questionable feature of the slightly deformed De Sitter metric as a model for the accelerated expansion is that the value of R would be of the same order of magnitude as the recent age of the Universe. Why should just this value of cosmic time be so special? In TGD framework one could of course consider space-time sheets having De Sitter cosmology characterized by a varying value of R. Also the replacement of one spatial coordinate with a CP2 coordinate implies a very strong breaking of translational invariance. Hence the explanation relying on quantization of gravitational Planck constant looks more attractive to me.

It is however always useful to make an exercise in challenging the cherished beliefs.

  1. Could the complete failure of the perturbation theory around canonically imbedded M4 make De Sitter cosmology a natural vacuum extremal? De Sitter space appears also in the models of inflation, and long range correlations might have something to do with the intersections between distant points of 3-space resulting from very small local deformations. Could both the slightly deformed De Sitter space and quantum critical cosmology represent cosmological epochs in TGD Universe?

  2. Gravitational energy defined as a non-conserved Noether charge in terms of the Einstein tensor is infinite in TGD for De Sitter cosmology (Λ as a characterizer of vacuum energy). If one includes in the gravitational momentum also the contribution of the metric tensor, the gravitational four-momentum density vanishes (Λ as a characterizer of vacuum curvature). TGD does not involve Einstein-Hilbert action as a fundamental action, and the gravitational energy momentum tensor should be dictated by a finiteness condition, so that a negative cosmological constant might make sense in TGD.

  3. The imbedding of De Sitter cosmology involves the choice of a preferred lightcone, as does also the quantization of Planck constant. The quantization of Planck constant involves the replacement of the lightcones of M4×CP2 by its finite coverings and orbifolds glued together along a quantum critical sub-manifold. Finite pieces of De Sitter space are obtained for rational values of α, and there is a covering of the lightcone by CP2 points. How can I be sure that there does not exist a deeper connection between the descriptions based on cosmological constant and on phase transitions changing the value of Planck constant?

Note that Anti de Sitter space, which has an analogous representation as a hyperboloid in 5-D Minkowski space with two time-like dimensions, does not possess this kind of imbedding. Very probably no imbeddings exist at all, so that TGD would allow only imbeddings of cosmologies with the correct sign of Λ, whereas M-theory predicts the wrong sign for it. Note also that Anti de Sitter space appearing in AdS-CFT dualities contains closed time-like loops and is therefore also physically questionable.

For details see the chapter Quantum Astrophysics.

Two stellar components in the halo of Milky Way

Bohr orbit model for astrophysical objects suggests that also the galactic halo should have a modular structure analogous to that of the planetary system or the rings of Saturn, rather than the structure predicted by a continuous mass distribution. Quite recently it was reported that the halo of Milky Way - earlier thought to consist of a single component - seems to consist of two components (see the article of Carollo et al in Nature. See also this and this).

Even more intriguingly, the stars in these halos rotate in opposite directions. The average velocities of rotation are about 25 km/s and 50 km/s for the inner and outer halos respectively. The inner halo corresponds to the range 10-15 kpc of orbital radii and the outer halo to 15-20 kpc. Already the constancy of the rotational velocity is strange, and its increase with radius even stranger. The orbits in the inner halo are more eccentric, with axial ratio r_min/r_max ≈ 0.6. For the outer halo the ratio varies in the range 0.9-1.0. The abundances of elements heavier than Lithium are about 3 times higher in the inner halo, which suggests that it has been formed earlier.

Bohr orbit model would explain halos as being due to the concentration of visible matter around ring like structures of dark matter in macroscopic quantum state with gigantic gravitational Planck constant. This would explain also the opposite directions of rotation.

One can consider two alternative models predicting constant rotation velocity for circular orbits. The first model allows circular orbits with an arbitrary plane of rotation; the second model and the hybrid of the two allow them only for orbits in the galactic plane.

  1. The original model assumes that galactic matter has resulted from the decay of a cosmic string like object, so that the mass inside a sphere of radius R is M(R) ∝ R.
  2. In the second model the gravitational acceleration is due to the gravitational field of a cosmic string like object transversal to the galactic plane. The string creates no force parallel to itself but a 1/ρ radial acceleration orthogonal to it. Of course, there is also the gravitational force created by the galactic matter itself. One can also associate cosmic string like objects with the circular halos themselves, and it seems that this is needed in order to explain the latest findings.

The big difference in the average rotation velocities <v_φ> for the inner and outer halos cannot be understood solely in terms of the high eccentricity of the orbits in the inner halo tending to reduce <v_φ>. Using the conservation laws of angular momentum (L = m v_min ρ_max) and of energy in the Newtonian approximation one has <v_φ> = ρ_max v_min <1/ρ>. This gives the bounds

v_min < <v_φ> < v_max = v_min [ρ_max/ρ_min] ≈ 1.7 v_min .

For both models one has v = v_0 = k^(1/2), k = TG (T is the effective string tension) for circular orbits. Internal consistency would require v_min < <v_φ> ≈ 0.5 v_0 < v_max ≈ 1.7 v_min. On the other hand, v_max < v_0 and thus v_min > 0.6 v_0 must hold true since the sign of the radial acceleration at ρ_min is positive. Obviously 0.5 v_0 > v_min > 0.6 v_0 means a contradiction.
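The incompatibility of the two bounds is simple arithmetic and can be checked directly. The inputs below are the values quoted in the text (<v_φ> ≈ 0.5 v_0 and ρ_max/ρ_min ≈ 1.7); the variable names are mine:

```python
# Numeric check of the consistency argument above, in units where v_0 = 1.
v0 = 1.0                 # circular-orbit velocity v_0 = (T*G)**0.5
rho_ratio = 1.7          # rho_max / rho_min for the eccentric inner-halo orbits
v_phi_avg = 0.5 * v0     # average rotation velocity <v_phi>

# Upper bound: <v_phi> = rho_max * v_min * <1/rho> > v_min, so v_min < <v_phi>.
v_min_upper = v_phi_avg
# Lower bound: v_max = rho_ratio * v_min < v_0 implies v_min > v_0 / 1.7.
v_min_lower = v0 / rho_ratio

contradiction = v_min_lower > v_min_upper   # the bounds cannot hold simultaneously
```

The lower bound (about 0.59 v_0) exceeds the upper bound (0.5 v_0), which is the contradiction driving the two-string model below.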

The big increase of the average rotation velocity suggests that the inner and outer halos correspond to closed cosmic string like objects around which the visible matter has condensed. The inner string like object would create an additional gravitational field experienced by the stars of the outer halo. The increase of the effective string tension by a factor x corresponds to the increase of <v_φ> by a factor x^(1/2). An increase by a factor 2 plus the higher eccentricity could explain the ratio of the average velocities.

For details see the new chapter Quantum Astrophysics.

Experimental evidence for accelerated expansion is consistent with TGD based model

There are several pieces of evidence for accelerated expansion, which need not imply a cosmological constant, although this is the interpretation usually adopted. It is interesting to see whether this evidence is indeed consistent with the TGD based interpretation.

A. The four pieces of evidence for accelerated expansion

A.1. Supernovas of type Ia

Supernovas of type Ia define standard candles: their intrinsic peak luminosity can be deduced from the shape of the light curve, and comparison with the apparent brightness gives the distance. This distance can be compared with the prediction of Hubble's law d = cz/H_0, where H_0 is the Hubble constant. The observation was that the farther the supernova, the dimmer it was relative to what it should have been. In other words, the Hubble constant increased with distance and the cosmic expansion was accelerating rather than decelerating as predicted by the standard matter dominated and radiation dominated cosmologies.
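The Hubble-law side of the comparison is a one-line formula. A minimal sketch (the value of H_0 and the function name are assumptions of this illustration, not given in the text):

```python
# Distance from Hubble's law d = c*z / H_0, valid at low redshift.
C_KM_S = 299_792.458     # speed of light in km/s
H0 = 70.0                # assumed Hubble constant in km/s/Mpc

def hubble_distance_mpc(z):
    """Low-redshift distance from Hubble's law, in megaparsecs."""
    return C_KM_S * z / H0

d = hubble_distance_mpc(0.05)    # a supernova at z = 0.05 sits at ~214 Mpc
```

The accelerated-expansion signal is that the luminosity distance of distant supernovae exceeds this naive estimate, i.e. they are dimmer than the formula predicts.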

A.2 Mass density is critical and 3-space is flat

It is known that the contribution of ordinary and dark matter explaining the constant velocity of distant stars rotating around galaxies is about 25 per cent of the critical density. Could it be that the total mass density is critical?

From the anisotropy of the cosmic microwave background one can deduce that this is the case. Geometrically, criticality means that 3-space, defined as a surface with constant value of cosmic time, is flat. This reflects itself in the spectrum of the microwave radiation. The angular size of the spots representing small anisotropies in the microwave background temperature is about 1 degree, and this corresponds to flat 3-space. If one had only dark matter instead of dark energy, the size of the spots would be about 0.5 degrees!

Thus in a cosmology based on general relativity cosmological constant remains the only viable option. The situation is different in TGD based quantum cosmology based on sub-manifold gravity and hierarchy of gravitational Planck constants.

A.3 The energy density of vacuum is constant in the size scale of big voids

It was observed that the density of dark energy would be constant in the scale of 10⁸ light years. This length scale corresponds to the size of the big voids containing galaxies at their boundaries.

A.4 Integrated Sachs-Wolfe effect

Also the so called Integrated Sachs-Wolfe effect supports accelerated expansion. Consider very slow variations of the mass density, corresponding to gravitational potentials. Cosmic expansion tends to flatten the potentials, but mass accretion to form structures compensates this effect, so that the gravitational potentials are unaffected and there is no effect on the CMB. The situation changes if dark matter is replaced with dark energy: the accelerated expansion flattening the gravitational potentials wins the tendency of mass accretion to make them deeper. Hence if a photon passes through an over-dense region, it receives a little energy. Similarly, a photon loses energy when passing through an under-dense region. This effect has been observed.

B. Comparison with TGD

The minimum TGD based explanation for accelerated expansion involves only the fact that the imbeddings of critical cosmologies correspond to accelerated expansion. A more detailed model makes it possible to understand why the critical cosmology appears during some periods.

B.1. Accelerated expansion in classical TGD

The first observation is that critical cosmologies (flat 3-space) imbeddable to the 8-D imbedding space H correspond to negative pressure cosmologies and thus to accelerating expansion. The negativity of the counterpart of pressure in the Einstein tensor is due to the fact that the space-time sheet is forced to be a 4-D surface in the 8-D imbedding space. This condition is analogous to a force confining a particle to the surface of a 2-sphere and gives rise to what could be called a constraint force. Gravitation in TGD is sub-manifold gravitation, whereas in GRT it is manifold gravitation. This would be the minimum interpretation, involving no assumptions about the mechanism giving rise to the critical periods.

B.2 Accelerated expansion and hierarchy of Planck constants

One can go one step further and introduce the hierarchy of Planck constants. The basic difference between TGD and GRT based cosmologies is that TGD cosmology is quantum cosmology. Smooth cosmic expansion is replaced by an expansion occurring in discrete jerks corresponding to increases of the gravitational Planck constant. At the space-time level this means the replacement of the 8-D imbedding space H with a book like structure containing almost-copies of H with various values of Planck constant as pages glued together along a critical manifold through which the space-time sheet can leak between sectors with different values of hbar. This process is the geometric correlate for the phase transition changing the value of Planck constant.

During these phase transition periods the critical cosmology applies and predicts accelerated expansion automatically. Neither a genuine negative pressure due to "quintessence" nor a cosmological constant is needed. Note that quantum criticality replaces inflationary cosmology and predicts a unique cosmology apart from a single parameter. Criticality also explains the fluctuations in the microwave temperature as the long range fluctuations characterizing criticality.

B.3 Accelerated expansion and flatness of 3-space

Observations A.1 and A.2 about supernovae and the critical mass density (flat 3-space) are consistent with critical cosmology. In TGD dark energy must be replaced with dark matter because the mass density is critical during the phase transition. This does not lead to wrong-sized spots since it is the increase of Planck constant which induces the accelerated expansion, understandable also as being due to a constraint force caused by the imbedding to H.

B.4 The size of large voids is the characteristic scale

The TGD based model in its simplest form assigns the critical periods of expansion to large voids of size 10⁸ ly. Also larger and smaller regions can experience similar periods, and dark space-time sheets are expected to obey the same universal "cosmology" apart from a parameter characterizing the duration of the phase transition. Observation A.3, that just this length scale defines the scale below which dark energy density is constant, is consistent with the TGD based model.

The basic prediction is jerkwise cosmic expansion with jerks analogous to quantum transitions between states of an atom increasing the size of the atom. The discovery of a large void with size of order 10⁸ ly but age much longer than that of the galactic large voids conforms with this prediction (see this). On the other hand, it is known that the size of galactic clusters has not remained constant over very long time scales, so that jerkwise expansion indeed seems to occur.

B.5 Do cosmic strings with negative gravitational mass cause the phase transition inducing accelerated expansion?

Quantum classical correspondence is the basic principle of quantum TGD and suggests that the effective antigravity manifested by accelerated expansion might have some kind of concrete space-time correlate. A possible correlate is super heavy cosmic string like objects at the centers of large voids, which have negative gravitational mass under very general assumptions. The repulsive gravitational force created by these objects would drive galaxies to the boundaries of the large voids. At some stage the pressure of the galaxies would become too strong and induce a quantum phase transition forcing an increase of the gravitational Planck constant and an expansion of the void taking place much faster than the outward drift of the galaxies. This process would repeat itself. In the average sense the cosmic expansion would not be accelerating.

For details see the chapter Quantum Astrophysics.

Quantum version of Expanding Earth theory

TGD predicts that cosmic expansion at the level of individual astrophysical systems does not take place continuously as in classical gravitation but through discrete quantum phase transitions increasing gravitational Planck constant and thus various quantum length and time scales. The reason would be that stationary quantum states for dark matter in astrophysical length scales cannot expand. One would have the analog of atomic physics in cosmic scales. Increases of hbar by a power of two are favored in these transitions but also other scalings are possible.

This has quite far reaching implications.

  1. These periods have a highly unique description in terms of a critical cosmology for the expanding space-time sheet. The expansion is accelerating. The accelerating cosmic expansion can be assigned to this kind of phase transition in some length scale (TGD Universe is fractal). There is no need to introduce a cosmological constant, and dark energy would actually be dark matter.

  2. The recently observed void, which has the same size of about 10⁸ light years as the large voids having galaxies near their boundaries but an age which is much higher than that of the large voids, would represent one example of jerk-wise expansion.

  3. This picture applies also to the solar system, and planets might perhaps be seen as having once been parts of a more or less connected system, the primordial Sun. The Bohr orbits for inner and outer planets correspond to values of the gravitational Planck constant, which is 5 times larger for the outer planets. This suggests that the space-time sheet of the outer planets has suffered a phase transition increasing the size scale by a factor of 5. Earth can be regarded either as the n=1 orbit for the Planck constant associated with the outer planets or as the n=5 orbit for the inner planetary system. This might have something to do with the very special position of Earth in the planetary system. One could even consider the possibility that both orbits are present as dark matter structures. The phase transition would also explain why the n=1 and n=2 Bohr orbits are absent and only n=3, 4, and 5 are present.

  4. Also planets should have experienced this kind of phase transitions increasing their radius: an increase by a factor of two would be the simplest situation.
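The double reading of Earth's orbit in point 3 can be made concrete with a little bookkeeping. This sketch assumes the Bohr-type scaling r_n ∝ (n·ħ_gr)², which is my assumption for illustration; the text itself only states the factor-5 ratio of the Planck constants:

```python
def orbit_radius(n, hbar_gr):
    """Bohr-orbit radius in relative units, assuming r_n ∝ (n * hbar_gr)**2."""
    return (n * hbar_gr) ** 2

# Inner planets: hbar_gr = 1; outer planets: hbar_gr = 5 (relative units).
earth_as_inner = orbit_radius(5, 1)   # Earth read as the n=5 inner orbit
earth_as_outer = orbit_radius(1, 5)   # Earth read as the n=1 outer orbit
# both readings give the same radius, so either description is possible
```

Under this scaling the two readings coincide because 5² · 1² = 1² · 5², which is what makes Earth's position ambiguous between the two systems.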

The obvious question - which I did not ask - is whether this kind of phase transition might have occurred for Earth and led from a completely granite covered Earth - Pangeia without seas - to the recent Earth. Neither did it occur to me to check whether there is any support for a rapid expansion of Earth during some period of its history.

The situation changed when my son Paavo visited me last Saturday and told me about a Youtube video by Neal Adams, an American comic book and commercial artist who has also produced animations for geologists. We watched the amazing video a couple of times, and I looked at it again yesterday. The video is very impressive (no wonder!) but in the absence of references a skeptic probably cannot avoid the feeling that Neal Adams might be using his highly developed animation skills to cheat you. I found also a polemic article by Adams, but again references were lacking. Perhaps the reason for the polemic tone was that the concrete animation models make the expanding Earth hypothesis very convincing, but geologists dare not consider seriously arguments by a layman without a formal academic background.

1. The claims of Adams

The basic claims of Adams were the following.

  1. The radius of Earth has increased during the last 185 million years (dinosaurs appeared about 230 million years ago) by about a factor of 2. If this is assumed, all continents would have formed at that time a single super-continent, Pangeia, filling the entire Earth surface rather than only 1/4 of it, since the total area would have grown by a factor of 4. The basic argument was that it is very difficult to imagine an Earth with 1/4 of the surface containing granite and 3/4 covered by basalt. If the initial situation was a covering by mere granite - as would look natural - it is very difficult for a believer in thermodynamics to imagine how the granite would have gathered into a single connected continent.

  2. Adams claims that Earth has grown by keeping its density constant, rather than merely expanded, so that the mass of Earth has grown in proportion to its volume. The surface gravitational acceleration would thus have doubled and could provide a partial explanation for the disappearance of the dinosaurs: it is difficult to cope in an evolving environment when you get slower all the time.

  3. Most of the sea floor is very young, and the areas covered by the youngest basalt are the largest ones. Adams interprets this by saying that the expansion of Earth is accelerating. The alternative interpretation is that the flow rate of the magma slows down as it recedes from the ridge where it erupts. The upper bound of 185 million years for the age of the sea floor requires that the expansion period - if it is already over - lasted about 185 million years, after which the flow increasing the area of the sea floor transformed into a convective flow with subduction, so that the area is not increasing anymore.

  4. The fact that the continents fit together - not only on the Atlantic side but also on the Pacific side - gives strong support for the idea that the entire planet was once covered by the super-continent. After the emergence of subduction theory this evidence has been dismissed, which sounds very odd to me. It seems that geologists are doing "Wegeners" again.

  5. I am not sure whether Adams mentions this objection. Subduction occurs only on one side of the subduction zone, so that the other side should show evidence of being much older in the case that oceanic subduction zones are in question. This is definitely not the case. This is explained in plate tectonics as a change of the subduction direction. My explanation would be that by the symmetry of the situation both oceanic plates bend down, so that this would represent a new type of boundary not assumed in tectonic plate theory.

  6. As a master visualizer, Adams notices that Africa and South America do not actually fit together in the absence of expansion unless one assumes that these continents have suffered a deformation. Continents are not easily deformable stuff. The assumption of expansion implies a perfect fit of all continents without deformation.
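The gravity claim in point 2 is simple scaling arithmetic, and a minimal sketch makes the factor explicit (units and function name are mine; G and the density cancel in the ratio):

```python
# Scaling check of Adams' claim: growth at constant density means
# M ∝ R**3, so the surface gravity g = G*M/R**2 ∝ R, and doubling
# the radius doubles g.
def surface_gravity_const_density(R):
    """Surface gravity in relative units for growth at constant density."""
    M = R ** 3          # mass grows with volume at fixed density
    return M / R ** 2   # hence g ∝ R

g_ratio = surface_gravity_const_density(2.0) / surface_gravity_const_density(1.0)
# g_ratio == 2: gravity doubled during the expansion on Adams' assumptions
```

Note that this is the opposite behavior to an expansion at constant mass, for which g would decrease; that alternative is what the TGD version below uses.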

Knowing that the devil is in the details, I must admit that some of these arguments look rather convincing to me, and what I learned from Wikipedia articles supports this picture.

2. Adams's critique of the subduction mechanism

The prevailing tectonic plate theory has been compared to the Copernican revolution in geology. The theory explains the young age of the sea floor in terms of the decomposition of the lithosphere into tectonic plates and the convective flow of magma in which the oceanic tectonic plates participate. The magma emerges from the crests of the mid-ocean ridges representing boundaries of two plates and leads to the expansion of the sea floor. The variations of the polarity of Earth's magnetic field coded in the sea floor provide strong support for the hypothesis that the magma emerges from the crests.

The back flow would take place at so called oceanic trenches near the continents, which represent the deepest parts of the ocean. This process is known as subduction. In subduction the oceanic tectonic plate bends and penetrates below the continental tectonic plate, the material in the oceanic plate gets denser and sinks into the magma. In this manner the oceanic tectonic plate suffers a metamorphosis, returning back to the magma: everything which comes from Earth's interior returns back. The subduction mechanism explains elegantly the formation of mountains (orogeny), earthquake zones, and the associated zones of volcanic activity.

Adams is very polemic about the notion of subduction, in particular about the assumption that it generates a steady convective cycle. The basic objections of Adams against subduction are the following.

  1. There are not enough subduction zones to allow a steady situation. According to Adams, the situation resembles that of a flow in a tube which becomes narrower. In a steady situation the flow should accelerate as it approaches the subduction zones rather than slow down. Subduction zones should be surrounded by large areas of sea floor with constant age. Just the opposite is suggested by the fact that the youngest portion of the sea floor, near the ridges, is the largest. The presence of zones at which both ocean plates bend down could improve the situation. Also jamming of the flow could occur, so that the thickness of the oceanic plate increases with the distance from the eruption ridge. Jamming could also increase the density of the oceanic plate and thus the effectiveness of subduction.

  2. There is no clear evidence that subduction has occurred on other planets. The usual defense is that the presence of a sea is essential for the subduction mechanism.

  3. One can also wonder what mechanism led to the formation of the single super-continent Pangeia covering 1/4 of Earth's surface. How probable is the gathering of all separate continents into a single cluster? The later events would suggest that just the opposite should have occurred from the beginning.

3. Expanding Earth theories are not new

After I had decided to check the claims of Adams, the first thing that I learned was that the Expanding Earth theory, whose existence Adams actually mentions, is by no means new. There are actually many of them.

The general reason why these theories were rejected by the main stream community was the absence of a convincing physical mechanism of expansion or of growth in which the density of Earth remains constant.

  1. In 1888 Yarkovski postulated some sort of aether absorbed by Earth and transforming into chemical elements (the TGD version of aether could be dark matter). In 1909 Mantovani postulated thermal expansion but no growth of the Earth's mass.

  2. Paul Dirac's idea about a changing Planck constant led Pascual Jordan in 1964 to a modification of general relativity predicting slow expansion of planets. Recent measurements of the gravitational constant imply that the upper bound for its relative rate of change is 10 times too small to produce a large enough rate of expansion. Also many other theories have been proposed, but they are in general in conflict with modern physics.

  3. The most modern version of the Expanding Earth theory is by the Australian geologist Samuel W. Carey. He calculated that in the Cambrian period (about 500 million years ago) all continents were stuck together and covered the entire Earth. Deep seas began to evolve then.

4. Summary of TGD based theory of Expanding Earth

The TGD based model differs from the tectonic plate model but allows subduction, which however cannot involve a considerable back flow of magma. Let us sum up the basic assumptions and implications.

  1. The expansion is due to a quantum phase transition increasing the value of gravitational Planck constant and forced by the cosmic expansion in the average sense.

  2. Tectonic plates do not participate in the expansion, and therefore new plate must be formed: the flow of magma from the crests of mid-ocean ridges is needed. The decomposition of the single plate covering the entire planet into plates, creating the mid-ocean ridges, is necessary for the generation of new tectonic plate. The decomposition into tectonic plates is thus a prediction rather than an assumption.

  3. The expansion forced the Pangeia super-continent covering the entire Earth about 530 million years ago to split into tectonic plates, which began to recede as new non-expanding tectonic plate was generated at the ridges, creating expanding sea floor. The initiation of the phase transition generated the formation of deep seas.

  4. The eruption of magma from the crests of the ocean ridges generated oceanic tectonic plates, which participated in the expansion not by density reduction but by growing in size. This led to a reduction of density in the interior of the Earth roughly by a factor 1/8. From the upper bound for the age of the sea floor one can conclude that the period lasted about 185 million years, after which it transformed into a convective flow in which the material returned back to the Earth's interior. Subduction at continent-ocean floor boundaries and downwards double bending of tectonic plates at the boundaries between two ocean floors were the mechanisms. Thus tectonic plate theory would be a more or less correct description of the recent situation.

  5. One can consider the possibility that the subducted tectonic plate does not transform into magma but is fused to the tectonic layer below the continent, so that it grows into an iceberg like structure. This need not lead to a loss of the successful predictions of plate tectonics explaining the generation of mountains, earthquake zones, zones of volcanic activity, etc.

  6. From the video of Adams it becomes clear that the tectonic flow is East-West asymmetric in the sense that the western side is more irregular at large distances from the ocean ridge. If the magma rotates with a slightly lower velocity than the surface of Earth (like liquid in a rotating vessel), the erupting magma would rotate slightly slower than the tectonic plate and the asymmetry would be generated.

  7. If a planet has not experienced a phase transition increasing the value of Planck constant, there is no need for a decomposition into tectonic plates, and one can understand why there is no clear evidence for tectonic plates and subduction on other planets. The convective flow of magma could occur below the single plate and remain invisible.
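The density and gravity factors quoted in point 4 (and in the biological implications below) follow from the constant-mass scaling, in contrast to Adams' constant-density growth. A minimal check (relative units, G = 1; the function name is mine):

```python
# TGD variant: the radius doubles at constant mass, so the mean density
# drops by 1/8 and the surface gravity g = G*M/R**2 by 1/4.
def density_and_gravity(R, M=1.0):
    """(mean density, surface gravity) in relative units for fixed mass M."""
    return M / R ** 3, M / R ** 2   # the 4*pi/3 volume factor cancels in ratios

rho_before, g_before = density_and_gravity(1.0)
rho_after, g_after = density_and_gravity(2.0)
# rho_after/rho_before == 1/8 and g_after/g_before == 1/4, the quoted factors
```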

The biological implications might provide a possibility to test the hypothesis.
  1. Great steps of progress in biological evolution are associated with catastrophic geological events generating new evolutionary pressures which force new solutions for coping with the new situation. The Cambrian explosion indeed occurred about 530 million years ago (the book Wonderful Life by Stephen Gould explains this revolution in detail), led to the emergence of multicellular creatures, and generated a huge number of new life forms living in the seas. Later most of them suffered extinction: a large number of phyla and groups emerged which are not present nowadays.

    Thus the Cambrian explosion is completely exceptional compared to all other dramatic events in evolution in the sense that it created something totally new rather than merely making more complex something which already existed. Gould also emphasizes the failure to identify any great change in the environment as a fundamental puzzle of the Cambrian explosion. The Cambrian explosion is also regarded in many quantum theories of consciousness (including TGD) as a revolution in the evolution of consciousness: for instance, microtubules emerged at this time. The periods of expansion might be necessary for the emergence of multicellular life forms on planets, and the fact that they unavoidably occur sooner or later suggests that life, too, develops unavoidably.

  2. TGD predicts a decrease of the surface gravity by a factor 1/4 during this period. The reduction of the surface gravity would have naturally led to the emergence of dinosaurs 230 million years ago as a response coming 45 million years after the accelerated expansion ceased. Other reasons then led to the decline and eventual catastrophic disappearance of the dinosaurs. The reduction of gravity might have had some gradually increasing effects on the shape of organisms also at the microscopic level and might manifest itself in the evolution of the genome during the expansion period.

  3. A possibly testable prediction following from angular momentum conservation (ωR2= constant) is that the duration of the day has increased gradually and was four times shorter during the Cambrian era. For instance, the genetically coded bio-clocks of simple organisms during the expansion period could have followed the increase of the length of the day with a certain lag or failed to follow it completely. The simplest known circadian clock is that of the prokaryotic cyanobacteria. Recent research has demonstrated that the circadian clock of Synechococcus elongatus can be reconstituted in vitro with just the three proteins of its central oscillator. This clock has been shown to sustain a 22 hour rhythm over several days upon the addition of ATP: the rhythm is indeed faster than the circadian rhythm. For humans the average innate circadian rhythm is however 24 hours 11 minutes and thus conforms with the fact that the human genome has evolved much later than the time at which the expansion ceased.

  4. Addition: My son told me that scientists have found a fossil of a sea scorpion with a size of 2.5 meters; the species lived about 400 million years ago in Germany and persisted for about 10 million years (see also the article in Biology Letters). The finding would conform nicely with the much smaller value of the surface gravity at that time. Also the emergence of trees could be understood in terms of a gradual growth of the maximum plant size as the surface gravity was reduced. The fact that the oldest known tree fossil is 385 million years old conforms with this picture.
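The scalings invoked in items 2 and 3 above follow from elementary relations: g = GM/R2 at constant mass, and angular momentum conservation ωR2 = constant. A minimal numerical sketch (plain Python; the only input is the factor-2 radius increase proposed in the text):

```python
# How surface gravity and day length scale if the Earth's radius doubles
# at constant mass and angular momentum (the assumptions made in the text).

def surface_gravity_factor(radius_factor):
    # g = GM/R^2 with M constant, so g scales as 1/R^2
    return 1.0 / radius_factor**2

def day_length_factor(radius_factor):
    # omega * R^2 = constant, so the day length scales as R^2
    return radius_factor**2

R_FACTOR = 2.0  # the proposed radius-doubling phase transition

print(surface_gravity_factor(R_FACTOR))  # 0.25: gravity drops to 1/4
print(day_length_factor(R_FACTOR))       # 4.0: the pre-expansion day was 4 times shorter
```

With a radius doubling the surface gravity drops to one quarter and the day becomes four times longer, as stated above.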

5. Did intra-terrestrial life burst to the surface of Earth during Cambrian expansion?

Intra-terrestrial hypothesis is one of the craziest TGD inspired ideas about the evolution of life, and it is quite possible that in its strongest form the hypothesis is unrealistic. One can however try to find out what one obtains from the combination of the IT hypothesis with the idea of a pre-Cambrian granite Earth. Could the harsh pre-Cambrian conditions have allowed only intra-terrestrial multicellular life? Could the Cambrian explosion correspond to the moment of birth of this life in the very concrete sense that the magma flow brought it into the daylight?

  1. Gould emphasizes the mysterious fact that very many life forms of the Cambrian explosion looked like final products of a long evolutionary process. Could the eruption of magma from the Earth's interior have induced a burst of intra-terrestrial life forms to the Earth's surface? This might make sense: the life forms living at the bottom of the sea do not need direct solar light, so they could have had an intra-terrestrial origin. It is quite possible that the Earth's mantle contained low temperature water pockets, where complex life forms might have evolved in an environment shielded from meteoric bombardment and UV radiation.

  2. Sea water is salty (for why this is the case see this). It is often claimed that the average salt concentration inside the cell is that of the primordial sea: I do not know whether this claim can really be justified. If the claim is true, the cellular salt concentration should reflect the salt concentration of the water inside the pockets. The water inside the pockets could have been salty due to the diffusion of salt from the ground, but need not have been the same as that of the ocean water (which is higher than that of the cell interior, for obvious reasons). Indeed, the water in the underground reservoirs of arid regions such as Sahara is salty, which is the reason why agriculture is absent in these regions. Note also that the cells of marine invertebrates are osmoconformers able to cope with the changing salinity of the environment, so that the Cambrian revolutionaries could have survived the change in the salt concentration of the environment.

  3. What applies to Earth should apply also to other similar planets, and Mars is very similar to Earth. The radius is .533 times that of Earth, so that after a quantum leap doubling the radius the Schumann frequency scale (7.8 Hz would be the lowest Schumann frequency) would be essentially the same as for Earth now. The mass is .131 times that of Earth, so that the surface gravity is .532 of that of Earth now and would be reduced to .131 after the doubling, meaning quite big dinosaurs! We have learned that Mars probably contains large water reservoirs in its interior and that there is an unidentified source of methane gas usually assigned with the presence of life. Could it be that Mother Mars is pregnant and just waiting for the great quantum leap when it starts to expand and gives rise to the birth of multicellular life forms? Or expressing freely how the Bible describes the moment of birth: in the beginning there was only darkness and water, and then God said: Let the light come!

To sum up, TGD would provide not only the long sought mechanism of expansion but also a possible connection with biological evolution. It would indeed be fascinating if Planck constant changing quantum phase transitions in planetary scale had profoundly affected the biosphere.

For more details see the chapter Quantum Astrophysics.

Shrinking kilogram

The definition of the kilogram is not the number one topic in coffee table discussions, and definitely not one likely to lead to heated debates. The fact however is that even the behavior of the standard kilogram can open up fascinating questions about the structure of space-time.

The 118-year old International Prototype Kilogram is an alloy of 90 per cent Platinum and 10 per cent Iridium by weight (gravitational mass). It is held in an environmentally monitored vault in the basement of the BIPM's House of Breteuil in Sèvres on the outskirts of Paris. It has forty copies located around the world, which are compared with the Sèvres prototype with a period of about 40 years.

The problem is that the Sèvres kilogram seems to behave in a manner totally inappropriate for its high age if the behaviour of its equally old copies around the world is taken as the norm (see the Wikipedia article and the more popular article here). The unavoidable conclusion from the comparisons is that the weight of the Sèvres kilogram has been reduced by about 50 μg during 118 years, which makes about

dlog(m)/dt= -4.2×10-10/year

for the Sèvres copy, or a relative increase of the same amount for its copies.
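The quoted rate is easy to reproduce; a quick check in Python, assuming a 50 μg loss from a 1 kg mass over 118 years as stated above:

```python
# The quoted drift rate: 50 micrograms lost from 1 kg over 118 years.

delta_m = 50e-9   # kg (50 micrograms)
mass = 1.0        # kg
years = 118.0

rate = delta_m / mass / years  # d(log m)/dt per year
print(rate)  # ~4.2e-10 per year
```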

Specialists have not been able to identify any convincing explanation for the strange phenomenon. For instance, there is condensation of matter from the air in the vault, which increases the weight, and there is a periodic cleaning procedure, which however should not cause the effect.

1. Could the non-conservation of gravitational energy explain the mystery?

The natural question is whether there could be some new physics mechanism involved. If the copies were much younger than the Sèvres prototype, one could consider the possibility that the gravitational mass of all copies is gradually reduced. This is not the case. One can however still look at what this could mean.

In TGD the Equivalence Principle is not a basic law of nature, and in the generic case gravitational energy is non-conserved whereas inertial energy is conserved (I will not go into the delicacies of zero energy ontology here). This occurs even in the case of stationary metrics such as the Reissner-Nordström exterior metric and the metrics associated with stationary spherically symmetric star models imbedded as vacuum extremals (for details see this).

The basic reason is that Schwarzschild time t relates by a shift to Minkowski time m0:

m0= t+h(r)

such that the shift depends on the distance r to the origin. The Minkowski shape of the 3-volume containing the gravitational energy changes with M4 time, but this does not explain the effect. The key observation is that the vacuum extremal of Kähler action is not an extremal of the curvature scalar (these correspond to asymptotic situations). What looks at first really paradoxical is that one obtains a constant value of energy inside a fixed constant volume but a non-vanishing flow of energy into the volume. The explanation is that the system simply destroys the gravitational energy flowing into it! The increase of the gravitational binding energy compensating for the feed of gravitational energy gives a more familiar looking articulation for the non-conservation.

Amusingly, the predicted rate for the destruction of the inflowing gravitational energy is of the same order of magnitude as in the case of the kilogram. Note also that the relative rate is of order 1/a, where a is the value of cosmic time, about 1010 years. The spherically symmetric star model also predicts a rate of the same order.

This approach of course does not allow one to understand the behavior of the kilogram, since it predicts no change of the gravitational mass inside the volume and does not even apply in the recent situation since all the kilograms are of the same age. The coincidence however suggests that the non-conservation of gravitational energy might be part of the mystery. The point is that if the inflow satisfies the Equivalence Principle, then the inertial mass of the system would slowly increase whereas the gravitational mass would remain constant: this would hold true only in a steady state.

2. Is the change of inertial mass in question?

It would seem that the reduction in weight should correspond to a reduction of the inertial mass in Sèvres or an increase of that of its copies. What would distinguish the Sèvres kilogram from its cousins? The only thing one can imagine is that the cousins are brought to Sèvres periodically. The transfer process could increase the mass of a kilogram or stop its decrease.

Could it be that the inertial mass of every kilogram increases gradually until a steady state is achieved? When the system is transferred to another place, the saturation situation changes to a situation in which a genuine transfer of inertial and gravitational mass begins again and leads to a more massive steady state. The very process of transferring the comparison masses to Sèvres would cause their increase.

In the TGD Universe the increase of the inertial (and gravitational) mass is due to the flow of matter from larger space-time sheets to the system. The additional mass would not enter via the surface of the kilogram but, like a Trojan horse, from the interior, and it would thus be impossible to control using present day technology. The flow would continue until a flow equilibrium is reached, with as much mass leaving the kilogram as entering it.

3. A connection with gravitation after all?

Why should the in-flow of inertial energy be of the same order of magnitude as that of the gravitational energy predicted by simple star models? Why should the Equivalence Principle hold for the in-flow although it would not hold for the body itself? A possible explanation is in terms of the increasing gravitational binding energy, which in a steady situation leaves the gravitational energy constant although the inertial energy can still increase.

This would however require a rather large value of gravitational binding energy since one has

ΔEgr=ΔMI .

The Newtonian estimate for Egr/M is of order GM/R, where R ≈ 1 m is the size of the system. This is of order 10-26 and too small by 16 orders of magnitude.
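The Newtonian order of magnitude is a one-line computation (the dimensionless ratio is GM/(Rc2); the 1 kg and 1 m figures are those used in the text). The rough sketch below lands around 10-27, vastly below the required ~10-10 scale, in line with the conclusion above:

```python
# Newtonian order-of-magnitude estimate of E_gr/M ~ GM/(R c^2)
# for a 1 kg system of size ~1 m.

G = 6.674e-11  # m^3 kg^-1 s^-2
c = 3.0e8      # m/s
M = 1.0        # kg
R = 1.0        # m

ratio = G * M / (R * c**2)
print(ratio)  # ~7e-28, vastly below the required ~1e-10 scale
```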

TGD predicts that gravitational constant is proportional to p-adic length scale squared

G ∝ Lp2.

Ordinary gravitation can be assigned to the Mersenne prime M127 associated with the electron and thus to the p-adic length scale L(127)≈ 2.5×10-12 meters. The open question has been whether the gravities corresponding to other p-adic length scales are realized or not.

This question together with the discrepancy encourages one to ask whether the value of the p-adic prime could be larger inside massive bodies (which are analogous to black holes in many respects in the TGD framework) and make gravitation strong. In the recent case the p-adic length scale should correspond to a length scale of order 108L(127). L(181)≈ 3.2× 10-4 m (the size of a large neuron, by the way) would be a good candidate for the p-adic scale in question and is considerably smaller than the size scale of order .1 meter defining the size of the kilogram.

This discrepancy brings in mind the strange finding of Tajmar and collaborators suggesting that rotating super-conductors generate a gravimagnetic field with a field strength by a factor of order 1020 larger than predicted by General Relativity. I have considered a model of the finding based on dark matter (see this). An alternative model could rely on the assumption that Newton's constant can in some situations correspond to p larger than M127. In this case the p-adic length scale needed would be around L(193)≈ 2 cm.
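The length scales quoted in this section can be reproduced from the p-adic length scale formula; the sketch below assumes the convention L(k) = 2(k-151)/2L(151) with L(151) ≈ 10 nm, the standard reference scale of p-adic mass calculations:

```python
# p-adic length scales: L(k) = 2^((k-151)/2) * L(151), with L(151) ~ 10 nm
# (the standard reference scale of p-adic mass calculations; an assumption here).

def L(k, L151=1e-8):
    return 2 ** ((k - 151) / 2) * L151

print(L(181))           # ~3.2e-4 m, the size of a large neuron
print(L(193))           # ~2e-2 m
print(L(181) / L(127))  # ~1.3e8, i.e. of order 1e8
```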

For more details see the chapter TGD and GRT.

Evidence for many-sheeted space-time from gamma ray flares

The MAGIC collaboration has found evidence for a gamma ray anomaly: gamma rays in different energy ranges seem to arrive with different velocities from Mkn 501 (see this). The delay in arrival times is about 4 minutes. The proposed explanation is in terms of broken Lorentz invariance. TGD allows to explain the finding in terms of many-sheeted space-time, and there is no need to invoke a breaking of Lorentz invariance.

1. TGD based explanation at qualitative level

One of the oldest predictions of many-sheeted space-time is that the time for photons to propagate from point A to point B depends on the space-time sheet along which they travel: the photon moves along a lightlike geodesic of the space-time sheet rather than a lightlike geodesic of the imbedding space, so that the distance travelled increases and the travel time is in general longer than that corresponding to the maximal signal velocity.

Many-sheetedness predicts a spectrum of Hubble constants, and the gamma ray anomaly might be a demonstration of the many-sheetedness. The spectroscopy of arrival times would give information about how many sheets are involved.

Before one can accept this explanation, one must have a good argument for why the space-time sheet along which the gamma rays travel depends on their energy and why higher energy gamma rays would move along a space-time sheet along which the distance is longer.

  1. A shorter wavelength means that the wave oscillates faster. The space-time sheet should reflect in its geometry the matter present at it. Could this mean that the space-time sheet is more "wiggly" for higher energy gamma rays and therefore the distance travelled is longer? A natural TGD inspired guess is that the p-adic length scale assignable to the gamma ray energy defines the p-adic length scale assignable to the space-time sheet of the gamma ray connecting the two systems, so that the effective velocities of propagation would correspond to p-adic length scales coming as half octaves. Note that there is no breaking of Lorentz invariance, since the gamma ray connects the two systems and the rest system of the receiver defines a unique coordinate system in which the energy of the gamma ray has a Lorentz invariant physical meaning.

  2. One can invent also an objection. In TGD a classical radiation field decomposes into topological light rays ("massless extremals", MEs) which could quite well be characterized by a large Planck constant, in which case the decay to ordinary photons would take place at the receiving end via decoherence (the Allais effect discussed in a previous posting is an application of this picture in the case of gravitational interaction). Gamma rays could propagate very much like a laser beam along the ME. For the simplest MEs the velocity of propagation corresponds to the maximal signal velocity and there would be no variation of the propagation time. One can imagine two manners to circumvent the counter argument.
    1. Also topological light rays for which light-like geodesics are replaced with light-like curves of M4 are highly suggestive as solutions of field equations. For these MEs the distance travelled would in general be longer than for the simplest MEs.
    2. The gluing of the ME to the background space-time by wormhole contacts (actually representations of photons!) could force the classical signal to propagate along a zigzag curve formed by simple MEs with maximal signal velocity. The length of each piece would be of the order of the p-adic length scale. The zigzag character of the path of arrival would increase the distance between the source and the receiver.

2. Quantitative argument

A quantitative estimate runs as follows.

  1. The source in question is the blazar Markarian 501 with redshift z= .034. Gamma flares of duration about 2 minutes were observed with energies in the bands .25-.6 TeV and 1.2-10 TeV. The gamma rays in the higher energy band were near its upper end and were delayed by about Δτ=4 min with respect to those in the lower band. Using the Hubble law d=cz/H with H= 71 km/s/Mpc, one obtains the estimate Δτ/τ= 1.6×10-14.

  2. A simple model for the induced metric of the space-time sheet along which the gamma rays propagate is as a flat metric associated with the flat imbedding Φ= ωt, where Φ is the angle coordinate of the geodesic circle of CP2. The time component of the metric is given by

    gtt = 1-R2ω2 .
    ω appears as a parameter in the model. Also the imbeddings of the Reissner-Nordström and Schwarzschild metrics contain a frequency as a free parameter, and space-time sheets are quite generally parametrized by frequencies and momentum or angular momentum like vacuum quantum numbers.

  3. ω is assumed to be expressible in terms of the p-adic prime characterizing the space-time sheet. The parametrization assumed in the following is

    ω2R2 = x(1/p)r .
    It turns out that r=1/2 is the only option consistent with the p-adic length scale hypothesis. The naive expectation would have been r=1. The result suggests the formula

    ω2 = m0mp with m0= K/R

    so that ω would be the geometric mean of a slowly varying large mass scale m0 and the p-adic mass scale mp.

    The explanation of the p-adic length scale hypothesis leading also to a generalization of the Hawking-Bekenstein formula assumes that for the strong form of the p-adic length scale hypothesis, stating p≈ 2k with k prime, there are two p-adic length scales involved with a given elementary particle: Lp characterizes the particle's Compton length and Lk the size of the wormhole contact or throat representing the elementary particle. The guess is that ω2 is inversely proportional to the product of these two p-adic length scales:

    ω2R2 = x/[2k/2k1/2].

  4. A relatively weak form of the p-adic length scale hypothesis would be p≈ 2k, k an odd integer. M127 corresponds to the mass scale me×5-1/2 in a reasonable approximation. Using me≈.5 MeV one finds that the mass scales m(k) for k=89-2n, n=0,1,2,...,6 are m(k)/TeV= x with x=0.12, 0.23, 0.47, 0.94, 1.88, 3.76, 7.50. The lower energy range contains the scales corresponding to k=87 and 85. The higher energy range contains the scales corresponding to k=83, 81, 79, and 77. In this case the proposed formula does not make sense.

  5. The strong form of p-adic length scale hypothesis allows only prime values for k. This would allow Mersenne prime M89 (intermediate gauge boson mass scale) for the lower energy range and k=83 and 79 for the upper energy range. A rough estimate is obtained by assuming that the two energy ranges correspond to k1=89 and k2=79.

  6. The expression for τ reads as τ= (gtt)1/2t. The expression for Δτ/τ is given by

    Δ τ/τ=(gtt)-1/2Δ gtt/2≈ R2Δ ω2 = x[(k2p2)-1/2-(k1p1)-1/2] ≈x(k2p2)-1/2= x×2-79/2(79)-1/2.

    Using the experimental value for Δτ/τ one obtains x≈.45. x=1/2 is an attractive guess.
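Two of the numerical inputs of the argument above can be checked directly. The sketch below takes H = 71 km/s/Mpc, z = .034 and the 4 minute delay from the text; the p-adic mass ladder m(k) = (me/51/2)×2(127-k)/2 is an assumption of this reconstruction:

```python
# Two checks on the numbers used above: the relative delay from the Hubble
# law, and the assumed p-adic mass ladder m(k) = (m_e/sqrt(5)) * 2^((127-k)/2).

Mpc = 3.086e22            # meters per megaparsec
H = 71e3 / Mpc            # Hubble constant, 1/s
z = 0.034                 # redshift of Markarian 501
tau = z / H               # light travel time from the Hubble law, seconds
delta_tau = 4 * 60.0      # observed delay: 4 minutes in seconds
print(delta_tau / tau)    # ~1.6e-14

m_e = 0.5e-6              # electron mass in TeV (approximation used in the text)
for k in range(89, 76, -2):
    m_k = m_e / 5**0.5 * 2 ** ((127 - k) / 2)
    print(k, round(m_k, 2))  # 0.12, 0.23, 0.47, 0.94, 1.88, 3.75, 7.5 TeV, up to rounding
```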

It seems that one can fairly well say that standard cosmology is crashing down while TGD makes breakthrough after breakthrough as the interpretation becomes more and more accurate. TGD is patiently waiting;-). It will be interesting to see how long it still takes before the sociology of science finally gives up and the unavoidable happens.

For details and background see the chapter The Relationship Between TGD and GRT.

Allais effect as evidence for large values of gravitational Planck constant?

I have considered two models for the Allais effect. The first model was constructed several years ago and was based on the classical Z0 force. A couple of weeks ago I considered a model based on gravitational screening. It however turned out that this model does not work. The next step was the realization that the effect might be a genuine quantum effect made possible by the gigantic value of the gravitational Planck constant: the pendulum would act as a highly sensitive gravitational interferometer.

One can present rather general counter arguments against the models based on Z0 conductivity and gravitational screening if one takes seriously the puzzling experimental findings concerning the frequency change.

  1. The Allais effect identified as a rotation of the oscillation plane seems to be established, seems to be always present, and can be understood in terms of a torque implying a limiting oscillation plane.

  2. During solar eclipses Allais effect however becomes much stronger. According to Olenici's experimental work the effect appears always when massive objects form collinear structures.

  3. The behavior of the change of the oscillation frequency seems puzzling. The sign of the frequency increment varies from experiment to experiment and its magnitude varies within five orders of magnitude. There is also evidence that the effect is present before and after the full eclipse. The time scale is 1 hour.

1. What can one conclude about the general pattern for Δf/f?

The above findings allow one to make some important conclusions about the nature of the Allais effect.

  1. Some genuinely new dynamical effect should take place when the objects are collinear. If gravitational screening caused the effect, the frequency would always grow, but this is not the case.

  2. If stellar objects and also ring like dark matter structures possibly assignable to their orbits are Z0 conductors, one obtains a screening effect by polarization, and for the ring like structure the resulting effectively 2-D dipole field behaves as 1/ρ2, so that there are hopes of obtaining large screening effects; and if the Z0 charge of the pendulum is allowed to have both signs, one might hope to be able to explain the effect. It is however difficult to understand why this effect should become so strong in the collinear case.

  3. The apparent randomness of the frequency change suggests that an interference effect made possible by the gigantic value of the gravitational Planck constant is in question. On the other hand, the dependence of Δg/g on the pendulum suggests a breaking of the Equivalence Principle. It however turns out that the variation of the distances of the pendulum to the Sun and the Moon can explain the experimental findings, since the pendulum turns out to act as a sensitive gravitational interferometer. An apparent breaking of the Equivalence Principle could result if the effect is partially caused by genuine gauge forces, say a dark classical Z0 force, which can have an arbitrarily long range in the TGD Universe.

  4. If topological light rays (MEs) provide a microscopic description for gravitation and other gauge interactions, one can envision these interactions in terms of MEs extending from the Sun/Moon radially to the pendulum system. What comes to mind is that in a collinear configuration the signals along the S-P MEs and M-P MEs superpose linearly, so that the amplitudes are summed and the interference terms give rise to an anomalous effect with a very sensitive dependence on the difference of the S-P and M-P distances and on possible other parameters of the problem. One can imagine several detailed variants of the mechanism. It is possible that the signal from the Sun combines with a signal from the Earth and propagates along the Moon-Earth ME, or that the interference of these signals occurs at the Earth and the pendulum.

  5. Interference suggests a macroscopic quantum effect in astrophysical length scales, and thus the gravitational Planck constant hbargr= GMm/v0, where v0=2-11 is the favored value, should appear in the model. Since hbargr= GMm/v0 depends on both masses, this could also give a sensitive dependence on the mass of the pendulum. One expects that the anomalous force is proportional to hbargr and is therefore gigantic as compared to the effect predicted for the ordinary value of Planck constant.

2. Model for interaction via gravitational MEs with large Planck constant

Restricting the consideration for simplicity to gravitational MEs only, a concrete model for the situation would be as follows.

  1. The picture based on topological light rays suggests that the gravitational force between two objects M and m has the following expression

    FM,m=GMm/r2= ∫|S(λ,r)|2 p(λ)dλ

    p(λ)=hbargr(M,m)2π/λ , hbargr= GMm/v0(M,m) .

    p(λ) denotes the momentum of the gravitational wave propagating along the ME. v0 can depend on the (M,m) pair. The interpretation is that |S(λ,r)|2 gives the rate for the emission of gravitational waves propagating along the ME connecting the masses, having wavelength λ, and being absorbed by m at distance r.

  2. Assume that S(λ,r) has the decomposition

    S(λ,r)= R(λ)exp[iΦ(λ)]exp[ik(λ)r]/r,


    R(λ)= |S(λ,r)|.

    To simplify the treatment the phases exp(iΦ(λ)) are assumed to be equal to unity in the sequel. This assumption turns out to be consistent with the experimental findings. Also the assumption v0(M,P)=v0(S,P) will be made for simplicity: these conditions guarantee the Equivalence Principle. The substitution of this expression into the above formula gives the condition

    ∫ |R(λ)|2dλ/λ =v0 .

Consider now a model for the Allais effect based on this picture.

  1. In the non-collinear case one obtains just the standard Newtonian prediction for the net forces caused by the Sun and the Moon on the pendulum, since SS,P and SM,P correspond to non-parallel MEs and there is no interference.

  2. In the collinear case the interference takes place. If interference occurs for identical momenta, the interfering wavelengths are related by the condition

    p(λS,P)=p(λM,P) .

    This gives

    λM,PS,P= hbarM,P/hbarS,P =MM/MS .

  3. The net gravitational force is given by

    Fgr= ∫ |S(λ,rS,P)+ S(λ/x,rM,P)|2 p(λ) dλ

    =Fgr(S,P)+ Fgr(M,P) + ΔFgr ,

    ΔFgr= 2∫ Re[S(λ,rS,P)S*(λ/x,rM,P)] (hbargr(S,P)2π/λ)dλ ,

    x=hbarS,P/hbarM,P= MS/MM.

    Here rM,P is the distance between the Moon and the pendulum. The anomalous term ΔFgr would be responsible for the Allais effect and the change of the frequency of the oscillator.

  4. The anomalous gravitational acceleration can be written explicitly as

    Δagr= (2GMS/rSrM)×(1/v0(S,P))× I ,

    I= ∫ R(λ)×R(λ/x)× cos[2π(ySrS-xyMrM)/λ] dλ/λ ,

    yM= rM,P/rM , yS=rS,P/rS.

    Here the parameter yM (yS) is used to express the distance rM,P (rS,P) between the pendulum and the Moon (Sun) in terms of the semi-major axis rM (rS) of the Moon's (Earth's) orbit. The interference term is sensitive to the ratio 2π(ySrS-xyMrM)/λ. For short wavelengths the integral is not expected to give a considerable contribution, so that the main contribution should come from long wavelengths. The gigantic value of the gravitational Planck constant and its dependence on the masses implies that the anomalous force has the correct form and can also be large enough.

  5. If one poses no boundary conditions on the MEs, the full continuum of wavelengths is allowed. For very long wavelengths the sign of the cosine term oscillates, so that the value of the integral is very sensitive to the values of the various parameters appearing in it. This could explain the random looking outcome of experiments measuring Δf/f. One can also consider the possibility that the MEs satisfy periodic boundary conditions so that only the wavelengths λn= 2rS/n are allowed: this implies sin(2πySrS/λ)=0. Assuming this, one can write the magnitude of the anomalous gravitational acceleration as

    Δagr= (2GMS/rS,PrM,P)×(1/v0(S,P)) × I ,

    I=∑n=1 R(2rS,P/n)×R(2rS,P/nx)× (-1)n × cos[nπx×(yM/yS)×(rM/rS)].

    If R(λ) decreases as λk, k>0, at short wavelengths, the dominating contribution corresponds to the lowest harmonics. In all terms except cosine terms one can approximate rS,P resp. rM,P with rS resp. rM.

  6. The presence of the alternating sum gives hopes for explaining the strong dependence of the anomaly term on the experimental arrangement. The reason is that the value of xyrM/rS appearing in the argument of cosine is rather large:

    x(yM/yS)(rM/rS)= (yM/yS)(MS/MM)(rM/rS)(v0(M,P)/v0(S,P)) ≈ 6.95671837× 104× (yM/yS).

    The values of the cosine terms are very sensitive to the exact value of the factor MSrM/MMrS, and the above expression is probably not quite accurate. As a consequence, the values and signs of the cosine terms are very sensitive to the value of yM/yS.

    The value of yM/yS varies from experiment to experiment, and this alone could explain the high variability of Δf/f. The experimental arrangement would act like an interferometer measuring the distance ratio rM,P/rS,P.

3. Scaling law

The assumption of the scaling law

R(λ)=R0 (λ/λ0)k

is very natural in light of conformal invariance and masslessness of gravitons and allows to make the model more explicit. With the choice λ0=rS the anomaly term can be expressed in the form

Δ agr≈ (GMS/rSrM) × (22k+1/v0)×(MM/MS)k × R0(S,P)× R0(M,P)× ∑n=1 ((-1)n/n2k)× cos[nπK] ,

K= x× (rM/rS)× (yM/yS).

The normalization condition reads in this case as

R02=v0/[2π∑n (1/n)2k+1]=v0/[2πζ(2k+1)] .

Note the shorthand v0(S/M,P)= v0. The anomalous gravitational acceleration is given by

Δagr=(GMS/rS2) × X Y× ∑n=1 [(-1)n/n2k]×cos[nπK] ,

X= 22k × (rS/rM)× (MM/MS)k ,

Y=1/π∑n (1/n)2k+1=1/πζ(2k+1).

It is clear that a reasonable order of magnitude for the effect can be obtained if k is small enough and that this is essentially due to the gigantic value of gravitational Planck constant.

The simplest model consistent with experimental findings assumes v0(M,P)= v0(S,P) and Φ(n)=0 and gives

Δagr/gcos(Θ)=(GMS/rS2g)× X Y× ∑n=1 [(-1)n/n2k]×cos(nπ K) ,

X= 22k × (rS/rM)× (MM/MS)k,

Y=1/π ∑n (1/n)2k+1 =1/πζ(2k+1) ,

K=x× (rM/rS)× (yM/yS) , x=MS/MM .

Θ denotes in the formula above the angle between the direction of Sun and horizontal plane.

4. Numerical estimates

To get a numerical grasp of the situation one can use MS/MM≈ 2.71× 107, rS/rM≈ 389.1, and (MSrM/MMrS)≈ 1.74× 104. The overall order of magnitude of the effect would be

Δ g/g≈ XY× GMS/RS2gcos(Θ) ,

(GMS/RS2g) ≈6× 10-4 .

The overall magnitude of the effect is determined by the factor XY.

For k=1 and 1/2 the effect is too small. For k=1/4 the expression for Δ agr reads as

(Δagr/gcos(Θ))≈1.97× 10-4 × ∑n=1 ((-1)n/n1/2)×cos(nπK),

K= (yM/yS)u , u=(MS/MM)(rM/rS)≈ 6.95671837× 104 .

The sensitivity of cosine terms to the precise value of yM/yS gives good hopes of explaining the strong variation of Δf/f and also the findings of Jeverdan. Numerical experimentation indeed shows that the sign of cosine sum alternates and its value increases as yM/yS increases in the range [1,2].
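This sensitivity is easy to reproduce numerically. The Python sketch below evaluates a truncated version of the k=1/4 alternating sum for K=(yM/yS)u with u≈ 6.95671837× 104; the truncation at N terms is an arbitrary choice of the illustration, not part of the model.

```python
# Truncated sum_{n=1}^{N} (-1)^n cos(n*pi*K)/sqrt(n), the k = 1/4 series.
# u is the value quoted in the text; N = 20000 is an illustrative cutoff.
import math

u = 6.95671837e4   # (MS/MM)(rM/rS), as quoted in the text

def cosine_sum(K, N=20000):
    return sum((-1)**n * math.cos(n * math.pi * K) / math.sqrt(n)
               for n in range(1, N + 1))

# Small changes of y_M/y_S shift the huge argument K by many periods,
# so the value of the sum changes strongly.
for y in (1.0, 1.0001, 1.001, 1.01):
    print(y, cosine_sum(y * u))
```

Since K enters only through its value modulo 2, a fractional change of yM/yS as small as 1/u ≈ 1.4× 10-5 already reshuffles the cosine terms completely.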

The eccentricities of the orbits of Moon resp. Earth are eM=.0549 resp. eE=.017. Denoting the semimajor and semiminor axes by a and b one has Δ=(a-b)/a=1-(1-e2)1/2. ΔM=15× 10-4 resp. ΔE=1.4× 10-4 characterizes the variation of yM resp. yE due to the non-circularity of the orbits of Moon resp. Earth. The ratio RE/rM= .0166 characterizes the range of ΔyM =ΔrM,P/rM< RE/rM due to the variation of the position of the laboratory. All these numbers are large enough to imply a large variation of the argument of the cosine term even for n=1, and the variation due to the position at the surface of Earth is especially large.

5. Other effects

  1. One should explain also the recent finding by Popescu and Olenici, which they interpret as a quantization of the plane of oscillation of the paraconic oscillator during solar eclipse (see this). A possible TGD based explanation would be in terms of quantization of Δg and thus of the limiting oscillation plane. This quantization could reflect the quantization of the angular momentum of the dark gravitons decaying into bunches of ordinary gravitons and providing the pendulum with the angular momentum inducing the change of the oscillation plane. The knowledge of the friction coefficients associated with the rotation of the oscillation plane would allow one to deduce the value of the gravitational Planck constant if one assumes that each dark graviton corresponds to its own approach to the asymptotic oscillation plane.

  2. There is also evidence for the effect before and after the main eclipse. The time scale is 1 hour. A possible explanation is in terms of a dark matter ring analogous to the rings of Jupiter surrounding the Moon. From the average orbital velocity v = 1.022 km/s of the Moon one obtains that the distance traversed by the Moon during 1 hour is R1 = 3679 km. The mean radius of the Moon is R = 1737.10 km so that one has R1=2R with 5 per cent accuracy (2×R = 3474 km). The Bohr quantization of the orbits of inner planets discussed earlier with the value hbargr = GMm/v0 of the gravitational Planck constant predicts rn ∝ n2GM/v02 and gives the orbital radius of Mercury correctly for the principal quantum number n=3 and v0/c = 4.6×10-4 ≈ 2-11. From the proportionality rn ∝ n2GM/v02 one can deduce by scaling that in the case of the Moon with M(Moon)/M(Sun) = 3.4×10-8 the prediction for the radius of the n=1 Bohr orbit would be r1 = (M(Moon)/M(Sun))×RM/9 ≈ .0238 km for the same value of v0. This is too small by a factor 6.45×10-6. r1=3679 km would require n ≈ 382 or n=n(Earth)=5 and v0(Moon)/v0(Sun) ≈ 2-4.
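The ring-radius coincidence rests on simple arithmetic that can be checked directly; the lines below use only the numbers quoted above.

```python
# Distance travelled by the Moon in one hour versus twice its mean radius.
v_moon = 1.022     # Moon's mean orbital speed, km/s (quoted above)
R_moon = 1737.10   # Moon's mean radius, km (quoted above)

R1 = v_moon * 3600.0            # km travelled in one hour
print(R1, 2 * R_moon, R1 / (2 * R_moon))
```

This gives R1 = 3679.2 km against 2R = 3474.2 km, agreement at the level of a few per cent.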
For details see the chapter The Relationship Between TGD and GRT.

Maxwell hydrodynamics as toy model for TGD

Today Kea told about Terence Tao's posting 2006 ICM: Etienne Ghys, "Knots and dynamics". The posting tells about really amazing mathematical results related to knots.

1. Chern-Simons as helicity invariant

Tao mentions helicity as an invariant of fluid flow. Chern-Simons action defined by the induced Kähler gauge potential for lightlike 3-surfaces has an interpretation as helicity when the Kähler gauge potential is identified as fluid velocity. This flow can be continued to the interior of the space-time sheet. Also the dual of the induced Kähler form defines a flow at the light-like partonic surfaces but not in the interior of the space-time sheet. The lines of this flow can be interpreted as magnetic field lines. This flow is incompressible and represents a conserved charge (Kähler magnetic flux). The question is which of these flows should define number theoretical braids. Perhaps both of them can appear in the definition of the S-matrix and correspond to different kinds of partonic matter (electric/magnetic charges, quarks/leptons?,...). The second kind of matter could not flow in the interior of the space-time sheet. Or could an interpretation in terms of electric-magnetic duality make sense?

Helicity is not gauge invariant and this is as it must be in the TGD framework since CP2 symplectic transformations induce U(1) gauge transformations which deform the space-time surface and modify the induced metric as well as the classical electroweak fields defined by the induced spinor connection. Gauge degeneracy is transformed to spin glass degeneracy.

2. Maxwell hydrodynamics

In TGD Maxwell's equations are replaced with field equations which express conservation laws and are thus hydrodynamical in character. With this background the idea that the analogy between gauge theory and hydrodynamics might be applied also in the reverse direction is natural. Hence one might ask what kind of relativistic hydrodynamics results if one assumes that the action principle is Maxwell action for the four-velocity uα with a constraint term saying that light velocity is the maximal signal velocity.

  1. For massive particles the length of the four-velocity equals 1: uα uα=1. In the massless case one has uα uα=0. This condition means the addition of the constraint term

    λ(uα uα-ε)

    to the Maxwell action. ε=1/0 holds for massive/massless flow. In the following the notation of electrodynamics is used to make easier the comparison with electrodynamics.

  2. The constraint term destroys gauge invariance by allowing to express A0 in terms of Ai but in general the constraint is not equivalent to a choice of gauge in electrodynamics since the solutions to the field equations with constraint term are not solutions of field equations without it. One obtains field equations for an effectively massive em field with Lagrange multiplier λ having interpretation as photon mass depending on space-time point:

    jα= ∂βFαβ= λAα,


    Fαβ= ∂βAα-∂αAβ.

  3. In electrodynamic context the natural interpretation would be in terms of spontaneous massivation of photon and seems to occur for both values of ε. The analog of em current given by λAα is in general non-vanishing and conserved. This conservation law is quite strong additional constraint on the hydrodynamics. What is interesting is that breaking of gauge invariance does not lead to a loss of charge conservation.

  4. One can solve λ by contracting the equations with Aα to obtain λ= jαAα for ε=1. For ε=0 one obtains jαAα=0 stating that the field does not dissipate energy: λ can be however non-vanishing unless field equations imply jα=0. One can say that for ε=0 spontaneous massivation can occur. For ε=1 massivation is present from beginning and dissipation rate determines photon mass: a natural interpretation would be in terms of thermal massivation of photon. Non-tachyonicity fixes the sign of the dissipation term so that the thermodynamical arrow of time is fixed by causality.

  5. For ε=0 massless plane wave solutions are possible and one has ∂αβAβ= λAα. λ=0 is obtained in Lorentz gauge, which is consistent with the condition ε=0. Also superpositions of plane waves with the same polarization and direction of propagation are solutions of the field equations: these solutions represent dispersionless precisely targeted pulses. For superpositions of plane waves with 4-momenta which are not all parallel, λ is non-vanishing, so that the non-linear self-interactions due to the constraint can be said to induce massivation. In asymptotic states for which gauge symmetry is not broken one expects a decomposition of solutions to regions of space-time carrying this kind of pulses, which brings in mind the final states of particle reactions containing free photons with fixed polarizations.

  6. Gradient flows satisfying the conditions Aα =∂α Φ and Aα Aα=ε give rise to identically vanishing hydrodynamical gauge fields and λ=0 holds true. These solutions are vacua since energy momentum tensor vanishes identically. There is huge number of this kind of solutions and spin glass degeneracy suggests itself. Small deformations of these vacuum flows are expected to give rise to non-vacuum flows.

  7. The counterparts of charged solutions are of special interest. For ε=0 the solution (u0,ur)= (Q/r)(1,1) is a solution of the field equations outside the origin and corresponds to the electric field of a point charge Q. In fact, for ε=0 any ansatz (u0,ur)= f(r)(1,1) satisfies the field equations for a suitable choice of λ(r) since the ratio of the equations associated with j0 and jr gives an equation which is trivially satisfied. For ε=1 the ansatz (u0,ur)= (cosh(u),sinh(u)) expressing the solution in terms of a hyperbolic angle linearizes the field equation obtained by dividing the equations for j0 and jr to eliminate λ. The resulting equation is

    ∂r2u+ 2∂ru/r=0

    for the ordinary Coulomb potential and one obtains (u0,ur)= (cosh(u0+k/r), sinh(u0+k/r)). The charge of the solution approaches the value Q=sinh(u0)k at the limit r→ ∞ and diverges at the limit r→ 0. The charge increases exponentially as a function of 1/r near the origin rather than logarithmically as in QED and an interpretation in terms of thermal screening suggests itself. The hyperbolic ansatz might simplify considerably the field equations also in the general case.
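One can verify numerically that u(r)= u0+k/r indeed solves the linearized equation. The sketch below checks the residual of u'' + 2u'/r with central differences; the values of u0, k and the sample radii are arbitrary illustrative choices.

```python
# Central-difference check that u(r) = u0 + k/r solves u'' + 2 u'/r = 0.
def residual(r, u0=0.3, k=1.7, h=1e-4):
    """Finite-difference residual of the radial equation at radius r."""
    u = lambda x: u0 + k / x
    d1 = (u(r + h) - u(r - h)) / (2 * h)        # u'(r)
    d2 = (u(r + h) - 2 * u(r) + u(r - h)) / h**2  # u''(r)
    return d2 + 2 * d1 / r

for r in (0.5, 1.0, 5.0):
    print(r, residual(r))   # residual vanishes to numerical accuracy
```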

3. Similarities with TGD

There are strong similarities with TGD which suggests that the proposed model might provide a toy model for the dynamics defined by Kähler action.

  1. Also in TGD field equations are essentially hydrodynamical equations stating the conservation of various isometry charges. Gauge invariance is broken for the induced Kähler field although Kähler charge is conserved. There is huge vacuum degeneracy corresponding to vanishing of induced Kähler field and the interpretation is in terms of spin glass degeneracy.

  2. Also in TGD the dissipation rate vanishes for the known solutions of field equations and a possible interpretation is as space-time correlates for asymptotic non-dissipating self-organization patterns.

  3. In TGD framework massless extremals represent the analogs for superpositions of plane waves with fixed polarization and propagation direction, representing targeted and dispersionless propagation of signal. Gauge currents are light-like and non-vanishing for these solutions. The decomposition of the space-time surface to space-time sheets representing particles is a much more general counterpart for the asymptotic solutions of Maxwell hydrodynamics with vanishing λ.

  4. In TGD framework one can indeed consider the possibility that four-velocity assignable to a macroscopic quantum phase is proportional to Kähler potential. In this kind of situation one could speak of quantal Maxwell hydrodynamics. In this case however ε could be function of position.

If TGD is taken seriously, these similarities force one to ask whether Maxwell hydrodynamics might be interpreted as a nonlinear variant of real electrodynamics. One must however notice that in TGD the em field is proportional to the induced Kähler form only in special cases and is in general non-vanishing also for vacuum extremals.

For the construction of extremals of Kähler action see the chapter Basic Extremals of Kähler action.

Allais effect and TGD

Allais effect is a fascinating gravitational anomaly associated with solar eclipses. It was discovered originally by M. Allais, a Nobelist in the field of economy, and has been reproduced in several experiments but not as a rule. The experimental arrangement uses a so-called paraconical pendulum, which differs from the Foucault pendulum in that the oscillation plane of the pendulum can rotate within certain limits so that the motion occurs effectively on the surface of a sphere.

The articles Should the Laws of Gravitation Be Reconsidered: Part I,II,III? of Allais here and here and the summary article The Allais effect and my experiments with the paraconical pendulum 1954-1960 of Allais give a detailed summary of the experiments performed by Allais.

A. Experimental findings of Allais

Consider first a brief summary of the findings of Allais.

  1. In the ideal situation (that is in the absence of any other forces than gravitation of Earth) paraconic pendulum should behave like a Foucault pendulum. The oscillation plane of the paraconic pendulum however begins to rotate.

  2. Allais concludes from his experimental studies that the oscillation plane always approaches asymptotically a limiting plane and that the effect is only particularly spectacular during the eclipse. During a solar eclipse the limiting plane contains the line connecting Earth, Moon, and Sun. Allais explains this in terms of what he calls the anisotropy of space.

  3. Some experiments carried out during eclipse have reproduced the findings of Allais, some experiments not. In the experiment carried out by Jeverdan and collaborators in Romania it was found that the period of oscillation of the pendulum changes by Δ f/f≈ 5× 10-4, which happens to correspond to the constant v0=2-11 appearing in the formula of the gravitational Planck constant.

  4. There is also quite recent finding by Popescu and Olenici which they interpret as a quantization of the plane of oscillation of paraconic oscillator during solar eclipse (see this).

B. TGD inspired model for Allais effect

The basic idea of the TGD based model is that Moon absorbs some fraction of the gravitational momentum flow of Sun and in this manner partially screens the gravitational force of Sun in a disk like region having the size of Moon's cross section. Screening is expected to be strongest in the center of the disk. The predicted upper bound for the change of the oscillation frequency is slightly larger than the observed change which is highly encouraging.

1. Constant external force as the cause of the effect

The conclusions of Allais motivate the assumption that quite generally there can be additional constant forces affecting the motion of the paraconical pendulum besides Earth's gravitation. This means the replacement g→ g+Δg of the acceleration g due to Earth's gravitation. Δg can depend on time.

The system obeys still the same simple equations of motion as in the initial situation, the only change being that the direction and magnitude of effective Earth's acceleration have changed so that the definition of vertical is modified. If Δ g is not parallel to the oscillation plane in the original situation, a torque is induced and the oscillation plane begins to rotate. This picture requires that the friction in the rotational degree of freedom is considerably stronger than in oscillatory degree of freedom: unfortunately I do not know what the situation is.

The behavior of the system in the absence of friction can be deduced from the conservation laws of energy and angular momentum in the direction of g+Δg.

2. What causes the effect in normal situations?

The gravitational accelerations caused by Sun and Moon come first in mind as causes of the effect. Equivalence Principle implies that only relative accelerations causing analogs of tidal forces can be in question. In the GRT picture these accelerations correspond to a geodesic deviation between the surface of Earth and its center. The general form of the tidal acceleration would thus be the difference of the gravitational accelerations at these points:

Δg= -2GM[(Δr/r3) - 3(r•Δr)r/r5].

Here r denotes the relative position of the pendulum with respect to Sun or Moon. Δr denotes the position vector of the pendulum measured with respect to the center of Earth defining the geodesic deviation. The contribution in the direction of Δ r does not affect the direction of the Earth's acceleration and therefore does not contribute to the torque. Second contribution corresponds to an acceleration in the direction of r connecting the pendulum to Moon or Sun. The direction of this vector changes slowly.
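The tidal formula above is straightforward to implement as a vector function. The sketch below uses units with GM = 1 (an illustrative choice): a radial Δr gives a stretching term 4GMΔr/r3 along r, while a transverse Δr gives a compressive term of half that magnitude.

```python
# Tidal acceleration -2GM[dr/r^3 - 3(r.dr)r/r^5] for 3-vectors r, dr,
# following the formula in the text (note its overall factor 2GM).
def tidal(GM, r, dr):
    rn = sum(c * c for c in r) ** 0.5          # |r|
    dot = sum(a * b for a, b in zip(r, dr))    # r . dr
    return tuple(-2.0 * GM * (d / rn**3 - 3.0 * dot * c / rn**5)
                 for c, d in zip(r, dr))

# Radial displacement: stretching along r.
print(tidal(1.0, (10.0, 0.0, 0.0), (1.0, 0.0, 0.0)))
# Transverse displacement: compression along dr.
print(tidal(1.0, (10.0, 0.0, 0.0), (0.0, 1.0, 0.0)))
```

The transverse component is the one relevant for the torque on the pendulum, since only it changes the direction of the effective vertical.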

This would suggest that in the normal situation the tidal effect of Moon causes a gradually changing force mΔg creating a torque, which induces a rotation of the oscillation plane. Together with dissipation this leads to a situation in which the oscillation plane contains the vector Δg so that no torque is experienced. The limiting oscillation plane should rotate with the same period as Moon around Earth. Of course, if the effect is due to some other force than the gravitational forces of Sun and Earth, the paraconic oscillator would provide a manner to make this force visible and quantify its effects.

3. What happens during solar eclipse?

During the solar eclipse something exceptional must happen in order to account for the size of the effect. The finding of Allais that the limiting oscillation plane contains the line connecting Earth, Moon, and Sun implies that the anomalous acceleration Δg should be parallel to this line during the solar eclipse.

The simplest hypothesis is based on TGD based view about gravitational force as a flow of gravitational momentum in the radial direction.

  1. For stationary states the field equations of TGD for vacuum extremals state the conservation of the flow of gravitational momentum. Newton's equations suggest that planets and moons absorb a fraction of the gravitational momentum flow meeting them. The view that gravitation is mediated by gravitons which correspond to enormous values of the gravitational Planck constant in turn supports a Feynman diagrammatic view in which description as momentum exchange makes sense and is consistent with the idea about absorption. If Moon absorbs part of this momentum, the region of Earth screened by Moon receives a reduced amount of gravitational momentum and the gravitational force of Sun on the pendulum is reduced in the shadow.

  2. Unless the Moon as a coherent whole acts as the absorber of gravitational four momentum, one expects that the screening depends on the distance travelled by the gravitational flux inside Moon. Hence the effect should be strongest in the center of the shadow and weaken as one approaches its boundaries.

  3. The opening angle for the shadow cone is given in a good approximation by Δ Θ= RM/RE. Since the distances of Moon and Earth from Sun differ so little, the screened region has the same size as Moon. This corresponds roughly to a disk with radius .27× RE.

    The corresponding area is 7.3 per cent of the total transverse area of Earth. If total absorption occurs in the entire area, the total radial gravitational momentum received by Earth is in good approximation 92.7 per cent of normal during the eclipse and the natural question is whether this effective repulsive radial force increases the orbital radius of Earth during the eclipse.

    More precisely, the deviation of the total amount of gravitational momentum absorbed during solar eclipse from its standard value is an integral of the flux of momentum over time:

    Δ Pkgr = ∫ (dΔPkgr/dt)(S(t)) dt,

    (dΔPkgr/dt)(S(t))= ∫S(t) Jkgr(t) dS.

    This prediction could kill the model, at least in its classical form. If one takes seriously the quantum model for astrophysical systems predicting that planetary orbits correspond to Bohr orbits with gravitational Planck constant equal to GMm/v0, v0=2-11, there should be no effect on the orbital radius. The anomalous radial gravitational four-momentum could go to some other degrees of freedom at the surface of Earth.

  4. The rotation of the oscillation plane is largest if the plane of oscillation in the initial situation is as orthogonal as possible to the line connecting Moon, Earth and Sun. The effect vanishes when this line is in the initial plane of oscillation. This testable prediction might explain why some experiments have failed to reproduce the effect.

  5. The change of |g| to |g+Δg| induces a change of the oscillation frequency given by

    Δf/f= g•Δg/g2 = (Δg/g) cos(Θ).

    If the gravitational force of the Sun is screened, one has |g+Δg| > g and the oscillation frequency should increase. The upper bound for the effect is obtained from the gravitational acceleration of Sun at the surface of Earth given by v2E/rE≈ 6.0× 10-4g. One has

    |Δf|/f≤ Δg/g = v2E/rEg ≈ 6.0× 10-4.

    The fact that the increase(!) of the frequency observed by Jeverdan and collaborators is Δf/f≈ 5× 10-4 supports the screening model. Unfortunately, I do not have access to the paper of Jeverdan et al to find out whether the reported change of frequency, which corresponds to a 10 degree deviation from vertical is consistent with the value of cos(Θ) in the experimental arrangement.
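The upper bound used above is simple to reproduce. The sketch below computes the Sun's gravitational acceleration at Earth's orbit as v2E/rE and divides by the surface gravity; the orbital speed, distance, and g are standard textbook values (my inputs, not taken from the text).

```python
# Upper bound Delta g / g = v_E^2 / (r_E g) for the screening effect.
v_E = 2.978e4    # Earth's mean orbital speed, m/s (standard value)
r_E = 1.496e11   # mean Earth-Sun distance, m (standard value)
g = 9.81         # Earth's surface gravity, m/s^2 (standard value)

ratio = v_E**2 / (r_E * g)
print(ratio)
```

The result is ≈ 6.0× 10-4, matching the bound quoted above and lying slightly above the measured Δf/f≈ 5× 10-4.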

C. What kind of tidal effects are predicted?

If the model applies also in the case of Earth itself, new kind of tidal effects (for normal tidal effects see this) are predicted due to the screening of the gravitational effects of Sun and Moon inside Earth. At the night-side the paraconical pendulum should experience the gravitation of Sun as screened. Same would apply to the "night-side" of Earth with respect to Moon.

Consider first the differences of accelerations in the direction of the line connecting Earth to Sun/Moon: these effects are not essential for tidal effects proper. The estimate for the ratio of the orders of magnitude of these accelerations is given by

|Δgp(Sun)|/|Δgp(Moon)|= (MS/MM) (rM/rE)3≈ 2.17.

The order of magnitude follows from r(Moon)=.0026 AU and MM/MS=3.7× 10-8. The effects caused by Sun are two times stronger. These effects are of the same order of magnitude and can be compensated by a variation of the pressure gradients of the atmosphere and sea water.

The tangential accelerations are essential for tidal effects. The above estimate for the ratio of the contributions of Sun and Moon holds true also now and the tidal effects caused by Sun are stronger by a factor of two.

Consider now the new tidal effects caused by the screening.

  1. Tangential effects on day-side of Earth are not affected (night-time and night-side are of course different notions in the case of Moon and Sun). At the night-side screening is predicted to reduce tidal effects with a maximum reduction at the equator.

  2. Second class of new effects relate to the change of the normal component of the forces and these effects would be compensated by pressure changes corresponding to the change of the effective gravitational acceleration. The night-day variation of the atmospheric and sea pressures would be considerably larger than in Newtonian model.

The intuitive expectation is that the screening is maximum when the gravitational momentum flux travels longest path in the Earth's interior. The maximal difference of radial accelerations associated with opposite sides of Earth along the line of sight to Moon/Sun provides a convenient manner to distinguish between Newtonian and TGD based models:

|Δgp,N|= 4GM × RE/r3 ,

|Δgp,TGD|= 4GM × (1/r2).

The ratio of the effects predicted by TGD and Newtonian models would be

|Δgp,TGD|/|Δgp,N|= r/RE ,

rM/RE =60.2 , rS/RE= 2.34× 104.
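The distance ratios quoted above follow directly from standard mean distances and the Earth's radius (standard values, inserted here for illustration):

```python
# TGD-to-Newtonian tidal ratio r/R_E for Moon and Sun.
R_E = 6.378e6    # Earth's equatorial radius, m (standard value)
r_M = 3.844e8    # mean Earth-Moon distance, m (standard value)
r_S = 1.496e11   # mean Earth-Sun distance, m (standard value)

print(r_M / R_E)   # Moon: ~ 60, as quoted above
print(r_S / R_E)   # Sun: ~ 2.3e4, as quoted above
```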

The amplitude for the oscillatory variation of the pressure gradient caused by Sun would be

Δ|gradpS|=v2E/rE≈ 6.1× 10-4g

and the pressure gradient would be reduced during night-time. The corresponding amplitude in the case of Moon is given by

Δ |gradpS|/Δ|gradpM|= (MS/MM)× (rM/rS)3≈ 2.17.

Δ |gradpM| is in a good approximation smaller by a factor of 1/2 and given by

Δ|gradpM|=2.8× 10-4g.

Thus the contributions are of same order of magnitude.

One can imagine two simple qualitative killer predictions.

  1. Solar eclipse should induce anomalous tidal effects induced by the screening in the shadow of the Moon.
  2. The comparison of solar and moon eclipses might kill the scenario. The screening would imply that inside the shadow the tidal effects are of same order of magnitude at both sides of Earth for Sun-Earth-Moon configuration but weaker at night-side for Sun-Moon-Earth situation.

D. An interesting co-incidence

The measured value of Δ f/f=5× 10-4 is exactly equal to v0=2-11, which appears in the formula hbargr= GMm/v0 for the favored values of the gravitational Planck constant. The predictions are Δ f/f≤ Δ p/p≈ 6× 10-4. Powers of 1/v0 appear also as favored scalings of Planck constant in the TGD inspired quantum model of bio-systems based on dark matter (see this). This co-incidence would suggest the quantization formula

gS/gE= (MS/ME) × (RE/rE)2= v0

for the ratio of the gravitational accelerations caused by Sun and Earth on an object at the surface of Earth.
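The right-hand side (MS/ME)(RE/rE)2 can be evaluated with standard values (the mass ratio and distances below are standard inputs, not taken from the text):

```python
# Ratio of the Sun's acceleration at Earth's surface to surface gravity,
# compared with v0 = 2^-11.
MS_per_ME = 3.33e5   # Sun/Earth mass ratio (standard value)
R_E = 6.371e6        # Earth's mean radius, m (standard value)
r_E = 1.496e11       # mean Earth-Sun distance, m (standard value)

gS_per_gE = MS_per_ME * (R_E / r_E) ** 2
v0 = 2.0 ** -11
print(gS_per_gE, v0, gS_per_gE / v0)
```

The ratio comes out ≈ 6× 10-4 against v0 ≈ 4.9× 10-4, so the co-incidence holds at the 20-30 per cent level rather than exactly.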

E. Summary of the predicted new effects

Let us sum up the basic predictions of the model.

  1. The first prediction is the gradual increase of the oscillation frequency of the paraconical pendulum by Δ f/f≤ 6× 10-4 to a maximum and back during night-time. Also a periodic variation of the frequency and a periodic rotation of the oscillation plane with period coinciding with Moon's rotation period is predicted.

  2. A paraconical pendulum with initial position, which corresponds to the resting position in the normal situation, should begin to oscillate during solar eclipse. This effect is testable by fixing the pendulum to the resting position and releasing it during the eclipse. The amplitude of the oscillation corresponds to the angle between g and g+Δg given in a good approximation by

    sin[Θ(g,g+Δg)]= (Δg/g)sin[Θ(Δg,g+Δg)].

    An upper bound for the amplitude would be Θ≤ 6× 10-4, which corresponds to .03 degrees.

  3. Gravitational screening should cause a reduction of tidal effects at the "night-side" of Moon/Sun. The reduction should be maximal at "midnight". This reduction together with the fact that the tidal effects of Moon and Sun at the day-side are of the same order of magnitude could explain some anomalies known to be associated with the tidal effects. A further prediction is the day-night variation of the atmospheric and sea pressure gradients with amplitude which is for Sun 6× 10-4g and for Moon 1.3× 10-3g.

To sum up, the predicted anomalous tidal effects and the explanation of the limiting oscillation plane in terms of stronger dissipation in rotational degree of freedom could kill the model.

For details see the chapter The Relationship Between TGD and GRT.

Updated TGD Inspired Cosmology

I have updated "TGD Inspired Cosmology". Here is the updated abstract.

A proposal for what might be called TGD inspired cosmology is made. The basic ingredient of this cosmology is the TGD counterpart of the cosmic string. It is found that the many-sheeted space-time concept; the new view about the relationship between inertial and gravitational four-momenta; the basic properties of the cosmic strings; zero energy ontology; the hierarchy of dark matter with levels labelled by arbitrarily large values of Planck constant; the existence of the limiting temperature (as in string model, too); the assumption about the existence of the vapor phase dominated by cosmic strings; and quantum criticality imply a rather detailed picture of the cosmic evolution, which differs from that provided by the standard cosmology in several respects but has also strong resemblances with the inflationary scenario.

TGD inspired cosmology in its recent form relies on an ontology differing dramatically from that of GRT based cosmologies. Zero energy ontology states that all physical states have vanishing net quantum numbers so that all matter is creatable from vacuum. The hierarchy of dark matter identified as macroscopic quantum phases labelled by arbitrarily large values of Planck constant is second aspect of the new ontology. The values of the gravitational Planck constant assignable to space-time sheets mediating gravitational interaction are gigantic. This implies that TGD inspired late cosmology might decompose into stationary phases corresponding to stationary quantum states in cosmological scales and critical cosmologies corresponding to quantum transitions changing the value of the gravitational Planck constant and inducing an accelerated cosmic expansion.

1. Zero energy ontology

The construction of quantum theory leads naturally to zero energy ontology stating that everything is creatable from vacuum. Zero energy states decompose into positive and negative energy parts having identification as initial and final states of particle reaction in time scales of perception longer than the geometro-temporal separation T of positive and negative energy parts of the state. If the time scale of perception is smaller than T, the usual positive energy ontology applies.

In zero energy ontology inertial four-momentum is a quantity depending on the temporal time scale T used and in time scales longer than T the contribution of zero energy states with parameter T1<T to four-momentum vanishes. This scale dependence alone implies that it does not make sense to speak about conservation of inertial four-momentum in cosmological scales. Hence it would be in principle possible to identify inertial and gravitational four-momenta and achieve strong form of Equivalence Principle. It however seems that this is not the correct approach to follow.

2. Dark matter hierarchy and hierarchy of Planck constants

Dark matter revolution with levels of the hierarchy labelled by values of Planck constant forces a further generalization of the notion of imbedding space and thus of space-time. One can say that imbedding space is a book-like structure obtained by gluing together an infinite number of copies of the imbedding space like pages of a book: two copies characterized by singular discrete bundle structure are glued together along a 4-dimensional set of common points. These points have physical interpretation in terms of quantum criticality. Particle states belonging to different sectors (pages of the book) can interact via field bodies representing space-time sheets which have parts belonging to two pages of this book.

3. Quantum criticality

TGD Universe is a quantum counterpart of a statistical system at critical temperature. As a consequence, topological condensate is expected to possess a hierarchical, fractal-like structure containing topologically condensed 3-surfaces with all possible sizes. Both Kähler magnetized and Kähler electric 3-surfaces ought to be important and string like objects indeed provide a good example of Kähler magnetic structures important in TGD inspired cosmology. In particular space-time is expected to be many-sheeted even at cosmological scales and ordinary cosmology must be replaced with many-sheeted cosmology. The presence of a vapor phase consisting of free cosmic strings and possibly also elementary particles is a second crucial aspect of TGD inspired cosmology.

Quantum criticality of TGD Universe supports the view that many-sheeted cosmology is in some sense critical. Criticality in turn suggests fractality. Phase transitions, in particular the topological phase transitions giving rise to new space-time sheets, are (quantum) critical phenomena involving no scales. If the curvature of the 3-space does not vanish, it defines scale: hence the flatness of the cosmic time=constant section of the cosmology implied by the criticality is consistent with the scale invariance of the critical phenomena. This motivates the assumption that the new space-time sheets created in topological phase transitions are in good approximation modellable as critical Robertson-Walker cosmologies for some period of time at least.

These phase transitions are between stationary quantum states having stationary cosmologies as space-time correlates: also these cosmologies are determined uniquely apart from single parameter.

4. Only sub-critical cosmologies are globally imbeddable

TGD allows global imbedding of subcritical cosmologies. A partial imbedding of one-parameter families of critical and overcritical cosmologies is possible. The infinite size of the horizon for the imbeddable critical cosmologies is in accordance with the presence of arbitrarily long range fluctuations at criticality and guarantees the average isotropy of the cosmology. Imbedding is possible for some critical duration of time. The parameter labelling these cosmologies is a scale factor characterizing the duration of the critical period. These cosmologies have the same optical properties as inflationary cosmologies. Critical cosmology can be regarded as a 'Silent Whisper amplified to Bang' rather than 'Big Bang' and is transformed to hyperbolic cosmology before its imbedding fails. Split strings decay to elementary particles in this transition and give rise to seeds of galaxies. In some later stage the hyperbolic cosmology can decompose to disjoint 3-surfaces. Thus each sub-cosmology is analogous to a biological growth process leading eventually to death.

5. Fractal many-sheeted cosmology

The critical cosmologies can be used as building blocks of a fractal cosmology containing cosmologies containing ... cosmologies. p-Adic length scale hypothesis allows a quantitative formulation of the fractality. Fractal cosmology predicts that the cosmos has essentially the same optical properties as the inflationary scenario but avoids the prediction of an unknown vacuum energy density. Fractal cosmology explains the paradoxical result that the observed density of matter is much lower than the critical density associated with the largest space-time sheet of the fractal cosmology. Also the observation that some astrophysical objects seem to be older than the Universe finds a nice explanation.

6. Equivalence Principle in TGD framework

The failure of Equivalence Principle in the TGD Universe was something which was very difficult to take seriously, and this led to a long series of ad hoc constructs trying to save Equivalence Principle instead of trying to characterize the failure, to find out whether it has catastrophic consequences, and to relate it to the recent problems of cosmology, in particular the necessity to postulate a somewhat mysterious dark energy characterized by cosmological constant. The irony was that all this was possible since TGD allows one to define precisely both inertial and gravitational four-momenta and generalized gravitational charges assignable to isometries of M4× CP2.

It indeed turns out that Equivalence Principle can hold true for elementary particles having so called CP2 type extremals as space-time correlates and for hadrons having string like objects as space-time correlates. This is more or less enough to have consistency with experimental facts. Equivalence Principle fails for vacuum extremals representing Robertson-Walker cosmologies and for all vacuum extremals representing solutions of Einstein's equations. The failure is very dramatic for the string like objects that I have used to call cosmic strings. These failures can however be understood in zero energy ontology.

7. Cosmic strings as basic building blocks of TGD inspired cosmology

Cosmic strings are the basic building blocks of TGD inspired cosmology, and all structures including large voids, galaxies, stars, and even planets can be seen as pearls in cosmic fractal necklaces consisting of cosmic strings containing smaller cosmic strings linked around them containing... During cosmological evolution the cosmic strings are transformed to magnetic flux tubes with smaller Kähler string tension, and these structures are also key players in TGD inspired quantum biology.

Cosmic strings are of form X2× Y2 subset of M4× CP2, where X2 corresponds to the string orbit and Y2 is a complex sub-manifold of CP2. The gravitational string tension of a cosmic string is Tgr=(1-g)/4G, where g is the genus of Y2. For g=1 the tension vanishes. When Y2 corresponds to a homologically trivial geodesic sphere of CP2, the presence of the Kähler magnetic field is however expected to generate inertial mass, which also gives rise to gravitational mass visible in the asymptotic behavior of the metric of the space-time sheet at which the cosmic string has suffered topological condensation. The corresponding string tension is in the same range as that for GUT strings and explains the constant velocity spectrum of distant stars around galaxies.

For g>1 the gravitational string tension is negative. This inspires a model for large voids as space-time regions containing a g>1 cosmic string with negative gravitational energy, repelling the galactic g=1 cosmic strings to the boundaries of the large void.
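The genus dependence above is simple enough to tabulate. A minimal sketch in Python (purely illustrative, units with G=1; the function name is my own):

```python
# Sign of the gravitational string tension T_gr = (1-g)/4G as a
# function of the genus g of Y^2 (illustrative, units with G = 1).

def gravitational_tension(g, G=1.0):
    """Return T_gr = (1 - g)/(4*G)."""
    return (1 - g) / (4 * G)

for g in range(4):
    print(g, gravitational_tension(g))
# positive for g=0, zero for g=1, negative for g>1
```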

These voids would participate in the cosmic expansion only in an average sense. During stationary periods the quantum states would be modellable using stationary cosmologies, and during the phase transitions increasing the gravitational Planck constant, and thus the size of the large void, critical cosmologies would be the appropriate description. The acceleration of cosmic expansion predicted by critical cosmologies can be naturally assigned to these periods. Classically the quantum phase transition would be induced when the galactic strings are driven to the boundary of the large void by the antigravity of big cosmic strings with negative gravitational energy. Large values of Planck constant are crucial for the understanding of living matter, so that gravitation would play a fundamental role also in the evolution of life and intelligence.

Many-sheeted fractal cosmology containing both hyperbolic and critical space-time sheets based on cosmic strings suggests an explanation for several puzzles of GRT based cosmology, such as the dark matter problem, the origin of matter-antimatter asymmetry, the problem of cosmological constant and the mechanism of accelerated expansion, the problem of several Hubble constants, and the existence of stars apparently older than the Universe. Under natural assumptions TGD predicts the same optical properties of the large scale Universe as the inflationary scenario does. The recent balloon experiments however favor TGD inspired cosmology.

For details see the updated chapter TGD Inspired Cosmology.

Updated The Relationship Between TGD and GRT

I am continuing the updating of the chapters related to the relationship of TGD and GRT. The updatings are due to the zero energy ontology, the hierarchy of dark matter phases labelled by values of Planck constant, and the progress in the understanding of Equivalence Principle. I just finished the elimination of the worst trash from "The Relationship between TGD and GRT" and attach the abstract below.

In this chapter the recent view about TGD as a Poincare invariant theory of gravitation is discussed. Radically new views about ontology were necessary before it was possible to see what had been there all the time. Zero energy ontology states that all physical states have vanishing net quantum numbers. The hierarchy of dark matter, identified as macroscopic quantum phases labelled by arbitrarily large values of Planck constant, is the second aspect of the new ontology.

1. The fate of Equivalence Principle

There seems to be a fundamental obstacle against the existence of a Poincare invariant theory of gravitation, related to the notions of inertial and gravitational energy.

  1. The conservation laws of inertial energy and momentum assigned to the fundamental action would be exact in this kind of theory. Gravitational four-momentum can be assigned to the curvature scalar as Noether currents and is thus completely well-defined, unlike in GRT. Equivalence Principle requires that inertial and gravitational four-momenta are identical. This is satisfied if the curvature scalar defines the fundamental action principle crucial for the definition of quantum TGD. Curvature scalar as a fundamental action is however non-physical and had to be replaced with the so called Kähler action.

  2. One can question Equivalence Principle because the conservation of gravitational four-momentum seems to fail at cosmological scales.

  3. For the extremals of Kähler action the Noether currents associated with the curvature scalar are well-defined but non-conserved. Also for vacuum extremals satisfying Einstein's equations gravitational energy momentum is not conserved, and the non-conservation becomes large for small values of cosmic time. This looks fine, but the problem is whether the failure of Equivalence Principle is so serious that it leads to a conflict with experimental facts.

It turns out that Equivalence Principle can hold true for elementary p"../articles/ having so called CP2 type extremals as space-time correlates and for hadrons having string like objects as space-time correlates. This is more or less enough to have consistency with experimental facts. Equivalence Principle fails for vacuum extremals representing Robertson-Walker cosmologies and for all vacuum extremals representing solutions of Einstein's equations. The failure is very dramatic for string like objects that I have used to call cosmic strings. These failures can be however understood in zero energy ontology.

2. The problem of cosmological constant

A further implication of the dark matter hierarchy is that astrophysical systems correspond to stationary states analogous to atoms and do not participate in the cosmic expansion in a continuous manner but via discrete quantum phase transitions in which the gravitational Planck constant increases. By the quantum criticality of these phase transitions, critical cosmologies are excellent candidates for modelling them. Imbeddable critical cosmologies are unique apart from a parameter determining their duration and represent accelerated cosmic expansion, so that there is no need to introduce cosmological constant.

It indeed turns out possible to understand these critical phases in terms of quantum phase transitions increasing the size of large voids, modelled in terms of "big" cosmic strings with negative gravitational mass whose repulsive gravitation drives "galactic" cosmic strings with positive gravitational mass to the boundaries of the void. In this framework the cosmological constant like parameter does not characterize the density of dark energy but that of dark matter, identifiable as quantum phases with large Planck constant.

A further problem is that the naive estimate for the cosmological constant is by a factor 10^120 larger than its value deduced from the accelerated expansion of the Universe. In TGD framework the resolution of the problem comes naturally from the fact that large voids are quantum systems which follow the cosmic expansion only during the quantum critical phases.

p-Adic fractality predicts that the cosmological constant is reduced by a power of 2 in phase transitions occurring at times T(k) propto 2^(k/2), which correspond to p-adic time scales. These phase transitions would naturally correspond to quantum phase transitions increasing the size of the large voids, during which the critical cosmology predicting accelerated expansion naturally applies. On the average Λ(k) behaves as 1/a^2, where a is the light-cone proper time. This predicts correctly the order of magnitude for the observed value of Λ.
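A quick numerical way to see the 1/a^2 behavior: if Λ is halved at each step k and the steps occur at times T(k) proportional to 2^(k/2), then Λ(k)·T(k)^2 is k-independent. A hedged sketch (the normalizations T0 and Lambda0 are arbitrary placeholders, not values from the text):

```python
import math

# If Lambda is reduced by a power of 2 at times T(k) = T0 * 2**(k/2),
# then Lambda(k) * T(k)**2 is k-independent, i.e. Lambda ~ 1/a^2 on
# the average. T0 and Lambda0 are arbitrary normalizations.
T0, Lambda0 = 1.0, 1.0

def T(k):
    return T0 * 2 ** (k / 2)

def Lambda(k):
    return Lambda0 * 2 ** (-k)

products = [Lambda(k) * T(k) ** 2 for k in range(0, 20, 2)]
assert all(math.isclose(p, products[0]) for p in products)
print(products[0])  # constant, equal to Lambda0 * T0**2
```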

3. Topics of the chapter

The topics discussed in the chapter are the following.

  1. The relationship between TGD and GRT is discussed applying the recent views about the relationship of inertial and gravitational masses, the zero energy ontology, and the dark matter hierarchy. One of the basic outcomes is the TGD based understanding of the cosmological constant as a characterizer of dark matter density.

  2. The notion of many-sheeted space-time, interpreted as a hierarchy of smoothed out space-times produced by Nature itself rather than by the renormalization group theorist only, is discussed. The dynamics of what might be called gravitational charges is discussed, the basic idea being that the structure of Einstein's tensor automatically implies that the metric carries information about the sources of the gravitational field without any assumption about the variational principle.

  3. The theory is applied to the vacuum extremal imbeddings of Reissner-Nordström and Schwarzschild metrics.

  4. A model for the final state of a star indicates that Z0 force, presumably created by dark matter, might have an important role in the dynamics of compact objects. During year 2003, more than a decade after the formulation of the model, the discovery of the connection between supernovae and gamma ray bursts provided strong support for the axial magnetic and Z0 magnetic flux tube structures predicted by the model for the final state of a rotating star. Two years later the interpretation of the predicted long range weak forces as being caused by dark matter emerged.

    The recent progress in understanding hadronic mass calculations has led to the identification of so called super-canonical bosons and their super-counterparts as basic building blocks of hadrons. This notion leads also to a microscopic description of neutron stars and blackholes in terms of highly entangled string like objects at Hagedorn temperature, in a very precise sense analogous to gigantic hadrons.

  5. There is experimental evidence for gravimagnetic fields in rotating superconductors which are by 20 orders of magnitude stronger than predicted by general relativity. A TGD based explanation of these observations is discussed.

For details see the updated chapter The Relationship Between TGD and GRT.

Updated Cosmic Strings

Cosmic strings belong to the basic extremals of the Kähler action. The upper bound for the string tension of the cosmic strings is T≈.5×10^-6/G, which is in the same range as the string tension of GUT strings. This makes them very interesting cosmologically, although TGD cosmic strings have otherwise practically nothing to do with their GUT counterparts.
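For scale, the bound T ≈ .5×10^-6/G (units with c=1) corresponds to the familiar dimensionless GUT-string parameter Gμ ~ 5×10^-7. A hedged back-of-envelope conversion to SI mass per unit length:

```python
# Convert the dimensionless string tension G*mu ~ 0.5e-6 (c = 1 units)
# to a mass per unit length in SI units. Rough illustrative estimate.
c = 2.998e8      # speed of light, m/s
G = 6.674e-11    # Newton's constant, m^3 kg^-1 s^-2
G_mu = 0.5e-6    # dimensionless string tension G*mu (c = 1)

mu = G_mu * c ** 2 / G   # mass per unit length, kg/m
print(f"mass per unit length ~ {mu:.1e} kg/m")  # ~6.7e20 kg/m
```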

1. Basic ideas

The understanding of cosmic strings has developed only slowly and has required dramatic modifications of existing views.

  1. Zero energy ontology implies that the inertial energy and all quantum numbers of the Universe vanish and physical states are zero energy states decomposing into pairs of positive and negative energy states. Positive energy ontology is a good approximation under certain assumptions.

  2. Dark matter hierarchy whose levels are labelled by gigantic values of the gravitational Planck constant associated with dark matter is the second essential piece of the picture.

  3. The identification of gravitational four-momentum as the Noether charge associated with the curvature scalar looks in retrospect completely obvious and resolves long standing ambiguities. This identification explains the non-conservation of gravitational four-momentum, which is in contrast with the conservation of inertial four-momentum and implies a breaking of Equivalence Principle. There are good reasons to believe that this breaking can be avoided for elementary particles and hadronic strings.

  4. The gravitational energy of string like objects X2× Y2 subset of M4× CP2 corresponds to gravitational string tension Tgr= (1-g)/4G, where g is the genus of Y2. The tension is negative for g>1. The gravitational string tension is by a factor of order 10^7 larger than the inertial string tension. This leads to the hypothesis that g>1 "big" strings in the centers of large voids generate a repulsive gravitational force driving g=1 galactic strings to the boundaries of the voids. If the total gravitational mass of strings inside a void vanishes, the breaking of Equivalence Principle occurs only below the size scale of the void.

  5. The basic question is whether one can model the exterior region of a topologically condensed cosmic string using General Relativity. The exterior metric of the cosmic string corresponds to a small deformation of a vacuum extremal. The angular defect and surplus associated with the exterior metrics extremizing the curvature scalar can be much smaller than those implied by vacuum Einstein's equations. The conjecture is that the exterior metric of a g=1 galactic string conforms with Newtonian intuitions and thus explains the constant velocity spectrum of distant stars, if one assumes that galaxies are organized into linear structures along long strings like pearls in a necklace.

2. Critical and over-critical cosmologies involve accelerated cosmic expansion

In TGD framework critical and over-critical cosmologies are unique apart from a single parameter telling their duration and predict the recently discovered accelerated cosmic expansion. Critical cosmologies are naturally associated with quantum critical phase transitions involving a change of the gravitational Planck constant. A natural candidate for such a transition is the increase of the size of a large void after galactic strings have been driven to its boundary. During the phase transitions connecting two stationary cosmologies (extremals of the curvature scalar), also determined apart from a single parameter, accelerated expansion is predicted to occur. These transitions are completely analogous to quantum transitions at atomic level.

The proposed microscopic model predicts that the TGD counterpart of the quantity ρ+3p for cosmic strings is negative during the phase transition, which implies accelerated expansion. Dark energy is replaced in TGD framework with dark matter, indeed predicted by TGD, and its fraction is .74 as in the standard scenario. The cosmological constant thus characterizes the density of dark matter rather than dark energy in TGD Universe.

The sizes of large voids stay constant during stationary periods, which means that also the cosmological constant is piecewise constant. p-Adic length scale fractality predicts that Λ scales as 1/L^2(k) as a function of the p-adic scale characterizing the space-time sheet of the void. The order of magnitude for the recent value of the cosmological constant comes out correctly. The gravitational energy density described by the cosmological constant is identifiable as that associated with topologically condensed cosmic strings and the magnetic flux tubes to which they are gradually transformed during cosmological evolution.

3. Cosmic strings and generation of structures

  1. In zero energy ontology cosmic strings must be created from vacuum as zero energy states consisting of pairs of strings with opposite time orientations and opposite inertial energies.

  2. The counterpart of Hawking radiation provides a mechanism by which cosmic strings can generate ordinary matter. The splitting of cosmic strings followed by a "burning" of the string ends provides a second manner to generate visible matter. Matter-antimatter asymmetry would result if antimatter resides inside cosmic strings and matter in the exterior region.

  3. Zero energy ontology has deep implications for the cosmic and ultimately also for the biological evolution (magnetic flux tubes play a fundamental role in TGD inspired biology and cosmic strings are limiting cases of them). The arrows of geometric time are opposite for the strings and also for positive energy matter and negative energy antimatter. This implies a competition between two dissipative time developments proceeding in opposite directions of geometric time and looking like self-organization and even self-assembly from the point of view of each other. This resolves the paradoxes created by gravitational self-organization contra second law of thermodynamics. The large p-adic entropy implied by so called super-canonical matter at cosmic strings resolves the well-known entropy paradox.

  4. p-Adic fractality and simple quantitative observations lead to the hypothesis that cosmic strings are responsible for the evolution of astrophysical structures in a very wide length scale range. Large voids with size of order 10^8 light years can be seen as structures with cosmic strings wound around the boundaries of the void. Galaxies correspond to the same structure with smaller size, linked around the supra-galactic strings. This conforms with the finding that galaxies tend to be grouped along linear structures. Simple quantitative estimates show that even stars and planets could be seen as structures formed around cosmic strings of appropriate size. Thus the Universe could be seen as a fractal cosmic necklace consisting of cosmic strings linked like pearls around longer cosmic strings linked like...

4. Cosmic strings, gamma ray bursts, and supernovae

During year 2003 two important findings related to cosmic strings were made.

  1. A correlation between supernovae and gamma ray bursts was observed.

  2. Evidence emerged that some unknown particles of mass m≈2me, decaying to gamma rays and/or electron-positron pairs annihilating immediately, serve as signatures of dark matter. These findings challenge the identification of cosmic strings and/or their decay products as dark matter, and also the idea that gamma ray bursts correspond to cosmic fire crackers formed by the decaying ends of cosmic strings. This forces an updating of the more than decade old rough vision about topologically condensed cosmic strings and about gamma ray bursts described in this chapter.
According to the updated model, cosmic strings transform in topological condensation to magnetic flux tubes of which they represent a limiting case. Primordial magnetic flux tubes forming ferromagnet like structures become seeds for gravitational condensation leading to the formation of stars and galaxies. The TGD based model for the asymptotic state of a rotating star as a dynamo leads to the identification of the predicted magnetic flux tube at the rotation axis of the star as a Z0 magnetic flux tube of primordial origin. Besides the Z0 magnetic flux tube structure also a magnetic flux tube structure exists at a different space-time sheet, but it is in general not parallel to the Z0 magnetic structure. This structure cannot have primordial origin (the magnetic field of a star can even flip its polarity).

The flow of matter along the Z0 magnetic (rotation) axis generates synchrotron radiation, which escapes as a precisely targeted beam along the magnetic axis and leaves the star. The beam is identified as the rotating light beam associated with ordinary neutron stars. During the core collapse leading to the supernova this beam becomes a gamma ray burst. The mechanism is very much analogous to the squeezing of toothpaste from the tube. The fact that all nuclei are fully ionized Z0 ions, the Z0 charge unbalance caused by the ejection of neutrinos, and the radial compression make the effect extremely strong, so that there are hopes of understanding the observed incredibly high polarization of 80 +/- 20 per cent.

TGD suggests the identification of the particles of mass m≈2me accompanying dark matter as lepto-pions formed by color excited leptons, topologically condensed at magnetic flux tubes having thickness of about the lepto-pion Compton length. Lepto-pions would serve as signatures of dark matter, whereas dark matter itself would correspond to the magnetic energy of topologically condensed cosmic strings transformed to magnetic flux tubes.

For details see the updated chapter Cosmic Strings.

A new anomaly in Cosmic Microwave Background

In the comment section of Not-Even-Wrong 'island' gave a link to an article about the observation of a new anomaly in the cosmic microwave background. The article Extragalactic Radio Sources and the WMAP Cold Spot by L. Rudnick, S. Brown, and L. R. Williams tells that a cold spot in the microwave background has been discovered. The amplitude of the temperature variation is -73 microK at maximum. The authors argue that the variation can be understood if there is a void at redshift z≤ 1, which corresponds to d≤ 1.4×10^10 ly. The void would have a radius of 140 Mpc, making 5.2×10^8 ly.

In New Scientist, there is a story titled Cosmologists spot a 'knot' in space-time about Neil Turok's recent talk at PASCOS entitled "Is the Cold Spot in the CMB a Texture?". Turok has proposed that the cold spot results from a topological defect associated with a cosmic string of GUT type theories.

1. Comparison with sizes and distances of large voids

It is interesting to compare the size and distance of the argued CMB void to those of large voids. The largest known void has a size of 163 Mpc, making 5.3×10^8 ly, which does not differ significantly from the size 5.2×10^8 ly of the CMB void. Its distance is 201 Mpc, making about 6.5×10^8 ly, roughly by a factor 1/22 smaller than that of the CMB void.

Is it only an accident that the size of the CMB void is the same as that of the largest large void? If large voids follow the cosmic expansion in a continuous manner, the size of the CMB void should be roughly 1/22 times smaller. Could it be that large voids follow the cosmic expansion by rather seldomly occurring discrete jumps? TGD based quantum astrophysics indeed predicts that the expansion occurs in discrete jumps.
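The comparison can be checked with a short computation (hedged; 1 Mpc ≈ 3.26×10^6 ly, the Mpc figures are those quoted above):

```python
# Check the size/distance figures quoted above (illustrative).
MPC_IN_LY = 3.262e6                 # one megaparsec in light years

void_size = 163 * MPC_IN_LY         # largest known large void
void_dist = 201 * MPC_IN_LY         # its distance
cmb_void_dist = 1.4e10              # CMB void distance bound (ly)

print(f"void size     ~ {void_size:.1e} ly")   # ~5.3e8 ly
print(f"void distance ~ {void_dist:.1e} ly")   # ~6.6e8 ly
print(f"distance ratio ~ 1/{cmb_void_dist / void_dist:.0f}")
```

The distance ratio comes out near the factor 1/22 quoted above.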

2. TGD based quantum model for astrophysical systems

A brief summary of TGD based quantum model of astrophysical systems is in order.

  1. TGD based quantum model for astrophysical systems relies on the evidence that planetary orbits (also those of known exoplanets) correspond to Bohr orbits with a gigantic value of gravitational Planck constant hbargr= GMm/v0 characterizing the gravitational interaction between masses M and m. Nottale originally introduced this quantization rule and assigned it to hydrodynamics.

  2. TGD inspired hypothesis is that the quantization represents genuine quantum physics and is due to the fact that dark matter corresponds to a hierarchy whose levels are labelled by the values of Planck constant. Visible matter bound to dark matter would make this quantization visible. Putting it more precisely, each of the space-time sheets mediating interactions (electro-weak, color, gravitational) between two physical systems is characterized by its own Planck constant, which can have arbitrarily large values. For gravitational interactions the value of this Planck constant is gigantic.

  3. The implication is that astrophysical systems are analogous to atoms and molecules and thus correspond to quantum mechanical stationary states having constant size in the local M4 coordinates (t,rM,Ω), related to the Robertson-Walker coordinates (a,r,Ω) by a^2= t^2-rM^2, r= rM/a. This means that their M4 radius RM remains constant whereas the coordinate radius r decreases as 1/a rather than being constant as for comoving matter.

  4. Astrophysical quantum systems can however participate in the cosmic expansion by discrete quantum jumps in which Planck constant increases. This means that the parameter v0 appearing in the gravitational Planck constant hbargr= GMm/v0 is reduced in a discrete manner so that the quantum scale of the system increases.

  5. This applies also to gravitational self-interactions, for which one has hbargr= GM^2/v0. During the final stages of a star phase transitions reduce the value of Planck constant, and the prediction is that the collapse to a neutron star or blackhole should occur via phase transitions increasing v0. For the blackhole state the value of v0 is maximal and equals 1/2.

  6. Planetary Bohr orbit model explains the finding by Masreliez that planetary radii seem to decrease when expressed in terms of the cosmic radial coordinate r =rM/a (see this and this). The prediction is that planetary systems should experience now and then a phase transition in which the size of the system increases by an integer factor n. The favored values of n are ruler-and-compass integers expressible as products of distinct Fermat primes (five of them are known) and a power of 2. The most favored changes of v0 are by powers of 2. This would explain why inner and outer planets correspond to values of v0 differing by a factor of 1/5.
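The quantization in item 1 implies Bohr radii r_n = n^2 GM/v0^2 (c=1), since r_n = n^2 hbargr^2/(GMm^2) with hbargr = GMm/v0. A hedged sketch for the inner planets, with v0 = 2^-11 and rough solar-system numbers that are my own inputs, not from the text:

```python
# Bohr radii r_n = n^2 * GM/v0^2 following from hbar_gr = GMm/v0.
# GM_SUN is the solar GM/c^2 in metres; observed radii are rough values.
GM_SUN = 1.48e3          # metres (GM/c^2 for the Sun)
v0 = 2.0 ** -11          # dimensionless velocity parameter

def bohr_radius(n):
    return n ** 2 * GM_SUN / v0 ** 2   # metres

planets = {"Mercury": (3, 5.79e10), "Venus": (4, 1.08e11), "Earth": (5, 1.50e11)}
for name, (n, r_obs) in planets.items():
    r = bohr_radius(n)
    print(f"{name}: n={n}, predicted {r:.2e} m, observed {r_obs:.2e} m")
```

With these inputs the predicted and observed radii agree at the 10 per cent level.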

3. The explanation of CMB void

Concerning the explanation of CMB void one can consider two options.

  1. If the large CMB void is similar to the standard large voids, it should have emerged much earlier than these, or the durations of constant value of v0 could be rather long, so that also the nearby large voids should have existed for a very long time with the same size.

  2. One can also consider the possibility that the CMB void is a fractally scaled up variant of the large void. The p-adic length scale of the CMB void would be Lp = L(k), p≈ 2^k, k= 263 (prime). If it has participated in the cosmic expansion in the average sense, its recent p-adic size scale would be about 16 < 22 times larger, and the p-adic scale would be L(k), k=271 (prime).
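Since p-adic length scales obey L(k) proportional to 2^(k/2), the jump from k = 263 to k = 271 scales the void size by 2^4 = 16, to be compared with the continuous-expansion factor of about 22 above. A one-line check:

```python
# Ratio of p-adic length scales L(271)/L(263) with L(k) ~ 2**(k/2).
scale_ratio = 2 ** ((271 - 263) / 2)
print(scale_ratio)  # 16.0
```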

For TGD inspired vision about astrophysics see the chapters TGD and Astrophysics and Quantum Astrophysics.

General View About Physics in Many-Sheeted Space-Time: Part I,II

In the former chapter "General View About Physics in Many-Sheeted Space-Time" the notion of many-sheeted space-time and the understanding of coupling constant evolution at space-time level were discussed with emphasis on the notions of topological condensation and evaporation. The notion of many-sheeted space-time used was roughly that as it was around 1990, and 17 years is a long time.

The fusion of real and various p-adic physics to a single coherent whole by generalizing the notion of number, the generalization of the notion of imbedding space to allow a mathematical representation of the dark matter hierarchy based on a dynamical and quantized Planck constant, the parton level formulation of TGD using light-like 3-surfaces as basic dynamical objects, and the so called zero energy ontology force a considerable generalization of the view about space-time.

For these reasons I decided to add a chapter in which the picture about many-sheeted space-time is completed by a summary of the new, rather dramatic developments in quantum TGD that have occurred during the last few years.

For more details and background see the new chapter General View About Physics in Many-Sheeted Space-Time: Part II.

Blackhole production at LHC and replacement of ordinary blackholes with super-canonical blackholes

Tommaso Dorigo has an interesting posting about blackhole production at LHC. I have never taken this idea seriously, but in a well-defined sense TGD predicts blackholes associated with super-canonical gravitons with a strong gravitational constant defined by the hadronic string tension. The proposal is that super-canonical blackholes have already been seen in Hera, RHIC, and the strange cosmic ray events (see the previous posting). Ordinary blackholes are naturally replaced with super-canonical blackholes in TGD framework, which would mean a profound difference between TGD and string models.

Super-canonical blackholes are dark matter in the sense that they have no electro-weak interactions, and they could have Planck constant larger than the ordinary one so that the value of αK=1/4 is reduced. The condition that αK has the same value for the super-canonical phase as it has for ordinary gauge boson space-time sheets gives hbar=26×hbar0. With this assumption the size of the baryonic super-canonical blackholes would be 46 fm, the size of a big nucleus, and would define the fundamental length scale of nuclear physics.

1. RHIC and super-canonical blackholes

In high energy collisions of nuclei at RHIC the formation of super-canonical blackholes via the fusion of nucleonic space-time sheets would give rise to what has been christened a color glass condensate. Baryonic super-canonical blackholes of M107 hadron physics would have mass 934.2 MeV, very near to the proton mass. The mass of their M89 counterparts would be 512 times higher, about 478 GeV. The "ionization energy" for the Pomeron, the structure formed by valence quarks connected by color bonds separating from the space-time sheet of the super-canonical blackhole in the production process, corresponds to the total quark mass and is about 170 MeV for the ordinary proton and 87 GeV for the M89 proton. This kind of picture about blackhole formation expected to occur at LHC differs from the stringy picture, since a fusion of hadronic mini blackholes to a larger blackhole is in question.
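The mass ratios quoted above follow from the p-adic mass scaling 2^((107-89)/2) = 2^9 = 512; a quick hedged check:

```python
# p-adic mass scaling between M_107 and M_89 hadron physics.
scaling = 2 ** ((107 - 89) // 2)   # = 2**9 = 512

m_baryon_107 = 0.9342              # GeV, baryonic blackhole of M_107
e_pomeron_107 = 0.170              # GeV, Pomeron "ionization energy"

print(scaling)                               # 512
print(f"{scaling * m_baryon_107:.0f} GeV")   # ~478 GeV
print(f"{scaling * e_pomeron_107:.0f} GeV")  # ~87 GeV
```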

An interesting question is whether the ultrahigh energy cosmic rays having energies larger than the GZK cut-off (see the previous posting) are baryons which have lost their valence quarks in a collision with a hadron and therefore have no interactions with the microwave background, so that they are able to propagate over long distances.

2. Ordinary blackholes as super-canonical blackholes

In neutron stars the hadronic space-time sheets could form a gigantic super-canonical blackhole, and ordinary blackholes would be naturally replaced with super-canonical blackholes in TGD framework (only a small part of the blackhole interior metric is representable as an induced metric).

  1. Hawking-Bekenstein blackhole entropy would be replaced with its p-adic counterpart given by

    Sp= (M/m(CP2))^2× log(p),

    where m(CP2) is CP2 mass, which is roughly 10^-4 times Planck mass. M corresponds to the contribution of p-adic thermodynamics to the mass. This contribution is extremely small for gauge bosons, but for fermions and super-canonical particles it gives the entire mass.

  2. If the p-adic length scale hypothesis p≈2^k holds true, one obtains

    Sp= k log(2)×(M/m(CP2))2 ,

    where m(CP2) = hbar/R, R being the "radius" of CP2; here hbar corresponds to the standard value hbar0 for all values of Planck constant.

  3. The Hawking-Bekenstein area law gives in the case of the Schwarzschild blackhole

    S = A/(4×hbar×G) = 4π×G×M^2/hbar.

    For the p-adic variant of the law Planck mass is replaced with CP2 mass and k×log(2) ≈ log(p) appears as an additional factor. The area law is obtained in the case of elementary particles if k is prime and the wormhole throats have M4 radius given by the p-adic length scale Lk = k^(1/2)×R(CP2), which is exponentially smaller than Lp.

    For macroscopic super-canonical black-holes a modified area law results if the radius of the large wormhole throat equals the Schwarzschild radius. Schwarzschild radius is indeed natural: I have shown that a simple deformation of the Schwarzschild exterior metric to a metric representing a rotating star transforms the Schwarzschild horizon to a light-like 3-surface at which the signature of the induced metric is transformed from Minkowskian to Euclidean (see this).

  4. The formula for the gravitational Planck constant appearing in the Bohr quantization of planetary orbits and characterizing the gravitational field body mediating the gravitational interaction between masses M and m (see this) reads as

    hbargr/hbar0=GMm/v0 .

    v0 = 2^(-11) is the preferred value of v0. One could argue that the value of the gravitational Planck constant is such that the Compton length hbargr/M of the black-hole equals its Schwarzschild radius. This would give

    hbargr/hbar0 = GM^2/v0 , v0 = 1/2 .

    This is a natural generalization of Nottale's formula to gravitational self-interactions. The requirement that hbargr is a ratio of ruler-and-compass integers, expressible as a product of distinct Fermat primes (only five of them are known) and a power of 2, would quantize the mass spectrum of the black hole. Even without this constraint M^2 is integer valued using the p-adic mass squared unit, and if the p-adic length scale hypothesis holds true this unit is in an excellent approximation a power of two.

  5. The gravitational collapse of a star would correspond to a process in which the initial value of v0, say v0 = 2^(-11), increases in a stepwise manner to some value v0 ≤ 1/2. For a supernova with solar mass and a radius of 9 km the final value of v0 would be v0 = 1/6. The star could have an onion-like structure with the largest values of v0 at the core. Powers of two would be favored values of v0. If the formula holds true also for the Sun, one obtains 1/v0 = 3×17×2^13 with 10 per cent error.

  6. Blackhole evaporation could be seen as a means for the super-canonical blackhole to get rid of its electro-weak charges and fermion numbers (except right-handed neutrino number) as the antiparticles of the emitted particles annihilate with the particles inside the super-canonical blackhole. This kind of minimally interacting state is a natural final state of the star. An ideal super-canonical blackhole would have only angular momentum and right-handed neutrino number.

  7. In TGD light-like partonic 3-surfaces are the fundamental objects and the space-time interior defines only the classical correlates of quantum physics. The space-time sheet containing the highly entangled cosmic string might be separated from the environment by a wormhole contact with the size of the black-hole horizon. This looks the most plausible option, but one can of course ask whether the large partonic 3-surface defining the horizon of the black-hole actually contains all super-canonical particles, so that the super-canonical black-hole would be a single gigantic super-canonical parton. The interior of the super-canonical blackhole would be a space-like region of space-time, perhaps resulting as a large deformation of a CP2 type vacuum extremal. A blackhole sized wormhole contact would define a gauge boson like variant of the blackhole connecting two space-time sheets and getting its mass through Higgs mechanism. A good guess is that these states are extremely light.
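The ruler-and-compass condition used in the mass quantization above is easy to test mechanically: such an integer is a power of 2 times a product of distinct Fermat primes (the five known Fermat primes are 3, 5, 17, 257, 65537). A minimal sketch, with a function name of our own choosing:

```python
# Check whether n is a ruler-and-compass integer: a power of 2 times a
# product of distinct Fermat primes. The five known Fermat primes are listed.

KNOWN_FERMAT_PRIMES = (3, 5, 17, 257, 65537)

def is_ruler_and_compass(n: int) -> bool:
    if n < 1:
        return False
    while n % 2 == 0:          # strip the power of 2
        n //= 2
    for p in KNOWN_FERMAT_PRIMES:
        if n % p == 0:         # each Fermat prime may appear at most once
            n //= p
    return n == 1

# The value quoted for the Sun, 1/v0 = 3*17*2**13, passes the test:
print(is_ruler_and_compass(3 * 17 * 2**13))   # True
print(is_ruler_and_compass(9))                # False: 3**2 is not allowed
```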

For more details and background see the chapters TGD and Cosmology and Quantum Astrophysics.

Schwarzschild horizon for a rotating blackhole like object as a 3-D lightlike surface defining a wormhole throat

The metric determinant at the Schwarzschild radius is non-vanishing. This does not quite conform with the interpretation as an analog of a light-like partonic 3-surface identifiable as a wormhole throat, for which the determinant of the induced 4-metric vanishes and at which the signature of the induced metric changes from Minkowskian to Euclidean.

An interesting question is what happens if one makes the vacuum extremal representing an imbedding of the Schwarzschild metric a rotating solution by the very simple replacement Φ → Φ + nφ, where Φ is the angle coordinate of the homologically trivial geodesic sphere S2 for the simplest vacuum extremals, and φ is the angle coordinate of M4 spherical coordinates. It turns out that the Schwarzschild horizon is transformed to a surface at which det(g4) vanishes, so that the interpretation as a wormhole throat makes sense. If one assumes that the black hole horizon is analogous to a wormhole contact, only rotating black hole like structures with quantized angular momentum are possible in TGD Universe.

For details see the chapter TGD and GRT.

Quantum chaos in astrophysical length scales?

Kea commented about the transition to quantum chaos and gave a link to the article Quantum Chaos by Martin Gutzwiller in Matthew Watkins's home page devoted to Riemann Zeta. Occasionally even this kind of masterpiece of scientific writing manages to stimulate only an intention to read it more carefully later. When you indeed read it again a few years later it can shatter you into a wild resonance. Just this occurred at this time.

1. Brief summary about quantum chaos

Gutzwiller's article discusses the complex regime between quantal and classical behavior as it was understood at the time of writing (1992). As a non-specialist I have no idea about possible new discoveries since then.

The article introduces the division of classical systems into regular (R) and chaotic (P in honor of Poincare) ones. Besides this one has quantal systems (Q). There are three transition regions between these three realms.

  1. R-P corresponds to transition to classical chaos and KAM theorem is a powerful tool allowing to organize the view about P in terms of surviving periodic orbits.

  2. Quantum-classical transition region R-Q corresponds to high quantum number limit and is governed by Bohr's correspondence principle. Highly excited hydrogen atom - Rydberg atom - defines a canonical example of the situation.

  3. Somewhat surprisingly, it has turned out that also the P-Q region can be understood in terms of periodic classical orbits (nothing else is available!). The P-Q region can be reached experimentally if one puts a Rydberg atom in a strong magnetic field. At the weak field limit quantum states are delocalized, but in the chaotic regime the wave functions become strongly concentrated along periodic classical orbits.

    At the level of dynamics the basic example about the P-Q transition region discussed is the chaotic quantum scattering of an electron in an atomic lattice. A classical description does not work: a superposition of amplitudes for orbits, which consist of fragments of periodic orbits plus localization around atoms, is necessary.

The fractal wave function patterns associated with, say, hydrogen atom in a strong magnetic field are extremely beautiful and far from chaotic. Even in the case of chaotic quantum scattering one has an interference of quantum amplitudes for classical Bohr orbits, and also now the Fourier transform exhibits nice peaks corresponding to the periods of the classical orbits. The term chaos seems to be an unfortunate choice, referring to our limited cognitive capacities rather than the actual physical situation; the term quantum complexity would be more appropriate. For a quantum consciousness theorist the challenge is to formulate this fact in a more precise manner. Quantum measurement theory with a finite measurement resolution indeed provides the mathematics necessary for this purpose.

2. What does the transition to quantum chaos mean?

The transition to quantum chaos in the sense the article discusses it means that a system with a large number of virtually independent degrees of freedom (in a very general sense) makes a transition to a phase in which there is a strong interaction between these degrees of freedom. The perturbative phase becomes non-perturbative. This means the emergence of correlations and a reduction of the effective dimension of the system to a finite fractal dimension. When correlations become complete and the system becomes a genuine quantum system, the dimension of the system is genuinely reduced and again non-fractal. In this sense one has a transition via complexity to a new kind of order.

2.1 The level of stationary states

At the level of the energy spectrum this means that the energy of the system, which corresponds to a sum of virtually independent energies and is thus essentially a random number, becomes non-random. As a consequence, energy levels tend to avoid each other, and order and simplicity emerge, but at the collective level. The spectrum of zeros of Zeta has been found to simulate the spectrum of a chaotic system with strong correlations between energy levels. Zeta functions indeed play a key role in the proposed description of quantum criticality associated with the phase transition changing the value of Planck constant.

2.2 The importance of classical periodic orbits in chaotic scattering

Poincare with his immense physical and mathematical intuition foresaw that periodic classical orbits should have a key role also in the description of chaos. The study of complex systems indeed demonstrates that this is the case, although the mathematics and physics behind this was not fully understood around 1992 and is probably not so even now. The basic discovery coming from numerical simulations is that the Fourier transform of a chaotic orbit exhibits peaks at the frequencies which correspond to the periods of closed orbits. From my earlier encounters with quantum chaos I remember that there is a quantization of periodic orbits so that their periods are proportional to log(p), p prime, in suitable units. This suggests a connection with arithmetic quantum field theory and with the p-adic length scale hypothesis. Note that in planetary Bohr orbitology any closed orbit can be a Bohr orbit with a suitable mass distribution but the velocity spectrum is universal.

The chaotic scattering of an electron in an atomic lattice is discussed as a concrete example. In the chaotic situation the motion of the electron consists of periods spent around some atom continued by motion along some classical periodic orbit. This does not however mean loss of quantum coherence in the transitions between these periods: a purely classical model gives non-sensible results in this kind of situation. A working model is obtained only if one sums scattering amplitudes over all piecewise classical orbits (not all paths as one would do in path integral quantization).

2.3 In what sense can complex systems be called chaotic?

Speaking about quantum chaos instead of quantum complexity does not seem appropriate to me unless one makes clear that it refers to the limitations of human cognition rather than to physics. If one believes in quantum approach to consciousness, these limitations should reduce to finite resolution of quantum measurement not taken into account in standard quantum measurement theory.

In the framework of hyper-finite factors of type II1, finite quantum measurement resolution is described in terms of inclusions N ⊂ M of the factors, and the sub-factor N defines what might be called N-rays replacing the complex rays of the state space. The space M/N has a fractal dimension characterized by the quantum phase q = exp(iπ/n), n=3,4,...; the dimension increases as q approaches unity, which means improving measurement resolution since the size of the factor N is reduced.

Fuzzy logic based on quantum qbits applies in the situation since the components of quantum spinor do not commute. At the limit n→∞ one obtains commutativity, ordinary logic, and maximal dimension. The smaller the n the stronger the correlations and the smaller the fractal dimension. In this case the measurement resolution makes the system apparently strongly correlated when n approaches its minimal value n=3 for which fractal dimension equals to 1 and Boolean logic degenerates to single valued totalitarian logic.

Non-commutativity is the most elegant description for the reduction of dimensions and brings in reduced fractal dimensions smaller than the actual dimension. Again the reduction has interpretation as something totally different from chaos: system becomes a single coherent whole with strong but not complete correlation between different degrees of freedom. The interpretation would be that in the transition to non-chaotic quantal behavior correlation becomes complete and the dimension of system again integer valued but smaller. This would correspond to the cases n=6, n=4, and n=3 (D=3,2,1).
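The dimensions D = 1, 2, 3 quoted for n = 3, 4, 6 are consistent with identifying the fractal dimension with the Jones index 4cos^2(π/n) associated with the quantum phase q = exp(iπ/n). This identification is our assumption here, not stated explicitly in the text:

```python
import math

# Fractal dimension associated with the quantum phase q = exp(i*pi/n),
# assumed here to be the Jones index 4*cos(pi/n)**2 (our identification).
def fractal_dimension(n: int) -> float:
    return 4 * math.cos(math.pi / n) ** 2

for n in (3, 4, 6):
    print(n, round(fractal_dimension(n), 9))   # D = 1.0, 2.0, 3.0

# n -> infinity: commutative limit with maximal dimension 4 in this formula
print(round(fractal_dimension(10**6), 6))
```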

3. Quantum chaos in astrophysical scales?

3.1 Quantum criticality

  1. TGD Universe is quantum critical. The most important implication of the quantum criticality of TGD Universe is that it fixes the value of Kähler coupling strength, the only free parameter appearing in the definition of the theory, as the analog of critical temperature. The dark matter hierarchy, characterized partially by the increasing values of Planck constant, allows to characterize more precisely what quantum criticality might mean. By quantum criticality space-time sheets are analogs of Bohr orbits. Since quantum criticality corresponds to the P-Q region, the localization of wave functions around generalized Bohr orbits should occur quite generally in some scale.

  2. Elementary particles are maximally quantum critical systems analogous to H2O at the tri-critical point and can be said to be in the intersection of imbedding spaces labelled by various values of Planck constant. Planck constant does not characterize the elementary particle proper. Rather, each field body of the particle (em, weak, color, gravitational) is characterized by its own Planck constant, and this Planck constant characterizes interactions. The generalization of the notion of the imbedding space allows to formulate this idea in a precise manner, and each sector of the imbedding space is characterized by discrete symmetry groups Zn acting in M4 and CP2 degrees of freedom. The transition from quantum to classical corresponds to a reduction of Zn to a subgroup Zm, m a factor of n. The ruler-and-compass hypothesis implies very powerful predictions for the remnants of this symmetry at the level of visible matter. Note the reduction of the symmetry in this chaos-to-order transition!

  3. Dark matter hierarchy makes TGD Universe an ideal laboratory for studying P-Q transitions with chaos identified as quantum critical phase between two values of Planck constant with larger value of Planck constant defining the "quantum" phase and smaller value the "classical" phase. Dark matter is localized near Bohr orbits and is analogous to quantum states localized near the periodic classical orbits. Planetary Bohr orbitology provides a particularly interesting astrophysical application of quantum chaos.

  4. The above described picture for chaotic quantum scattering applies quite generally in quantum TGD. Path integral is replaced with a functional integral over classical space-time evolutions, and the failure of complete classical determinism is analogous to the transition between classical orbits. The functional integral also reduces to a perturbative functional integral around the maxima of Kähler function.

3.2 Rings and spokes as the basic building blocks of dark matter structures

The Bohr orbit model for the planetary orbits based on the hierarchy of dark matter relies in an essential manner on the idea that macroscopic quantum phases of dark matter dictate to a high degree the behavior of the visible matter. Dark matter is concentrated on closed classical orbits in the simple rotationally symmetric gravitational potentials involved. Orbits become basic structures instead of points at the level of dark matter. A discrete subgroup Zn of rotational group with very large n characterizes dark matter structures quite generally. At the level of visible matter this symmetry can be broken to approximate symmetry defined by some subgroup of Zn.

Circles and radial spokes are the basic Platonic building blocks of dark matter structures. The interpretation of spokes would be as (gravi-)electric flux tubes. Radial spokes correspond to n=0 states in the Bohr quantization for hydrogen atom, that is orbits ending at the atom. Spokes have been observed in planetary rings besides the decomposition to narrow rings, and also in galactic scale. Also flux tubes of (gravi-)magnetic fields with Zn symmetry define rotationally symmetric structures analogous to quantized dipole fields.

Gravi-magnetic flux tubes indeed correspond to circles rather than field lines of a dipole field for the simplest model of the gravi-magnetic field, which means a deviation from GRT predictions for the gravi-magnetic torque on a gyroscope outside the equator: unfortunately the recent experiments are performed at the equator. The flux tubes would be seen only as circles orthogonal to the preferred plane, and planetary Bohr rules apply automatically also now.

A word of worry is in order here. Ellipses are very natural objects in Bohr orbitology and for a given value of n would give n^2-1 additional orbits. In the planetary situation they would have very large eccentricities and are not realized. Comets can have closed highly eccentric orbits and correspond to large values of n. In any case, one is forced to ask whether exactly Zn symmetric objects are too Platonic creatures to live in the harsh real world. Should one at least generalize the definition of the action of Zn as a symmetry so that it could rotate the points of an ellipse to each other? This might make sense. In the case of dark matter ellipses the radial spokes with Zn symmetry, representing radial gravito-electric flux quanta, would still connect the dark matter ellipse to the central object, and the rotation of the spoke structure induces a unique rotation of the points at the ellipse.

3.3 Dark matter structures as generalizations of periodic orbits

The matter with ordinary or smaller value of Planck constant can form bound states with these dark matter structures. The dark matter circles would be the counterparts for the periodic Bohr orbits dictating the behavior of the quantum chaotic system. Visible matter (and more generally, dark matter at the lower levels of hierarchy behaving quantally in shorter length and time scales) tends to stay around these periodic orbits and in the ideal case provides a perfect classical mimicry of quantum behavior. Dark matter structures would effectively serve as selectors of the closed orbits in the gravitational dynamics of visible matter.

As one approaches classicality the binding of the visible matter to dark matter gradually weakens. Mercury's orbit is not quite closed, planetary orbits become ellipses, comets have highly eccentric orbits or even non-closed orbits. For non-closed orbits a quantum description in terms of binding to dark matter does not make sense at all.

The classical regular limit (R) would correspond to a decoupling between dark matter and visible matter. A motion along a geodesic line is obtained but without Bohr quantization in the gravitational sense, since Bohr quantization using the ordinary value of Planck constant implies negative energies for GMm>1. The preferred extremal property of the space-time sheet could however still imply some quantization rules, but these could apply in "vibrational" degrees of freedom.

3.4 Quantal chaos in gravitational scattering?

The chaotic motion of an astrophysical object becomes the counterpart of quantum chaotic scattering. By Equivalence Principle the value of the mass of the object does not matter at all, so that the motion of sufficiently light objects in the solar system might be understandable only by assuming quantum chaos.

The orbit of a gravitationally unbound object such as a comet could define the basic example. The rings of Saturn and Jupiter could represent interesting shorter length scale phenomena possibly involving chaotic quantum scattering. One can imagine that the visible matter object spends some time around a given dark matter circle (binding to an atom), makes a transition along a radial spoke to the next circle, and so on.

The prediction is that dark matter forms rings and cart-wheel like structures of astrophysical size. These could become visible in collisions of, say, galaxies when stars gain so large an energy that they become gravitationally unbound, and in this quantum chaotic regime can flow along spokes to new Bohr orbits or to gravi-magnetic flux tubes orthogonal to the galactic plane. Hoag's object represents a beautiful example of a ring galaxy. Remarkably, there is also direct evidence for galactic cart-wheels. There are also polar ring galaxies consisting of an ordinary galaxy plus a ring approximately orthogonal to it, believed to form in galactic collisions. The ring rotating with the ordinary galaxy can be identified in terms of a gravi-magnetic flux tube orthogonal to the galactic plane: in this case Zn symmetry would be completely broken at the level of visible matter.

For more details see the new chapter Quantum Astrophysics.

Basic objections against planetary Bohr orbitology

There are two objections against planetary Bohr orbitology.

  1. The success of this approach in the case of the solar system is not enough. In particular, it requires different values of v0 for inner and outer planets.

  2. The basic objection of a General Relativist against the planetary Bohr orbitology model is the lack of manifest General Coordinate and Lorentz invariances. In GRT context this objection would be fatal. In TGD framework the lack of these invariances is only apparent.

1. Also exoplanets obey Bohr rules

In the previous posting I proposed a simple model explaining why inner and outer planets must have different values of v0 by taking into account the cosmic string contribution to the gravitational potential, which is negligible nowadays but was not so in primordial times. Among other things this implies that the planetary system has a finite size, at least about 1 ly in the case of Sun (the nearest star is at a distance of 4 light years).

I have also applied the quantization rules to exoplanets in the case that the central mass and orbital radius are known. Errors are around 10 per cent for the most favored value of v0 = 2^(-11) (see this). The "anomalous" planets with very small orbital radius correspond to the n=1 Bohr orbit (n=3 is the lowest orbit in the solar system). The universal velocity spectrum v = v0/n in simple systems is perhaps the most remarkable prediction and certainly testable: this alone implies that the Bohr radius GM/v0^2 defines the universal size scale for systems involving a central mass. Obviously this is something new and highly non-trivial.
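As a numeric illustration of the Bohr radius claim (with standard astronomical constants as our input, not taken from the text): with v0 = 2^(-11) in units of c, the Bohr radius GM/v0^2 of the Sun is about 0.04 AU, and the lowest solar-system orbit n = 3 lands close to Mercury's mean distance of 0.387 AU:

```python
# Numeric sketch of the Bohr-orbit spectrum r_n = n^2 * GM/v0^2 with
# v0 = 2^-11 in units of c. Constants are standard astronomical values.

C = 2.998e8          # speed of light, m/s
GM_SUN = 1.327e20    # G*M for the Sun, m^3/s^2
AU = 1.496e11        # astronomical unit, m

v0 = 2**-11 * C               # preferred velocity parameter, m/s
r_bohr = GM_SUN / v0**2       # universal Bohr radius GM/v0^2

def orbit_radius(n: int) -> float:
    """Radius of the n:th Bohr orbit in meters."""
    return n**2 * r_bohr

# n = 3 (the claimed lowest solar-system orbit) vs Mercury at 0.387 AU:
print(round(r_bohr / AU, 4), round(orbit_radius(3) / AU, 3))
```

With these constants the n = 3 radius comes out at about 0.37 AU, within roughly 4 per cent of Mercury's mean distance.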

The recently observed dark ring in Mly scale is a further success, and also the rings and moons of Saturn and Jupiter obey the same universal length scale (n ≥ 5 and v0 → (16/15)×v0 and v0 → 2×v0).

There is a further objection. For our own Moon the orbital radius is much larger than the Bohr radius for v0 = 2^(-11): one would have n ≈ 138. n ≈ 7 results for v0 → v0/20, giving r0 ≈ 1.2×R_E. The small value of v0 could be understood to result from a sequence of phase transitions reducing the value of v0 to guarantee that the solar system participates in the average sense in the cosmic expansion, and from the fact that inner planets are older than outer ones in the proposed scenario.

Remark: Bohr orbits cannot participate in the expansion, which manifests itself as the observed apparent shrinking of the planetary orbits when distances are expressed in terms of the Robertson-Walker radial coordinate r = rM. This anomaly was discovered by Masreliez and is discussed here. The ruler-and-compass hypothesis suggests preferred values of cosmic time for the occurrence of these transitions. Without this hypothesis the phase transitions could form almost a continuum.

2. How General Coordinate Invariance and Lorentz invariance are achieved?

One can use Minkowski coordinates of the M4 factor of the imbedding space H=M4×CP2 as preferred space-time coordinates. The basic aspect of the dark matter hierarchy is that it realizes quantum classical correspondence at the space-time level by fixing preferred M4 coordinates as a rest system. This guarantees a preferred time coordinate and quantization axis of angular momentum. The physical process of fixing the quantization axes thus selects preferred coordinates and affects the system itself at the level of space-time, imbedding space, and configuration space (the world of classical worlds). This is definitely a totally new aspect of the observer-system interaction.

One can identify in this system the gravitational potential Φgr as the gtt component of the metric and define the gravi-electric field Egr uniquely as its gradient. Also the gravi-magnetic vector potential Agr and the gravi-magnetic field Bgr can be identified uniquely.

3. Quantization condition for simple systems

Consider now the quantization condition for angular momentum with Planck constant replaced by the gravitational Planck constant hbargr = GMm/v0 in the simple case of a pointlike central mass. The condition is

m∫ v•dl = n × hbargr.

The condition reduces to the condition on velocity circulation

∫ v•dl = n × GM/v0.

In simple systems with circular rings forced by Zn symmetry the condition reduces to a universal velocity spectrum

v = v0/n

so that only the radii of orbits depend on the mass distribution. For systems for which the cosmic string dominates only n=1 is possible. This is the case for stars in the galactic halo if a primordial cosmic string going through the center of the galaxy in the direction of the jet dominates the gravitational potential. The velocity of distant stars is correctly predicted.
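The universal velocity spectrum v = v0/n follows by combining the angular momentum quantization m·v·r = n·hbargr, hbargr = GMm/v0, with the Newtonian circular-orbit condition v^2 = GM/r; both the mass m and GM drop out. A numeric sketch in arbitrary units (our own illustration):

```python
# Derive v = v0/n numerically. Assumptions (ours): Newtonian circular orbits
# v**2 = GM/r, and quantization m*v*r = n*hbar_gr with hbar_gr = G*M*m/v0.

GM = 1.0          # arbitrary units; the result is independent of GM
v0 = 2.0**-11     # preferred velocity parameter (in units of c)

def bohr_orbit(n: int):
    """Radius and velocity of the n:th Bohr orbit."""
    r = n**2 * GM / v0**2          # solves both conditions simultaneously
    v = (GM / r) ** 0.5            # circular-orbit velocity at that radius
    return r, v

for n in range(1, 6):
    r, v = bohr_orbit(n)
    # quantization holds (v*r = n*GM/v0) and the velocity is v = v0/n:
    print(n, v * n / v0)
```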

Zn symmetry seems to imply that only circular orbits need to be considered and there is no need to apply the condition to other canonical momenta (such as the radial canonical momentum in the Kepler problem). The nearly circular orbits of visible matter objects would be naturally associated with dark matter rings or more complex structures with Zn symmetry, and dark matter rings could suffer a partial or complete phase transition to visible matter.

4. Generalization of the quantization condition

  1. By Equivalence Principle the dark ring mass disappears from the quantization conditions, and the left hand side of the quantization condition equals a generalized velocity circulation, which applies also when the central system rotates

    ∫ (v-Agr)•dl .

    Here one must notice that the dark matter ring is Zn symmetric and closed, so that the geodesic motion of visible matter cannot correspond strictly to the dark matter ring (perihelion shift of Mercury). Just in passing, notice that the presence of a dark matter ring could also explain the complex braidings associated with the planetary rings.

  2. Right hand side would be the generalization of GM by the replacement

    GM → ∫ e•(r^2 Egr × dl) .

    e is a unit vector in the direction of the quantization axis of angular momentum, × denotes the cross product, and r is the radial M4 coordinate in the preferred system. Everything is Lorentz and General Coordinate Invariant, and for the Schwarzschild metric this reduces to the expected form and reproduces also the contribution of the cosmic string to the quantization condition correctly.

For more details see the chapter Quantum Astrophysics.

A simple quantum model for the formation of astrophysical structures

The mechanisms behind the formation of planetary systems, galaxies and larger systems are poorly understood, but planar structures seem to define a common denominator, and the recent discovery of a dark matter ring in a galactic cluster in Mly scale (see this) suggests that dark matter rings might define a universal step in the formation of astrophysical structures.

Also the dynamics in planet scale is poorly understood. In particular, the rings of Saturn and Jupiter are very intricate structures and far from well understood. Assuming spherical symmetry, it is far from obvious why the matter ends up forming thin rings in a preferred plane. The latest surprise is that Saturn's largest, most compact ring consists of clumps of matter separated by almost empty gaps. The clumps are continually colliding with each other, highly organized, and heavier than previously thought.

The situation suggests that some very important piece might be missing from the existing models, and the vision about dark matter as a quantum phase with a gigantic Planck constant (see this and this) is an excellent candidate for this piece. The vision that the quantum dynamics for dark matter is behind the formation of the visible structures suggests that the formation of the astrophysical structures could be understood as a consequence of Bohr rules.

1. General quantum vision about formation of structures

The basic observation is that in the case of a straight cosmic string creating a gravitational acceleration of the form v1^2/ρ, Bohr quantization does not pose any conditions on the radii of the circular orbits, so that a continuous mass distribution is possible.

This situation is obviously exceptional. If one however accepts the TGD based vision (see this) that the very early cosmology was cosmic string dominated and that elementary particles were generated in the decay of cosmic strings, this situation might have prevailed at very early times. These cosmic strings can transform to strings with smaller string tension, and magnetic flux tubes can be seen as their remnants, dark energy being identifiable as magnetic energy. If so, the differentiation of a continuous density of ordinary matter to form the observed astrophysical structures would correspond to an approach to a stationary situation governed by Bohr rules, and in the first approximation one could neglect the intermediate stages.

The cosmic string need not be infinitely long: it could branch into n return flux tubes, n very large, in accordance with the Zn symmetry for the dark matter, but also in this case the situation in the nearby region remains the same. At large distances the whole structure would behave as a single mass point creating an ordinary Newtonian gravitational potential. Also phase transitions in which the system emits magnetic flux tubes, so that the contribution of the cosmic string to the gravitational force is reduced, are possible.

What is of utmost importance is that the cosmic string induces the breaking of the rotational symmetry down to a discrete Zn symmetry and in the presence of the central mass selects a unique preferred orbital plane in which gravitational acceleration is parallel to the plane. This is just what is observed in astrophysical systems and not easily explained in the Newtonian picture. In TGD framework this relates directly to the choice of quantization axis of angular momentum at the level of dark matter. This mechanism could be behind the formation of planar systems in all length scales including planets and their moons, planetary systems, galaxies, galaxy clusters in the scale of Mly, and even the concentration of matter at the walls of large voids in the scale of 100 Mly.

The Zn symmetry for the dark matter with very large n suggests the possibility of more precise predictions. If n is a ruler-and-compass integer, it has as factors only the first powers of Fermat primes and a very large power of 2. The breaking of Zn symmetry at the level of visible matter would naturally occur to subgroups Zm ⊂ Zn. Since m is a factor of n, the average number of matter clumps could tend to be a factor of n, and hence a ruler-and-compass integer. Also the hexagonal symmetry discovered near the North Pole of Saturn (see this) could have an interpretation in terms of this symmetry breaking mechanism.
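The breaking pattern Zn → Zm above is just the statement that the subgroups of a cyclic group correspond to the divisors of n; the small example n = 48 = 2^4×3 is our own choice of illustration:

```python
# Subgroups Z_m of the cyclic group Z_n correspond exactly to divisors m of n,
# so the residual symmetry of visible matter has order dividing the dark n.

def divisors(n: int) -> list[int]:
    return [m for m in range(1, n + 1) if n % m == 0]

n = 48  # a small ruler-and-compass integer, 2**4 * 3 (our example)
print(divisors(n))        # candidate residual symmetries Z_m
print(6 in divisors(n))   # a hexagonal Z_6 pattern is among them
```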

2. How might inner and outer planets have emerged?

The Bohr orbit model requires different values of the parameter v0, related by the scaling v0→v0/5, for inner and outer planets. It would be nice to understand why this is the case. The presence of a cosmic string along the rotational axis, implied by both the model for the asymptotic state of the star and the TGD based model for gamma ray bursts, might make it possible to understand this result.

One can construct a simple modification of the hydrogen atom type model for solar system by including the contribution of cosmic string to the gravitational force. For circular orbits the condition identifying kinetic and gravitational radial accelerations plus quantization of angular momentum in units of gravitational Planck constant are used. The prediction is that only a finite number of Bohr orbits are possible. One might hope that this could explain the decomposition of the planetary system to inner and outer planets.
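The modified circular-orbit condition can be sketched numerically. This is a minimal illustration, not from the source: it assumes the string contributes an acceleration K/r with an ad hoc tension parameter K, and uses the quantization v·r = n·GM/v0 following from hbar_gr = GMm/v0. In this simplified form an orbit radius exists for every n, so the finiteness of the orbit count is not modeled here; what the sketch does show is that the orbital velocity approaches the constant K^(1/2) at large n, i.e. the string-dominated flat-velocity regime.

```python
import math

def bohr_orbits(GM, K, v0, n_max):
    """Circular Bohr orbits with a hypothetical cosmic-string term K/r.

    Orbit condition: v^2/r = GM/r^2 + K/r
    Quantization:    v * r = n * GM/v0   (from hbar_gr = GMm/v0)
    Eliminating v gives K r^2 + GM r - (n GM/v0)^2 = 0.
    """
    orbits = []
    for n in range(1, n_max + 1):
        L = n * GM / v0                  # angular momentum per unit mass
        if K == 0:
            r = L**2 / GM                # pure Kepler: r_n = n^2 GM/v0^2
        else:
            r = (-GM + math.sqrt(GM**2 + 4 * K * L**2)) / (2 * K)
        orbits.append((n, r, L / r))     # (n, radius, velocity)
    return orbits

# Geometric units GM = 1, v0 = 2^-11; K is an ad hoc illustrative value.
orbs = bohr_orbits(GM=1.0, K=1e-9, v0=1.0 / 2048, n_max=50)
velocities = [v for (_, _, v) in orbs]
```

At large n the radii grow linearly with n and the velocities flatten out near sqrt(K), in contrast to the Keplerian v0/n fall-off obtained for K = 0.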

String tension implies an anomalous acceleration of the same form as the radial kinetic acceleration, implying that for a given radius the kinetic energy per mass is shifted upwards by a constant amount. This acceleration anomaly is severely bounded from above by the constant acceleration anomaly of spacecraft (Pioneer anomaly), and for the recent value of the cosmic string tension the number of allowed inner planets is much larger than 3.

The situation was however different at the primordial stage, when the cosmic string tension was much larger and was gradually reduced in phase transitions involving the emission of closed magnetic flux tubes. The primordial Sun could have emitted the seeds of the two planetary systems related by the scaling, and this might have happened in a phase transition reducing the magnetic flux by the emission of a closed magnetic flux tube structure.

3. Models for the interior of astrophysical objects and for planetary rings

Using similar quantization conditions one can construct a very simple model of an astrophysical object as a cylindrically symmetric pancake like structure. There are three basic predictions which do not depend on the details of the mass distribution.

  1. The velocity spectrum for the circular orbits is universal and given by v = v0/n; only the radii of the Bohr orbits of particles depend on the form of the average mass distribution, which can vary within wide limits.

  2. The velocity does not decrease with distance and is constant in the presence of the cosmic string alone.

  3. The size of the system is always finite, and increasing values of n correspond to decreasing radii. This came as a complete surprise, and is the complete opposite of what the hydrogen atom like model without cosmic string predicts (when the cosmic string is introduced, the planetary system for a given value of v0 necessarily has a finite size).

Four mass distributions were tested.

  1. The first corresponds to a power law, the second to a logarithmic velocity distribution, and the third to a spectrum of orbital radii coming as powers of 2 in accordance with the p-adic length scale hypothesis. The fourth mass distribution corresponds to evenly spaced Bohr radii below a certain radius.

  2. Only the third option works as a model for, say, Earth and predicts that dark matter forms an onion-like structure with the radii of shells coming as powers of 2 (of 2^(1/2) in the most general formulation of the p-adic length scale hypothesis). This prediction is universal and means that the dark matter part of stellar objects would be very much analogous to an atom, having also a shell like structure. Actually this is not surprising.

  3. The second and fourth options could define a reasonable model for ring like structures (Saturn's and Jupiter's rings). The predicted universal velocity spectrum for dark rings serves as a test for the model.

For more details see the new chapter Quantum Astrophysics .

NASA Hubble Space Telescope Detects Ring of Dark Matter

The following caught my attention during this morning's webwalk.

NASA Hubble Space Telescope Detects Ring of Dark Matter

NASA will hold a media teleconference at 1 p.m. EDT on May 15 to discuss the strongest evidence to date that dark matter exists. This evidence was found in a ghostly ring of dark matter in the cluster CL0024+17, discovered using NASA's Hubble Space Telescope. The ring is the first cluster to show a dark matter distribution that differs from the distribution of both the galaxies and the hot gas. The discovery will be featured in the May 15 issue of the Astrophysical Journal.

"Rings" sets bells ringing! In the TGD Universe dark matter is characterized by a gigantic value of Planck constant making it a macroscopic quantum phase in astrophysical length and time scales. Rotationally symmetric structures - such as rings - with an exact rotational symmetry Zn, n very large, of the "field body" of the system are the basic prediction. In the model of planetary orbits the rings of dark matter around Bohr orbits force the visible matter to the Bohr orbit (see this).

The TGD based model for dark matter inspires the hypothesis that the ring corresponds to a Bohr orbit for macroscopically quantum coherent dark matter with the gigantic value of Planck constant predicted by the model. The article about the finding is now in the archive and contains the data making it possible to test the model. I am grateful to Kea for providing the link. The ring corresponds with good accuracy to the lowest Bohr orbit for v0 = 3×2^-11, which is 3 times the favored value but allowed by the general hypothesis for the favored values of Planck constant.

I add the little calculation here to give an idea about what is involved. The number theoretic hypothesis for the preferred values of Planck constant states that the gravitational Planck constant

hbar= GMm/v0

equals a ruler-and-compass rational, which is a ratio q = n1/n2 of ruler-and-compass integers ni expressible as a product of the form n = 2^k ∏s Fs, where all Fermat primes Fs are different. Only five Fermat primes are known: 3, 5, 17, 257, and 2^16+1 = 65537. v0 = 2^-11 applies to inner planets and v0 = 2^-11/5 to outer planets, and the conditions from the quantization of hbar are satisfied.
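A small helper, not from the source, makes the ruler-and-compass condition concrete: an integer qualifies iff it is a power of 2 times a product of distinct Fermat primes.

```python
FERMAT_PRIMES = [3, 5, 17, 257, 65537]  # all five known Fermat primes

def is_ruler_and_compass(n):
    """True if n = 2^k times a product of distinct Fermat primes."""
    if n < 1:
        return False
    while n % 2 == 0:        # strip the power of 2
        n //= 2
    for p in FERMAT_PRIMES:  # each Fermat prime may appear at most once
        if n % p == 0:
            n //= p
            if n % p == 0:   # a squared Fermat prime is not allowed
                return False
    return n == 1
```

For instance 2^11 and 3×2^11 qualify, while 9 = 3^2 and any integer with a prime factor like 7 do not.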

The obvious TGD inspired hypothesis is that the dark matter ring corresponds to Bohr orbit. Hence the distance would be

r = n^2 r0,

where r0 is the Bohr radius and n is an integer, n=1 for the lowest Bohr orbit. The Bohr radius is given by

r0 = GM/v0^2,

where M is the total mass in the dense core region inside the ring. This would give a distance of about 2×10^6 times the Schwarzschild radius for the lowest orbit for the preferred value of v0 = 2^-11.

This prediction can be confronted with the data since the article Discovery of a ringlike dark matter structure in the core of the galaxy cluster C1 0024+17 is in the archive now.

  1. From the Summary and Conclusion part of the article the radius of the ring is about .4 Mpc, which is in a good approximation 1.2 Mly (I prefer light years). More precisely - using arc seconds as a unit - the ring corresponds to a bump in the interval 60''-85'' centered at 75''. Figure 10 of the article gives a good idea about the shape of the bump.

  2. From the article the mass in the dense core within a radius which is almost half of the ring radius is about M = 1.5×10^14 MSun. The mass estimate based on gravitational lensing gives M = 1.5×10^14 MSun. If the gravitational lensing involves dark mass not in the central core, the first value can be used as the estimate. The Bohr radius of this system is therefore r0 = 1.5×10^14 × r0(Sun),

    where I have assumed v0 = 2^-11 as for the inner planets in the model for the solar system.

  3. The Bohr orbit model for our planetary system predicts correctly Mercury's orbital radius as the n=3 Bohr orbit for v0 = 2^-11, so that one has

    rM = 9 r0(Sun),

    where rM is Mercury's orbital radius. One obtains

    r0 = 1.5×10^14 × rM/9.

  4. Mercury's orbital radius is in a good approximation rM = .4 AU, and AU (the distance of Earth from the Sun) is 1.5×10^11 meters. 1 ly corresponds to .95×10^16 meters. This gives

    r0 = 11 Mly, to be compared with the 1.2 Mly deduced from observations. The result is too large by a factor of 9.

  5. If one replaces v0 with 3v0 one obtains a downwards scaling by a factor of 1/9, which gives r0 = 1.2 Mly. The general hypothesis indeed allows scaling v0 by a factor of 3.

  6. If one considers, instead of Bohr orbits, genuine solutions of the Schrödinger equation, then only n > 1 structures can correspond to ring like structures. The minimal option would be n=2 with v0 replaced with 6v0.

The conclusion would be that the ring corresponds to the lowest possible Bohr orbit for v0 = 3×2^-11. I would have been really happy if the favored value of v0 had appeared in the formula, but the consistency with the ruler-and-compass hypothesis serves as a consolation. A skeptic can of course always argue that this is a pure accident. If so, it would be an addition to a long series of accidents (planetary radii in the solar system and radii of exoplanets). One can of course search for rings at radii corresponding to n=2,3,... If these are found, I would say that the situation is settled.
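As a numerical sanity check on the solar system input used above, one can evaluate the Bohr radii r_n = n^2 GM/(v0 c)^2 for the Sun. This is a sketch using standard planetary data; the comparison tolerances are my own.

```python
import math

G     = 6.674e-11   # m^3 kg^-1 s^-2
c     = 2.998e8     # m/s
M_sun = 1.989e30    # kg
AU    = 1.496e11    # m
v0    = 2.0**-11    # in units of c

r0 = G * M_sun / (v0 * c)**2      # gravitational Bohr radius of the Sun

def orbit(n):
    """Radius and speed of the n:th Bohr orbit (circular, Keplerian)."""
    r = n**2 * r0
    v = math.sqrt(G * M_sun / r)  # equals v0*c/n for a Kepler orbit
    return r, v

r3, v3 = orbit(3)  # Mercury candidate
r5, v5 = orbit(5)  # Earth candidate
```

The n = 3 orbit lands within a few per cent of Mercury's 0.39 AU (with orbital speed near the observed 47.9 km/s), and n = 5 lands near 1 AU, which is the agreement the text appeals to.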

For more details see the new chapter Quantum Astrophysics .

Gravitational radiation and large value of gravitational Planck constant

Gravitational waves have been discussed on both Lubos's blog and Cosmic Variance. This provided the stimulus for looking at how TGD based predictions for gravitational waves differ from the classical predictions. The article Gravitational Waves in Wikipedia provides excellent background material which I have used in the following. This posting is an extended and corrected version of the original.

The description of gravitational radiation provides a stringent test for the idea about a dark matter hierarchy with arbitrarily large values of Planck constant. In accordance with quantum classical correspondence, one can take the consistency with classical formulas as a constraint allowing one to deduce information about how dark gravitons interact with ordinary matter. In the following, standard facts about gravitational radiation are discussed first, and then the TGD based view about the situation is sketched.

A. Standard view about gravitational radiation

A.1 Gravitational radiation and the sources of gravitational waves

Classically gravitational radiation corresponds to small deviations of the space-time metric from the empty Minkowski space metric (see this). Gravitational radiation is characterized by polarization, frequency, and the amplitude of the radiation. At quantum mechanical level one speaks about gravitons characterized by spin and light-like four-momentum.

The amplitude of the gravitational radiation is proportional to the quadrupole moment of the emitting system, which excludes systems possessing a rotational axis of symmetry as classical radiators. Planetary systems produce gravitational radiation at the harmonics of the rotational frequency. The formula for the power of gravitational radiation from a planetary system is given by

P = dE/dt = (32/5)×G^4 M1^2 M2^2 (M1+M2)/R^5 (in units with c=1).

This formula can be taken as a convenient quantitative reference point.

Planetary systems are not very effective radiators. Because of their small radius and rotational asymmetry, supernovas are much better candidates in this respect. Also binary stars and pairs of black holes are good candidates. Russell Hulse and Joe Taylor received the 1993 Nobel Prize for the discovery of a binary pulsar whose orbital decay provided the first indirect proof of the existence of gravitational radiation. The Hulse-Taylor binary consists of a pulsar and a companion neutron star, with the masses of the stars around 1.4 solar masses. Their distance is only a few solar radii. Note that pulsars have a small radius, typically of order 10 km. The distance between the stars can be deduced from the Doppler shift of the signals sent by the pulsar. The radiated power is about 10^22 times that from the Earth-Sun system, basically due to the small value of R. Gravitational radiation induces a loss of total energy and a reduction of the distance between the stars, and this can be measured.
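The reference formula can be evaluated numerically. The sketch below uses the standard quadrupole result for a circular binary in SI units (with the c^5 factor restored); the separation 1.95×10^9 m is an assumed value of the order of the Hulse-Taylor semi-major axis, and since the real orbit is highly eccentric the circular formula underestimates the true power roughly tenfold, which brings the ratio up to the quoted 10^22.

```python
G, c = 6.674e-11, 2.998e8

def gw_power_circular(m1, m2, r):
    """Quadrupole-formula power for a circular binary (SI units):
    P = (32/5) G^4 (m1 m2)^2 (m1 + m2) / (c^5 r^5)."""
    return 32.0 / 5.0 * G**4 * (m1 * m2)**2 * (m1 + m2) / (c**5 * r**5)

M_sun   = 1.989e30
P_earth = gw_power_circular(5.97e24, M_sun, 1.496e11)  # Earth-Sun: ~200 W

# Hulse-Taylor-like binary: two 1.4 M_sun stars, assumed separation ~ 2e9 m.
P_binary = gw_power_circular(1.4 * M_sun, 1.4 * M_sun, 1.95e9)
ratio = P_binary / P_earth
```

Even with the circular-orbit underestimate the power ratio comes out above 10^21, supporting the order-of-magnitude claim in the text.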

A.2 How to detect gravitational radiation?

Concerning the detection of gravitational radiation, the problems are posed by the extremely weak intensity, reduced further by the large distance to the source. The amplitude of gravitational radiation is measured by the deviation of the metric from the Minkowski metric, denoted by h.

The Weber bar (see this) provides one possible manner to detect gravitational radiation. It relies on a resonant amplification of gravitational waves at the resonance frequency of the bar. For a gravitational wave with an amplitude h ≈ 10^-20 the distance between the ends of a bar with a length of 1 m should oscillate with an amplitude of 10^-20 meters, so extremely small effects are in question. For the Hulse-Taylor binary the amplitude is about h = 10^-26 at Earth. By increasing the size of the apparatus one can increase the amplitude of the stretching.

Laser interferometers provide a second possible method for detecting gravitational radiation. The test masses are at distances varying from hundreds of meters to kilometers (see this). LIGO (the Laser Interferometer Gravitational Wave Observatory) consists of three devices: the first one is located at Livingston, Louisiana, and the other two at Hanford, Washington. The system consists of light storage arms with a length of 2-4 km at an angle of 90 degrees. The vacuum tubes in the storage arms carrying the laser radiation have a length of 4 km. One arm is stretched and the other shortened, and the interferometer is ideal for detecting this. The gravitational waves should create stretchings not longer than 10^-17 meters, which is of the same order of magnitude as the intermediate gauge boson Compton length. LIGO can detect a stretching which is even shorter than this. The detected amplitudes can be as small as h ≈ 5×10^-22.
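For the strain numbers quoted above, the induced arm-length change follows from ΔL = hL/2 (a factor-of-two convention; an illustrative sketch, not from the source).

```python
def arm_displacement(h, L):
    """Change in arm length for strain amplitude h: dL = h*L/2."""
    return h * L / 2.0

dL_bar  = arm_displacement(1e-20, 1.0)     # Weber bar, 1 m
dL_ligo = arm_displacement(5e-22, 4000.0)  # LIGO arm, 4 km
```

A 4 km arm at h = 5×10^-22 moves by about 10^-18 m, which is why the kilometer-scale arms are essential.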

B. Gravitons in TGD

In this subsection two models for dark gravitons are discussed. A spherical dark graviton (or briefly giant graviton) would be emitted in quantum transitions of, say, a dark gravitational variant of the hydrogen atom. The giant graviton is expected to de-cohere into topological light rays, which are the TGD counterparts of plane waves and are expected to be detectable by human built detectors.

B.1 Gravitons in TGD

Unlike what a naive application of Mach's principle would suggest, gravitational radiation is possible in empty space in general relativity. In the TGD framework it is not possible to speak about small oscillations of the metric of the empty Minkowski space imbedded canonically to M4×CP2, since Kähler action is non-vanishing only in fourth order in the small deformation and the deviation of the induced metric is quadratic in the deformation. The same applies to induced gauge fields. Even the induced Dirac spinors associated with the modified Dirac action fixed uniquely by super-symmetry allow only vacuum solutions in this kind of background. Mathematically this means that both the perturbative path integral approach and canonical quantization fail completely in the TGD framework. This led to the vision about physics as Kähler geometry of the "world of classical worlds", with quantum states of the universe identified as the modes of classical configuration space spinor fields.

The resolution of various conceptual problems is provided by the parton picture and the identification of elementary particles as light-like 3-surfaces associated with the wormhole throats. Gauge bosons correspond to pairs of wormholes and fermions to topologically condensed CP2 type extremals having only a single wormhole throat.

Gravitons are string like objects in a well defined sense. This follows from the mere spin 2 property and the fact that partonic 2-surfaces allow only free many-fermion states. This forces gauge bosons to be wormhole contacts, whereas gravitons must be identified as pairs of wormhole contacts (bosons) or of fermions connected by flux tubes. The strong resemblance with string models encourages the belief that general relativity defines the low energy limit of the theory. Of course, if one accepts the dark matter hierarchy and dynamical Planck constant, the notion of a low energy limit itself becomes somewhat delicate.

B.2 Model for the giant graviton

Detector, giant graviton, source, and topological light ray will be denoted simply by D, G, S, and ME in the following. Consider first the model for the giant graviton.

  1. The orbital plane defines the natural quantization axis of angular momentum. The giant graviton and all dark gravitons correspond to na-fold coverings of CP2 by M4 points, which means that one has a quantum state for which the fermionic part remains invariant under the transformations φ→ φ+2π/na. This means in particular that the ordinary gravitons associated with the giant graviton have the same spin, so that the giant graviton can be regarded as a Bose-Einstein condensate in spin degrees of freedom. Only the orbital part of the state depends on the angle variables and corresponds to a partial wave with a small value of L.

  2. The total angular momentum of the giant graviton must correspond to the change of angular momentum in the quantum transition between the initial and final orbit. The orbital angular momentum in the direction of the quantization axis should be a small multiple of the dark Planck constant associated with the system formed by the giant graviton and the source. These states correspond to Bose-Einstein condensates of ordinary gravitons in an eigenstate of orbital angular momentum with ordinary Planck constant. Unless an S-wave is in question, the intensity pattern of the gravitational radiation depends on the direction in a characteristic non-classical manner. The coherence of the dark graviton regarded as a Bose-Einstein condensate of ordinary gravitons is what distinguishes the situation in the TGD framework from that in GRT.

  3. If all elementary particles, gravitons included, are maximally quantum critical systems, the giant graviton should contain r(G,S) = na/nb ordinary gravitons. This number is not an integer for nb > 1. A possible interpretation is that in this case gravitons possess fractional spin corresponding to the fact that a rotation by 2π gives a point in the nb-fold covering of an M4 point by CP2 points. In any case, this gives an estimate for the number of ordinary gravitons and the radiated energy per solid angle. This estimate follows also from energy conservation for the transition. The requirement that the average power equals the prediction of GRT allows one to estimate the geometric duration associated with the transition. The condition hbar ω = Ef - Ei is consistent with the identification of hbar for the pair of systems formed by the giant graviton and the emitting system.

B.3 Dark graviton as topological light ray

The second kind of dark graviton is an analog of a plane wave with a finite transversal cross section. TGD indeed predicts what I have called topological light rays, or massless extremals (MEs), as a very general class of solutions to field equations (see this, this, and this).

MEs are typically cylindrical structures carrying induced gauge fields and gravitational fields without dissipation and dispersion and without weakening with distance. These properties are ideal for targeted long distance communications, which inspires the hypothesis that they play a key role in living matter (see this and this) and make possible a completely new kind of communications over astrophysical distances. Large values of Planck constant resolve the problem posed by the fact that for long distances the energies of these quanta would be below the thermal energy of the receiving system.

Giant gravitons are expected to decay via de-coherence to dark gravitons of this kind having a smaller value of Planck constant, and it is these gravitons which are detected. Quantitative estimates indeed support this expectation.

At the space-time level dark gravitons at the lower levels of the hierarchy would naturally correspond to na-Riemann-sheeted (r = GmE/v0 = na/nb for m >> E) variants of topological light rays ("massless extremals", MEs), which define a very general family of solutions to field equations of TGD (see this). na-sheetedness is with respect to CP2 and means that every point of CP2 is covered by na M4 points related by a rotation by a multiple of 2π/na around the propagation direction assignable to the ME. nb-sheetedness with respect to M4 is possible but does not play a significant role in the following considerations. Using the same loose language as in the case of the giant graviton, one can say that r = na/nb copies of the same graviton have suffered a topological condensation to this kind of ME. A more precise statement would be na gravitons with the fractional unit hbar0/na for spin.

C. Detection of gravitational radiation

One should also understand how the description of gravitational radiation at the space-time level relates to the picture provided by general relativity, in order to see whether the existing measurement scenarios really measure gravitational radiation as it appears in TGD. There are more or less obvious questions to be answered (or perhaps obvious only after considerable work).

What is the value of the dark gravitational Planck constant to be assigned to the system formed by the measuring system and the gravitational radiation from a given source? Is the detection of the primary giant graviton possible by human means, or is it only possible to detect dark gravitons produced in the sequential de-coherence of the giant graviton? Do dark gravitons enhance the possibility to detect gravitational radiation, as one might expect? What are the limitations on detection due to energy conservation in the de-coherence process?

C.1 TGD counterpart for the classical description of detection process

The oscillation of the distance between two test masses defines a simplified picture about the reception of gravitational radiation. Now the ME would correspond to an na-"Riemann-sheeted" (with respect to CP2) graviton with each sheet oscillating with the same frequency. Classical interaction would suggest that the measuring system topologically condenses at the topological light ray, so that the distance between the test masses, measured along the topological light ray in the direction transverse to the direction of propagation, starts to oscillate.

Obviously the classical behavior is essentially the same as predicted by general relativity at each "Riemann sheet". If all elementary particles, gravitons included, are maximally quantum critical systems, then gravitons can be absorbed at each step of the process, and the number of absorbed gravitons and the absorbed energy are r-fold.

C.2. Sequential de-coherence

Suppose that the detecting system has some mass m and suppose that the gravitational interaction is mediated by the gravitational field body connecting the two systems.

The Planck constant must characterize the system formed by the dark graviton and the measuring system. In the case that E is comparable to m or larger, the expression for r = hbar/hbar0 must be replaced with the relativistically invariant formula in which m and E are replaced with the energies in the center of mass system. This gives

r = GmE/[v0(1+β)(1-β)^(1/2)], β = x(-1+(1+2/x)^(1/2)), x = E/2m.

Assuming m >> E0 this gives in a good approximation

r = Gm1E0/v0 = G^2 m1 m M/v0^2.

Note that in the interaction of identical masses ordinary hbar is possible for m ≤ v0^(1/2) MPl. For v0 = 2^-11 the critical mass corresponds roughly to the mass of a water blob of radius 1 mm.

One can interpret the formula by saying that de-coherence splits from the incoming dark graviton a dark piece having energy E1 = (Gm1E0/v0)ω, which makes a fraction E1/E0 = (Gm1/v0)ω of the energy of the graviton. At the n:th step of the process the system would split from the dark graviton of the previous step the fraction

En/E0 = (Gω/v0)^n ∏i mi

from the total emitted energy E0. The de-coherence process would proceed in steps such that the typical masses of the measuring systems decrease gradually as the process goes downwards in the length and time scale hierarchy. This splitting process should lead at large distances to a situation in which the original spherical dark graviton has split into ordinary gravitons with the angular distribution being the same as predicted by GRT.
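A toy iteration of the splitting rule may clarify the bookkeeping. This is an illustrative sketch, not from the source: Planck units (G = hbar0 = c = 1) are used, the step rule E_i = (m_i ω/v0) E_{i-1} is assumed, and the cascade masses and frequency are hypothetical.

```python
v0 = 2.0**-11

def decoherence_fractions(masses, omega):
    """Cumulative energy fraction E_n/E_0 after each splitting step,
    in Planck units (G = hbar0 = c = 1).

    Assumed step rule (illustrative): step i splits off the fraction
    f_i = m_i * omega / v0 of the previous dark graviton's energy."""
    fractions, E = [], 1.0
    for m in masses:
        E *= m * omega / v0
        fractions.append(E)
    return fractions

# Hypothetical cascade: masses decreasing tenfold at each step.
fr = decoherence_fractions([1e3, 1e2, 1e1], omega=1e-9)
```

The surviving fraction is the product of the per-step fractions, so it decreases monotonically down the cascade, mirroring the downwards march in the length and time scale hierarchy.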

The splitting process should stop when the condition r ≤ 1 is satisfied and the topological light ray carrying the gravitons becomes a 1-sheeted covering of M4. For E << m this gives GmE ≤ v0, so that m >> E implies E << MPl. For E >> m this gives GE^(3/2)m^(1/2) < 2v0, or

E/m ≤ (2v0/Gm^2)^(2/3).
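The E >> m cutoff can be evaluated for a macroscopic detector mass. This is an illustrative evaluation, not from the source, done in Planck units where Gm^2 = (m/M_Pl)^2; the 1 kg mass is my example.

```python
v0   = 2.0**-11
M_Pl = 2.176e-8   # Planck mass in kg

def emax_over_m(m_kg):
    """Upper bound E/m <= (2 v0 / (m/M_Pl)^2)^(2/3) from r <= 1,
    valid in the E >> m branch (Planck units, G m^2 = (m/M_Pl)^2)."""
    return (2 * v0 / (m_kg / M_Pl)**2)**(2.0 / 3.0)

bound = emax_over_m(1.0)  # for a 1 kg system
```

The bound comes out many orders of magnitude below unity, showing that for macroscopic masses the E >> m branch never actually applies and the E << m condition GmE ≤ v0 governs the cutoff instead.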

C.3. Information theoretic aspects

The value of r = hbar/hbar0 depends on the mass of the detecting system and on the energy of the graviton, which in turn depends on the de-coherence history in a corresponding manner. Therefore the total energy absorbed from the pulse codes, via the value of r, information about the masses appearing in the de-coherence process. For a process involving only a single step, the value of the source mass can be deduced from this data. This could some day provide totally new means of deducing information about the masses of distant objects: something totally new from the point of view of classical and string theories of gravitational radiation. This kind of information theoretic bonus gives a further good reason to take the notion of quantized Planck constant seriously.

If one makes the stronger assumption that the values of r correspond to ruler-and-compass rationals expressible as ratios of the number theoretically preferred integers of the form n = 2^k ∏s Fs, where the Fs are distinct Fermat primes (only five are known), very strong constraints on the masses of the systems participating in the de-coherence sequence result. Analogous conditions appear also in the Bohr orbit model for the planetary masses, and the resulting predictions were found to hold within a few per cent. One cannot therefore exclude the fascinating possibility that the de-coherence process might in a very clever manner code information about the masses of the systems involved in its steps.

C.4. During what time interval does the interaction with the dark graviton take place?

If the duration of the bunch is T= E/P, where P is the classically predicted radiation power in the detector and T the detection period, the average power during bunch is identical to that predicted by GRT. Also T would be proportional to r, and therefore code information about the masses appearing in the sequential de-coherence process.

An alternative, and more attractive, possibility is that T is always the same and corresponds to r=1. The intuitive justification is that absorption occurs simultaneously on all r "Riemann sheets". This would multiply the power by a factor r and dramatically improve the possibilities to detect gravitational radiation. The measurement philosophy based on the standard theory would however reject this kind of events, occurring with a frequency 1/r times smaller, as being due to noise (shot noise, seismic noise, and other noise from the environment). This might relate to the failure to detect gravitational radiation.

D. Quantitative model

In this subsection a rough quantitative model for the de-coherence of the giant (spherical) graviton to topological light rays (MEs) is discussed, and the situation is analyzed quantitatively for a hydrogen atom type model of the radiating system.

D.1. Leakage of the giant graviton to sectors of imbedding space with smaller value of Planck constant

Consider first the model for the leakage of giant graviton to the sectors of H with smaller Planck constant.

  1. The giant graviton leaks to sectors of H with a smaller value of Planck constant via quantum critical points common to the original and final sectors of H. If ordinary gravitons are quantum critical, they can be regarded as leakage points.

  2. It is natural to assume that the resulting dark graviton corresponds to a radial topological light ray (ME). The discrete group Zna acts naturally as rotations around the direction of propagation of the ME. The Planck constant associated with the ME-G system should, by the general criterion, be given by the general formula already described.

  3. Energy should be conserved in the leakage process. The secondary dark graviton receives the fraction ΔΩ/4π = S(ME)/4πr^2 of the energy of the giant graviton, where S(ME) is the transversal area of the ME and r the radial distance from the source. Energy conservation gives

    S(ME)/4πr^2 × hbar(G,S)ω = hbar(ME,G)ω ,

    giving

    S(ME)/4πr^2 = hbar(ME,G)/hbar(G,S) ≈ E(ME)/M(S) .

    The larger the distance, the larger the area of the ME. This means a restriction on the measurement efficiency at large distances for realistic detector sizes, since the number of detected gravitons must be proportional to the ratio S(D)/S(ME) of the areas of the detector and the ME.

D.2. The direct detection of giant graviton is not possible for long distances

Primary detection would correspond to a direct flow of energy from the giant graviton to the detector. Assume that the source is modellable using the large hbar variant of the Bohr orbit model for the hydrogen atom. Denote by r = na/nb the rational defining Planck constant as hbar = r×hbar0.

For G-S system one has

r(G,S) = GME/v0 = GMmv0 × k/n^3 ,

where k is a numerical constant of order unity and m refers to the mass of the planet. For the Hulse-Taylor binary m ≈ M holds true.

For D-G system one has

r(D,G) = GM(D)E/v0 = GM(D)mv0 × k/n^3 .

The ratio of these rationals (in general) is of order M(D)/M.

Suppose first that the detector has a disk like shape. This gives for the total number n(D) of ordinary gravitons going to the detector the estimate

n(D) = (d/r)^2 × na(G,S) = (d/r)^2 × GMmv0 × nb(G,S) × k/n^3 .

If the actual area of the detector is smaller than d^2 by a factor x, one has

n(D)→ xn(D) .

n(D) cannot be smaller than the number of ordinary gravitons estimated using the Planck constant associated with the detector: n(D)≥ na(D,G)=r(D,G)nb(D,G). This gives the condition

d/r ≥ (M(D)/M(S))^(1/2) × (nb(D,G)/nb(G,S))^(1/2) × (k/xn^3)^(1/2).

Suppose for simplicity that nb(D,G)/nb(G,S) = 1, M(D) = 10^3 kg, M(S) = 10^30 kg, and r = 200 Mpc ≈ 10^9 ly, which is a typical distance for binaries. For x=1, k=1, n=1 this gives roughly d ≥ 10^-4 ly ≈ 10^12 m, which is roughly the size of the solar system. From the energy conservation condition the entire solar system would be the natural detector in this case. Huge values of nb(G,S) would be required to improve the situation. Therefore direct detection of the giant graviton by human made detectors is excluded.
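The dominant factor in this estimate can be reproduced numerically (a sketch with the same simplifying choices x = k = n = 1 and nb(D,G)/nb(G,S) = 1 as in the text):

```python
ly  = 9.46e15    # meters per light year
r   = 1e9 * ly   # source distance ~ 200 Mpc expressed in meters
M_D = 1e3        # detector mass, kg
M_S = 1e30       # source mass, kg

# d/r >= (M_D/M_S)^(1/2) with all the remaining factors set to 1
d_min = r * (M_D / M_S)**0.5
```

The result, about 3×10^11 m, is indeed of the order of the size of the inner solar system, which is the point of the argument.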

D.3. Secondary detection

The previous argument leaves only secondary detection to be considered. Assume that the ME results from the primary de-coherence of a giant graviton. Also longer de-coherence sequences are possible, and one can deduce analogous conditions for these.

Energy conservation gives

S(D)/S(ME)× r(ME,G) = r(D,ME) .

Using the expression for S(ME) this gives an expression for S(ME) for a given detector area:

S(ME) = r(ME,G)/r(D,ME) × S(D) ≈ E(G)/M(D) × S(D) .

From S(ME) = [E(ME)/M(S)]×4πr^2 one obtains

r = (E(G)M(S)/E(ME)M(D))^(1/2) × S(D)^(1/2)

for the distance at which the ME is created. The distances of the binaries studied at LIGO are of order D = 10^24 m. Using E(G) ≈ Mv0^2 and assuming M = 10^30 kg and S(D) = 1 m^2 (just for definiteness), one obtains r ≈ 10^25 (kg/E(ME)) m. If the ME is generated at a distance r ≈ D and if one has S(ME) ≈ 10^6 m^2 (from the size scale of LIGO), one obtains from the equation for S(ME) the estimate E(ME) ≈ 10^-25 kg ≈ 10^-8 Joule.

D.4 Some quantitative estimates for gravitational quantum transitions in planetary systems

To get a concrete grasp about the situation it is useful to study the energies of dark giant gravitons in the case of planetary system assuming Bohr model.

The expressions for the energies of dark gravitons can be deduced from those of the hydrogen atom using the replacements Ze^2 → 4πGMm, hbar → GMm/v0. I have assumed that the second mass is much smaller. The energies are given by

En = E1/n^2 , E1 = (Zα)^2 m/4 = (Ze^2/4π hbar)^2 × m/4 → mv0^2/4.

E1 defines the energy scale. Note that v0 defines a characteristic velocity if one writes this expression in terms of the classical kinetic energy using the virial theorem T = -V/2 for circular orbits. This gives En = Tn = mvn^2/2 = mv0^2/4n^2, giving

vn = (v0/2^(1/2))/n. Orbital velocities are quantized as sub-harmonics of the universal velocity v0/2^(1/2) = 2^(-23/2), and the scaling of v0 by 1/n does not lead out from the set of allowed velocities.

The Bohr radius scales as r0 = hbar/(Zα m) → GM/v0^2.

For v0 = 2^(-11) this gives r0 = 2^22 GM ≈ 4×10^6 GM. In the case of the Sun this is roughly an order of magnitude above the solar radius.
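As a numerical cross-check (my own, in SI units, with standard values for the Sun's GM and radius, which are not given in the text), one can restore c and evaluate r0 = GM/v0^2 for v0 = 2^(-11):

```python
# Numeric check (my own, SI units) of the lowest Bohr radius r0 = GM/v0^2
# for the Sun, with v0 = 2^-11 in units of c as assumed in the text.

c = 2.998e8        # speed of light, m/s
GM = 1.327e20      # GM of the Sun, m^3/s^2
R_sun = 6.96e8     # solar radius, m

v0 = c / 2**11     # ~1.46e5 m/s, i.e. ~146 km/s
r0 = GM / v0**2    # lowest Bohr radius

print(f"v0 = {v0:.3e} m/s")
print(f"r0 = {r0:.3e} m = {r0/R_sun:.1f} solar radii")
```

This gives r0 of order 6×10^9 m, roughly nine solar radii, and v0 ≈ 146 km/s, close to the value 144.7 km/s used by Nottale in the original planetary Bohr quantization proposal.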

The frequency ω(n,n-k) of the dark graviton emitted in the n→n-k transition and the orbital rotation frequency ωn are given by

ω(n,n-k) = v0^3/GM × (1/(n-k)^2 - 1/n^2) ≈ kωn ,

ωn = v0^3/(GM n^3) .

The emitted frequencies at the large n limit are harmonics of the orbital rotation frequency so that quantum classical correspondence holds true. For low values of n the emitted frequencies differ from harmonics of orbital frequency.
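The approach to harmonics can be checked by evaluating the dimensionless ratio ω(n,n-k)/ωn with the Bohr-model formulas above (taking the positive difference 1/(n-k)^2 - 1/n^2). The sketch below is my own; it shows that at large n the emitted frequency is proportional to k, i.e. a harmonic of the orbital frequency up to a factor of order unity, while at small n clear deviations appear.

```python
# Illustration (my own) of the harmonic structure of the Bohr-model
# transition frequencies: with Delta(n,k) = 1/(n-k)^2 - 1/n^2 the emitted
# frequency is omega = (v0^3/GM)*Delta, while the orbital frequency is
# omega_n = (v0^3/GM)/n^3, so Delta*n^3 is the dimensionless ratio.

def ratio(n, k):
    """omega(n, n-k)/omega_n with the common prefactor v0^3/GM cancelled."""
    return (1.0 / (n - k)**2 - 1.0 / n**2) * n**3

# Large n: ratio -> 2k, a harmonic of the orbital frequency
# (proportional to k up to a factor of order unity).
print([round(ratio(1000, k), 2) for k in (1, 2, 3)])

# Small n: clear deviations from exact harmonics.
print([round(ratio(3, k), 2) for k in (1, 2)])
```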

The energy emitted in the n→n-k transition would be

E(n,n-k) = m v0^2 × (1/(n-k)^2 - 1/n^2) ,

and is obviously enormous. A single spherical dark graviton would be emitted in the transition and should decay to gravitons with smaller values of hbar. The bunch-like character of the detected radiation might serve as a signature of the process. The bunch-like character of the liberated dark gravitational energy means coherence and might play a role in the coherent locomotion of living matter. For a pair of systems with masses m = 1 kg one would have Gm^2/v0 ≈ 10^20, meaning that the exchanged dark graviton corresponds to a bunch containing about 10^20 ordinary gravitons. The energies of the graviton bunches would correspond to the differences of the gravitational energies between the initial and final configurations, which would in principle allow one to deduce the Bohr orbits between which the transition took place. Hence dark gravitons could make possible the analog of spectroscopy in astrophysical length scales.

E. Generalization to gauge interactions

The situation is expected to be essentially the same for gauge interactions. The first guess is that one has r = Q1Q2g^2/v0, where g is the coupling constant of the appropriate gauge interaction. v0 need not be the same as in the gravitational case. The value of Q1Q2g^2 for which perturbation theory fails defines a plausible estimate for v0; the naive guess would be v0 ≈ 1. In the case of gravitation this interpretation would mean that the perturbative approach fails for GM1M2 = v0. For r > 1 Planck constant is quantized with rational values, with ruler-and-compass rationals as favored values. For gauge interactions r would have rather small values. The above criterion applies to the field body connecting two gauge charged systems. One can generalize this picture to self-interactions assignable to the "personal" field body of the system. In this case the condition would read Q^2g^2/v0 >> 1.

E.1 Applications

One can imagine several applications.

  • A possible application would be to electromagnetic interactions in heavy ion collisions.

  • Gamma ray bursts might be one example of dark photons with a very large value of Planck constant. The MEs carrying gravitons could also carry gamma rays, and this would amplify the value of Planck constant for them too.

  • Atomic nuclei are good candidates for systems whose electromagnetic field body is dark. The value of hbar would be r = Z^2e^2/v0, with v0 ≈ 1. The electromagnetic field body could become dark already for Z > 3 or even for Z = 3. This suggests a connection with the nuclear string model (see this) in which A < 4 nuclei (with Z < 3) form the basic building bricks of heavier nuclei, identified as nuclear strings formed from these structures, which themselves are strings of nucleons.

  • Color confinement for light quarks might involve dark gluonic field bodies.

  • Dark photons with a large value of hbar could transmit large energies through long distances, and their phase conjugate variants could make possible a new kind of energy transfer mechanism (see this), essential in the TGD based quantum model of metabolism and having also possible technological applications. Various kinds of sharp pulses suggest themselves as a manner to produce dark bosons in the laboratory. Interestingly, after having given us alternating electricity, Tesla spent the rest of his professional life experimenting with effects generated by electric pulses. Tesla claimed to have discovered a new kind of invisible radiation, scalar wave pulses, which could make possible wireless communication and energy transfer in the scale of the globe (see this for a possible, but not the only, TGD based explanation).
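The nuclear criterion in the third bullet can be checked numerically. The sketch below is my own and assumes e^2 = 4πα (Heaviside-Lorentz convention), which the text does not state explicitly; with this convention Z^2 e^2 first exceeds unity at Z = 4, with Z = 3 close to the threshold, consistent with darkness "already for Z > 3 or even for Z = 3".

```python
# Rough check (my own) of the darkness criterion Z^2 e^2 / v0 >= 1 for
# nuclear em field bodies, taking v0 ~ 1 and e^2 = 4*pi*alpha
# (Heaviside-Lorentz units -- an assumption on my part).
import math

alpha = 1 / 137.036        # fine-structure constant
e2 = 4 * math.pi * alpha   # ~0.092

for Z in range(1, 6):
    print(Z, round(Z**2 * e2, 3))  # criterion value for each Z
```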

E.2 In what sense dark matter is dark?

The notion of dark matter as something which has only gravitational interactions brings to mind the concept of the ether and is very probably only an approximate characterization of the situation. As I have gradually developed the notion of dark matter as a hierarchy of phases of matter with increasing values of Planck constant, the naivete of this characterization has indeed become obvious.

If the proposed view is correct, dark matter is dark only in the sense that the process of receiving the dark bosons (say gravitons) mediating the interactions with other levels of the dark matter hierarchy, in particular ordinary matter, differs so dramatically from that predicted by a theory with a single value of Planck constant that the detected dark quanta are unavoidably identified as noise. Dark matter is there and interacts with ordinary matter; living matter in general, and our own EEG in particular, provides the most dramatic examples of this interaction. Hence one could consider dropping "dark matter" from the glossary altogether and replacing the attribute "dark" with the spectrum of Planck constants characterizing the particles (dark matter) and their field bodies (dark energy).

For more details see the chapter Quantum Astrophysics .

Gravity Probe B and TGD

The Gravity Probe B experiment tests the predictions of General Relativity related to gravimagnetism. Only the abstract of the talk by C. W. Francis Everitt summarizing the results is available at the time of writing. Here is a slightly reformatted abstract of the talk.

The NASA Gravity Probe B (GP-B) orbiting gyroscope test of General Relativity, launched from Vandenberg Air Force Base on 20 April, 2004, tests two consequences of Einstein's theory:

  1. the predicted 6.6 arc-s/year geodetic effect due to the motion of the gyroscope through the curved space-time around the Earth;

  2. the predicted 0.041 arc-s/year frame-dragging effect due to the rotating Earth.

The mission has required the development of cryogenic gyroscopes with drift-rates 7 orders of magnitude better than the best inertial navigation gyroscopes. These and other essential technologies, for an instrument which once launched must work perfectly, have come into being as the result of an intensive collaboration between Stanford physicists and engineers, NASA and industry. GP-B entered its science phase on August 27, 2004 and completed data collection on September 29, 2005. Analysis of the data has been in continuing progress during and since the mission. This paper will describe the main features and challenges of the experiment and announce the first results.

The Confrontation between General Relativity and Experiment gives an excellent summary of the various tests of GRT. The predictions tested by GP-B relate to gravitomagnetic effects. The field equations of general relativity in the post-Newtonian approximation, with a choice of preferred frame, can in a good approximation (g_ij = -δ_ij) be written in a form highly reminiscent of Maxwell's equations, with the g_tt component of the metric defining the counterpart of the scalar potential giving rise to the gravito-electric field, and g_ti the counterpart of the vector potential giving rise to the gravitomagnetic field.

A rotating body generates a gravitomagnetic field, so that bodies moving in the gravitomagnetic field of a rotating body experience the analog of the Lorentz force, and a gyroscope suffers a precession similar to that suffered by a magnetic dipole in a magnetic field (Thirring-Lense effect or frame-drag). Besides this there is the geodetic precession due to the motion of the gyroscope in the gravito-electric field, present even in the case of a non-rotating source, which might perhaps be understood in terms of a gravito-Faraday law. Both effects are tested by GP-B.
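The two numbers quoted in the GP-B abstract follow from the standard GR formulas for a circular polar orbit at about 642 km altitude. The following sanity check is my own (textbook formulas and standard parameter values, nothing TGD specific):

```python
# Sanity check (standard GR formulas, my own parameter values) of the two
# GP-B predictions: geodetic and frame-dragging precession for a circular
# polar orbit at ~642 km altitude.
import math

G = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8                      # speed of light, m/s
GM = 3.986e14                    # GM of Earth, m^3/s^2
a = 6.371e6 + 6.42e5             # orbit radius: Earth radius + altitude, m
J = 5.86e33                      # Earth's spin angular momentum, kg m^2/s
year = 3.156e7                   # seconds per year
arcsec = math.pi / (180 * 3600)  # radians per arcsecond

# Geodetic (de Sitter) precession rate: (3/2) (GM)^(3/2) / (c^2 a^(5/2))
geodetic = 1.5 * GM**1.5 / (c**2 * a**2.5)

# Frame-dragging (Lense-Thirring) rate, orbit-averaged for a polar orbit:
# G J / (2 c^2 a^3)
frame_drag = G * J / (2 * c**2 * a**3)

print(f"geodetic:       {geodetic * year / arcsec:.2f} arcsec/year")
print(f"frame-dragging: {frame_drag * year / arcsec:.4f} arcsec/year")
```

The results, about 6.6 and 0.041 arcsec/year, reproduce the numbers quoted in the abstract.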

In the following I say something general about how TGD and GRT differ and also something about the predictions of TGD concerning the GP-B experiment.

1. TGD and GRT

Consider first basic differences between TGD and GRT.

  1. In TGD local Lorentz invariance is replaced by exact Poincare invariance at the level of the imbedding space H = M4×CP2. Hence one can use unique global Minkowski coordinates for the space-time sheets and gets rid of the problems related to the physical identification of the preferred coordinate system.

  2. General coordinate invariance holds true in both TGD and GRT.

  3. The basic difference between GRT and TGD is that in the TGD framework the gravitational field is induced from the metric of the imbedding space. One important cosmological implication is that the imbeddings of the Robertson-Walker metric for which the gravitational mass density is critical or overcritical fail after some value of cosmic time. Also classical gauge potentials are induced from the spinor connection of H, so that the geometrization applies to all classical fields. Very strong constraints between fundamental interactions at the classical level are implied, since CP2 coordinates are the fundamental dynamical variables at the level of macroscopic space-time.

  4. Equivalence Principle holds in TGD only in a weak form, in the sense that gravitational energy momentum currents (rather than a tensor) are not identical with inertial energy momentum currents. Inertial four-momentum currents are conserved but gravitational ones are not. This explains the non-conservation of gravitational mass in cosmological time scales. At the more fundamental parton level (light-like 3-surfaces to which an almost-topological QFT is assigned) inertial four-momentum can be regarded as the time-average of the non-conserved gravitational four-momentum, so that Equivalence Principle would hold in an average sense. The non-conservation of gravitational four-momentum relates very closely to particle massivation.

2. TGD and GP-B

There are excellent reasons to expect that the Maxwellian picture holds true to a good accuracy if one uses Minkowski coordinates for the space-time surface. In fact, TGD allows static solutions with 2-D CP2 projection for which the prerequisites of the Maxwellian interpretation are satisfied (the deviations of the spatial components g_ij of the induced metric from -δ_ij are negligible).

The Schwarzschild and Reissner-Nordström metrics allow imbeddings as 4-D surfaces in H, but the Kerr metric assigned to rotating systems probably does not. If this is indeed the case, the gravimagnetic field of a rotating object in the TGD Universe cannot be identical with the exact prediction of GRT but could be so in the post-Newtonian approximation. Scalar and vector potential correspond to four field quantities, and the number of CP2 coordinates is four. Imbedding as a vacuum extremal with 2-D CP2 projection guarantees automatically the consistency with the field equations but requires the orthogonality of gravito-electric and -magnetic fields. This holds true in the post-Newtonian approximation in the situation considered.

This raises the possibility that, apart from restrictions caused by the failure of the global imbedding at short distances, one can imbed post-Newtonian approximations into H in the approximation g_ij = -δ_ij. If so, the predictions for the Thirring-Lense effect would not differ measurably from those of GRT. The predictions for the geodetic precession, involving only the scalar potential, would be identical in any case.

The imbeddability in the post-Newtonian approximation is however questionable if one assumes the vacuum extremal property, and small deformations of the Schwarzschild metric indeed predict a gravitomagnetic field differing from the dipole form.

3. Simplest candidate for the metric of a rotating star

The simplest situation for the metric of a rotating star is obtained by assuming a vacuum extremal imbeddable into M4×S2, where S2 is the geodesic sphere of CP2 with vanishing homological charge and induced Kähler form. Use coordinates (Θ,Φ) for S2 and spherical coordinates (t,r,θ,φ) in space-time, identifiable as M4 spherical coordinates apart from a scaling and an r-dependent shift in the time coordinate.

  1. For the Schwarzschild metric one has Φ = ωt and


    u= sin(Θ)= f(r),

    f is fixed almost uniquely by the imbedding of the Schwarzschild metric, and asymptotically one must have

    u = u0 + C/r

    in order to obtain the g_tt = 1-2GM/r (= 1+Φ_gr) behavior for the induced metric.

  2. The small deformation giving rise to the gravitomagnetic field and the metric of a rotating star is given by

    Φ = ωt + nφ .

    There is an obvious analogy with the phase of a Schrödinger amplitude for an angular momentum eigenstate with Lz = n, which conforms with the quantum classical correspondence.

  3. The non-vanishing component of A_g is proportional to the gravitational potential Φ_gr:

    A_gφ = g_tφ = (n/ω)Φ_gr .

  4. A little calculation gives for the magnitude of B_gθ, from the curl of A_g, the expression

    B_gθ = (n/ω) × (1/sin(θ)) × 2GM/r^3 .

    In the plane θ = π/2 one has a dipole field, and the value of n is fixed by the value of the angular momentum of the star.

  5. Quantization of angular momentum is obtained for a given value of ω. This becomes clear by comparing the field with dipole field in θ= π/2 plane. Note that GJ, where J is angular momentum, takes the role of magnetic moment in Bg (see this). ω appears as a free parameter analogous to energy in the imbedding and means that the unit of angular momentum varies. In TGD framework this could be interpreted in terms of dynamical Planck constant having in the most general case any rational value but having a spectrum of number theoretically preferred values. Dark matter is interpreted as phases with large value of Planck constant which means possibility of macroscopic quantum coherence even in astrophysical length scales. Dark matter would induce quantum like effects on visible matter. For instance, the periodicity of small n states might be visible as patterns of visible matter with discrete rotational symmetry (could this relate to strange goings on in Saturn?).

4. Comparison with the dipole field

The simplest candidate for the gravitomagnetic field differs in many respects from a dipole field.

  1. The gravitomagnetic field has 1/r^3 dependence, so that the distance dependence is the same as in GRT.

  2. The gravitomagnetic flux flows along the z-axis in opposite directions on the two sides of the z=0 plane, emanates radially from the z-axis, and flows along the spherical surface. Hence the radial component of B_g would vanish, whereas for the dipole field it would be proportional to cos(θ).

  3. The dependence on the angle θ of the spherical coordinates is 1/sin(θ) (this conforms with the radial flux from the z-axis), whereas for the dipole field the magnitude of B_gθ has the dependence sin(θ). In the z=0 plane the magnitude and direction coincide with those of the dipole field, so that satellites moving at the gravitomagnetic equator would not distinguish between GRT and TGD, since also the radial component of B_g vanishes there.

  4. For other orbits the effects would be non-trivial, and in the vicinity of the flux tube formally arbitrarily large effects are predicted because of the 1/sin(θ) behavior, whereas GRT predicts sin(θ) behavior. Therefore TGD could be tested using satellites near the gravitomagnetic north pole.

  5. The strong gravimagnetic field near the poles causes a gravimagnetic Lorentz force and could be responsible for the formation of jets emanating from black-hole-like structures and for galactic jets. This additional force might also have played some role in the formation of planetary systems, and the plane in which the planets move might correspond to the plane θ = π/2, where the gravimagnetic force has no component orthogonal to the plane. The same applies in the case of galaxies.
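The contrast between the two angular profiles can be made concrete with a small numerical comparison (my own; both profiles are normalized to coincide at the equator, as stated above):

```python
# Comparison (my own, arbitrary common normalization) of the theta-dependence
# of B_g_theta in the two models: ~1/sin(theta) for the proposed imbedding
# versus ~sin(theta) for the GRT dipole field.  Both are normalized to
# coincide at the equator theta = pi/2.
import math

def b_tgd(theta):
    """Proposed profile; grows without bound toward the poles."""
    return 1.0 / math.sin(theta)

def b_dipole(theta):
    """Standard dipole profile; vanishes toward the poles."""
    return math.sin(theta)

for deg in (90, 60, 30, 10, 1):
    th = math.radians(deg)
    print(deg, round(b_tgd(th), 2), round(b_dipole(th), 2))
```

At the equator the two coincide, while already at 10 degrees from the pole the proposed field is larger than the dipole field by a factor 1/sin^2(θ) ≈ 33.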

5. Consistency with the model for the asymptotic state of star

In TGD framework natural candidates for the asymptotic states of the star are solutions of field equations for which gravitational four-momentum is locally conserved. Vacuum extremals must therefore satisfy the field equations resulting from the variation of Einstein's action (possibly with cosmological constant) with respect to the induced metric. Quite remarkably, the solution representing asymptotic state of the star is necessarily rotating (see this).

The proposed picture is consistent with the model of the asymptotic state of star. Also the magnetic parts of ordinary gauge fields have essentially similar behavior. This is actually obvious since CP2 coordinates are fundamental dynamical variables and the field line topologies of induced gauge fields and induced metric are therefore very closely related.

As already discussed, the physicists M. Tajmar and C. J. Matos and their collaborators working at ESA (European Space Agency) have made the amazing claim of having detected strong gravimagnetism, with a gravimagnetic field whose magnitude is about 20 orders of magnitude higher than predicted by General Relativity. Hence there are some reasons to think that gravimagnetic fields might have a surprise in store.

Addition: Lubos Motl's blog reports that the error bars are still twice the size of the predicted frame-dragging effect. Already this information would have killed the (strongly) TGD inspired model unless the satellite had been at the equator!

For details and background see the chapter TGD and GRT.

Machian Principle and TGD

Machian Principle has not played any role in the development of TGD. Hence it is somewhat surprising that this principle allows several interpretations in TGD framework.

1. Non-conserved gravitational four-momentum and conserved inertial momentum at 4-D space-time level

Consider first the situation at the level of classical theory identifiable in terms of classical dynamics for space-time surfaces.

  1. In the TGD framework one must distinguish between non-conserved gravitational four-momentum and conserved inertial four-momentum, identified as the conserved Poincare four-momentum at the level of 4-D space-time dynamics and associated with the preferred extremals of Kähler action defining the analogs of Bohr orbits (no path integral over all possible space-time surfaces but a functional integral over light-like partonic 3-surfaces). A collection of conserved vector currents rather than a tensor results, and this resolves the problems due to the ill-definedness of four-momentum in General Relativity, which served as the primary motivation for the identification of space-times as 4-surfaces of H=M4×CP2.

  2. Non-conserved gravitational four-momentum densities can be identified by contracting a linear combination of the Einstein tensor and the metric tensor (cosmological constant) with the Killing vectors of M4 translations. A collection of in general non-conserved 4-currents results, but gravitational four-momentum is well-defined quite generally, unlike in General Relativity. Only for the asymptotic stationary cosmologies, corresponding to extremals of the curvature scalar plus constant for the induced metric, is gravitational four-momentum conserved.

2. Inertial four-momentum as the average of gravitational four-momentum

The first question is how non-conserved gravitational and conserved inertial four-momentum relate to each other. Certainly Equivalence Principle in a strong form cannot hold true.

  1. In zero energy ontology the total quantum numbers of states vanish, and the positive and negative energy parts of states have an interpretation as initial and final states of a particle reaction at the elementary particle level, where the geometro-temporal distance between them is short (TGD inspired theory of consciousness forces one to distinguish between geometric time and subjective time). Positive energy ontology emerges as an effective ontology at the observational level when the temporal distance between the positive and negative energy parts of the state is long as compared to the time scale of the conscious observer. The recent understanding of bosons as wormhole contacts between space-time sheets with positive and negative time orientation suggests that the two space-time sheets in question correspond to the positive and negative energy parts of the state. This brings to mind the picture of Connes about the Higgs mechanism involving two copies of Minkowski space.

  2. The intuitive idea is that the conserved inertial four-momentum assignable to the positive energy part of the state is the average of the non-conserved gravitational four momentum and depends on the p-adic length scale characterizing the pair of space-time sheets connecting positive and negative energy states. The average is over a p-adic time scale characterizing the temporal span of the space-time sheet. This average is coded by the classical dynamics for the preferred extremal of Kähler action defining the generalized Bohr orbit.

3. Non-conserved gravitational four-momentum and conserved inertial momentum at parton level

A deeper level description of the situation is achieved at parton level. For light-like partonic 3-surfaces the dynamics is defined by almost topological QFT defined by Chern-Simons action for the induced Kähler form. The extrema have 2-D CP2 projection. Light-likeness implies the replacement of "topological" with "almost topological" by bringing in the notions of metric and four-momentum.

  1. The world of classical worlds (WCW) decomposes into a union of sub-WCW:s associated with preferred points of the imbedding space H = M4+/-×CP2. The selection of a preferred point of H means a selection of the tip of a future/past directed light-cone in the case of M4+/- and a selection of a U(2) subgroup of SU(3) in the case of CP2. There is a further selection fixing the rest system and the angular momentum quantization axis (a preferred plane in M4 defining non-physical polarizations for massless bosons) and the quantization axes of color isospin and hyper-charge. That configuration space geometry reflects these choices conforms with quantum-classical correspondence requiring that everything quantal has a geometric correlate.

  2. At the level of S-matrix the preferred points of H defining past/future directed light-cones correspond to the arguments of the n-point function. In the construction of S-matrix one integrates over the tips of the light-cones parameterizing sub-WCW:s consisting of partonic 3-surfaces residing inside these light-cones (×CP2). Hence a full Poincare invariance results, meaning the emergence of conserved four-momentum identifiable as the inertial four-momentum assignable to the preferred extremals of Kähler action defining Bohr orbits. These light-cones give rise to a Russian doll cosmology with cosmologies within cosmologies, such that elementary particles formally correspond to the lowest level in the hierarchy.

  3. Parton dynamics is associated with a given future/past light-cone. At parton level one has Lorentz invariance and only the mass squared is conserved for the partonic time evolution dictated by random light-likeness. There is a very delicate point involved here. Partonic four-momentum is non-vanishing only if CP2 Kähler gauge potential has also M4+/- component which is pure gauge. Mass squared is conserved (Lorentz invariance) if this component is in the direction of proper time coordinate a of the light-cone and if its magnitude is constant. From the point of view of spinor structure M4+/- and CP2 are not totally decoupled. This does not break gauge invariance since Kähler gauge potential does not give rise to U(1) gauge degeneracy but only to 4-D spin glass degeneracy.

  4. The natural identification of the conserved classical partonic four-momentum is as the non-conserved gravitational four-momentum defined for a space-time sheet characterized by a p-adic time scale. In accordance with zero energy ontology, a length scale dependent notion is in question. At single parton level Equivalence Principle would state that the conserved gravitational mass is equal to inertial mass but would not require equivalence of four-momenta.

4. Inertial four-momentum as average of partonic four-momentum and p-adic thermodynamics
  1. The natural hypothesis is that inertial four-momentum at the partonic level is the temporal average of the non-conserved gravitational four-momentum. This implies particle massivation in general, since the motion of a light-like parton is in general random zitterbewegung so that only mass squared is conserved. The average is always defined in some time scale identifiable as the p-adic time scale defining the mass scale via Uncertainty Principle. There is actually a hierarchy of p-adic time scales coming as powers of p. Inertial mass vanishes only if the motion is non-random in the time scale considered, and this never occurs exactly, even for the photon and graviton.

  2. The quantitative formulation of the averaging relies on p-adic thermodynamics for the integer valued conformal weight characterizing the particle. By number theoretic universality this description must be equivalent to real thermodynamics with quantized temperature. The quantization of the mass scale is purely number theoretical: p-adic thermodynamics based on the standard Boltzmann weight exp(-L0/T) does not make sense, since exp(x) always has unit p-adic norm so that the partition sum does not converge. One can however replace this Boltzmann weight with p^(L0/Tp), which exists p-adically for Tp = 1/n, n = 1,2,..., if L0 is a generator of conformal scaling having a non-negative integer spectrum. This predicts a discrete spectrum of p-adic mass scales, and real thermodynamics is obtained by reversing the sign of the exponent. Assuming a reasonable cutoff on the conformal weight (only the two lowest terms give non-negligible contributions to the thermal average) and a prescription for the mapping of the p-adic mass squared to its real counterpart, the two descriptions are equivalent. Note that mass squared is the average of the conformal weight rather than the average of four-momentum squared, so that Lorentz invariance is not lost. Note also that in the construction of S-matrix four-momenta emerge only via the Fourier transform of the n-point function and do not appear at the fundamental vertices.

  3. Also the coupling to Higgs gives a contribution to the mass. Higgs corresponds to a wormhole contact with wormhole throats carrying fermion and antifermion quantum numbers as do all gauge bosons. Higgs expectation should have space-time correlate appearing in the modified Dirac operator. A good candidate is p-adic thermal average for the generalized eigenvalue of the modified Dirac operator vanishing for the zero modes. Thermal mass squared as opposed to Higgs contribution would correspond to the average of integer valued conformal weight. For bosons (in particular Higgs boson!) it is simply the sum of expectations for the two wormhole throats.

  4. Both contributions are basically thermal, which raises the question whether the interpretation in terms of a coherent state of Higgs field (an essentially quantal notion) is really appropriate, unless also thermal states can be regarded as genuine quantum states. The matrix characterizing time-like entanglement for the zero energy quantum state can also be a thermal S-matrix with respect to the incoming and outgoing partons (hyper-finite factors of type III allow the analog of thermal QFT at the level of quantum states). This allows also a first principle description of p-adic thermodynamics.
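The dominance of the two lowest conformal weights in p-adic thermodynamics, mentioned in the second point above, is easy to illustrate with the real counterpart weight p^(-L0/Tp). The toy computation below is mine: the degeneracies D(n) are made up for illustration, and p is chosen to be the Mersenne prime M_127 = 2^127 - 1 familiar from p-adic mass calculations.

```python
# Toy illustration (my own; the degeneracies D(n) are hypothetical) of the
# real counterpart of p-adic thermodynamics: Boltzmann weight p^(-n/Tp)
# with Tp = 1 and p a large prime, here M_127 = 2^127 - 1.  Because p is
# huge, only the two lowest conformal weights n = 0, 1 contribute
# appreciably to the thermal average of the conformal weight L0.

p = 2.0**127 - 1               # ~1.7e38 (float approximation suffices here)
D = {0: 1, 1: 3, 2: 5, 3: 7}   # hypothetical degeneracies of weight n

Z = sum(d * p**(-n) for n, d in D.items())                 # partition sum
mean_L0 = sum(n * d * p**(-n) for n, d in D.items()) / Z   # thermal average

# Contribution of n >= 2 terms relative to the n = 1 term:
tail = sum(n * D[n] * p**(-n) for n in (2, 3)) / (D[1] * p**(-1))

print(f"<L0> = {mean_L0:.3e}")
print(f"relative n>=2 contribution: {tail:.3e}")
```

The n = 1 term already gives the average to enormous accuracy; the n ≥ 2 terms are suppressed by further powers of 1/p, which is the content of the "two lowest terms" remark above.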

5. Various interpretations of Machian Principle

TGD allows several interpretations of Machian Principle and leads also to a generalization of the Principle.

  1. Machian Principle is true in the sense that the notion of a completely free particle is not sensible. A free CP2 type extremal (having a random light-like curve as M4 projection) is a pure vacuum extremal, and only its topological condensation creates a wormhole throat (two of them) in the case of a fermion (boson). Topological condensation to space-time sheet(s) generates all quantum numbers, not only mass. Both thermal massivation and massivation via the generation of a coherent state of Higgs type wormhole contacts are due to topological condensation.

  2. Machian Principle has also an interpretation in terms of p-adic physics. Most points of p-adic space-time sheets have an infinite distance from the tip of the light-cone in the real sense. The discrete algebraic intersection of the p-adic space-time sheet with the real space-time sheet gives rise to an effective p-adicity of the topology of the real space-time sheet if the number of these points is large enough. Hence p-adic thermodynamics with a given p, also assigned to the partonic 3-surface by the modified Dirac operator, makes sense. The continuity and smoothness of the dynamics corresponds to p-adic fractality and long range correlations for the real dynamics and allows one to apply p-adic thermodynamics in the real context. The p-adic variant of Machian Principle says that the p-adic dynamics of cognition and intentionality, in a literally infinite scale in the real sense, dictates the values of masses among other things.

  3. A further interpretation of Machian Principle is in terms of the number theoretic Brahman=Atman identity or, equivalently, Algebraic Holography. This principle states that the number theoretic structure of the space-time point is so rich, due to the presence of an infinite hierarchy of real units obtained as ratios of infinite integers, that a single space-time point can represent the entire world of classical worlds. This could also be generalized to a criterion for good mathematics: only those mathematical structures which are representable in the set of real units associated with the coordinates of a single space-time point are really fundamental.
For more details see the end of the chapter The Relationship Between TGD and GRT of "Classical Physics in Many-Sheeted Space-Time".
