What's new in

Physics in Many-Sheeted Space-Time

Note: Newest contributions are at the top!



Year 2016



Antimatter as dark matter?

It has been found at CERN (see this) that matter and antimatter atoms show no differences in the energies of their excited states. This is predicted by CPT symmetry. Notice however that CP and T can be broken separately, and this is indeed the case: the kaon is a classic example in particle physics. The neutral kaon and anti-kaon behave slightly differently.

This finding forces one to repeat an old question: where does the antimatter reside? Or does it exist at all?

GUTs predicted that baryon and lepton number are not conserved separately and suggested a solution to the empirical absence of antimatter. GUTs have however been dead for years, and there is actually no proposal for a solution to the matter-antimatter asymmetry in the framework of mainstream theories (indeed, there are no mainstream theories after the death of superstring theories, which also assumed GUTs as their low energy limits!).

In TGD framework many-sheeted space-time suggests a possible solution to the problem: matter and antimatter reside at different space-time sheets. One possibility is that antimatter corresponds to dark matter in the TGD sense, that is a phase with heff=n× h, n=1,2,3,..., such that the value of n for antimatter differs from that for visible matter. Matter and antimatter would have no direct interactions and would interact only via classical fields, or by emission of say photons by matter (antimatter) suffering a phase transition changing the value of heff before absorption by antimatter (matter). This could be a rather rare process. Bio-photons could be produced from dark photons by this process, and this is assumed in the TGD based model of living matter.

What could the value of n be for ordinary visible matter? The naive guess is n=1, the smallest possible value. Randell Mills has however claimed the existence of scaled-down hydrogen atoms - Mills calls them hydrinos - with ground state binding energy considerably higher than that of the hydrogen atom. The experimental support for the claim has been published in respected journals, and Mills' company is developing a new energy technology based on the energy liberated in the transition to the hydrino state.

These findings can be understood in TGD framework if one actually has n=6 for visible atoms and n=1, 2, or 3 for hydrinos. Hydrino states would be stabilized in the presence of some catalysts. See this.

The model suggests a universal catalyst action. Among other things, catalyst action requires that a reacting molecule gets the energy needed to overcome the potential barrier that makes the reaction very slow. If an atom - say (dark) hydrogen - in the catalyst suffers a phase transition to hydrino (hydrogen with a smaller value of heff/h), it liberates binding energy, and if one of the reactant molecules receives it, it can overcome the barrier. After the reaction the energy can be sent back and the catalyst hydrino returns to the ordinary hydrogen state. The condition that the dark binding energy is above the thermal energy gives the condition n≤ 32 on the value of heff/h=n. The size scale of the largest allowed dark atom would be about 100 nm, 10 times the thickness of the cell membrane.
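The scalings invoked above are easy to sanity-check numerically. A minimal sketch in Python, assuming only textbook hydrogen values and the stated scalings (Bohr radius ∝ n2, binding energy ∝ 1/n2 for heff = n× h); the numbers are illustrative, not a TGD computation:

```python
# Sanity check of the n-scalings quoted above (a sketch, not TGD-specific code).
# Assumptions: Bohr radius scales as n**2 and binding energy as 1/n**2
# when h -> heff = n*h; standard hydrogen values are used.
import math

A0_NM = 0.0529          # hydrogen Bohr radius [nm]
E0_EV = 13.6            # hydrogen ground-state binding energy [eV]
KT_EV = 0.0259          # thermal energy at T = 300 K [eV]

def dark_radius_nm(n):
    """Bohr radius of a dark atom with heff = n*h."""
    return A0_NM * n**2

def dark_binding_ev(n):
    """Ground-state binding energy of a dark atom with heff = n*h."""
    return E0_EV / n**2

# Largest n for which the dark binding energy still exceeds thermal energy.
n_max = int(math.sqrt(E0_EV / KT_EV))
print(n_max)                      # 22: same order as the quoted bound n <= 32
print(dark_radius_nm(32))         # ~54 nm, order of the quoted ~100 nm
print(dark_binding_ev(32))        # ~0.013 eV, comparable to thermal energy
```

The crossover where the dark binding energy falls to thermal energy lies near n ≈ 23 at room temperature, consistent in order of magnitude with the quoted bound n ≤ 32, and the n = 32 radius of about 54 nm matches the quoted 100 nm size scale at the order-of-magnitude level.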

The notion of the high energy phosphate bond is a somewhat mysterious concept, manifesting as the ability to provide energy in the ATP to ADP transition. There are claims that no such bond exists. I have spent a considerable amount of time pondering this problem. Could phosphate contain a (dark) hydrogen atom able to go to the hydrino state (a state with smaller value of heff/h) and liberate the binding energy? Could the decay of ATP to ADP produce the original, possibly dark, hydrogen? Metabolic energy would be needed to kick it back to the ordinary bond in ATP.

So: could it be that one has n=6 for stable matter and a different n for stable antimatter? Could the small CP breaking cause this?

For background see the chapter More about TGD inspired cosmology.



Minimal surface cosmology

Before the discovery of the twistor lift, TGD inspired cosmology was based on the assumption that vacuum extremals provide a good estimate for the solutions of Einstein's equations at the GRT limit of TGD. One can find imbeddings of Robertson-Walker type metrics as vacuum extremals, and the general finding is that cosmologies with super-critical and critical mass density have finite duration, after which the mass density would become infinite: the period of course ends before this. The interpretation would be in terms of the emergence of a new space-time sheet at which matter, represented by smaller space-time sheets, suffers topological condensation. Critical (over-critical) cosmologies have SO(3)× E3 (SO(4)) as isometry group, the only parameter characterizing them is their duration, and the CP2 projection is at a homologically trivial geodesic sphere S2: the condition that the contribution from S2 to the grr component transforms the hyperbolic 3-metric to that of E3 or S3 fixes these cosmologies almost completely. Sub-critical cosmologies have a one-dimensional CP2 projection.

Do Robertson-Walker cosmologies have minimal surface representatives? Recall that minimal surface equations read as

Dα(gαβ∂βhk g1/2) = ∂α[gαβ∂βhk g1/2] + {α k m} gαβ∂βhm g1/2 = 0 ,

{α k m} = {l k m} ∂αhl .

Sub-critical minimal surface cosmologies would correspond to X4⊂ M4× S1. The natural coordinates are Robertson-Walker coordinates (a, r, θ, φ), which coincide with the light-cone coordinates a=[(m0)2-rM2]1/2, r= rM/a of the light-cone M4+. They are related to spherical Minkowski coordinates (m0,rM,θ,φ) by m0=a(1+r2)1/2, rM= ar. One has β = rM/m0 = r/(1+r2)1/2, so that r corresponds to the Lorentz factor: r= γβ = β/(1-β2)1/2.
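The coordinate relations above are plain special-relativistic kinematics and can be verified numerically; the following sketch checks the stated identities for arbitrary test values (nothing TGD-specific is assumed):

```python
# Check the light-cone coordinate relations quoted above:
#   m0 = a*(1+r^2)^(1/2), rM = a*r  =>  a^2 = (m0)^2 - rM^2,
#   beta = rM/m0 = r/(1+r^2)^(1/2),  r = gamma*beta.
import math

a, r = 2.0, 0.5                        # arbitrary test values
m0 = a * math.sqrt(1 + r**2)
rM = a * r

assert math.isclose(m0**2 - rM**2, a**2)

beta = rM / m0
assert math.isclose(beta, r / math.sqrt(1 + r**2))

gamma = 1 / math.sqrt(1 - beta**2)
assert math.isclose(gamma * beta, r)   # r is the Lorentz factor gamma*beta
print("all identities hold")
```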

The metric of M4+ is given by the diagonal form [gaa=1, grr=a2/(1+r2), gθθ= a2r2, gφφ= a2r2sin2(θ)]. One can use the coordinates of M4+ also for X4.

The ansatz for the minimal surface reads Φ= f(a). For f(a)=constant one obtains just the flat M4+. In the non-trivial case one has gaa= 1-R2(df/da)2, so that the contravariant component becomes gaa=1/(1-R2(df/da)2). The metric determinant is scaled accordingly: gaa1/2 =1 → (1-R2(df/da)2)1/2. Otherwise the field equations are the same as for M4+. A little calculation shows that they are not satisfied unless one has gaa=1.

Also the minimal surface imbeddings of critical and over-critical cosmologies are impossible. The reason is that criticality alone fixes these cosmologies almost uniquely, and this is too much for the minimal surface property.

Thus one can have only the trivial cosmology M4+ carrying dark energy density as a minimal surface solution! This obviously raises several questions.

  1. Could the Λ=0 case, for which the action reduces to Kähler action, provide a single-sheeted model for Robertson-Walker cosmologies at the GRT limit of TGD, for which many-sheeted space-time is replaced with a slightly curved region of M4? Could Λ=0 correspond to a genuine phase present in TGD, as the formal generalization of the mathematicians' view of reals as the p=∞ p-adic number suggests? The p-adic length scale would be strictly infinite, implying that Λ∝ 1/p vanishes.
  2. A second possibility is that TGD is quantum critical in a strong sense. Not only 3-space but the entire space-time surface is flat and thus M4+. Only the local gravitational fields created by topologically condensed space-time surfaces would make it curved, but would not cause smooth expansion. The expansion would take place as quantum phase transitions reducing the value of Λ ∝ 1/p as the p-adic prime p increases. p-Adic length scale hypothesis suggests that the preferred primes are near but below powers of 2: p≈ 2k for some integers k. This led years ago to a model for Expanding Earth.
  3. This picture would explain why individual astrophysical objects have not been observed to expand smoothly (except possibly in these phase transitions) but participate in cosmic expansion only in the sense that the distances to other objects increase. The smaller space-time sheets glued to a given space-time sheet, preserving their size, would emanate from the tip of M4+ for the given sheet.
  4. RW cosmology should emerge in the idealization that the jerk-wise expansion by quantum phase transitions reducing the value of Λ (by scalings of 2 by the p-adic length scale hypothesis) can be approximated by a smooth cosmological expansion.
One should understand why Robertson-Walker cosmology is such a good approximation to this picture. Consider first cosmic redshift.
  1. The cosmic recession velocity is defined from the redshift by the Doppler formula.

    z= [(1+β)/(1-β)]1/2-1 ≈ β = v/c .

    In TGD framework this should correspond to the velocity defined in terms of the coordinate r of the object.

    Hubble law tells that the recession velocity is proportional to the proper distance D from the source. One has

    v= HD , H= (da/dt)/a= 1/(gaa1/2a) .

    This brings in the dependence on the Robertson-Walker metric.

    For M4+ one has a=t, gaa=1, and H=1/a. The experimental fact is however that the value of H is larger for non-empty RW cosmologies having gaa<1. How to overcome this problem?

  2. To understand this one must first understand the interpretation of gravitational redshift. In TGD framework gravitational redshift is a property of the observer rather than the source. The point is that the tangent space of the 3-surface assignable to the observer is related by a Lorentz boost to that associated with the source. This implies that the four-momentum of radiation from the source is boosted by this same boost. Redshift would mean that the Lorentz boost reduces the momentum from the real one. Therefore redshift would be consistent with momentum conservation implied by Poincare symmetry.

    gaa, for which a corresponds to the value of cosmic time for the observer, should characterize the boost of the observer relative to the source. The natural guess is that the boost is characterized by the value of gtt in a sufficiently large rest system assignable to the observer, with t taken to be the M4 coordinate m0. The value of gtt fluctuates due to the presence of local gravitational fields. At the GRT limit gaa would correspond to the average value of gtt.

  3. There is evidence that H is not the same in short and long scales. This could be understood if the radiation arrives along different space-time sheets in these two situations.
  4. If this picture is correct, the GRT description of cosmology is an effective description taking into account the effect of local gravitation on the redshift, which without it would be just the M4+ redshift.
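The formulas above can be combined into a small numerical illustration: with H = 1/(gaa1/2a), gaa < 1 indeed raises H above the empty-cosmology value 1/a. A sketch (the sample value gaa = 0.25 is purely illustrative):

```python
# Doppler redshift and the Hubble parameter H = 1/(sqrt(g_aa)*a)
# as defined in the text. g_aa = 1 reproduces the empty cosmology H = 1/a.
import math

def redshift(beta):
    """Relativistic Doppler redshift z = sqrt((1+beta)/(1-beta)) - 1."""
    return math.sqrt((1 + beta) / (1 - beta)) - 1

def hubble(a, g_aa=1.0):
    """Hubble parameter for cosmic time a and metric component g_aa."""
    return 1.0 / (math.sqrt(g_aa) * a)

print(round(redshift(0.01), 5))   # 0.01005: z ~ beta for small velocities
print(hubble(10.0))               # 0.1 = 1/a for the empty cosmology
print(hubble(10.0, g_aa=0.25))    # 0.2: g_aa < 1 increases H, as stated
```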
Einstein's equations for RW cosmology should approximately code for the cosmic time dependence of the mass density at a given slightly deformed piece of M4+ representing a particular sub-cosmology expanding in a jerk-wise manner.
  1. Many-sheeted space-time implies a hierarchy of cosmologies in different p-adic length scales and with cosmological constant Λ ∝ 1/p so that vacuum energy density is smaller in long scale cosmologies and behaves on the average as 1/a2 where a characterizes the scale of the cosmology. In zero energy ontology given scale corresponds to causal diamond (CD) with size characterized by a defining the size scale for the distance between the tips of CD.
  2. For a comoving volume with constant value of the coordinate radius r, the radius of the volume increases as a. For constant Λ the vacuum energy in the comoving volume would increase as a3. This is in sharp conflict with the fact that the mass decreases as 1/a for radiation dominated cosmology, is constant for matter dominated cosmology, and is proportional to a for string dominated cosmology.

    The physical resolution of the problem is rather obvious. Space-time sheets representing topologically condensed matter have finite size. They do not expand, except possibly in a jerk-wise manner, but in this process Λ is reduced - on the average like 1/a2.

    If the sheets are smaller than the cosmological space-time sheet in the scale considered and do not lose energy by radiation, they represent matter dominated cosmology emanating from the vertex of M4+. The mass of the comoving volume remains constant.

    If they are radiation dominated and in thermal equilibrium, they lose energy by radiation and the energy of the volume behaves like 1/a.

    Cosmic strings and magnetic flux tubes have a size larger than that of the space-time sheet representing the cosmology. The string as a linear structure has energy proportional to a for a fixed value of Λ, as in string dominated cosmology. The reduction of Λ, decreasing on the average like 1/a2, implies that the contribution of a given string is reduced like 1/a on the average, as in radiation dominated cosmology.

  3. The GRT limit would code for these behaviours of mass density and pressure, identified as scalars in GRT cosmology, in terms of Einstein's equations. The time dependence of gaa would code for the density of the topologically condensed matter, its pressure, and the dark energy at a given level of the hierarchy. The vanishing of the covariant divergence of the energy momentum tensor would be a remnant of Poincare invariance and give Einstein's equations with a cosmological term.
  4. Why would the GRT limit involve only the RW cosmologies allowing imbedding as vacuum extremals of Kähler action? Can one demand continuity in the sense that TGD cosmology at the p→ ∞ limit corresponds to GRT cosmology with cosmological solutions identifiable as vacuum extremals? If this is assumed, the earlier results are obtained. In particular, one obtains the critical cosmology with 2-D CP2 projection, assumed to provide a GRT model for quantum phase transitions changing the value of Λ.
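The a-dependences discussed above can be tabulated directly. The following is only bookkeeping for the quoted scalings (energy density ∝ a-4 for radiation and ∝ a-3 for matter in a comoving volume ∝ a3; string energy ∝ Λ a with Λ ∝ 1/a2), not a dynamical computation:

```python
# Bookkeeping for the a-dependences quoted above: energy in an arbitrary
# comoving volume (volume ~ a^3) for the densities discussed in the text.
def radiation_energy(a):
    """rho ~ a^-4 times volume ~ a^3: decreases as 1/a."""
    return a**-1

def matter_energy(a):
    """rho ~ a^-3 times volume ~ a^3: constant."""
    return 1.0

def string_energy(a, Lambda_fixed=True):
    """Linear structure: length ~ a. With fixed Lambda the energy grows ~ a;
    with Lambda ~ 1/a^2 the contribution is reduced to ~ 1/a, as stated."""
    return a if Lambda_fixed else a * a**-2

for a in (1.0, 2.0, 4.0):
    print(a, radiation_energy(a), matter_energy(a),
          string_energy(a), string_energy(a, Lambda_fixed=False))
```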
If this picture is correct, TGD inspired cosmology at the level of many-sheeted space-time would be extremely simple. The new element would be many-sheetedness, which would lead to a more complex description provided by the GRT limit. This limit would however lose the information about many-sheetedness and lead to anomalies such as two different Hubble constants.

See the new chapter Can one apply Occam's razor as a general purpose debunking argument to TGD? or article with the same title.



LIGO blackhole anomaly and minimal surface model for star

The TGD inspired model of a star as a minimal surface with stationary spherically symmetric metric strongly suggests that the analog of the blackhole metric has two horizons. The outer horizon is analogous to the Schwarzschild horizon in the sense that the roles of the time coordinate and the radial coordinate change. The radial metric component vanishes at this horizon rather than diverging. Below the inner horizon the metric has Euclidian signature.

Is there any empirical evidence for the existence of two horizons? There is evidence that the formation of the recently found LIGO blackhole is not fully consistent with the GRT based model (see this). There are some indications that the LIGO blackhole has a boundary layer such that gravitational radiation is reflected forth and back between the inner and outer boundaries of the layer. In the proposed model the upper boundary would not be totally reflecting, so that gravitational radiation leaks out and gives rise to echoes at times .1 sec, .2 sec, and .3 sec. It is perhaps worth noticing that the time scale .1 sec corresponds to the secondary p-adic time scale of the electron (characterized by the Mersenne prime M127= 2127-1). If the minimal surface solution indeed has two horizons and a layer-like structure between them, one might at least take the trouble of trying to kill the idea that it could give rise to repeated reflections of gravitational radiation.

The proposed model (see this) assumes that the inner horizon is the Schwarzschild horizon. TGD would however suggest that the outer horizon is the TGD counterpart of the Schwarzschild horizon. It could have a different radius, since it would not be a singularity of grr (gtt/grr would be finite at rS, which need not be rS=2GM now). At rS the tangent space of the space-time surface would become effectively 2-dimensional: could this be interpreted in terms of strong holography (SH)?

One should understand why it takes a rather long time T=.1 seconds for the radiation to travel forth and back the distance L= rS-rE between the horizons. The maximal signal velocity is reduced for the light-like geodesics of the space-time surface, but the reduction should be rather large for L∼ 20 km (say). The effective light-velocity is measured by the coordinate time Δ t= Δ m0+ h(rS)-h(rE) needed to travel the distance from rE to rS. The Minkowski time Δ m0-+ would follow from the null geodesic property and m0= t+ h(r):

Δ m0-+ = Δ t - h(rS) + h(rE) ,

Δ t = ∫rErS (grr/gtt)1/2 dr = ∫rErS dr/c# .

The time needed to travel forth and back does not depend on h and would be given by

Δ m0 = 2Δ t = 2∫rErS dr/c# .

This time cannot be shorter than the minimal time (rS-rE)/c along a light-like geodesic of M4, since light-like geodesics at the space-time surface are in general time-like curves in M4. Since .1 sec corresponds to about 3× 104 km, for L= 20 km (just a rough guess) the average value of c# in the interval [rE,rS] should be of order c#∼ 2-11c. As noticed, T=.1 sec is also the secondary p-adic time scale assignable to the electron labelled by the Mersenne prime M127. Since grr vanishes at rE, one has c#→ ∞ there. c# is finite at rS.
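The order-of-magnitude arithmetic here is easy to reproduce: a round trip over 2L = 40 km in T = 0.1 s requires an average velocity of about 10-3c, within a factor of a few of 2-11c ≈ 4.9× 10-4c, so the identification is to be read as an order-of-magnitude statement. A sketch (L = 20 km is the rough guess from the text):

```python
# Average effective light velocity c_# needed for the echo delay T = 0.1 s
# over a round trip 2L between the two horizons (L = 20 km is a rough guess).
C = 2.998e5          # speed of light [km/s]
T = 0.1              # observed echo delay [s]
L = 20.0             # horizon separation [km], rough guess from the text

c_ratio = 2 * L / (C * T)          # average c_#/c over [rE, rS]
print(c_ratio)                     # ~1.3e-3
print(2**-11)                      # ~4.9e-4: same order of magnitude
```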

There is an intriguing connection with the notion of gravitational Planck constant. The gravitational Planck constant hgr= GMm/v0 characterizes the magnetic body of a mass m topologically condensed at a gravitational magnetic flux tube emanating from a large mass M. The interpretation of the velocity parameter v0 has remained open. Could v0 correspond to the average value of c#? For the inner planets one has v0≈ 2-11, so that the order of magnitude is the same as for the estimate for c#.
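For concreteness one can evaluate hgr = GMm/v0 for an illustrative case - electron mass and solar mass with v0 = 2-11c, a choice suggested by the inner-planet value but not fixed by the text:

```python
# Gravitational Planck constant h_gr = G*M*m/v0 for an illustrative case:
# M = solar mass, m = electron mass, v0 = 2**-11 * c (inner-planet value).
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8          # speed of light [m/s]
HBAR = 1.055e-34     # reduced Planck constant [J s]
M_SUN = 1.989e30     # solar mass [kg]
M_E = 9.109e-31      # electron mass [kg]

v0 = 2**-11 * C
h_gr = G * M_SUN * M_E / v0
print(h_gr)              # ~8e-16 J*s
print(h_gr / HBAR)       # ~8e18: macroscopically large compared to hbar
```

The result is about 8× 10-16 J· s, some 1019 times ħ - the macroscopically large values that motivate the notion.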

See the new chapter Can one apply Occam's razor as a general purpose debunking argument to TGD? or article with the same title.



Minimal surface counterpart of Reissner-Nordström solution


For background see the chapter Can one apply Occam's razor as a general purpose debunking argument to TGD?.



How to build TGD space-time from legos?

TGD predicts shocking simplicity of both quantal and classical dynamics at the space-time level. Could one imagine a construction of more complex geometric objects from basic building bricks - space-time legos?

Let us list the basic ideas.

  1. Physical objects correspond to space-time surfaces of finite size - we see the non-trivial topology of space-time directly in everyday length scales.
  2. There is also a fractal scale hierarchy: 3-surfaces are topologically summed to larger surfaces by connecting them with wormhole contacts, which can also carry monopole magnetic flux; one obtains elementary particles as pairs of these contacts. These contacts are stable and ideal for nailing pieces of the structure together stably.
  3. In long length scales, in which space-time surfaces tend to have 4-D M4 projection, this gives rise to what I have called many-sheeted space-time. Sheets are deformations of the canonically imbedded M4 extremely near to each other (the maximal distance is determined by the CP2 size scale, about 104 Planck lengths). The sheets touch each other at topological sum contacts, which can also be identified as building bricks of elementary particles if they carry monopole flux and are thus stable. In D=2 it is easy to visualize this hierarchy.
Simplest legos

What could be the simplest surfaces of this kind - legos?

  1. Assume the twistor lift, so that the action contains a volume term besides Kähler action: preferred extremals can be seen as non-linear massless fields coupling to self-gravitation. They are also simultaneously extremals of Kähler action. Also a hydrodynamical interpretation makes sense in the sense that field equations are conservation laws. What is remarkable is that the solutions have no dependence on coupling parameters: this is crucial for realizing number theoretical universality. Boundary conditions however bring in the dependence on the values of coupling parameters, which have a discrete spectrum by quantum criticality.
  2. The simplest solutions correspond to Lagrangian sub-manifolds of CP2: the induced Kähler form vanishes identically and one has just minimal surfaces. The energy density defined by the scale dependent cosmological constant is small in cosmological scales - so that only a template of the physical system is in question. In shorter scales the situation changes if the cosmological constant is proportional to the inverse of the p-adic prime.

    The simplest minimal surfaces are constructed from pieces of geodesic manifolds, for which not only the trace of the second fundamental form but the form itself vanishes. Geodesic sub-manifolds correspond to points, pieces of lines, planes, and 3-D volumes in E3. In CP2 one has points, geodesic circles, geodesic spheres, and CP2 itself.

  3. CP2 type extremals define a model for wormhole contacts, which can be used to glue basic building bricks at different scales together stably: stability follows from the magnetic monopole flux going through the throat, so that the contact cannot be split like a homologically trivial one. Elementary particles are identified as pairs of wormhole contacts and would allow one to nail the legos together to form stable structures.
Amazingly, what emerges is elementary geometry. My apologies to those who hated school geometry.

Geodesic minimal surfaces with vanishing induced gauge fields

Consider first static objects with 1-D CP2 projection, having thus vanishing induced gauge fields. These objects are of form M1× X3, X3⊂ E3× CP2. M1 corresponds to a time-like or possibly light-like geodesic (the latter for CP2 type extremals). I will consider mostly Minkowskian space-time regions in the following.

  1. Quite generally, the simplest legos consist of 3-D geodesic sub-manifolds of E3× CP2. For E3 their dimensions are D=1,2,3 and for CP2, D=0,1,2. CP2 allows both homologically non-trivial resp. trivial geodesic spheres S2I resp. S2II. The geodesic sub-manifolds can be products G3 =GD1× GD2, D2=3-D1, of geodesic manifolds GD1, D1=1,2,3, for E3 and GD2, D2=0,1,2, for CP2.
  2. It is also possible to have twisted geodesic sub-manifolds G3 having a geodesic circle S1 as CP2 projection, corresponding to geodesic lines of E3× S1, whose projections to E3 and CP2 are a geodesic line and a geodesic circle respectively. The geodesic is characterized by an S1 wave vector. One can have this kind of geodesic lines even in M1× E3× S1, so that the solution is characterized also by a frequency and is no longer static in CP2 degrees of freedom.

    These parameters define a four-D wave vector characterizing the warping of the space-time surface: the space-time surface remains flat but is warped. This effect distinguishes TGD from GRT. For instance, warping in the time direction reduces the effective light-velocity in the sense that the time used to travel from A to B increases. One cannot exclude the possibility that the observed freezing of light in condensed matter could have this warping as a space-time correlate in the TGD framework.

    For instance, one can start from 3-D minimal surfaces X2× D as local structures (a thin layer in E3). One can perform twisting by replacing D with twisted closed geodesics in D× S1: this gives an S1-valued map from D to S1⊂ CP2 representing a geodesic line of D× S1. This geodesic sub-manifold is trivially a minimal surface and defines a two-sheeted cover of X2× D. Wormhole contact pairs (elementary particles) between the sheets can be used to stabilize this structure.

  3. Structures of form D2× S1, where D2 is a polygon, are perhaps the simplest building bricks for more complex structures. There are continuity conditions at the vertices and edges at which the polygons D2i meet, and one could think of assigning magnetic flux tubes to the edges in the spirit of homology: edges as magnetic flux tubes, faces as 2-D geodesic sub-manifolds, and interiors as 3-D geodesic sub-manifolds.

    Platonic solids as 2-D surfaces are one example of what can be built in this manner, and they are abundant in biology and molecular physics. An attractive idea is that molecular physics utilizes this kind of simple basic structures. Various lattices appearing in condensed matter physics represent more complex structures but could also have geodesic minimal 3-surfaces as building bricks. In cosmology the honeycomb structures having large voids as basic building bricks could serve as cosmic legos.

  4. This lego construction very probably generalizes to cosmology, where Euclidian 3-space is replaced with the 3-D hyperbolic space SO(3,1)/SO(3). Also now one has pieces of lines, planes, and 3-D volumes associated with an arbitrarily chosen point of the hyperbolic space. Hyperbolic space allows an infinite number of tessellations serving as analogs of 3-D lattices, and the characteristic feature is the quantization of redshift along the line of sight, for which empirical evidence exists.
  5. These basic building bricks can be glued together by wormhole contact pairs defining elementary particles, so that matter emerges as a stabilizer of the geometry: they are the nails allowing one to fix the planks together, one might say.
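The warping effect mentioned in item 2 above has a simple quantitative content: an imbedding with CP2 coordinate Φ = ω m0 induces gtt = 1-R2ω2, so the effective light velocity is reduced by the factor (1-R2ω2)1/2. A sketch of this (the numerical warping values are illustrative only):

```python
# Effective light velocity on a time-warped surface Phi = omega*m0 in M^4 x S^1:
# the induced metric component is g_tt = 1 - (R*omega)^2, and a light-like
# curve on the surface has dx/dt = sqrt(g_tt) < 1 in units c = 1.
import math

def c_eff(R_omega):
    """Effective light velocity (units c = 1) for warping parameter R*omega."""
    assert 0 <= R_omega < 1, "warping must keep the induced metric Minkowskian"
    return math.sqrt(1 - R_omega**2)

print(c_eff(0.0))    # 1.0: no warping, canonically imbedded M^4
print(c_eff(0.6))    # 0.8: strongly warped but still flat surface
```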
Geodesic minimal surfaces with non-vanishing gauge fields

What about minimal surfaces and geodesic sub-manifolds carrying non-vanishing gauge fields - in particular the em field (the Kähler form identifiable as the U(1) gauge field for weak hypercharge vanishes, and thus also its contribution to the em field)? Now one must use 2-D geodesic spheres of CP2 combined with 1-D geodesic lines of E3. Actually both the homologically non-trivial resp. trivial geodesic spheres S2I resp. S2II can be used, so that also non-vanishing Kähler forms are obtained.

The basic legos are now D× S2i, i=I,II, and they can be combined with the basic legos constructed above. These legos correspond to two kinds of magnetic flux tubes in the ideal, infinitely thin limit. There are good reasons to expect that these infinitely thin flux tubes can be thickened by deforming them in the E3 directions orthogonal to D. These structures could be used as basic building bricks assignable to the edges of the tensor networks in TGD.

Static minimal surfaces, which are not geodesic sub-manifolds

One can consider also more complex static basic building bricks by allowing bricks which are no longer geodesic sub-manifolds. The simplest static minimal surfaces are of form M1× X2× S1, with S1 ⊂ CP2 a geodesic circle and X2 a minimal surface in E3.

Could these structures represent a higher level of self-organization emerging in living systems? Could the flexible network formed by living cells correspond to a structure involving more general minimal surfaces - also non-static ones - as basic building bricks? The Wikipedia article about minimal surfaces in E3 suggests a role for minimal surfaces in, for instance, bio-chemistry (see this).

Surfaces with constant positive curvature do not allow imbedding as minimal surfaces in E3. Corals provide an example of a surface consisting of pieces of 2-D hyperbolic space H2 immersed in E3 (see this). Minimal surfaces have negative curvature, as does H2, but minimal surface immersions of H2 in E3 do not exist. Note that pieces of H2 have a natural imbedding realized as a light-cone proper time constant surface, but this is not a solution to the problem.

Does this mean that the proposal fails?

  1. One can build approximately spherical surfaces from pieces of planes. Platonic solids represent the basic example. This picture conforms with the notion of a monadic manifold having as a spine a discrete set of points with coordinates in an algebraic extension of rationals (preferred coordinates allowed by symmetries are in question). This seems to be the realistic option.
  2. The boundaries of wormhole throats, at which the signature of the induced metric changes, can have arbitrarily large M4 projection, and they take the role of blackhole horizons. All physical systems have such a horizon, and the approximate boundaries assignable to physical objects could be horizons of this kind. In TGD one has minimal surfaces in E3× S1 rather than E3. If 3-surfaces have no space-like boundaries, they must be multi-sheeted, with the sheets coinciding at some 2-D surface analogous to a boundary. Could this structure give rise to an approximately spherical boundary?
  3. Could one lift the immersions of H2 and S2 in E3 to minimal surfaces in E3× S1? The constancy of the scalar curvature, which for the immersions in question is quadratic in the second fundamental form, would pose one additional condition on the non-linear Laplace equations expressing the minimal surface property. The analyticity of the minimal surface should make it possible to check whether the hypothesis can make sense. Simple calculations lead to conditions which very probably do not allow a solution.

Dynamical minimal surfaces: how space-time manages to engineer itself?

At an even higher level of self-organization dynamical minimal surfaces emerge. Here string world sheets as minimal surfaces represent a basic example of a building block of type X2× S2i. As a matter of fact, S2 can be replaced with a complex sub-manifold of CP2.

One can also ask how to perform this building process. Massless extremals (MEs), representing the TGD view of topologically quantized classical radiation fields, are also minimal surfaces, but now the induced Kähler form is non-vanishing. MEs can also be Lagrangian surfaces and seem to play a fundamental role in morphogenesis and morphostasis as a generalization of the Chladni mechanism. One might say that they represent the tools to assign material and magnetic flux tube structures to the nodal surfaces of MEs. MEs are the tools of space-time engineering. Here many-sheetedness is essential for having the TGD counterparts of standing waves.

For background see the chapter Can one apply Occam's razor as a general purpose debunking argument to TGD?.



Can one apply Occam's razor as a general purpose debunking argument to TGD?

Occam's razor has been used to debunk TGD. The following arguments provide the information needed by the reader to decide for himself. Considerations proceed at three levels.

The first level is that of the "world of classical worlds" (WCW), defined by the space of 3-surfaces endowed with Kähler structure and spinor structure, and with the identification of WCW spinor fields as quantum states of the Universe: this is nothing but Einstein's geometrization program applied to quantum theory. The second level is the space-time level.

Space-time surfaces correspond to preferred extremals of Kähler action in M4× CP2. The number of field-like variables is 4, corresponding to the 4 dynamically independent imbedding space coordinates. Classical gauge fields and the gravitational field emerge from the dynamics of 4-surfaces. Strong form of holography reduces this dynamics to the data given at string world sheets and partonic 2-surfaces, and preferred extremals are minimal surface extremals of Kähler action, so that the classical dynamics in the space-time interior does not depend on coupling constants at all: they are visible only via boundary conditions. Continuous coupling constant evolution is replaced with a sequence of phase transitions between phases labelled by critical values of coupling constants: loop corrections vanish in a given phase. Induced spinor fields are localized at string world sheets to guarantee the well-definedness of em charge.

At the imbedding space level the modes of imbedding space spinor fields define the ground states of super-symplectic representations and appear at the QFT-GRT limit. GRT involves the post-Newtonian approximation involving the notion of gravitational force. In the TGD framework the Newtonian force corresponds to a genuine force at the imbedding space level.

For background see the chapter Can one apply Occam's razor as a general purpose debunking argument to TGD?.



Emergent gravity and dark Universe

Eric Verlinde has published an article with the title Emergent Gravity and the Dark Universe (see this). The article represents his recent view of gravitational force as a thermodynamical force, described by him earlier, and suggests an explanation for the constant velocity spectrum of distant stars around galaxies and for the recently reported correlation between the real acceleration of distant stars and the corresponding acceleration caused by baryonic matter. In the following I discuss Verlinde's argument and compare the physical picture with that provided by TGD. I have already earlier discussed Verlinde's entropic gravity from the TGD viewpoint (see this). The basic observation is that Verlinde introduces long range quantum entanglement appearing even in cosmological scales: in the TGD framework the hierarchy of Planck constants does this in a much more explicit manner and has been part of TGD for more than a decade. It is nice to see that the basic ideas of TGD are gradually popping up in the literature.

Before continuing it is good to recall the basic argument against the identification of gravity as an entropic force. As Kobakhidze notices, neutron diffraction experiments suggest that the gravitational potential appears in the Schrödinger equation. This cannot be the case if the gravitational potential has a thermodynamic origin and therefore follows from the statistical predictions of quantum theory: in my opinion Verlinde mixes apples with oranges.
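
The cleanest laboratory demonstration that the gravitational potential really enters the Schrödinger equation is the neutron "quantum bouncer": ultracold neutrons above a mirror in Earth's field occupy discrete levels En = (ℏ2mg2/2)1/3|an|, where an are zeros of the Airy function. A back-of-envelope sketch (my illustration of the same point, not the diffraction experiment cited above; constants are round values):

```python
# Energy levels of a neutron bouncing on a mirror in Earth's gravity.
# These follow from the Schrodinger equation with potential V = m g z,
# i.e. the gravitational potential enters quantum dynamics directly.
HBAR = 1.0545718e-34   # J s
M_N  = 1.674927e-27    # neutron mass, kg
G_E  = 9.81            # m/s^2
EV   = 1.602177e-19    # J per eV

# First zeros of the Airy function Ai (tabulated absolute values).
AIRY_ZEROS = [2.33811, 4.08795, 5.52056]

def bouncer_level_eV(n):
    """Energy of the n-th gravitational bound state in eV (n = 1, 2, ...)."""
    scale = (HBAR**2 * M_N * G_E**2 / 2.0) ** (1.0 / 3.0)
    return scale * AIRY_ZEROS[n - 1] / EV

for n in (1, 2, 3):
    print(f"E_{n} = {bouncer_level_eV(n):.2e} eV")
# The ground state comes out near 1.4e-12 eV (peV scale), as measured
# in the Nesvizhevsky-type experiments.
```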

Verlinde's argument

Consider now Verlinde's recent argument.

  1. Verlinde wants to explain the recent empirical finding of a correlation between the acceleration of distant stars around a galaxy and that of baryonic matter (see this) in terms of an apparent dark energy assigned with entanglement entropy proportional to volume rather than to horizon area as in the Bekenstein-Hawking formula. This means giving up the standard holography and introducing an entropy proportional to volume.

    To achieve this he replaces anti-de Sitter space (AdS) with de Sitter (dS) space with a cosmic horizon expressible in terms of the Hubble constant, and assigns to it long range entanglement, since in AdS only short range entanglement is believed to be present (area law). This would give rise to an additional entropy proportional to volume rather than area. Dark energy or matter would correspond to a thermal energy assignable to this long range entanglement.

  2. Besides this Verlinde introduces tensor nets as a justification for the emergence of gravitation: this is just a belief. All arguments that I have seen about this are circular (one introduces 2-D surfaces and thus also 3-space from the beginning) and also Verlinde uses dS space. What is in my opinion alarming is that there is no fundamental approach really explaining how space-time and gravity emerge. The emergence of space-time should lead also to the emergence of the spinor structure of space-time, and this seems to me impossible if one really starts from a mere Hilbert space.
  3. Verlinde also introduces an analogy with the thermodynamics of glass, involving both short range crystal structure and amorphous long range behaviour; the latter would correspond to entanglement entropy in long length scales. Also an analogy with elasticity is introduced. Below the Hubble scale the microscopic states do not thermalize and display memory effects. The dark gravitational force would be analogous to an elastic response due to what he calls entropy displacement.
  4. Verlinde admits that this approach does not say much about cosmology or cosmic expansion, and even less about inflation.
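
For scale, one can estimate the standard area-law entropy that Verlinde's volume term is supposed to supplement: S/kB = A/4lP2 for the Hubble horizon. A rough numerical sketch (round constants assumed by me, not taken from the article):

```python
# Bekenstein-Hawking area-law entropy assigned to the Hubble horizon,
# the holographic benchmark against which a volume-law entanglement
# entropy would be compared.  S/k_B = A / (4 l_P^2), A = 4 pi (c/H0)^2.
import math

C    = 2.998e8     # m/s
G    = 6.674e-11   # m^3 kg^-1 s^-2
HBAR = 1.0546e-34  # J s
H0   = 2.27e-18    # Hubble constant ~ 70 km/s/Mpc, in 1/s

def hubble_horizon_entropy():
    """Dimensionless horizon entropy S/k_B for the Hubble radius."""
    r_h = C / H0                      # Hubble radius, ~1.3e26 m
    area = 4.0 * math.pi * r_h ** 2   # horizon area
    l_p2 = G * HBAR / C ** 3          # Planck length squared
    return area / (4.0 * l_p2)

print(f"S/k_B ~ {hubble_horizon_entropy():.1e}")  # of order 1e122
```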
The long range correlations of Verlinde correspond to hierarchy of Planck constants in TGD framework

The physical picture has analogies with my own approach (see this) to the explanation of the correlation between the baryonic acceleration and the observed acceleration of distant stars. In particular, long range entanglement has as its TGD counterpart the identification of dark matter in terms of phases labelled by the hierarchy of Planck constants.

  1. Concerning the emergence of space and gravitation TGD leads to a different view. It is not 3-space but the experience about 3-space - proprioception - which would emerge, via tensor nets realized in TGD in terms of magnetic flux tubes emerging from the 3-surfaces defining the nodes of the tensor net (see this). This picture leads to a rather attractive view of quantum biology (see for instance this).
  2. The twistor lift of TGD has rapidly become a physically convincing formulation of TGD (see this). One replaces space-time surfaces in M4× CP2 with 6-D surfaces in the 12-D product T(M4)× T(CP2) of the twistor spaces T(M4) and T(CP2), and Kähler action with its 6-D variant.

    This requires that T(M4) and T(CP2) have Kähler structure. This is true only for M4 (and its variants E4 and S4) and CP2. Hence TGD is completely unique both mathematically and physically (providing a unique explanation for the standard model symmetries). The preferred extremal property for Kähler action could reduce to the property that the 6-D surface, as an extremal of the 6-D Kähler action, is the twistor space of the space-time surface and thus has the structure of an S2 bundle. That this is indeed the case for the preferred extremals of the dimensionally reduced 4-D action, expressible as a sum of Kähler action and a volume term, remains to be rigorously proven.

  3. Long range entanglement even in cosmic scales would be crucial and give the volume term in entropy breaking holography in the usual sense. In the TGD framework the hierarchy of Planck constants heff=n× h satisfying the additional condition heff=hgr, where hgr=GMm/v0 (M and m are masses and v0 is a parameter with dimensions of velocity) is the gravitational Planck constant introduced originally by Nottale and assignable to magnetic flux tubes mediating gravitational interaction, makes possible quantum entanglement even in astrophysical and cosmological length scales, since hgr can be extremely large. In TGD however most of the galactic dark matter and energy is associated with cosmic strings having galaxies along them (like pearls in a necklace). Baryonic dark matter could correspond to the ordinary matter which has resulted from the decay of cosmic strings taking the role of the inflaton field in the very early cosmology. This gives automatically a logarithmic potential giving rise to a constant velocity spectrum, modified slightly by baryonic matter, and a nice explanation for the correlation which served as the motivation of Verlinde.
  4. Also the glass analogy has a TGD counterpart. Kähler action has 4-D spin glass degeneracy. In the twistor lift of TGD a cosmological term appears and reduces the degeneracy by allowing only minimal surfaces rather than all vacuum extremals. This removes the non-determinism. The cosmological constant is however extremely small, implying non-perturbative behavior in the sense that the volume term of the action is extremely small and depends very weakly on the preferred extremal. This suggests that spin glass behaviour in the 3-D sense remains as Kähler action with varying sign is added.
  5. The mere Kähler action for the Minkowskian (at least) regions of the preferred extremals reduces to a Chern-Simons term at the light-like 3-surfaces at which the signature of the induced metric of the space-time surface changes from Minkowskian to Euclidian. The interpretation could be that TGD is an almost topological quantum field theory. Also the interpretation in terms of holography can be considered.

    The volume term proportional to the cosmological constant, given by the twistorial lift of TGD (see this), could mean a small breaking of holography in the sense that it cannot be reduced to a 3-D surface term. One must however be very cautious here, because TGD strongly suggests a strong form of holography meaning that the data at string world sheets and partonic 2-surfaces (or possibly at their metrically 2-D light-like orbits, for which only the conformal equivalence class matters) fix the 4-D dynamics.

    The volume term means a slight breaking of the flatness of 3-space in cosmology, since the 3-D curvature scalar cannot vanish for a Robertson-Walker cosmology imbeddable as a minimal surface, except at the limit of an infinitely large causal diamond (CD), implying that the cosmological constant, which is proportional to the inverse of the p-adic length scale squared, vanishes at this limit. Note that the dependence Λ∝ 1/p, p a p-adic prime, allows one to solve the problem caused by the large value of the cosmological constant in the very early cosmology. Quite generally, the volume term would describe finite volume effects analogous to those encountered in thermodynamics.
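
The claim in item 3 that hgr can be extremely large is easy to check numerically. A sketch of hgr = GMm/v0 for the Sun and a proton, taking the ratio to ℏ for an order-of-magnitude estimate; v0 ≈ 2-11c is an assumed value in the spirit of Nottale's original work:

```python
# Order-of-magnitude check of the gravitational Planck constant
# h_gr = G M m / v0 for M = solar mass, m = proton mass.
G     = 6.674e-11          # m^3 kg^-1 s^-2
HBAR  = 1.0546e-34         # J s
M_SUN = 1.989e30           # kg
M_P   = 1.6726e-27         # proton mass, kg
V0    = 2.998e8 / 2**11    # assumed v0 ~ c/2^11 ~ 1.5e5 m/s

def hgr_over_hbar(M, m, v0=V0):
    """Ratio h_gr / hbar, i.e. the effective n in heff = n x h (order of magnitude)."""
    return G * M * m / (v0 * HBAR)

n = hgr_over_hbar(M_SUN, M_P)
print(f"h_gr/hbar ~ {n:.1e}")   # enormous, of order 1e22
```

Even for a single proton the ratio is astronomical, which is the point: entanglement in astrophysical scales becomes possible.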

The argument against gravitation as entropic force can be circumvented in zero energy ontology

Could TGD allow one to resolve the basic objection against gravitation as an entropic force, or to generalize this notion?

  1. In Zero Energy Ontology quantum theory can be interpreted as a "complex square root of thermodynamics". The vacuum functional is an exponent of the action determining the preferred extremals - Kähler action plus the volume term present for the twistor lift. This brings in the gravitational constant G and the cosmological constant Λ as fundamental constants besides the CP2 size scale R and the Kähler coupling strength αK (see this). The vacuum functional would be analogous to an exponent of Ec/2, where Ec is complexified energy. I have also considered the possibility that the vacuum functional is analogous to the exponent of free energy, but the following argument favors the interpretation as an exponent of energy.
  2. The variation of Kähler action would give rise to the analog of the TdS term and the variation of the cosmological constant term to the analog of the -pdV term in dE= TdS- pdV. Both T and p would be complex and would receive contributions from both Minkowskian and Euclidian regions. The contributions of Minkowskian and Euclidian regions to the action differ by multiplication with the imaginary unit, and it is possible that the Kähler coupling strength is complex, as suggested (see this).

    If the inverse of the Kähler coupling strength is proportional to a zero of Riemann zeta at the critical line, it is complex and the coefficient of the volume term must have the same phase: otherwise space-time surfaces are extremals of Kähler action and minimal surfaces simultaneously. In fact, the known non-vacuum extremals of Kähler action are surfaces of this kind, and one cannot exclude the possibility that preferred extremals quite generally have this property.

    Note: One can consider also the possibility that the values of Kähler coupling strength correspond to the imaginary part for the zero of Riemann zeta.

  3. Suppose that both terms in the action are proportional to the same phase factor. The part of the variation of the Kähler action with respect to the imbedding space coordinates giving the analog of the TdS term would give the analog of an entropic force. Since the variation of the entire action vanishes, this contribution would be equal to the negative of the variation of the volume term with respect to the induced metric, given by -pdV. Since the variations of Kähler action and volume term cancel each other, the entropic force would be non-vanishing only for the extremals for which the Kähler action density is non-vanishing. The variation of Kähler action contains variations with respect to the induced metric and the induced Kähler form, so that the sum of the gravitational and U(1) forces would be equal to the analog of the entropic force, and Verlinde's proposal would not generalize as such.

    The variation of the volume term gives rise to a term proportional to the trace of the second fundamental form, which is the 4-D generalization of the ordinary force and vanishes for the vacuum extremals of Kähler action, in which case one has the analog of a geodesic line. More generally, Kähler action gives rise to the generalization of the U(1) force, so that the field equations give a 4-D generalization of the equations of motion for a point-like particle in a U(1) force, having also an interpretation as a generalization of the entropic force.

  4. In Zero Energy Ontology (ZEO) TGD predicts a dimensional hierarchy of basic objects analogous to the brane hierarchy of M-theory: space-time surfaces as 4-D objects; 3-D light-like orbits of partonic 2-surfaces as boundaries between Minkowskian and Euclidian regions plus space-like 3-surfaces defining the ends of the space-time surface at the opposite boundaries of CD; 2-D partonic surfaces and string world sheets; and 1-D boundaries of string world sheets. The natural idea is to identify the dynamics of D-dimensional objects in terms of an action consisting of the D-dimensional volume in the induced metric and the D-dimensional analog of Kähler action. The surfaces at the ends of space-time should be freely choosable apart from the conditions related to the super-symplectic algebra realizing strong form of holography, since they correspond to initial values.

    For the light-like orbits of partonic 2-surfaces the 3-volume vanishes and one has only a Chern-Simons type topological term. For string world sheets one has an area term and magnetic flux, which is a topological term reducing to a mere boundary term, so that minimal surface equations are obtained. For the dynamical boundaries of string world sheets one obtains a 1-D volume term as the length of the string world line and a boundary term from the string world sheet. This gives a 1-D equation of motion in a U(1) force just like in Maxwell's theory, but with the induced Kähler form defining the U(1) gauge field identifiable as the counterpart of the classical U(1) field of the standard model. Induced spinor fields couple at the boundaries only to the induced em gauge potential, since the induced classical W boson gauge fields vanish at string world sheets in order to achieve a well-defined and conserved spinorial em charge (the absolutely minimal option would be that the W and Z gauge potentials vanish only at the time-like boundaries of the string world sheet). Should the world-line geometry couple to the induced em gauge field instead of the induced Kähler form? The only logical option is however that the geometry couples to the U(1) charge, perhaps identifiable in terms of fermion number.

  5. There is however an objection against this picture. All known extremals of Kähler action are minimal surfaces, and there are excellent number theoretical arguments suggesting that all preferred extremals of Kähler action are also minimal surfaces, so that the original picture would be surprisingly near to the truth. The separate vanishing of the variations implies that the solutions do not depend at all on coupling parameters, as suggested by number theoretical universality and the universality of the dynamics at quantum criticality. The discrete coupling constant evolution however makes the couplings visible classically via boundary conditions. This would however predict that the analogs of TdS and pdV vanish identically in the space-time interior.

    The variations however involve also boundary terms, which need not vanish separately since the actions in Euclidian and Minkowskian regions differ by multiplication with (-1)1/2! The variations reduce to terms proportional to the normal component of the canonical momentum current contracted with the deformation at the light-like 3-surfaces bounding Euclidian and Minkowskian space-time regions. These must vanish. If the Kähler coupling strength is real, this implies decoupling of the dynamics due to the volume term and Kähler action also at the light-like 3-surfaces, and therefore the exchange of charges - in particular four-momentum - becomes impossible. This would be a catastrophe.

    If αK is complex, as quantum TGD as a square root of thermodynamics and the proposal that the spectrum of 1/αK corresponds to the spectrum of zeros of zeta require, the normal component of the canonical momentum current for Kähler action equals that for the volume term on the other side of the bounding surface. The analog of dE=TdS-pdV=0 would hold true in a non-trivial sense at the light-like 3-surfaces and the thermodynamical analogy holds true (note that energy is replaced with action). The reduction of the variations to boundary terms would also conform with holography. Strong form of holography would even suggest that the 3-D boundary term in turn reduces to 2-D boundary terms.

    A possible problem is caused by the variation of the volume term: g41/2 vanishes at the boundary and gnn diverges. The overall result should be finite, and this should be achieved by proper boundary conditions. What I have called the weak form of electric-magnetic duality allows one to avoid similar problems for Kähler action and implies the self-duality of the induced Kähler form at the boundary. A weaker form of boundary conditions would state that the sum of the variations of Kähler action and volume term is finite.
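
The boundary cancellation argued for in item 5 can be written compactly; this is just a transcription of the argument above into formulas, with X3l denoting a light-like 3-surface between the two kinds of regions:

```latex
% Total action and the vanishing of its variation:
S = S_K + S_{vol}, \qquad \delta S = 0 .
% In the interior both variations vanish separately (minimal surface
% extremals of K\"ahler action); at the light-like 3-surface $X^3_l$
% only the sum of the boundary terms must vanish when $\alpha_K$ is complex:
\left.\delta S_K\right|_{X^3_l} = -\left.\delta S_{vol}\right|_{X^3_l},
% which is the analog of dE = T\,dS - p\,dV = 0 with
% T\,dS \leftrightarrow \delta S_K and -p\,dV \leftrightarrow \delta S_{vol}.
```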

Physically this picture is very attractive and makes the cosmological constant term emerging from the twistorial lift rather compelling. What is nice is that this picture follows from the field equations of TGD rather than from mere heuristic arguments without an underlying mathematical theory.

See the chapter TGD and GRT of "Physics in Many-Sheeted Space-time".



Is inflation theory simply wrong?

I listened to a very nice lecture about inflation by Steinhardt, who was one of the founders of inflation theory and certainly knows what he talks about. Steinhardt concludes that inflation is simply wrong. He discusses three kinds of flexibility of inflationary theory, which destroy its ability to predict and make it non-falsifiable and therefore pseudoscience.

Basically cosmologists want to understand the extreme simplicity of cosmology. Also particle physics has turned out to be extremely simple, whereas theories have during the last four decades become so complex that they cannot predict anything.

  1. CMB temperature is essentially constant. This looks like a miracle. A constant cosmic temperature is simply impossible due to the finite horizon size in a typical cosmology, which makes classical communication between distant points impossible, so that temperature equalization cannot take place.
  2. One must also understand the almost flatness of 3-space: the value of curvature scalar is very near to zero.
Inflation theories were proposed as a solution of these problems.

Inflation theories

The great vision of inflationists is that these features of the universe result during an exponentially fast expansion of the cosmos - the inflationary period - analogous to super-cooling. This expansion would smooth out all inhomogeneities and anisotropies of quantum fluctuations and yield an almost flat universe with almost constant temperature, with relative fluctuations of temperature of order 10-5. The key ingredient of recent inflation theories is a scalar field known as the inflaton field (actually several of them are needed). There are many variants of inflationary theory (see this). Inflation models are characterized by the potential function V(Φ) of the inflaton field Φ, analogous to the potential function used in classical mechanics. During the fast expansion V(Φ) would vary very slowly as a function of the vacuum expectation value of Φ. Super-cooling would mean that Φ does not decay to particles during the expansion period.

  1. In the "old inflation" model the cosmos was trapped in a false minimum of energy during the expansion and ended up in the true minimum by quantum tunneling. The liberated energy decayed to particles and reheated the Universe. No inflaton field was introduced yet. This approach however led to difficulties.
  2. In the "new inflation" model the effective potential Veff(Φ,T) of the inflaton field, depending on temperature, was introduced. It would have no minimum above a critical temperature, and the super-cooling cosmos would roll down a potential hill with a very small slope. At the critical temperature the potential would change qualitatively: a minimum would emerge, and the inflaton field would fall into it and decay to particles, causing reheating. This is highly analogous to the Higgs mechanism emerging as the temperature drops below that defined by the electroweak mass scale.
  3. In the "chaotic inflation" model there is no phase transition and the inflaton field rolls down to the true vacuum, where it couples to other matter fields and decays to particles. Here it is essential that the expansion slows down so that particles have time to transform to ordinary particles. The Universe is reheated.
Consider now the objections of Steinhardt against inflation. As a non-specialist I can of course only repeat the arguments of Steinhardt, which I believe stand on a very strong basis.
  1. The parameters characterizing the scalar potential of the inflaton field(s) can be chosen freely. This gives infinite flexibility. In fact, most outcomes based on classical inflation do not predict flat 3-space in the recent cosmology! The simplest one-parameter models are excluded empirically. The inflaton potential energy must be a very slowly decreasing function of Φ: in other words, the slope of the hill along which the field rolls down is extremely small. This looks rather artificial and suggests that the description based on a scalar field could be wrong.
  2. The original idea that inflation leads from almost any initial conditions to a flat universe has turned out to be wrong. Most initial conditions lead to something very different from flat 3-space: another infinite flexibility destroying predictivity. To obtain a flat 3-space one must assume that 3-space was essentially flat from the beginning!
  3. In the original scenario the quantum fluctuations of the inflaton fields were assumed to be present only during the primordial period, and a single quantum fluctuation expanded to the observed Universe. It has however turned out that this assumption fails for practically all inflationary models. The small quantum fluctuations of the inflaton field still present are amplified by gravitational backreaction. Inflation would continue eternally and produce all possible universes. Again predictivity would be completely lost. The multiverse has been sold as a totally new view of science in which one gives up the criterion of falsifiability.
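
The "extremely small slope" of objection 1 can be made quantitative with the standard slow-roll parameters ε = (MP2/2)(V'/V)2 and η = MP2 V''/V. A sketch for the simplest quadratic potential (my illustration of the textbook condition, not Steinhardt's calculation):

```python
# Slow-roll parameters for the quadratic inflaton potential
# V(phi) = m^2 phi^2 / 2, in reduced Planck units (M_P = 1).
# Both epsilon and eta must stay << 1 for the field to roll down
# "a hill with a very small slope".
def slow_roll(phi):
    # For V = m^2 phi^2 / 2: V'/V = 2/phi, V''/V = 2/phi^2 (m drops out)
    eps = 0.5 * (2.0 / phi) ** 2
    eta = 2.0 / phi ** 2
    return eps, eta

for phi in (1.0, 5.0, 15.0):
    eps, eta = slow_roll(phi)
    print(f"phi = {phi:>4} M_P: epsilon = {eps:.3f}, eta = {eta:.3f}")
# epsilon < 1 requires phi > sqrt(2) M_P: the field must take
# super-Planckian values, one face of the fine-tuning discussed above.
```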
Steinhardt discusses Popper's philosophy of science centered around the notions of provability, falsifiability, and pseudoscience. Popper states that in natural sciences it is only possible to prove that a theory is wrong. A toy theory begins with the bold postulate "All swans are white!". It is not possible to prove this statement scientifically, because it should be done for all values of time and everywhere. One can only demonstrate that the postulate is wrong. Soon one indeed discovers that there are also some black swans. The postulate weakens to "All swans are white except the black ones!". As further observations accumulate, one eventually ends up with the not so bold postulate "All swans have some color.". This statement does not predict anything and is a tautology. Just this has happened in the case of inflationary theories and also in the case of superstring theory.

Steinhardt discusses the "There is no viable alternative" defense, which also M-theorists have used. According to Steinhardt there are viable alternatives, and he discusses some of them. An often heard excuse is also that superstring theory is a completely exceptional theory because of its unforeseen mathematical beauty: for this reason one should give up the falsifiability requirement. Many physicists, including me, are however unable to experience this heavenly beauty of superstrings: what I experience is the disgusting ugliness of the stringy landscape and multiverse.

The counterpart of inflation in TGD Universe

It is interesting to compare inflation theory with the TGD variant of very early cosmology (see this). TGD has no inflaton fields, which are the source of the three kinds of infinite flexibility and lead to the catastrophe in inflation theory.

Let us return to the basic questions and the hints that TGD provides.

  1. How to understand the constancy of CMB temperature?
    Hint: a string dominated cosmology with matter density behaving like 1/a2 as a function of the size scale a has an infinitely large horizon. This makes classical communications over infinitely long ranges possible and therefore the equalization of temperature. At the moment of the big bang - the boundary of the causal diamond, which is part of the boundary of the light-cone - the M4 distance between points in the light-like radial direction vanishes. This could be the geometric correlate for the possibility of communications and long range quantum entanglement for the gas of strings.
    Note: The standard mistake is to see the Big Bang as a single point. As a matter of fact, it corresponds to the light-cone boundary, as the observation that the future light-cone of Minkowski space represents empty Robertson-Walker cosmology shows.
  2. How to understand the flatness of 3-space?
    Hint: (quantum) criticality predicts the absence of length scales. The curvature scalar of 3-space is a dimensional quantity and must therefore vanish - hence flatness. TGD Universe is indeed quantum critical! Quantum criticality also fixes the value spectrum of various coupling parameters.
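
The first hint can be checked symbolically: for ρ ∝ 1/a2 the Friedmann equation gives a ∝ t, and the comoving particle horizon ∫dt/a diverges, unlike in the radiation dominated case. A small check of these two standard scale-factor behaviours (my illustration):

```python
# Comoving particle horizon  chi = integral_0^t0 dt/a(t)  for two
# scale-factor behaviours: radiation domination a ~ t^(1/2) gives a
# finite horizon; string domination, rho ~ 1/a^2, gives a ~ t and a
# logarithmically divergent horizon, so the whole universe is in
# causal contact and the temperature can equalize.
import sympy as sp

t, t0, eps = sp.symbols("t t0 epsilon", positive=True)

# radiation dominated: a(t) = (t/t0)**(1/2)
chi_rad = sp.integrate(1 / sp.sqrt(t / t0), (t, 0, t0))
print("radiation:", chi_rad)               # finite: 2*t0

# string dominated: a(t) = t/t0; integrate from a small cutoff eps
chi_str = sp.integrate(t0 / t, (t, eps, t0))
print("string   :", sp.simplify(chi_str))  # ~ t0*log(t0/eps), diverges as eps -> 0
```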
The original TGD inspired answers to the basic questions would be following.
  1. What are the initial conditions? In the TGD Universe the primordial phase was a gas of cosmic strings in the vicinity of the boundary of a very big CD (for the observer in the recent cosmology) (see this and this). The boundary of the CD - whose M4 part is given by the intersection of future and past directed light-cones - consists of two pieces of light-cone boundary (with points replaced with CP2). The gas is associated with the second piece of the boundary.

    The horizon size for M4 is infinite, and the hierarchy of Planck constants allows quantum coherence in arbitrarily long scales for the gas of cosmic strings forming the primordial state. This could explain the constant cosmic temperature both in the classical and in the quantum sense (both explanations are needed by quantum classical correspondence).

  2. The inflationary period is replaced with a phase transition giving rise to space-time sheets with 4-D Minkowski space projection: the space-time as we know it. The basic objects are magnetic flux tubes, which have emerged from cosmic strings as the M4 projection has thickened from a string world sheet to a 4-D region. These cosmic strings decay partially to elementary particles at the end of the counterpart of the inflationary period. Hence Kähler magnetic energy replaces the energy of the inflaton field. The outcome is a radiation dominated cosmology.
  3. The GRT limit of TGD replaces the many-sheeted space-time with a region of M4 made slightly curved (see this). Could one model this GRT cosmology using a single space-time sheet as a model? This need not make sense, but one can try.

    Criticality states that the mass density is critical as in the inflationary scenario. Einstein's equations demand that the curvature scalar for a Lorentz invariant RW cosmology vanishes. It turns out that one can realize this kind of cosmology as a vacuum extremal of Kähler action. The resulting cosmology contains only a single free parameter: the duration of the transition period. 3-space is flat and has the critical mass density as given by the Einstein tensor (see this).

    One might hope that this model could describe quantum criticality in all scales: not only the inflationary period but also the accelerating expansion at much later times. There is an exponentially fast expansion, but it need not smooth out fluctuations now, since the density of cosmic strings and the temperature are essentially constant from the beginning. This is what, according to Steinhardt, also inflationary models force one to conclude, although the original dream was that inflation produces the smoothness.

  4. The energy of the inflaton field is in this scenario replaced with the magnetic energy of the magnetic flux tubes obtained from cosmic strings (2-D M4 projection). The negative "pressure" of the critical cosmology would microscopically correspond to the magnetic tension along the flux tubes.
  5. Quantum fluctuations are present also in the TGD framework, but quantum coherence made possible by hgr = heff=n× h dark matter saves the situation in arbitrarily long scales (see this). Dark matter as large hbar phases replaces the multiverse. Dark matter exists! Unlike the multiverse!
The twistorial lift of the Kähler action - whether it is necessary is still an open question - however forces one to reconsider this picture (see this and this).
  1. Space-time surfaces are replaced with their twistor spaces required to allow imbedding to the twistor space of H= M4× CP2: the extremely strong conditions on preferred extremals given by strong form of holography (SH) should be more or less equivalent with the possibility of the twistor lift. Rather remarkably, M4 and CP2 are completely unique in the sense that their twistor spaces allow Kähler structure (twistor space of M4 in generalized sense). TGD would be completely unique!

    The dimensional reduction of the 6-D generalization of Kähler action, dictating the dynamics of the twistor space of the space-time surface, would give 4-D Kähler action plus a volume action having an interpretation as a cosmological term.

    The radius of the sphere of the twistor bundle of M4 would naturally be the Planck length. The coefficient of the volume term is a coupling constant having an interpretation in terms of the cosmological constant and would experience p-adic coupling constant evolution, becoming large at early times and being extremely small in the recent cosmology. This would allow one to describe both inflation and the accelerating expansion at much later times using the same model: the only difference is that the cosmological constant is different. This description would actually be a universal description of critical phase transitions. The volume term also forces ZEO with finite sizes of CDs: otherwise the volume action would be infinite!

  2. This looks nice but there is a problem. The volume term is proportional to a dimensional constant, so one expects breaking of criticality, and the critical vacuum extremal of Kähler action indeed fails to be a minimal surface, as one can verify by a simple calculation. The value of the cosmological constant is very small in the recent cosmology, but the Kähler action of the vacuum extremal vanishes. Can one imagine ways out of the difficulty?
    1. Should one just give up the somewhat questionable idea that critical cosmology for a single space-time sheet allows one to model the transition from the gas of cosmic strings to the radiation dominated cosmology at the GRT limit of TGD?
    2. Should one consider small deformations of the critical vacuum extremal and assume that Kähler action dominates over the volume term for them, so that one can regard small deformations of the critical cosmology as a good approximation? The average energy density associated with the small deformations - say gluing of smaller non-vacuum space-time sheets to the background - would be given by the Einstein tensor for the critical cosmology.
    3. Or could one argue as follows? During quantum criticality the action cannot contain any dimensional parameters - at least at the limit of an infinitely large CD. Hence the cosmological constant defining the coefficient of the volume term must vanish. The corresponding (p-adic) length scale is infinite and quantum fluctuations indeed appear in arbitrarily long scales, as they should in quantum criticality. Can one say that during a quantum critical phase transition the volume term becomes effectively vanishing because the cosmological constant as a coupling constant vanishes? One can argue that this picture is an overidealization. It might however work at the GRT limit of TGD, where the size scale of CD defines the length scale assignable to the cosmological constant and is taken to infinity. Thus the vacuum extremal would be a good model for the cosmology as described by the GRT limit.
  3. There is also a second problem. One has two explanations for the vacuum energy and negative pressure. The first would come from Kähler magnetic energy and Kähler magnetic tension, the second from the cosmological constant associated with the volume term. I have considered the possibility that these explanations are equivalent (see this). The first one would apply to magnetic flux tubes near to vacuum extremals and carrying vanishing magnetic monopole flux. The second one would apply to magnetic flux tubes far from vacuum extremals and carrying non-vanishing monopole flux. One can consider quantum criticality in the sense that these two kinds of flux tubes correspond to each other in a 1-1 manner, meaning that their M4 projections are identical and they have the same string tension (see this).
Steinhardt also considers concrete proposals for modifying inflationary cosmology. The basic proposal is that cosmology is a sequence of big bangs and big crunches, which however do not lead to a full singularity but to a bounce.

I have proposed what could be seen as an analog of this picture in ZEO but without bounces (see this). The cosmos would be a conscious entity which evolves, dies, and re-incarnates, and after the re-incarnation expands in the opposite direction of geometric time.

  1. Cosmology for given CD would correspond to a sequence of state function reductions for a self associated with the CD. The first boundary of CD - Big Bang - would be passive. The members of pairs of states formed by states at the two boundaries of CD would not change at this boundary and the boundary itself would remain unaffected. At the active boundary of CD the state would experience a sequence of unitary time evolutions involving localization for the position of the upper boundary. This would increase the temporal distance between the tips of CD. Conscious entity would experience it as time flow. Self would be a generalized Zeno effect.

    Negentropy Maximization Principle dictates the dynamics of state function reductions and eventually forces the first state function reduction to occur to the passive boundary of CD, which becomes active and begins to shift farther away but in the opposite time direction. Self dies and re-incarnates with an opposite arrow of clock time. The size of CD grows steadily and eventually CD is of cosmological size.

    In TGD framework the multiverse would be replaced with conscious entities, whose CDs grow gradually in size and eventually give rise to entire cosmologies. We ourselves could be future cosmologies. In ZEO the conservation of various quantum numbers is not in conflict with this.

  2. The geometry of CD brings to mind a Big Bang followed by a Big Crunch. This does not however force the space-time surfaces inside CDs to have a similar structure. The basic observation is that the TGD inspired model for the asymptotic cosmology is string dominated, as is also the cosmology associated with criticality. The strings in the asymptotic cosmology are however not infinitely thin cosmic strings anymore but have thickened during the cosmic expansion. This suggests that the death of the cosmos leads to a re-incarnation with a fractal zoom-up of the strings of the primordial stage! The first lethal reduction to the opposite boundary would produce a gas of thickened cosmic strings in M4 near the former active boundary.
  3. It might be possible even to test this picture by looking at how far in the geometric past the proposed coupling constant evolution for the cosmological constant can be extrapolated. What is already clear is that in the recent cosmology it cannot be extrapolated to Planck time or even CP2 time.

See the chapter TGD inspired cosmology of "Physics in Many-Sheeted Space-time".



TGD interpretation for the new discovery about galactic dark matter

A very interesting new result related to the problem of dark matter has emerged: see the ScienceDaily article In rotating galaxies, distribution of normal matter precisely determines gravitational acceleration. The original article can be found at arXiv.org.

What is found is that there is a rather precise correlation between the gravitational acceleration produced by the visible baryonic matter and the observed acceleration, usually thought to be determined to a high degree by the presence of a dark matter halo. According to the article, this correlation challenges the halo model and might even kill it.

It turns out that the TGD based model, in which galactic dark matter is at long cosmic strings having galaxies along them like pearls in a necklace, allows one to interpret the finding and to deduce a formula for the density from the observed correlation.

  1. The model contains only a single parameter, the rotation velocity of stars around the cosmic string in the absence of baryonic matter, defining the asymptotic velocity of distant stars, which can be determined from the experiments. TGD predicts the string tension determining this velocity. Besides this there is the baryonic contribution to the matter density, which can be derived from the empirical formula. In the halo model this contribution is described by the parameters characterizing the density of the dark halo.
  2. The gravitational potential of the baryonic matter deduced from the empirical formula behaves logarithmically, which conforms with the hypothesis that the baryonic matter is due to the decay of a short cosmic string. Short cosmic strings would be located along long cosmic strings assignable to linear structures of galaxies, like pearls in a necklace.
  3. The critical acceleration appearing in the empirical fit as a parameter corresponds to a critical radius. The interpretation as the radius of the central bulge, with size about 10^4 ly in the case of Milky Way, is suggestive.
  4. In Zero Energy Ontology (ZEO) TGD predicts a dimensional hierarchy of basic objects analogous to the brane hierarchy in M-theory: space-time surfaces as 4-D objects; 3-D light-like orbits of partonic 2-surfaces as boundaries of Minkowskian and Euclidian regions plus space-like 3-surfaces defining the ends of the space-time surface at the opposite boundaries of CD; 2-D partonic surfaces and string world sheets; and 1-D boundaries of string world sheets. The natural idea is to identify the dynamics of D-dimensional objects in terms of an action consisting of a D-dimensional volume in the induced metric and a D-dimensional analog of Kähler action. The surfaces at the ends of space-time should be freely choosable apart from the conditions related to the super-symplectic algebra realizing strong form of holography, since they correspond to initial values.

    For the light-like orbits of partonic 2-surfaces 3-volume vanishes and one has only Chern-Simons type topological term. For string world sheets one has area term and magnetic flux, which is topological term reducing to a mere boundary term so that minimal surface equations are obtained. For the dynamical boundaries of string world sheets one obtains 1-D volume term as the length of string world line and the boundary term from string world sheet. This gives 1-D equation of motion in U(1) force just like in Maxwell's theory but with induced Kähler form defining the U(1) gauge field identifiable as the counterpart of classical U(1) field of standard model. Induced spinor fields couple at boundaries only to induced em gauge potential since induced classical W-boson gauge fields vanish at string world sheets in order to achieve a well-defined and conserved spinorial em charge (here the absolutely minimal option would be that the W and Z gauge potentials vanish only at the time-like boundaries of string world sheet). Should world-line geometry couple to the induced em gauge field instead of induced Kähler form? The only logical option is however that geometry couples to the U(1) charge perhaps identifiable in terms of fermion number.
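The claimed analogy with Maxwell's theory can be illustrated with a minimal numerical sketch of a point charge on a world line moving under a U(1) Lorentz force. This is the ordinary Maxwellian case, not the induced Kähler field itself, and all parameter values are illustrative.

```python
import numpy as np

# Minimal sketch: world-line dynamics under a U(1) (Lorentz) force,
# the ordinary Maxwellian analog of the boundary equation of motion
# discussed above. Units and parameter values are illustrative.
q, m = 1.0, 1.0
B = np.array([0.0, 0.0, 1.0])          # constant magnetic field along z
dt, steps = 1e-3, 6000

x = np.zeros(3)
v = np.array([1.0, 0.0, 0.0])
omega = q * np.linalg.norm(B) / m      # cyclotron frequency

for _ in range(steps):
    # exact rotation of v solves dv/dt = (q/m) v x B for constant B
    c_, s_ = np.cos(omega * dt), np.sin(omega * dt)
    v = np.array([c_ * v[0] + s_ * v[1], -s_ * v[0] + c_ * v[1], v[2]])
    x = x + dt * v

# the U(1) force does no work: the speed is conserved and the orbit
# is a circle of radius m*|v|/(q*|B|), here centered at (0, -1, 0)
speed = np.linalg.norm(v)
```

The exact-rotation update makes the speed conservation manifest; a naive Euler step would violate it slowly.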

See the chapter TGD and Astrophysics or the article TGD interpretation for the new discovery about galactic dark matter.



Does GW150914 force to modify the views about the formation of binary blackhole systems?

The considerations below were inspired by a popular article related to the discovery of gravitational radiation from the formation of a blackhole from two unexpectedly massive blackholes.

LIGO has hitherto detected two events in which the formation of a blackhole as a fusion of two blackholes has generated a detectable burst of gravitational radiation. The expected masses for the stars of the binary are typically around 10 solar masses. The latter event involves a pair with masses of 8 and 14 solar masses, marginally consistent with the expectation. The first event GW150914 involves masses of about 30 solar masses. This looks like a problem since blackhole formation is believed to be preceded by the formation of a red supergiant and a supernova, and in these events the star loses a large fraction of its mass.

The standard story of the evolution of a binary to a pair of blackholes would go as follows.

  1. In the beginning the stars involved have masses in the range 10-30 solar masses. The first star runs out of the hydrogen fuel in its core and starts to burn hydrogen around the helium core. In this step it puffs up much of the hydrogen at its surface layers, forming a red supergiant. The nuclear fusion proceeds in the core until an iron core is formed and fusion cannot continue anymore. The first star collapses to a supernova and a lot of mass is thrown out (conservation of momentum forces this).
  2. The second star sucks in much of the hydrogen after the formation of the red supergiant. The core of the first star eventually collapses into a black hole. The stars gradually end up close to each other. As the second star turns into a supergiant it engulfs its companion inside a common hydrogen envelope. The stars end up even closer to each other and the envelope is lost into space. Eventually the core of the second star also collapses into a black hole. The two black holes finally merge together. The model predicts that due to the mass losses the masses of the companions of the binary are not much higher than 10 solar masses. This is the problem.
Selma de Mink has proposed a new kind of story about the formation of blackholes from the stars of a binary.
  1. The story begins with two very massive stars rotating around each other extremely rapidly and so close together that they become tidally locked. They are like tango dancers. Both dancers would spin around their own axes in the same direction as they spin with respect to each other. This spinning would stir the stars and make them homogeneous. Nuclear fusion would continue in the entire volume of the star rather than in the core only. The stars would never run out of fuel and throw away their hydrogen layers. Therefore the resulting blackhole would be much more massive. This story would apply only to binaries.
  2. The simulations of the homogeneous model however have difficulties with more conventional binaries such as the blackhole of the second LIGO signal. A second problem is that the blackholes forming GW150914 have very low spins, if any. The proposed explanation would be in terms of the dance metaphor.

    Strong magnetic fields are present forcing the matter to flow near to the magnetic poles. The effect would be similar to that when figure skater stretches her arms to increase the moment of inertia in spin direction so that the spinning rate slows down by angular momentum conservation. This requires that the direction of the dipole differs from the axis of rotation considerably. Otherwise the spinning rate increases since moment of inertia is reduced: this is how the dancer develops the pirouette. The naive expectation is that the directions of the magnetic and rotation axis are near to each other.

What kind of story would TGD suggest? The basic ingredients of the TGD story can be found in the article about the LIGO discovery. Also the sections about the role of dark matter and the magnetic flux tubes in the twistor lift of TGD might be helpful.
  1. The additional actor in this story is dark matter identified as large heff=hgr phases with hbargr=GMm/v0, where v0 has dimensions of velocity and satisfies v0/c<1 (c=1 is assumed for convenience) (see this). M is the large mass and m a small mass, say the mass of an elementary particle. The parameter v0 could be proportional to a typical rotational velocity in the system with a universal coefficient.

    The crucial point is that the gravitational Compton length Λgr= hbargr/m= GM/v0 of the particle does not depend on its mass and for v0<c/2 is larger than the Schwarzschild radius rS= 2GM. For v0>c/2 the dark particles can reside inside the blackhole.

  2. Could dark matter be involved with the formation of very massive blackholes in TGD framework? In particular, could the transformation of dark matter to ordinary matter devoured by the blackhole, or dark matter ending up in the blackhole as such, help to explain the large mass of GW150914?

    I have written already earlier about a related problem. If dark matter were sucked into blackholes, the amount of dark matter should be much smaller in the recent Universe, which would look very different. The TGD inspired proposal is that the dark matter is dark in TGD sense and has a large value of Planck constant heff=n× h =hgr, implying that the dark Compton length for a particle with mass m is given by Λ= hbargr/m= GM/v0=rS/(2v0). Λgr is larger than the blackhole horizon radius for v0/c<1/2, so that the dark matter remains outside the blackhole unless it suffers a phase transition to ordinary matter.

    For v0/c>1/2 dark matter can be regarded as being inside the blackhole or having transformed to ordinary matter. Also the ordinary matter inside rS could transform to dark matter. For v0/c=1/2 one has Λ=rS and one might say that dark matter resides at the surface of the blackhole.

  3. What could happen in blackhole binaries? Could the phase transition of dark matter to ordinary matter take place or could dark matter reside inside blackhole for v0/c ≥ 1/2? This would suggest large spin at the surface of blackhole. Note that the angular momenta of dark matter - possibly at the surface of blackhole - and ordinary matter in the interior could cancel each other.

    The GRT based model of GW150914 has a parameter with dimensions of velocity very near to c, and the earlier argument leads to the proposal that v0 approaches its maximal value, meaning that Λ approaches rS/2. Already Λ=rS allows one to regard dark matter as part of the blackhole: dark matter would reside at the surface of the blackhole. The additional dark matter contribution could explain the large mass of GW150914 without giving up the standard view about how stars evolve.

  4. Could magnetic fields explain the low spin of the components of GW150914? In the TGD based model for blackhole formation magnetic fields are in a key role. Quite generally, gravitational interactions would be mediated by gravitons propagating along magnetic flux tubes here. The sunspot phenomenon in the Sun involves twisting of the flux tubes of the magnetic field, and with an 11 year period reconnections of flux tubes resolve the twisting: this involves a loss of angular momentum. Something similar is expected now: dark photons, gravitons, and possibly other particles at magnetic flux tubes carry part of the angular momentum of a rotating blackhole (or star). The gamma ray pulse observed by the Fermi telescope and assigned to GW150914 could be associated with this un-twisting, sending the angular momentum of twisted flux tubes out of the system. This process would transfer the spin of the star out of the system and produce a slowly spinning blackhole. The same process could have taken place for the component blackholes and explain why their spins are so small.
  5. Do the blackholes of the binary dance now? If the gravitational Compton length Λgr= GM/v0 of dark matter particles is so large that the other blackhole is contained within the sphere of radius Λgr, one might expect that they form a single quantum system. This would favor v0/c considerably smaller than v0/c=1/2. Tidal locking could take place for the ordinary matter, favoring parallel spins. For dark matter antiparallel spins would be favored by the vortex analogy (hydrodynamical vortices with opposite spins attract each other).
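The relation Λgr = rS/(2v0) used in the argument above is easy to check numerically. Below is a sketch for a 30 solar mass blackhole with the factors of c restored (the text uses c=1 units); the sample values of v0/c are illustrative.

```python
# Gravitational Compton length vs. Schwarzschild radius, with c restored:
# hbar_gr = G*M*m/v0 gives Lambda_gr = hbar_gr/(m*c) = G*M/(v0*c),
# while r_S = 2*G*M/c^2, so Lambda_gr/r_S = c/(2*v0) independently of M and m.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg

M = 30 * M_sun                        # GW150914-like component mass
r_S = 2 * G * M / c**2                # Schwarzschild radius, ~89 km here

def compton_to_schwarzschild(beta0):
    """Ratio Lambda_gr / r_S for v0 = beta0 * c."""
    Lambda_gr = G * M / (beta0 * c * c)
    return Lambda_gr / r_S

# beta0 < 1/2: dark matter stays outside the horizon (ratio > 1);
# beta0 = 1/2: Lambda_gr = r_S, dark matter "at the surface";
# beta0 -> 1: Lambda_gr -> r_S/2.
ratios = {b: compton_to_schwarzschild(b) for b in (0.1, 0.5, 1.0)}
```

The ratio depends only on v0/c, which is why the argument works for any blackhole mass.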

See the article LIGO and TGD. For background see the chapter TGD and Astrophysics.



Is Dragonfly a "failed" galaxy?

In Phys.Org there was an article telling about the discovery of a dark galaxy - Dragonfly 44 - with a mass which is of the same order of magnitude as that of Milky Way, according to the estimate based on the standard model of galactic dark matter, for which the region within the half-light radius is deduced to be 98 per cent dark. The dark galaxies found earlier have been much lighter. Dragonfly 44 possesses 94 globular clusters and in this respect resembles ordinary galaxies in this mass range.

The abstract of the article telling about the discovery gives a more quantitative summary about the finding.

Recently a population of large, very low surface brightness, spheroidal galaxies was identified in the Coma cluster. The apparent survival of these Ultra Diffuse Galaxies (UDGs) in a rich cluster suggests that they have very high masses. Here we present the stellar kinematics of Dragonfly 44, one of the largest Coma UDGs, using a 33.5 hr integration with DEIMOS on the Keck II telescope. We find a velocity dispersion of 47 km/s, which implies a dynamical mass of Mdyn=0.7× 10^10 Msun within its deprojected half-light radius of r1/2=4.6 kpc. The mass-to-light ratio is M/L=48 Msun/Lsun, and the dark matter fraction is 98 percent within the half-light radius. The high mass of Dragonfly 44 is accompanied by a large globular cluster population. From deep Gemini imaging taken in 0.4" seeing we infer that Dragonfly 44 has 94 globular clusters, similar to the counts for other galaxies in this mass range. Our results add to other recent evidence that many UDGs are "failed" galaxies, with the sizes, dark matter content, and globular cluster systems of much more luminous objects. We estimate the total dark halo mass of Dragonfly 44 by comparing the amount of dark matter within r=4.6 kpc to enclosed mass profiles of NFW halos. The enclosed mass suggests a total mass of ∼ 10^12 Msun, similar to the mass of the Milky Way. The existence of nearly-dark objects with this mass is unexpected, as galaxy formation is thought to be maximally-efficient in this regime.

To get some order of magnitude perspective it is good to start by noticing that r1/2=4.6 kpc is about 15,000 ly, while the distance of the Sun from the galactic center is about 8 kpc. The diameter of Milky Way is 31-55 kpc, and the radius of the blackhole at the center of Milky Way is smaller than 17 light hours.
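The quoted dynamical mass can be sanity-checked with the crude virial-type estimate M ~ σ²·r/G. The published value uses a more careful mass estimator, so agreement within a factor of a few is all this sketch should be expected to give.

```python
# Order-of-magnitude check of the Dragonfly 44 dynamical mass from the
# quoted velocity dispersion and half-light radius, using the crude
# estimate M ~ sigma^2 * r / G (no geometric prefactor included).
G = 6.674e-11            # m^3 kg^-1 s^-2
M_sun = 1.989e30         # kg
kpc = 3.086e19           # m

sigma = 47e3             # velocity dispersion, m/s
r_half = 4.6 * kpc       # deprojected half-light radius

M_est = sigma**2 * r_half / G / M_sun   # in solar masses, a few times 10^9
# The quoted M_dyn = 0.7e10 Msun is within a factor of a few, as expected
# for an estimate that drops the O(1) geometric prefactor.
```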

The proposed interpretation is as a failed galaxy. What could this failure mean? Did Dragonfly 44 try to become an ordinary galaxy while dark matter remained almost dark inside the region defined by the half-light radius? It is very difficult to imagine what the failure of dark matter to become ordinary matter could mean. In TGD framework this would correspond to a phase transition transforming dark matter, identified as heff=n×h phases, to ordinary matter; this could be imagined but is not done in the following. Could the unexpected finding challenge the standard assumption that dark matter forms a halo around the galactic center?

The mass of Dragonfly 44 is deduced from the velocities of stars. The faster they move, the larger the mass. The model for dark matter assumes a dark matter halo and this in turn gives an estimate for the total mass of the galaxy. Here a profound difference from TGD picture emerges.

  1. In TGD most of the dark matter and energy are concentrated at long cosmic strings transformed to magnetic flux tubes, with galaxies like pearls along the string. Galaxies are indeed known to be organized into filaments. Galactic dark energy could correspond to the magnetic energy. The twistorial lift of TGD predicts also a cosmological constant (see this). Both forms of dark energy could be involved. The linear distribution of dark matter along cosmic strings implies an effectively 2-D logarithmic gravitational potential giving, in the Newtonian approximation and neglecting the effect of the ordinary matter, a constant velocity spectrum serving as a good approximation to the observed velocity spectrum. A prediction distinguishing TGD from the halo model is that the motion along the cosmic string is free. The self-gravitation of the pearls however prevents them from decaying.
  2. Dark matter and energy at the galactic cosmic string (or flux tube) could explain most of the mass of Dragonfly 44 and the velocity spectrum for its stars. No halo of dark stars would be needed and there would be no dark stars within r1/2. Things would be exactly what they look like apart from the flux tube!

    The "failure" of Dragonfly 44 to become an ordinary galaxy would be that stars have not gathered to the region within r1/2. Could the density of the interstellar gas have been low in this region? This would not have prevented the formation of stars in the outer regions feeling the gravitational pull of the cosmic string.
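The constant velocity spectrum of item 1 follows from elementary Newtonian gravity: for an infinite straight string of linear mass density T the circular velocity is v = sqrt(2GT), independent of the distance from the string. Inverting this gives the string tension from the asymptotic velocity. Using the Dragonfly 44 dispersion of 47 km/s as a stand-in for the asymptotic velocity is purely illustrative.

```python
# Newtonian circular velocity around an infinite straight string of
# linear mass density T: the field is g = 2*G*T/r, so v^2/r = 2*G*T/r
# gives v^2 = 2*G*T and a flat rotation curve measures T directly.
G = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg
kpc = 3.086e19         # m

v_asym = 47e3          # m/s; illustrative, borrowed from the Dragonfly 44 dispersion
T = v_asym**2 / (2 * G)              # linear mass density in kg/m
T_sun_per_kpc = T * kpc / M_sun      # the same in Msun per kpc, a few times 10^8
```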

This extremely simple explanation of the finding, for which the standard halo model provides no explanation, would distinguish the TGD inspired model from the standard intuitive picture about the formation of galaxies as a process beginning from the galactic nucleus and proceeding outwards. Dragonfly 44 would be analogous to a hydrogen atom with electrons at very large orbits only. This analogy goes much further in TGD framework since galaxies are predicted to be quantal objects (see this).

See the article Some astrophysical and cosmological findings from TGD point of view. For background see the chapter TGD and Astrophysics.



Does GRT really allow gravitational radiation?

In a Facebook discussion Niklas Grebäck mentioned the Weyl tensor and I learned something that I should have noticed a long time ago. The Wikipedia article lists the basic properties of the Weyl tensor as the traceless part of the curvature tensor, call it R. The Weyl tensor C vanishes for conformally flat space-times. In dimensions D=2,3 the Weyl tensor vanishes identically, so that these space-times are always conformally flat: this obviously makes the dimension D=3 for space very special. Interestingly, one can have non-flat space-times with non-vanishing Weyl tensor but vanishing Schouten/Ricci/Einstein tensor and thus also vanishing energy momentum tensor.

The rest of the curvature tensor R can be expressed in terms of the so called Kulkarni-Nomizu product P• g of the Schouten tensor P and the metric tensor g: R=C+P• g, which can also be transformed into a definition of the Weyl tensor, using the definition of the curvature tensor in terms of Christoffel symbols as the fundamental definition. The Kulkarni-Nomizu product • is defined as a tensor product of two 2-tensors with symmetrization with respect to the first and second index pairs plus antisymmetrization with respect to the second and fourth indices.
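The symmetry properties of the Kulkarni-Nomizu product can be verified numerically. A sketch, assuming the common index convention (h• k)_{abcd} = h_{ac}k_{bd} + h_{bd}k_{ac} - h_{ad}k_{bc} - h_{bc}k_{ad}:

```python
import numpy as np

# Kulkarni-Nomizu product of two symmetric 2-tensors, in the convention
# (h.k)_{abcd} = h_ac k_bd + h_bd k_ac - h_ad k_bc - h_bc k_ad.
def kulkarni_nomizu(h, k):
    return (np.einsum('ac,bd->abcd', h, k) + np.einsum('bd,ac->abcd', h, k)
            - np.einsum('ad,bc->abcd', h, k) - np.einsum('bc,ad->abcd', h, k))

rng = np.random.default_rng(0)
D = 4
P = rng.normal(size=(D, D)); P = P + P.T   # random stand-in for the Schouten tensor
g = rng.normal(size=(D, D)); g = g + g.T   # random stand-in for the metric

R_part = kulkarni_nomizu(P, g)

# The product has the algebraic symmetries of a curvature tensor:
antisym_first  = np.allclose(R_part, -R_part.transpose(1, 0, 2, 3))
antisym_second = np.allclose(R_part, -R_part.transpose(0, 1, 3, 2))
pair_symmetric = np.allclose(R_part, R_part.transpose(2, 3, 0, 1))
```

This is why P• g can carry exactly the trace part of R, leaving the traceless Weyl part C.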

Schouten tensor P is expressible as a combination of Ricci tensor Ric defined by the trace of R with respect to the first two indices and metric tensor g multiplied by curvature scalar s (rather than R in order to use index free notation without confusion with the curvature tensor). The expression reads as

P= (1/(D-2))×[Ric-(s/(2(D-1)))×g] .

Note that the coefficients of Ric and g differ from those in the Einstein tensor. The Ricci tensor and the Einstein tensor are related to the energy momentum tensor by Einstein's equations.

The Weyl tensor is assigned with gravitational radiation in GRT. What I see as a serious interpretational problem is that by Einstein's equations gravitational radiation would carry no energy and momentum in the absence of matter. One could argue that there are no free gravitons in GRT if this interpretation is adopted! This could be seen as a further argument against GRT besides the problems with the notions of energy and momentum: I had not realized this earlier.

Interestingly, in TGD framework the so called massless extremals (MEs) (see this and this) are four-surfaces which are extremals of Kähler action, have Weyl tensor equal to the curvature tensor, and therefore would have an interpretation in terms of gravitons. Now these extremals are however non-vacuum extremals.

  1. Massless extremals correspond to graphs of possibly multi-valued maps from M4 to CP2. CP2 coordinates are arbitrary functions of variables u=k• m and w= ε • m (here "•" denotes M4 inner product). k is light-like wave vector and ε space-like polarization vector orthogonal to k so that the interpretation in terms of massless particle with polarization is possible. ME describes in the most general case a wave packet preserving its shape and propagating with maximal signal velocity along a kind of tube analogous to wave guide so that they are ideal for precisely targeted communications and central in TGD inspired quantum biology. MEs do not have Maxwellian counterparts. For instance, MEs can carry light-like gauge currents parallel to them: this is not possible in Maxwell's theory.
  2. I have discussed a generalization of this solution ansatz so that the directions defined by light-like vector k and polarization vector ε orthogonal to it are not constant anymore but define a slicing of M4 by orthogonal curved surfaces (analogs of string world sheets and space-like surfaces orthogonal to them). MEs in their simplest form at least are minimal surfaces and actually extremals of practically any general coordinate invariance action principle. For instance, this is the case if the volume term suggested by the twistorial lift of Kähler action (see this) and identifiable in terms of cosmological constant is added to Kähler action.
  3. MEs carry non-trivial induced gauge fields and gravitational fields identified in terms of the induced metric. I have identified them as correlates for particles, which correspond to pairs of wormhole contacts between two space-time sheets such that at least one of them is an ME. MEs would accompany both gravitational radiation and other forms of radiation classically and serve as their correlates. For massless extremals the metric tensor is of the form

    g= m+ a ε⊗ ε+ b k⊗ k + c(ε⊗ k +k⊗ ε) ,

    where m is the metric of empty Minkowski space. The curvature tensor is necessarily quadrilinear in the polarization vector ε and the light-like wave vector k (light-like for both M4 and ME metric) and from the general expression of the Weyl tensor C in terms of R and g it is equal to the curvature tensor: C=R.

    Hence the interpretation as a graviton solution conforms with the GRT interpretation. Now however the energy momentum tensor for the induced Kähler form is non-vanishing and bilinear in the velocity vector k, and the interpretational problem is avoided.

What is interesting is that also at the GRT limit the cosmological constant saves gravitons from reducing to vacuum solutions. The deviation of the energy density given by the cosmological term from that for the Minkowski metric is identifiable as the gravitonic energy density. The mysterious cosmological constant would be necessary for making gravitons non-vacuum solutions. The value of the graviton amplitude would be determined by the continuity conditions for Einstein's equations with the cosmological term. The p-adic evolution of the cosmological term predicted by TGD is however difficult to understand in GRT framework.

See the article Does GRT really allow gravitational radiation?. For background see the chapter Basic extremals of the Kähler action.



Cosmic redshift but no expansion of receding objects: one further piece of evidence for TGD cosmology

"Universe is Not Expanding After All, Controversial Study Suggests" was the title of a very interesting Science News article telling about a study which forces one to challenge Big Bang cosmology. The title of course involves the typical popular exaggeration.

The idea behind the study was simple. If the Universe expands, one expects that also astrophysical objects - such as stars and galaxies - should participate in the expansion and should increase in size. The observation was that this does not happen! One however observes the cosmic redshift, so that it is quite too early to start to bury Big Bang cosmology. The finding is however a strong objection against the strongest version of the expanding Universe. That objects like stars do not participate in the expansion was actually already known when I started to develop TGD inspired cosmology a quarter century ago, and the question is whether GRT based cosmology can model this fact naturally or not.

The finding supports TGD cosmology based on many-sheeted space-time. Individual space-time sheets do not expand continuously. They can however expand in a jerk-wise manner via quantum phase transitions increasing the p-adic prime characterizing the space-time sheet of the object by say a factor of two, or increasing the value of heff=n× h for it. This phase transition could change the properties of the object dramatically. If the object and its suddenly expanded variant are not regarded as states of the same object, one would conclude that astrophysical objects do not expand but only comove. The sudden expansions should however be observable and happen also for Earth. I have proposed a TGD variant of the Expanding Earth hypothesis along these lines (see this).

When one approximates the many-sheeted space-time of TGD with GRT space-time, one compresses the sheets to a single region of slightly curved piece of M4, and the gauge potentials and the deviation of the induced metric from M4 metric are replaced with their sums over the sheets to get the standard model fields. This operation leads to a loss of information about many-sheetedness. Many-sheetedness demonstrates its presence only through anomalies such as the different values of the Hubble constant in the scale of large voids and in cosmological scales (see this), the arrival of neutrinos and gamma rays from supernova SN1987A as separate bursts (see this), and the above observation.

One can of course argue that the cosmic redshift is a strong counter argument against TGD. Conservation of energy and momentum implied by Poincare invariance at the level of the imbedding space M4× CP2 does not seem to allow cosmic redshift. This is not the case. Photons arrive from the source without losing their energy. The point is that the properties of the imagined observer change as its distance from the source increases! The local gravitational field defined by the induced metric induces a Lorentz boost of the M4 projection of the tangent space of the space-time surface, so that the tangent spaces at the source and the receiver are boosted with respect to each other: this causes the gravitational redshift as an analog of the Doppler effect in special relativity. This is also a strong piece of evidence for the identification of space-time as a 4-surface in M4× CP2.

For details see the chapter More about TGD inspired cosmology or the article Some astrophysical and cosmological findings from TGD point of view.



The new findings about the structure of Milky Way from TGD viewpoint

I learned about two very interesting findings forcing an update of the ideas about the structure of Milky Way and allowing a test of the TGD inspired Bohr model of galaxy based on the notion of gravitational Planck constant (see this, this, this, and this).

The first popular article tells about a colossal void extending from a radius r0=150 ly to a radius of r1= 8,000 ly (ly=light year) around the galactic nucleus, discovered by a team led by professor Noriyuki Matsunaga. What has been found is that there are no young stars known as Cepheids in this region. For Cepheids the luminosity and the period of pulsation in brightness correlate, so that from the period of pulsation one can deduce the luminosity and from the luminosity the distance. There are however Cepheids in the central region with radius about 150 ly.

The second popular article tells about research conducted by an international team led by Rensselaer Polytechnic Institute Professor Heidi Jo Newberg. The researchers conclude that Milky Way is at least 50 per cent larger than estimated, extending therefore to Rgal= 150,000 ly, and has ring like structures in the galactic plane. The rings are actually ripples in the disk having a higher density of matter. Milky Way is said to be corrugated: there are at least 4 ripples in the disk of Milky Way. The first apparent ring of stars lies at a distance of about R0=60,000 ly from the center. Note that R0 is considerably larger than r1=8,000 ly: the ratio is R0/r1= 15/2, so that this finding need not have anything to do with the first one.

Consider now the TGD based quantum model of galaxy. Nottale proposed that the orbits of planets in the solar system are actually Bohr orbits with gravitational Planck constant (different for inner and outer planets and proportional to the product of the masses of Sun and planet). In TGD this idea is developed further (see this): ordinary matter would condense around dark matter at spherical cells or tubes with Bohr radius. The Bohr model is certainly an over-simplification but can be taken as a starting point in the TGD approach.

Could Bohr orbitology apply also to the galactic rings and could it predict ring radii as radii with which dark matter concentrations - perhaps at flux tubes - are associated? One can indeed apply Bohr orbitology by assuming TGD based model for galaxy formation.

  1. Galaxies are associated with long cosmic string like objects carrying dark matter and energy (as magnetic energy) (see this and this). Galaxies are like pearls along a necklace and experience a gravitational potential which is logarithmic. The gravitational force is of the form F= mv1^2/ρ, where ρ is the orthogonal distance from the cosmic string. Here v1^2 has dimensions of velocity squared and is proportional to GT, T=dM/dl the string tension of the cosmic string.
  2. Newton's law mv^2/ρ= mv1^2/ρ gives the observed constant velocity spectrum

    v=v1 .

    The approximate constancy originally led to the hypothesis that there is a dark matter halo (as a matter of fact, the velocity tends to increase slowly with distance). Now there is no halo but a cosmic string orthogonal to the galactic plane: the well-known galactic jets would travel along the string. The prediction is that galaxies are free to move along the cosmic string. There is evidence for such large scale motions.
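As a side illustration (not part of the original argument), the two force laws can be compared numerically. The sketch below assumes v1= 10^-3/3 in units c=1 (the value quoted later for the velocity parameter); the radii are in arbitrary units chosen so that the Keplerian curve matches v1 at ρ=1.

```python
import math

V1 = 1e-3 / 3  # velocity parameter of the cosmic string, units c = 1 (assumed value)

def v_string(rho):
    """Rotation velocity for the string force F = m*v1^2/rho:
    m*v^2/rho = m*v1^2/rho gives a flat curve v = v1 at every radius."""
    return V1

def v_kepler(rho, gm):
    """Keplerian velocity for a point mass: m*v^2/rho = G*M*m/rho^2
    gives v = sqrt(G*M/rho), falling off with distance."""
    return math.sqrt(gm / rho)

radii = [1.0, 10.0, 100.0]                            # arbitrary units
flat_curve = [v_string(r) for r in radii]             # constant: [v1, v1, v1]
keplerian = [v_kepler(r, gm=V1 ** 2) for r in radii]  # matches v1 only at rho = 1
```

The flat curve reproduces the constant velocity spectrum of the text, while the Keplerian curve falls like ρ^-1/2; the observed flatness of galactic rotation curves is what forces either a halo or, in the TGD picture, the cosmic string.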

This was still just classical Newtonian physics. What comes to mind is that one could also apply Bohr quantization for angular momentum to deduce the radii of the orbits.
  1. This requires estimate for the gravitational Planck constant

    hgr=GMm/v0

    assignable to the flux tubes connecting mass m to the central mass M.

  2. The first guess for v0 would be as

    v0=v1 .

    The value of v1 is approximately v1= 10^-3/3 (units c=1 are used) (see this).

  3. What about mass M? The problem is that one does not have now a central mass M describable as a point mass but an effective mass characterizing the contributions of cosmic string distributed along string and also the mass of galaxy itself inside the orbit of star. It is not clear what value of central mass M should be assigned to the galactic end of the flux tubes.

    One can make guesses for M.

    1. The first guess for M would be the mass of the galaxy, x× 10^12× M(Sun), x∈ [.8-1.5]. The corresponding Schwarzschild radius can be estimated from that of the Sun (3 km) and equals .48 ly for x=1.5. This would give for the mass independent gravitational Compton length the value

      Λgr= hgr/m= GM/v0= rS/(2v0) (c=1) .

      For v0=v1 this would give Λgr= 4.5× 10^3 ly for x=1.5. Note that the colossal void extends from 150 ly to 8× 10^3 ly. This guess is very probably too large since M should correspond to a mass within R0 or perhaps even within r0.

    2. A more reasonable guess is that the mass corresponds to mass within R0=60,000 ly or perhaps even radius r0=150 ly. r0 turns out to make sense and gives a connection between the two observations.
  4. The quantization condition for angular momentum reads as

    mv1ρ= n× hgr/(2π) .

    This would give

    ρn= n× ρ0 , ρ0=GM/[2π v1× v0] =Λgr/[2π v1] .

    The radii ρn are integer multiples of a radius ρ0.

    1. Taking M=Mgal, the value of ρ0 for the simplest guess v0=v1 would be about ρ0=2.15× 10^6 ly. This is roughly 36 times larger than the radius R0=6× 10^4 ly of the lowest ring. The use of the mass of the entire galaxy as an estimate for M of course explains the too large value.
    2. By scaling M down by a factor 1/36 one would obtain R0=6× 10^4 ly and M= Mgal/36 ≈ .028× Mgal: this mass should reside within radius R0, actually within radius Λgr. Remarkably, the estimate Λgr= 2π v1ρ0 gives Λgr= 127 ly, which is somewhat smaller than the radius r0= 150 ly associated with the void. The model therefore relates the widely different scales r0 and R0 assignable to the two findings to each other in terms of the small parameter v0 appearing in the role of a dimensionless gravitational "fine structure constant" αgr= GMm/2hgr= v0/2.
The TGD inspired prediction is that the radii of the observed rings are integer multiples of a basic radius. 4 rings are reported, implying that the outermost ring should be at a distance of 240,000 ly, which is considerably larger than the claimed updated size of 150,000 ly. The simple quantization as integer multiples is therefore not quite correct, but the orders of magnitude are.
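A few lines of arithmetic suffice to cross-check these order-of-magnitude estimates. The sketch below is only a numerical check under stated assumptions: M = 1.5×10^12 solar masses, v0 = v1 = 10^-3/3 (c=1), r_S(Sun) = 3 km, and ρ0 = GM/(v1 v0) with the 2π convention chosen so that the quoted value ρ0 ≈ 2.15×10^6 ly is reproduced (the 2π factors in the text's formulas are not fully consistent).

```python
KM_PER_LY = 9.46e12   # kilometres per light year
RS_SUN_KM = 3.0       # Schwarzschild radius of the Sun, km
V0 = V1 = 1e-3 / 3    # velocity parameters, units c = 1 (first guess v0 = v1)

def gm_ly(mass_in_suns):
    """GM = r_S/2 in light years (c = 1), scaled linearly from the solar value."""
    return 0.5 * mass_in_suns * RS_SUN_KM / KM_PER_LY

def ring_radii(mass_in_suns, n_max=4):
    """Bohr radii rho_n = n*rho_0, with rho_0 = GM/(v1*v0)
    (2*pi convention chosen to reproduce the quoted 2.15e6 ly)."""
    rho0 = gm_ly(mass_in_suns) / (V1 * V0)
    return [n * rho0 for n in range(1, n_max + 1)]

full_mass = ring_radii(1.5e12)         # rho_0 ~ 2.1e6 ly: too large by a factor ~36
scaled_mass = ring_radii(1.5e12 / 36)  # rho_0 ~ 6e4 ly, matching R0 of the first ring
```

With the scaled mass the fourth ring lands at 4×ρ0 ≈ 2.4×10^5 ly, reproducing the tension with the quoted galactic size of 1.5×10^5 ly noted above.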

This suggests that visible matter has condensed around dark matter at Bohr quantized orbits or circular flux tubes. This dark matter would contribute to the gravitational potential and imply that the velocity spectrum for distant stars is not quite constant but increases slowly, as observed. The really revolutionary aspect of this picture is that gravitation would involve quantum coherence in galactic length scales. The constancy of the CMB temperature supports gravitational quantum coherence even in cosmic scales.

For details see the chapter TGD and Astrophysics or the article Three astrophysical and cosmological findings from TGD point of view.



Blackholes do not absorb dark matter as fast as they should

A few days ago I encountered a link to a highly interesting popular article telling about the claim of astronomers that blackholes do not absorb dark matter as fast as they should. The claim is based on a model for dark matter: if the absorption rate were what one would expect by identifying dark matter as some exotic particle, the rate would be quite too fast and the Universe would look very different.

How could this relate to the vision that dark matter is ordinary matter in large Planck constant phase with heff=n× h= hgr= GMm/v0 generated at quantum criticality? Gravitational Planck constant hgr was originally introduced by Nottale. In this formula M is some mass, say that of black hole or astrophysical object, m is much smaller mass, say that of elementary particle, and v0 is velocity parameter, which is assumed to be in constant ratio to the spinning velocity of M in the model for quantum biology explaining biophotons as decay products of dark cyclotron photons.

Could the large value of Planck constant force dark matter to be delocalized in a much longer scale than the blackhole size, and in this manner imply that the absorption of dark matter by a blackhole is not a sensible notion unless dark matter is first transformed to ordinary matter? Could it be that the transformation does not occur at all or occurs very slowly and is therefore the slow bottleneck step in the process leading to absorption into the interior of the blackhole? This could be the case! The dark Compton length would be Λgr= hgr/m= GM/v0 = rS/(2v0), and for v0/c <<1 this would give a dark Compton wavelength considerably larger than the radius rS=2GM of the blackhole. Note that the dark Compton length would not depend on m, in accordance with Equivalence Principle and natural if one accepts gravitational quantum coherence in astrophysical scales. The observation would thus suggest that dark matter around the blackhole is stable against a phase transition to ordinary matter, or that the transition takes place very slowly. This in turn would reflect Negentropy Maximization Principle favoring the generation of entanglement negentropy assignable to dark matter.
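The mass independence and the size of the effect follow directly from the formulas above: the ratio of the dark Compton length to the horizon radius is Λgr/rS = (GM/v0)/(2GM) = 1/(2v0), depending on neither M nor m. A minimal sketch, with v0 given the illustrative value 10^-3/3 used elsewhere in these notes:

```python
def dark_compton_over_rs(v0):
    """Lambda_gr / r_S = (G*M/v0) / (2*G*M) = 1/(2*v0) in units c = 1.
    Independent of both M and m, as Equivalence Principle suggests."""
    return 1.0 / (2.0 * v0)

ratio = dark_compton_over_rs(1e-3 / 3)  # = 1500: dark matter extends far outside the horizon
```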

For details see the chapter Quantum Astrophysics or the article Three astrophysical and cosmological findings from TGD point of view.





The problem of two Hubble constants persists

The rate of cosmic expansion manifesting itself as cosmic redshift is proportional to the distance r of the object: the expansion velocity satisfies v=Hr. The proportionality coefficient H is known as the Hubble constant and has dimensions of 1/s. A more convenient parameter is the Hubble length defined as lH= c/H, whose nominal value is 14.4 billion light years and corresponds to the limit at which a distant object recedes with light velocity from the observer.

  1. The measurement of the Hubble constant requires determination of the distance of an astrophysical object. For instance, the distance can be determined using so-called standard candles - type Ia supernovae, which always have the same intrinsic brightness, decreasing like the inverse square of distance (cosmic redshift also reduces the total intensity by shifting the frequencies). This method works for not too large distances (a few hundred million light years, the size scale of the large voids): therefore this method gives the value of the local Hubble constant.
  2. The rate can also be deduced from the cosmic redshift of CMB radiation. This method gives the Hubble constant in cosmic scales considerably longer than the size of the large voids: one speaks of a global determination of the Hubble constant.
The problem has been that the local and global methods give different values for H. One might hope that the discrepancy would disappear as measurements become more precise. The recent determination of the local value of the Hubble constant however demonstrates that the problem persists. The global value is roughly 9 per cent smaller than the local value. For popular articles about the finding see this and this.
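To make the 9 per cent figure concrete, one can convert the two determinations to Hubble lengths. The values below are assumed round numbers representative of the 2016 situation (local H0 ≈ 73 km/s/Mpc from standard candles, global H0 ≈ 67 km/s/Mpc from CMB):

```python
C_KM_S = 2.998e5       # speed of light, km/s
LY_PER_MPC = 3.262e6   # light years per megaparsec

def hubble_length_gly(h0):
    """Hubble length l_H = c/H0 for H0 in km/s/Mpc, in billions of light years."""
    return C_KM_S / h0 * LY_PER_MPC / 1e9

local_lh = hubble_length_gly(73.0)   # ~13.4 Gly (local, standard candles)
global_lh = hubble_length_gly(67.0)  # ~14.6 Gly (global, CMB)
discrepancy = (73.0 - 67.0) / 67.0   # ~9 per cent
```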

The explanation of the discrepancy in terms of many-sheeted space-time was one of the first applications of TGD inspired cosmology. The local value of the Hubble constant would correspond to space-time sheets of size at most that of a large void. The global value would correspond to space-time sheets with size scales up to ten billion years assignable to the entire observed cosmos. The smaller value of the Hubble constant for space-time sheets of cosmic size would reflect the fact that the metric for them corresponds to a smaller average density. Mass density would be fractal in accordance with the fractality of TGD Universe implied by many-sheetedness.

The reader has perhaps noticed that I have been talking about space-time sheets in plural. The space-time of TGD is indeed a many-sheeted 4-D surface in 8-D M4×CP2. It corresponds approximately to GRT space-time in the sense that the gauge potentials and gravitational fields (deviation of the induced metric from Minkowski metric) for the sheets sum up to the gauge potentials and gravitational field of the GRT space-time characterized by the metric and gauge potentials of the standard model. Many-sheetedness leads to predictions allowing one to distinguish between GRT and TGD. For instance, the propagation velocities of particles along different space-time sheets can differ since the light-velocity along a space-time sheet is typically smaller than the maximal signal velocity in empty Minkowski space M4. Evidence for this effect was observed for the first time for supernova 1987A: neutrinos arrived in two bursts and also the gamma ray burst arrived at a different time than the neutrinos: as if the propagation had taken place along different space-time sheets (see this). Evidence for this effect has been observed also for neutrinos arriving from the galactic blackhole Sagittarius A. Two pulses were detected and the difference of the arrival times was a few hours (see this).

For details see the chapter More about TGD cosmology or the article Three astrophysical and cosmological findings from TGD point of view.



Gravitational Waves from Black Hole Megamergers Are Weaker Than Predicted

There was an interesting article in Scientific American with the title "Gravitational Waves from Black Hole Megamergers Are Weaker Than Predicted" (see this). The article told about the failure to find support for the effects of gravitational waves from the fusion of supermassive blackholes. The fusions of supermassive blackholes generate gravitational radiation. These collisions would be scaled up versions of the LIGO event.

Supermassive blackholes in galactic centers are by statistical arguments expected to fuse in the collisions of galaxies so often that the generated gravitational radiation produces a detectable background hum. This hum should be seen as a jitter in the arrival times of photons of radiation from pulsars. The jitter is the same for all pulsars and is therefore expected to be detectable as a kind of "hum" defined by gravitational radiation at low frequencies, in the nanohertz range with periods of the order of years. For the past decade, scientists with the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) collaboration have tried to detect this constant "hum" of low-frequency gravitational waves (see this). The outcome is negative and one should explain why this is the case.

I do not know how much evidence there exists for nearby collisions of galaxies in which the fusion of galactic supermassive blackholes really takes place. What would TGD suggest? A year ago I would have considered an explanation in terms of dark gravitons with a lower detection rate, but after the revision of the model for the detection of gravitational waves forced by the LIGO discovery the following explanation looks more plausible.

  1. In TGD Universe galaxies could be like pearls in a necklace carrying dark magnetic energy identifiable as dark matter. This explains galactic rotation curves correctly: the 1/ρ force in the plane orthogonal to the long cosmic string (in the TGD sense) defining the necklace gives a constant velocity spectrum plus free motion along the string. This prediction distinguishes TGD from the competing models. The halo is not spherical since stars are in free motion along the cosmic string. The galactic dark matter is identified as dark energy in turn identifiable as the magnetic energy of the long cosmic string. There is considerable evidence for these necklaces and this model is one of the oldest parts of TGD inspired astrophysics and cosmology.
  2. Galaxies as vehicles moving along cosmic highways defined by long cosmic strings is a more dynamical metaphor than pearls in a necklace and better in the recent context. The dominating interaction would be the gravitational interaction keeping the galaxy on the highway, and this might make the fusion of galactic blackholes a rare process.
This model allows one to consider the possibility that the fusions of galactic super-massive blackholes are much rarer than expected in the standard picture.
  1. The gravitational interaction between galaxies at separate highways passing near each other would be secondary interaction and galaxies would pass each other without anything dramatic occurring.
  2. If the highways intersect each other, the galaxies could collide if the timing is right, but this would be a rare event - like two vehicles arriving at a crossing simultaneously. In fact, I wrote a couple of years ago about the possibility that Milky Way could have resulted at the intersection of two cosmic highways (or as a result of a cosmic traffic accident).
  3. If the galaxies are moving in opposite directions along the same highway, the situation changes and a fusion of galactic nuclei in a head on collision is unavoidable. It is difficult to say how often this kind of event occurs: it could be that galaxies have after sufficiently many collisions "learned" to move in the same direction and define an analog of hydrodynamical flow. A cosmic flow has been observed in "too" long scales and could correspond to a coherent flow along a cosmic string.

For details see the chapter Quantum Astrophysics.



Correlated Triangles and Polygons in Standard Cosmology and in TGD

Peter Woit had an interesting This Week's Hype. The inspiration came from a popular article in Quanta Magazine telling about the proposal of Maldacena and Nima Arkani-Hamed that the temperature fluctuations of the cosmic microwave background (CMB) could exhibit a deviation from Gaussianity in the sense that there would be measurable maxima of n-point correlations in the CMB spectrum as a function of spherical angles. These effects would relate to the large scale structure of CMB. Lubos Motl wrote about the article in a different and rather aggressive tone.

The article in Quanta Magazine does not go into technical details but the original article of Maldacena and Arkani-Hamed contains detailed calculations for various n-point functions of the inflaton field and other fields in turn determining the correlation functions for CMB temperature. The article is technically very elegant but the assumptions behind the calculations are questionable. In TGD Universe they would be simply wrong, and some inhabitants of TGD Universe could see the approach as a demonstration of how misleading refined mathematics can be if the assumptions behind it are wrong.

It must be emphasized that it is already known - and stressed also in the article - that the deviations of the CMB from Gaussianity are below the current measurement resolution, and that testing the proposed non-Gaussianities requires new experimental technology such as 21 cm tomography, which maps the redshift distribution of the 21 cm hydrogen line to deduce information about the fine details of CMB, now in the form of n-point correlations.

Inflaton vacuum energy is in TGD framework replaced by Kähler magnetic energy, and the model of Maldacena and Arkani-Hamed does not apply. The elegant work of Maldacena and Arkani-Hamed however inspired a TGD based consideration of the situation but with very different motivations. In TGD inflaton fields do not play any role since the inflaton vacuum energy is replaced with the energy of magnetic flux tubes. The polygons also appear in a totally different manner and are associated with symplectic invariants identified as Kähler fluxes, and might relate closely to quantum physical correlates of arithmetic cognition. These considerations lead to a proposal that the integers (3,4,5) define what one might call additive primes for integers n≥ 3 allowing geometric representation as non-degenerate polygons - prime polygons. One should dig through the enormous mathematical literature to find out whether mathematicians have proposed this notion - probably so. Partitions would correspond to splicings of polygons into smaller polygons.

These splicings could be dynamical quantum processes behind arithmetic conscious processes involving addition. I have already earlier considered a possible counterpart for conscious prime factorization in the adelic framework. This will not be discussed in this section since the topic is definitely too far from primordial cosmology. The purpose of this article is only to give an example of how good work in theoretical physics - even when it need not be relevant for physics - can stimulate new ideas in a completely different context.

For details see the chapter More About TGD Inspired Cosmology or the article Correlated Triangles and Polygons in Standard Cosmology and in TGD .



Cyclic cosmology from TGD perspective

The motivation for this piece of text came from a very inspiring interview of Neil Turok by Paul Kennedy on CBC radio. The themes were the extreme complexity of theories in contrast to the extreme simplicity of physics, the mysterious homogeneity and isotropy of cosmology, and the cyclic model of cosmology developed also by Turok himself. In the following I will consider these issues from TGD viewpoint.

1. Extreme complexity of theories vs. extreme simplicity of physics

The theme was the incredible simplicity of physics at short and long scales vs. the equally incredible complexity of the fashionable theories, which are not even able to predict anything testable. More precisely, superstring theory does make a prediction: every imaginable option is possible. Very safe but not very interesting. The outcome is the multiverse paradigm having its roots in the inflationary scenario and stating that our local Universe is just one particular randomly selected Universe in a collection of an infinite number of Universes. If so, then physics has reached its end. This unavoidably brings to my mind the saying of Einstein: "Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius – and a lot of courage – to move in the opposite direction.".

Turok is not so pessimistic and thinks that some deep principle has remained undiscovered. Turok's basic objection against the multiverse is that there is not the slightest shred of experimental evidence for it. In fact, I think that we can sigh with relief now: the multiverse is disappearing into the sands of time, and can be seen as the last desperate attempt to establish superstring theory as a respectable physical theory.

The emphasis is now on the applications of AdS/CFT correspondence to other branches of physics such as condensed matter physics and quantum computation. The attempt is to reduce the complex strongly interacting dynamics of conformally invariant systems to a gravitational interaction in a higher dimensional space-time called the bulk. Unfortunately this approach involves the effective field theory thinking, which led to the landscape catastrophe in superstring theory. Einstein's theory is assumed to describe low energy gravitation in AdS so that higher dimensional blackholes emerge and their interiors can be populated with all kinds of weird entities. For the TGD view about the situation see this.

One can of course criticize Turok's view about the simplicity of the Universe. What we know is that visible matter becomes simple both at short and long scales: we actually know very little about dark matter. Turok also mentions that in our scales - roughly the geometric mean of the shortest and longest scales of the known Universe - resides biology, which is extremely complex. In TGD Universe this would be due to the fact that dark matter is the boss for living systems and the complexity of visible matter reflects that of dark matter. It could be that the dark matter levels corresponding to increasing values of heff/h become increasingly complex in long scales. We just do not see the complexity!

2. Why is the cosmology so homogeneous and isotropic?

Turok sees as one of the deepest problems of cosmology the extreme homogeneity and isotropy of the cosmic microwave background, implying that two regions with no possibility of information exchange have been at the same temperature in the remote past. Classically this is extremely implausible and in GRT framework there is no obvious reason for it. The inflationary scenario provides one possible mechanism: the observed Universe would have been a very small region, which expanded during the inflationary period so that all temperature gradients were smoothed out. This paradigm has several shortcomings and there exists no generally accepted variant of the scenario.

In TGD framework one can also consider several explanations.

  1. One of my original arguments for H=M4× CP2 was that the imbeddability of the cosmology to H forces long range correlations (see this, this and this). The theory is Lorentz invariant and standard cosmologies can be imbedded inside the future light-cone with its boundary representing the Big Bang. Only Robertson-Walker cosmologies with sub-critical or critical mass are allowed by TGD. Sub-critical ones are Lorentz invariant and therefore a very natural option. One would have automatically constant temperature. Could the enormous reduction of degrees of freedom due to the 4-surface property force the long range correlations? Probably not. The 4-surface property is a necessary condition but very probably far from enough.
  2. The primordial TGD inspired cosmology is cosmic string dominated: one has a gas of string like objects, which in the ideal case are of the form X2× Y2⊂ M4× CP2, where X2 is a minimal surface and Y2 a complex surface of CP2. The strings can be arbitrarily long unlike in GUTs. The conventional space-time as a surface representing the graph of some map M4→ CP2 does not exist during this period. The density goes like 1/a^2, where a is the light-cone proper time, and the mass of a co-moving volume vanishes at the limit of the Big Bang, which is actually reduced to a "Silent Whisper" amplified later to the Big Bang.

    The cosmic string dominated period is followed by a quantum critical period analogous to the inflationary period as cosmic strings start to topologically condense at space-time sheets, becoming magnetic flux tubes with gradually thickening M4 projections. Ordinary space-time is formed: the critical cosmology is universal and uniquely fixed apart from a single parameter determining the duration of this period.

    After that a phase transition to the radiation dominated phase takes place and ordinary matter emerges in the decay of the magnetic energy of cosmic strings to particles - Kähler magnetic energy corresponds to the vacuum energy of the inflaton field. This period would be analogous to the inflationary period. Negative pressure would be due to the magnetic tension of the flux tubes.

    Also the asymptotic cosmology is string dominated since the corresponding energy density goes like 1/a^2 as for the primordial phase, whereas for matter dominated cosmology it goes like 1/a^3. This brings in mind the ekpyrotic phase of cyclic cosmology.

  3. This picture is perhaps over-simplified. Quite recently I proposed a lift of Kähler action to its 6-D twistorial counterpart (see this). The prediction is that a volume term with a positive coefficient representing cosmological constant emerges from the 6-D twistorial variant of Kähler action via dimensional reduction. It is associated with the S2 fiber of the M4 twistor space, and Planck length characterizes the radius of S2. Volume density and magnetic energy density together could give rise to the cosmological constant behind the negative pressure term. Note that the cosmological term for cosmic strings reduces to a similar form as that from Kähler action, and depending on the value of the cosmological constant only either of them or both are important. TGD suggests strongly that the cosmological constant Λ has a spectrum determined by quantum criticality and is proportional to the inverse of the p-adic length scale squared, so that both terms could be important. If the cosmological constant term is always small, the original explanation for the negative pressure applies.

    The vision about quantum criticality of TGD Universe would suggest that the two terms have similar sizes. For cosmic strings the cosmological term does not give a pressure term since it comes from the string world sheet alone. Thus for cosmic strings Kähler action would define the negative pressure, and for space-time sheets both terms would. If the contributions had opposite signs, the acceleration of cosmic expansion would be determined by competing control variables. To my best understanding the signs of the two contributions are the same (my best understanding does not however guarantee much since I am a numerical idiot and blundering with numerical factors and signs are my specialities). If the signs are opposite, one cannot avoid the question whether the quantum critical Universe could be able to control its expansion by cosmic homeostasis by varying the two cosmological constants. Otherwise only the control of the difference of the expansion accelerations of cosmic strings and space-time sheets would be possible.

  4. A third argument explaining the mysterious temperature correlations relies on the hierarchy of Planck constants heff/h=n labelling the levels of the dark matter hierarchy with quantum scales proportional to n. Arbitrarily large scales would be present and their presence would imply a hierarchy of arbitrarily large space-time sheets with size characterized by n. The dynamics in a given scale would be homogeneous and isotropic below the scale of this space-time sheet.

    One could see the correlations of cosmic temperature as a signature of quantum coherence in cosmological scales involving also entanglement in cosmic scales (see this). Kähler magnetic flux tubes carrying monopole flux, requiring no currents to generate the magnetic fields inside them, would serve as correlates for the entanglement just as wormholes serve as correlates of entanglement in the ER-EPR correspondence. This would conform with the fact that the analog of the inflationary phase preserves the flux tube network formed from cosmic strings. It would also explain the mysterious existence of magnetic fields in all scales.
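The scaling laws invoked in this list (energy density ∝ 1/a^2 for the string dominated phases, ∝ 1/a^3 for matter dominance, ∝ 1/a^4 for radiation) can be written out as a small sketch; the normalization to equal densities at a = 1 is an arbitrary assumption for illustration only:

```python
def densities(a):
    """Energy densities as functions of light-cone proper time a,
    all normalized to 1 at a = 1 (an arbitrary illustrative choice)."""
    return {
        "string": 1.0 / a ** 2,     # cosmic strings: primordial and asymptotic phases
        "matter": 1.0 / a ** 3,     # matter dominated phase
        "radiation": 1.0 / a ** 4,  # radiation dominated phase
    }

late = densities(100.0)
# the string component dilutes most slowly, so it dominates again at large a
```

This makes explicit why the asymptotic cosmology is string dominated: whatever the normalizations, the 1/a^2 component eventually overtakes the faster-diluting matter and radiation components.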

3. The TGD analog of cyclic cosmology

Turok is a proponent of a cyclic cosmology combining so called ekpyrotic cosmology and inflationary cosmology. This cosmology offers a further solution candidate for the homogeneity/isotropy mystery. The contracting phase would differ from the expanding phase in that contraction would be much slower than expansion, and only during the last stage would there be a symmetry between the two half-periods. In concrete realizations an inflaton type field is introduced. Also scenarios in which nearby branes collide with each other cyclically and in this manner generate a big crunch followed by a big bang are considered. I find it difficult to see this picture as a solution of the homogeneity/isotropy problem.

I however realized that it is possible to imagine a TGD analog of cyclic cosmology in Zero Energy Ontology (ZEO). There is no need to assume that this picture solves the homogeneity/isotropy problem: the cyclicity corresponds to a kind of biological cyclicity, or rather a sequence of re-incarnations.

3.1 A small dose of TGD inspired theory of consciousness

  1. In ZEO the basic geometric object is the causal diamond (CD), whose M4 projection represents an expanding spherical light-front, which at some moment begins to contract - this defines an intersection of future and past directed light-cones. Zero energy states are pairs of positive and negative energy states at the opposite light-like boundaries of CD such that all conserved quantum numbers are opposite. This makes it possible to satisfy conservation laws.
  2. CD is identified as the 4-D perceptive field of a conscious entity in the sense that the contents of conscious experience come from CD. Does CD represent only the perceptive field of an observer getting a sensory representation about a much larger space-time surface continuing beyond the boundaries of CD, or does the geometry of CD imply a cosmology, which is a Big Bang followed by a Big Crunch? Or do the two boundaries of CD define also space-time boundaries so that space-time would end there?

    The conscious entity defined by CD cannot tell whether this is the case. Could a larger CD containing it perhaps answer the question? No! For the larger CD the smaller CD could represent the analog of a quantum fluctuation so that the space-time of CD would not extend beyond CD.

  3. The geometry of CD brings in mind Big Bang - Big Crunch cosmology. Could this be forced by boundary conditions at the future and past boundaries of CD meeting along the large 3-sphere, forcing a Big Bang at both ends of CD but in opposite directions? If CD is an independent geometric entity, one could see it as a Big Bang followed by a Big Crunch in some sense, but not as a return back to the primordial state: this would be boring and in conflict with the TGD view about cosmic evolution.
  4. To proceed some TGD inspired theory of consciousness is needed. In ZEO quantum measurement theory extends to a theory of consciousness. State function reductions can occur to either boundary of CD and Negentropy Maximization Principle (NMP) dictates the dynamics of consciousness (see this).

    Zeno effect generalizes to a sequence of state function reductions leaving the second boundary of CD and the members of the zero energy states at it unchanged, but changing the states at the opposite boundary and also the location of CD so that the distance between the tips of CD increases reduction by reduction. This gives rise to the experienced flow of subjective time and its correlation with the flow of geometric time, identified as the increase of this distance.

    The first reduction to the opposite boundary is forced to eventually occur by NMP and corresponds to a state function reduction in the usual sense. It means the death of the conscious entity and its re-incarnation at the opposite boundary, which begins to shift towards the opposite time direction reduction by reduction. Therefore the distance between the tips of CD continues to increase. The two lives of self are lived in opposite time directions.

  5. Could one test this picture? By fractality CDs appear in all scales and are relevant also for living matter and consciousness. For instance, mental images should have CDs as correlates in some scale. Can one identify some analogy for the Big Bang-Big Crunch cosmology for them? I have indeed considered what time reversal for mental images could mean and some individuals (including me) have experienced it concretely in some altered states of consciousness.
3.2 Does cyclic cosmology correspond to a sequence of re-incarnations for a cosmic organism?

The question that I am ready to pose is easy for a smart reader to guess. Could this sequence of life cycles of self with opposite directions of time serve as the TGD analog of cyclic cosmology?

  1. If so, the Universe could be seen as a gigantic organism dying and re-incarnating, and quantum coherence even in the largest scales would explain the long range correlations of temperature in terms of entanglement - in fact negentropic entanglement, which is a basic new element of the TGD based generalization of quantum theory.
  2. A Big Crunch back to primordial cosmology, destroying all achievements of evolution, should not occur at any level of the dark matter hierarchy. Rather, the process leading to biological death would involve the deaths of various subsystems in increasing scales and eventually the death in the largest scale involved.
  3. The system would continue its expansion and evolution from the state that it reached during the previous cycle but in the opposite time direction. What would remain from the previous life would be the negentropic entanglement at the evolving boundary fixed by the first reduction to the opposite boundary, and this conscious information would correspond to the static permanent part of self for the new conscious entity, whose sensory input would come from the opposite boundary of CD after the re-incarnation. The birth of an organism should be analogous to a Big Bang - certainly the growth of an organism is something like this in a metaphorical sense. Is the decay of an organism analogous to a Big Crunch?
  4. What is remarkable is that both primordial and asymptotic cosmology are dominated by string like objects; only their scales are different. Therefore the primordial cosmology of the reversed cycle would also be dominated by thickened cosmic strings. Even more, the accelerated expansion could rip the space-time into pieces - this is one of the crazy looking predictions of accelerated expansion - and one would have free albeit thickened cosmic strings, and in rough enough resolution they would look like ideal cosmic strings.

    The cycling would not be a trivial and boring (dare I say stupid) repeated return to the same primordial state, in conflict with NMP implying endless evolution. It would involve a scaling up at each step. The evolution would be like a repeated zooming up of a Mandelbrot fractal! Breathing is a good metaphor for this endless process of re-creation: God is breathing! Or Gods, since there is a fractal hierarchy of CDs within CDs.

  5. There is however a trivial problem that I did not first notice. The light-cone proper times a+/- assignable to the two light-cones M4+/- defining CD are not the same. If the future directed light-cone M4+ corresponds to a+^2 = t^2 - rM^2 with the lower tip of CD at (t,rM)=(0,0), the light-cone proper time associated with M4- corresponds to a-^2 = (t-T)^2 - rM^2 = a+^2 - 2tT + T^2 = a+^2 - 2(a+^2+rM^2)^(1/2) T + T^2. The energy density would behave near the upper tip like ρ ∝ 1/a+^2 rather than ρ ∝ 1/a-^2. Does this require that a Big Crunch occurs and leads to the phase in which one has a gas of cosmic strings in M4-? This does not seem plausible. Rather, the gas of presumably thickened cosmic strings in M4- is generated in the state function reduction to the opposite boundary. This state function reduction would be very much like the end of a world and the creation of a new Universe.
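The relation between the two light-cone proper times is elementary algebra; a quick numerical sanity check of the identity, with arbitrary sample values for t, T and rM:

```python
import math

# a_+^2 = t^2 - r_M^2 (lower tip at origin), a_-^2 = (t-T)^2 - r_M^2 (upper tip at (T,0))
def a_plus_sq(t, r_M):
    return t**2 - r_M**2

def a_minus_sq(t, T, r_M):
    return (t - T)**2 - r_M**2

t, T, r_M = 5.0, 2.0, 1.5
lhs = a_minus_sq(t, T, r_M)

# the two equivalent forms of a_-^2 quoted in the text
rhs1 = a_plus_sq(t, r_M) - 2*t*T + T**2
rhs2 = a_plus_sq(t, r_M) - 2*math.sqrt(a_plus_sq(t, r_M) + r_M**2)*T + T**2

assert abs(lhs - rhs1) < 1e-12 and abs(lhs - rhs2) < 1e-12
```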
To sum up, a single observation - the constancy of cosmic temperature - gives strong support for the extremely non-trivial and apparently completely crazy conclusion that quantum coherence is present in cosmological scales and that the Universe is a living organism. This should prove how incredibly important the interaction between experiment and theory is.

For details see the chapter TGD Cosmology.



What could the detection of a gamma ray pulse .4 seconds after the LIGO merger mean?

This posting begins with a part from an earlier posting to which I added more material, which turns out to relate also to the no-hair theorem and to Hawking's recent work (discussed from TGD perspective here) in an interesting manner. It might be that the blackhole formed in the merger breaks the no-hair theorem by having a magnetic moment.

The Fermi Gamma-ray Burst Monitor detected, 0.4 seconds after the merger, a pulse of gamma rays with red shifted energies of about 50 keV (see the posting of Lubos and the article from Fermi Gamma Ray Burst Monitor). At the peak of the gravitational pulse the gamma ray power would have been about one millionth of the gravitational radiation. Since gamma ray bursts do not occur very often, it is rather plausible that the pulse comes from the same source as the gravitational radiation. The simplest model for blackholes does not suggest this, but it is not difficult to develop more complex models involving magnetic fields.

Could this observation be seen as evidence for the assumption that dark gravitons are associated with magnetic flux tubes?

  1. The radiation would be dark cyclotron radiation generated at the magnetic flux tubes carrying the dark gravitational radiation, emitted at the cyclotron frequency fc = qB/m and its harmonics (q denotes the charge of the charge carrier and B the intensity of the magnetic field), and with energy E = heff qB/m.
  2. If heff = hgr = GMm/v0 holds true, one has E = GMqB/v0 so that all particles with the same charge respond at the same energy irrespective of their mass: this could be seen as a magnetic analog of Equivalence Principle. The energy 50 keV corresponds to the frequency f ∼ 1.2× 10^19 Hz. For scaling purposes it is good to remember that the cyclotron frequency of electron in the magnetic field Bend = .2 Gauss (the value of the endogenous dark magnetic field in TGD inspired quantum biology) is fc = .6 MHz.

    From this, the magnetic field needed to give 50 keV as the cyclotron energy of electrons with the ordinary value of Planck constant would be Bord = (f/fc) Bend = .4 GT. If one takes into account a redshift of order v/c ∼ .1 for the cosmic recession velocity at a distance of a Gly, one would obtain a magnetic field of order 4 GT. Magnetic fields with strengths of this order of magnitude have been assigned with neutron stars.

  3. On the other hand, if this energy corresponds to hgr = GMme c/v0, one has B = (h/hgr) Bord = (v0 mP^2/M me) × Bord ∼ (v0/c) × 10^-11 T (c=1). This magnetic field is rather weak (fT is the bound for detectability) and can correspond only to a magnetic field at a flux tube near Earth. Interstellar magnetic fields between the arms of the Milky Way are of the order of 5× 10^-10 T and are presumably weaker in intergalactic space.
  4. Note that the energy of the gamma rays is by an order of magnitude or two lower than that of the dark gravitons. This suggests that the annihilation of dark gamma rays could not have produced the dark gravitons by the gravitational coupling bilinear in collinear photons.
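The field-strength estimate above is easy to reproduce numerically. A minimal sketch using standard constants, with the measured cyclotron frequency taken as fc = eB/(2π m); the .2 Gauss endogenous field and the 50 keV photon energy are the values quoted in the text:

```python
import math

e = 1.602176634e-19      # elementary charge [C]
m_e = 9.1093837015e-31   # electron mass [kg]
h = 6.62607015e-34       # Planck constant [J s]

# electron cyclotron frequency in the endogenous field B_end = .2 Gauss
B_end = 0.2e-4                           # [T]
f_c = e * B_end / (2 * math.pi * m_e)    # ~ .6 MHz, as quoted

# frequency of a 50 keV photon
E = 50e3 * e                             # [J]
f = E / h                                # ~ 1.2e19 Hz

# field giving 50 keV as electron cyclotron energy with ordinary Planck constant
B_ord = (f / f_c) * B_end
print(f"f_c = {f_c:.2e} Hz, f = {f:.2e} Hz, B_ord = {B_ord:.2e} T")  # B_ord ~ .4 GT
```

The scaling reproduces the quoted Bord ≈ .4 GT = 4× 10^8 T.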
One can of course forget the chains of mundane realism and ask whether the cyclotron radiation coming from distant sources has its high energy due to a large value of hgr rather than due to a large value of the magnetic field at the source. The presence of magnetic fields would reflect itself also via classical dynamics (that is frequency). In the recent case the cyclotron period would be of order (.03/v0) Gy, which is of the same order of magnitude as the time scale defined by the distance to the merger.

In the case of the Sun the prediction for the energy of cyclotron photons would be E = [v0(Sun)/v0] × [M(Sun)/M(BH)] × 50 keV ∼ [v0(Sun)/v0] keV. From v0(Sun)/c ≈ 2^-11 one obtains E = (c/v0) × .5 eV > .5 eV. Dark photons in living matter are proposed to correspond to hgr = heff and to transform to bio-photons with energies in visible and UV range (see this).

Good dialectic would ask next whether both views about the gamma rays are actually correct. The "visible" cyclotron radiation with the standard value of Planck constant at gamma ray energies would be created in the ultra strong magnetic field of the blackhole, would be transformed to dark gamma rays with the same energy, and would travel to Earth along the flux tubes. In the TGD Universe the transformation of ordinary photons to dark photons would occur in living matter routinely. One can of course ask whether this transformation takes place only at quantum criticality and whether the quantum critical period corresponds to the merger of the blackholes.

The time lag was .4 seconds and the merger event lasted .2 seconds. Many-sheeted space-time provides one possible explanation. If the gamma rays were ordinary photons, so that dark gravitons would have travelled along different flux tubes, one can ask whether the propagation velocities differed by Δ c/c ∼ 10^-17. In the case of SN1987A neutrino and gamma ray pulses arrived at different times and neutrinos arrived as two different pulses (see this), so that this kind of effect is not excluded. Since the light-like geodesics of the space-time surface are in general not light-like geodesics of the imbedding space, signals moving with light velocity along a space-time sheet do not move with maximal signal velocity in the imbedding space, and the time taken to travel from A to B depends on the space-time sheet. Could the later arrival time reflect slightly different signal velocities for photons and gravitons?
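The order of magnitude of the velocity difference needed for a .4 second lag is a one-line estimate; assuming the distance of about 1.3 Gly quoted for the event:

```python
# fractional velocity difference needed for a .4 s arrival lag over ~1.3 Gly
year = 3.156e7                 # seconds in a year
travel_time = 1.3e9 * year     # light travel time [s] for 1.3 Gly
delta_t = 0.4                  # observed lag [s]

ratio = delta_t / travel_time
print(f"Delta c/c ~ {ratio:.1e}")   # ~ 1e-17, matching the estimate in the text
```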

Could one imagine a function for the gamma ray pulse, possibly explaining also why it came considerably later than the gravitons (.4 seconds after the merger, which lasted .2 seconds)? This function might relate to the transfer of surplus angular momentum from the system.

  1. The merging blackholes were reported to have opposite spins. Opposite directions of spins would make the merger easier since the local velocities at the point of contact are in the same direction. The opposite directions of spins suggest an analogy with two vortices generated in water, and this suggests that their predecessors were born inside the same star. There is also relative orbital angular momentum forming part of the spin of the final state blackhole, which was modelled as a Kerr blackhole. Since the spins of the blackholes were opposite, the main challenge is to understand the transition to the situation in which all matter has the same direction of spin. The local spin directions must have changed by some mechanism taking away spin.
  2. Magnetic analogs of blackholes seem to be needed. They would be analogs of magnetars, which are pulsars with very strong magnetic fields. Magnetic fields are needed to carry angular momentum out of the matter as a blackhole is formed. The same should apply now. Outgoing matter spirals along the helical jets and carries away the spin, which is liberated as the rotating matter in the two spinning blackholes slows down to rest and the orbital angular momentum becomes the total spin.
  3. If the cyclotron radiation left .4 seconds later, it would be naturally assignable to the liberation of temporarily stored surplus angular momentum, which the blackhole could not carry stably. This cyclotron radiation could have carried out the surplus angular momentum. Amusingly, it could also be seen as a dark analog of Hawking radiation.
Here one must be ready to update the beliefs about what blackhole like objects are. About their interiors empirical data of course tell nothing.
  1. The exteriors could contain magnetic fields and must do so in the TGD Universe. No exact rotating and magnetic blackhole solutions of Einstein-Maxwell theory are known - otherwise we would not have the "blackhole has no hair" theorem stating that a blackhole is completely characterized by the conserved charges associated with long range interactions: mass, angular momentum and electric charge. In this framework one cannot speak about the magnetic dipole moment of a blackhole.
  2. The no-hair theorem has been challenged quite recently by Hawking (for a TGD inspired commentary see this). This suggests the possibility that higher multipole moments characterize blackhole like entities. An extension of U(1) gauge symmetries allowing gauge transformations, which become constant in the radial direction at large distances but depend on the angle degrees of freedom, is in question. In TGD framework the situation is analogous but much more general: super-symplectic and other symmetries with conformal structure extend the various conformal symmetries and allow one to understand also the hierarchy of Planck constants in terms of a fractal hierarchy of symmetry breakings to sub-algebras isomorphic with the full algebra of symmetries in question (see this).
  3. There exist also experimental data challenging the no-hair theorem. The supermassive blackhole like entity near the galactic center is known to have a magnetic field (see this) and thus a magnetic moment, if the magnetic field is assignable to the blackhole itself rather than to the matter surrounding it.
Be that as it may, any model should explain why the cyclotron radiation pulse came .4 seconds later than the graviton pulse rather than at the same time. Compared to the .2 seconds for blackhole formation this is quite a long time.
  1. Suppose that blackhole like objects have - as any gravitating astrophysical object in the TGD Universe must have - a magnetic body making possible the transfer of gravitons and carrying classical gravitational fields. Suppose that the radial monopole flux tubes carrying gravitons can also carry BE condensates for which the charged particles have varying mass m. hgr = GMm/v0 = heff = n× h implies that particles with different masses reside at their own flux tubes like books on book shelves - something very important in TGD inspired quantum biology (see this).

    One might argue that hbargr serves as a very large spin unit and makes the storage very effective, but here one must be very cautious: the spin fractionization suggested by the covering property of space-time sheets could scale down the spin unit to hbar/n. I do not really understand this issue well enough. In any case, already a spontaneously magnetized BE condensate with relative angular momentum of Cooper pairs at pairs of helical flux tubes makes effective angular momentum storage possible.

  2. The spontaneously magnetized dark Bose-Einstein condensate would consist of charged bosons - say charged fermion pairs with members located at parallel flux tubes, as in the TGD inspired model of high Tc superconductors with spin S=1 Cooper pairs. This BE condensate would be ideal for the temporary storage of surplus spin and of the relative angular momentum of the members of pairs at parallel helical flux tubes. This angular momentum would have been radiated away as the gamma ray pulse in a quantum phase transition to a state without dark spontaneous magnetization.
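The mass dependence behind the "books on book shelves" picture can be illustrated numerically: hgr = GMm/v0 gives a different (huge) multiple of hbar for each particle species. The ~30 solar mass value and v0/c ∼ 10^-3 below are assumed ballpark figures for illustration, not values fixed by the text:

```python
# h_gr = GMm/v0 depends on the particle mass m, so each species sits on its own flux tube
G = 6.674e-11            # [m^3 kg^-1 s^-2]
hbar = 1.0546e-34        # [J s]
c = 2.998e8              # [m/s]
M = 30 * 1.989e30        # assumed ~30 solar mass blackhole [kg]
v0 = 1e-3 * c            # assumed v0/c ~ 1e-3

n = {}
for name, m in [("electron", 9.109e-31), ("proton", 1.673e-27)]:
    n[name] = G * M * m / (v0 * hbar)   # hbar_gr/hbar = GMm/(v0 hbar)
    print(f"{name}: hbar_gr/hbar ~ {n[name]:.1e}")
```

The two values differ by the proton/electron mass ratio, so the flux tubes carrying the two condensates are distinct.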
To sum up, LIGO could also mean a new era in the theory of gravitation. The basic problem of the GRT description of blackholes relates to the classical conservation laws and becomes especially acute in the non-stationary situation represented by a merger. The post-Newtonian approximation is more than a calculational tool since it brings in conservation laws from Newtonian mechanics and fixes the coordinate system used to that assignable to empty Minkowski space. Further observations about blackhole mergers might force one to ask whether the post-Newtonian approximation actually feeds in the idea that space-time is a surface in imbedding space. If the mergers are accompanied by gamma ray bursts as a rule, one is forced to challenge the notion of blackhole and GRT itself.

For details see the chapter Quantum Astrophysics of "Physics in Many-Sheeted Space-time" or the article LIGO and TGD.



LIGO and TGD

The recent detection of gravitational radiation by LIGO (see the posting of Lubos and the article) can be seen as the birth of gravito-astronomy. The existence of gravitational waves is however an old theoretical idea: already Poincare proposed their existence at the time when Einstein was starting the decade-long work to develop GRT (see this).

Before LIGO, gravitational radiation had not been directly observed. This could also be seen as indicating that gravitational radiation is not quite what it is believed to be and its detection fails for this reason. This has been my motivation for considering the TGD inspired possibility that part or even all of gravitational radiation could consist of dark gravitons (see this). Their detection would be different from that for ordinary gravitons and this might explain why they have not been detected although they are present (Hulse-Taylor binary).

In this respect the LIGO experiment provided extremely valuable information: the classical detection of gravitational waves - as opposed to the quantum detection of gravitons - does not seem to differ from that predicted by GRT. On the other hand, TGD suggests that the gravitational radiation between massive objects is mediated along flux tubes characterized by the dark gravitational Planck constant hgr = GMm/v0 identifiable as heff = n× h (see this). This allows one to develop in more detail the TGD view about the classical detection of dark gravitons.

A further finding was that there was an emission of gamma rays .4 seconds after the merger (see the posting of Lubos and the article from Fermi Gamma Ray Burst Monitor). The proposal that dark gravitons arrive along dark magnetic flux tubes inspires the question whether these gamma rays were actually dark cyclotron radiation in the extremely weak magnetic field associated with these flux tubes. There was also something anomalous involved. The mass scale of the merging blackholes deduced from the time evolution of the so called chirp mass was 30 solar masses, roughly twice as large as the upper bound from GRT based models (see this).

Development of theory of gravitational radiation

A brief summary about the development of theory of gravitational radiation is useful.

  1. After having found the final formulation of GRT around 1916, after ten years of hard work, Einstein found solutions representing gravitational radiation by linearizing the field equations. The solutions are very similar in form to the radiation solutions of Maxwell's equations. The interpretation as gravitational radiation looks completely obvious in hindsight, but the existence of gravitational radiation was regarded even by theoreticians as far from obvious until 1957. Einstein himself wrote a paper claiming that gravitational waves might not exist after all: fortunately the peer review rejected it (see this)!
  2. In 1916 Schwarzschild published an exact solution of the field equations representing a non-rotating blackhole. In 1963 Kerr published an exact solution representing a rotating blackhole. This gives an idea about how difficult the mathematics involved is.
  3. After 1970 the notion of quasinormal mode was developed. Quasinormal modes are like normal modes, characterized by frequencies. Dissipation is however taken into account and this makes the frequencies complex. In the picture representing the gravitational radiation detected by LIGO, the damping is clearly visible after the maximum intensity is reached. These modes represent radiation which can be thought of as incoming radiation totally reflected at the horizon. These modes are needed to describe the gravitational radiation after the blackhole is formed.
  4. After 1990 post-Newtonian methods and numerical relativity were developed and extensive calculations became possible, allowing also a precise treatment of the merger of two blackholes to a single one.
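A quasinormal mode is, in practice, a damped oscillation - a mode with complex frequency ω = ωR - i/τ. A toy ringdown waveform makes the "damping after maximum intensity" concrete; the 250 Hz and 4 ms numbers below are illustrative choices only, not a fit to the LIGO event:

```python
import math

def ringdown(t, f=250.0, tau=0.004, amp=1.0):
    """Toy quasinormal mode: damped sinusoid, i.e. complex frequency omega_R - i/tau."""
    return amp * math.exp(-t / tau) * math.cos(2 * math.pi * f * t)

# the envelope decays by a factor 1/e every tau = 4 ms while oscillating at 250 Hz
samples = [ringdown(k * 1e-3) for k in range(8)]
print(samples)
```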
I do not have experience in numerics nor in finding solutions to the field equations of GRT. General Coordinate Invariance is an extremely powerful symmetry, but it also makes difficult both the physical interpretation of solutions and the finding of them. One must guess the coordinates in which everything is simple, and here symmetries are of crucial importance. This is why I have been so enthusiastic about sub-manifold gravity: the M4 factor of the imbedding space provides preferred coordinates and the physical interpretation becomes straightforward. In TGD framework the construction of extremals - mostly during the period 1980-1990 - was surprisingly easy thanks to the existence of the preferred coordinates. In TGD framework also conservation laws are exact and geodesic motion can be interpreted in terms of the analog of Newton's equations at the imbedding space level: at this level gravitation is a genuine force and the post-Newtonian approximation can be justified in TGD framework.

Evolution of the experimental side

  1. The first indirect proof for gravitational radiation was the Hulse-Taylor binary pulsar (see this). The observed decrease of the orbital period could be understood as resulting from the loss of orbital energy by gravitational radiation.
  2. Around 1960 Weber suggested a detector based on mass resonance, with a resonance frequency of about 1660 Hz. Weber claimed to detect gravitational radiation on a daily basis, but his observations could not be reproduced and were probably due to an error in the computer program used in the data analysis.
  3. At the same time interferometers were proposed as detectors. An interferometer has two arms; light travels along both arms, is reflected from a mirror at the end, and returns back. The light signals from the two arms interfere at the crossing. Gravitational radiation induces an oscillation of the distance between the ends of an interferometer arm and this in turn induces an oscillating phase shift. Since the shifts associated with the two arms are in general different, a dynamical interference pattern is generated. Later laser interferometers emerged.

    One can also allow the laser light to move forth and back several times so that the phase shifts add and the interference pattern becomes more pronounced. This requires that the time spent in moving forth and back is considerably shorter than the period of the gravitational radiation. Even more importantly, this trick also allows the use of arms much shorter than the wavelength of the gravitational radiation: for the 35 Hz defining the lower bound for frequency in the LIGO experiment, the wavelength is of the order of the Earth radius!

  4. One can also use several detectors positioned around the globe. If all detectors see the signal, there are good reasons to take it seriously. It also becomes possible to identify precisely the direction of the source. A global network of detectors can be constructed.
  5. The fusion of two massive blackholes sufficiently near to Earth (now they were located at a distance of about a Gly!) is optimal for the detection since the total amount of radiation emitted is huge.
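The need for the folding trick in item 3 is easy to see from the numbers: at the 35 Hz lower bound the wavelength is indeed of the order of the Earth radius, vastly longer than the 4 km LIGO arms:

```python
c = 2.998e8            # speed of light [m/s]
R_earth = 6.371e6      # Earth radius [m]

wavelength = c / 35.0  # gravitational wavelength at 35 Hz [m]
print(wavelength / R_earth)   # ~ 1.3: of the order of the Earth radius
print(wavelength / 4e3)       # ~ 2000 times the 4 km arm length
```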
What was observed?

LIGO detected an event that lasted for about .2 seconds. The interpretation was as gravitational radiation and numerical simulations are consistent with this interpretation. During the event the frequency of the gravitational radiation increased from 35 Hz to 250 Hz. Maximum intensity was reached at 150 Hz and corresponded to the moment when the blackholes fused together. The data about the evolution of the frequency allow one to deduce information about the source if the post-Newtonian approximation is accepted and the final state is identified as a Kerr blackhole.

  1. The merging objects could also be neutron stars, but the data combined with the numerical simulations force the interpretation as blackholes. The blackholes begin to spiral inwards and, since energy is conserved (in the post-Newtonian approximation), the kinetic energy increases because the potential energy decreases. The relative rotational velocity of the fictive object having the reduced mass increases. Since gravitational radiation is emitted at the rotational frequency and its harmonics, its frequency increases and the time development of the frequency codes for the time development of the rotational velocity. This rising frequency is in the audible range and known as a chirp.

    In the recent situation the frequency increases from 35 Hz to a maximum of 150 Hz, at which the blackholes fuse together. After that the final blackhole is formed very rapidly and exponentially damped gravitational radiation (quasinormal modes) is generated as the frequency increases to 250 Hz. A ball bouncing forth and back in the gravitational field of Earth and losing energy might serve as a metaphor.

  2. The time evolution of the frequency of the radiation, coded into the time evolution of the interference pattern, provides the data allowing one to deduce the masses of the initial objects and of the final state object using numerical relativity. The so called chirp mass can be expressed in two manners: using the masses of the fusing initial objects, or using the rotation frequency and its time derivative. This allows one to estimate the masses of the fusing objects. They are 36 and 29 solar masses respectively. The sizes of these blackholes are obtained by scaling from the blackhole radius of 3 km for the Sun. The objects must be blackholes: for neutron stars the radii would be much larger and the fusion would occur at a much lower rotation frequency.
  3. Assuming that the rotating final state blackhole can be described as a Kerr blackhole, one can model the situation in the post-Newtonian approximation and predict the mass of the final state blackhole. The mass of the final state blackhole would be 62 solar masses, so that 3 solar masses would transform to gravitational radiation! The intensity of the gravitational radiation at the peak was more than the entire radiation by stars in the observed Universe. The second law of blackhole thermodynamics holds true: the sum of the mass squared for the initial state is smaller than the mass squared for the final state (36^2 + 29^2 < 62^2).
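The figures quoted in this list can be cross-checked in a few lines; the chirp mass formula Mc = (m1 m2)^(3/5)/(m1+m2)^(1/5) and the Schwarzschild radius rs = 2GM/c^2 are standard, and the masses are those quoted above:

```python
m1, m2, m_final = 36.0, 29.0, 62.0   # in solar masses, as quoted

# chirp mass deducible from the frequency evolution
Mc = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2
print(f"chirp mass ~ {Mc:.1f} solar masses")        # ~ 28

# Schwarzschild radius: ~3 km per solar mass, scaled from the Sun
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30
r_s = 2 * G * m1 * M_sun / c**2
print(f"r_s(36 M_sun) ~ {r_s / 1e3:.0f} km")        # ~ 106 km

# second law of blackhole thermodynamics (horizon area ~ M^2)
assert m1**2 + m2**2 < m_final**2                    # 36^2 + 29^2 < 62^2
```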
Are observations consistent with TGD predictions?

The general findings about the masses of the blackholes and their correlations with the frequency, and about the net intensity of the radiation, are also predictions of TGD. The possibility of dark gravitons as large heff quanta however brings in possible new effects and might affect the detection. The consistency of the experimental findings with the GRT based theory of the detection process raises a critical question: are dark gravitons there?

About the relationship between GRT and TGD

The proposal is that GRT plus standard model defines the QFT limit of TGD, replacing many-sheeted space-time with a slightly curved region of Minkowski space carrying gauge potentials defined as sums of the components of the induced spinor connection, and with the deviation of the metric from the flat metric given as the sum of similar deviations for the space-time sheets (see this). This picture follows from the assumption that a test particle touching the space-time sheets experiences the sum of the classical fields associated with the sheets.

An open problem of the GRT limit of TGD has been the origin of Newton's constant - the CP2 size is almost four orders of magnitude longer than the Planck length. Amusingly, dramatic progress occurred in this respect just during the week when the LIGO results were published.

The belief has been that Planck length is a genuine quantal scale not present in classical TGD. The progress in the twistorial approach to classical TGD however demonstrated that this belief was wrong. The idea is to lift the dynamics of 4-D space-time surfaces to the dynamics of their 6-D twistor spaces obeying the analog of the variational principle defined by Kähler action. I had thought that this would be a passive reformulation but I was completely wrong (see this).

  1. The 6-D twistor space of the space-time surface is a fiber bundle having space-time as base space and a sphere as fiber, and it is assumed to be representable as a 6-surface in the 12-D twistor space T(M4)× T(CP2). The lift of Kähler action to 6-D Kähler action requires that the twistor spaces T(M4) and T(CP2) have Kähler structure. These structures exist only for S4, E4 and its Minkowskian analog M4, and CP2, so that TGD is completely unique if one requires the existence of the twistorial formulation. In the case of M4 one has a hybrid of complex and hyper-complex structure.
  2. The radii of the two spheres bring in new length scales. In the case of CP2 the radius is essentially the CP2 radius R. In the case of M4 the radius is very naturally the Planck length, so that the origin of Planck length is understood: it is a purely classical notion, whereas Planck mass and Newton's constant would be quantal notions.
  3. The 6-D Kähler action must be made dimensionless by dividing by a constant with dimensions of length squared. The scale in question is actually the area of S2(M4), not the inverse of the cosmological constant, as the first guess was. The reason is that the latter would predict an extremely large Kähler coupling strength for the CP2 part of Kähler action.

    There are however two contributions to Kähler action, corresponding to T(CP2) and T(M4), and the corresponding Kähler coupling strengths - the already familiar αK and the new αK(M4) - are independent. The value of αK(M4)× 4π R(S2(M4))^2 corresponds essentially to the inverse of the cosmological constant and to a length scale which is of the order of the size of the Universe in the recent cosmology. Both Kähler coupling strengths are analogous to critical temperature and are predicted to have a spectrum of values. According to the earlier proposal, αK(M4) would be proportional to the p-adic prime p ≈ 2^k, k prime, so that in very early times the cosmological constant indeed becomes extremely large. This has been the problem of the GRT based view about gravitation. The prediction is that besides the volume term coming from S2(M4) there is also the analog of Kähler action associated with M4, which is extremely small except in the very early cosmology.

  4. A further new element is that TGD predicts the possibility of large heff = n× h gravitons. One has heff = hgr = GMm/v0, where v0 has dimensions of velocity and satisfies v0/c < 1: the value of v0/c is of order .5× 10^-3 for the inner planets. hgr seems to be absolutely essential for understanding how perturbative quantum gravitation emerges.

    What is nice is that the twistor lift of Kähler action suggests also a concrete explanation for heff/h=n. It would correspond to the winding number for the map S2(X4)→ S2(M4), and one would indeed have a covering of the space-time surface induced by the winding, as assumed earlier. This covering would have the special property that the base space for each branch of the covering would reduce to the same 3-surface at the ends of the space-time surface at the light-like boundaries of the causal diamond (CD) defining the fundamental notion in zero energy ontology (ZEO).

The twistor approach thus shows that TGD is completely unique in the twistorial formulation, explains Planck length geometrically, predicts the cosmological constant and assigns the p-adic length scale hypothesis to its cosmic evolution, and also suggests an improved understanding of the hierarchy of Planck constants.
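To get a feel for the size of hgr= GMm/v0 appearing in the hierarchy of Planck constants, here is a minimal numerical sketch for the Sun-Earth pair, using standard astronomical constants (values assumed here, not given in the text) and the quoted v0/c ∼ .5 × 10-3 for the inner planets:

```python
# Back-of-the-envelope check: magnitude of h_gr = GMm/v0 for the
# Sun-Earth pair, with v0/c ~ 0.5e-3 as quoted for the inner planets.
G = 6.674e-11        # m^3 kg^-1 s^-2 (standard value, assumed)
c = 2.998e8          # m/s
h = 6.626e-34        # J s
M_sun = 1.989e30     # kg
m_earth = 5.972e24   # kg
v0 = 0.5e-3 * c      # quoted order of magnitude for inner planets

h_gr = G * M_sun * m_earth / v0   # units work out to J s
n = h_gr / h                      # h_eff/h = n
print(f"h_gr = {h_gr:.2e} J s, n = h_gr/h = {n:.1e}")
```

The enormous value of n = heff/h illustrates why the dark phase would be quantum coherent in astrophysical scales.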

Can one understand the detection of gravitational waves if gravitons are dark?

The problem of quantum gravity is that if the parameter GMm/h=Mm/mP2 associated with two masses characterizes the interaction strength and is larger than unity, perturbation theory fails to converge. If one can assume that there is no quantum coherence, the interactions can be reduced to those between elementary particles, for which this parameter is below unity, so that the problem would disappear. In TGD framework, however, fermionic strings connecting partonic 2-surfaces mediate the interaction even between astrophysical objects, and quantum coherence in astrophysical scales is unavoidable.

The proposal is that Nature has been theoretician friendly and arranged things so that a phase transition transforming gravitons to dark gravitons takes place, so that Planck constant is replaced with hgr=GMm/v0. This implies that v0/c<1 becomes the expansion parameter and perturbation theory converges. Note that the notion of hgr makes sense only if one has Mm/mP2>1. The notion generalizes also to other interactions and their perturbative description when the interaction strength is large. Plasmas are excellent candidates in this respect.

  1. The notion of hgr was first proposed by Nottale from quite different premises: planetary orbits are analogous to Bohr orbits and the situation is characterized by gravitational Planck constant hgr= GMm/v0. This replaces the parameter GMm/h with v0/c as the perturbative parameter, and perturbation theory converges. hgr would characterize the magnetic flux tubes connecting masses M and m along which the gravitons mediating the interaction propagate.

    According to the model of Nottale for planetary orbits as Bohr orbits, the entire mass of the star behaves as dark mass from the point of view of the particles forming the planet. hgr=GMm/v0 appears in the quantization of angular momentum, and if dark mass MD<M is assumed, the integer characterizing the angular momentum must be scaled up by M/MD. In some sense all astrophysical objects would behave like quantum coherent systems, and many-sheeted space-time suggests that the magnetic body of the system, along which gravitons propagate, is responsible for this kind of behavior.

  2. The crucial observation is that hgr depends on the product of interacting masses so that hgr characterizes a pair of systems satisfying Mm/mP2>1 rather than either mass. If so, the gravitons at magnetic flux tubes mediating gravitational interaction between masses M and m are always dark and have hgr=heff. One cannot say that the systems themselves are characterized by hgr. Rather, only the magnetic bodies or parts of them can be characterized by hgr. The magnetic bodies can be associated with mass pairs and also with self interactions of single massive object (as analog of dipole field).
  3. The general vision is that ordinary particles and large heff particles can transform to each other at quantum criticality (see this). Above the upper critical temperature the particles would be ordinary, in a finite temperature range both kinds of particles would be present, and below the lower critical temperature the particles would be dark. High Tc superconductivity would provide a schoolbook example of this.
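Nottale's Bohr-orbit rule mentioned in item 1 can be checked numerically: quantizing angular momentum as m v r = n hgr (taking hgr = GMm/v0 as the unit) together with the circular-orbit condition v^2 = GM/r gives r_n = GM n^2/v0^2 and v_n = v0/n. A sketch with standard planetary data (assumed values) and v0(Sun)/c ≈ 2-11, the value quoted later in this text:

```python
# Hedged check of the Bohr-orbit rule r_n = GM n^2 / v0^2 for the
# inner planets; orbital radii and constants are standard values.
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
M_sun = 1.989e30       # kg
v0 = c * 2 ** -11      # ~146 km/s, as quoted for the Sun

r1 = G * M_sun / v0 ** 2        # innermost Bohr radius, n = 1
orbits_m = {"Mercury": 5.79e10, "Venus": 1.082e11,
            "Earth": 1.496e11, "Mars": 2.279e11}
n_values = {name: math.sqrt(r / r1) for name, r in orbits_m.items()}
for name, n in n_values.items():
    print(f"{name}: n = {n:.2f}")
```

The values cluster near the integers n = 3, 4, 5, 6, which is the pattern Nottale reported for the inner planets.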
One would expect that for pairs of quantum coherent objects satisfying GMm/h>1, graviton exchange involves dark gravitons. This could affect the model for the detection of gravitational radiation.
  1. Since Planck constant does not appear in classical physics, one might argue that classical detection does not distinguish between dark and ordinary gravitons. A dark graviton corresponds classically to radiation with the same frequency but with amplitude scaled up by n1/2. For hgr/h>1 one would obtain a sequence of pulses with large-amplitude length oscillations rather than the continuous oscillation predicted by GRT. The average intensity would be the same as for classical gravitational radiation.

    Interferometers detect gravitational radiation classically as distance oscillations, and the LIGO finding suggests that all of the radiation is detected. Irrespective of the value of heff, all gravitons couple to the geometry of the measuring space-time sheets. This looks very sensible in the geometric picture for this coupling. A more quantitative statement would be that dark and ordinary gravitons do not differ for detection times longer than the oscillation period, which is the case here.

    The detection is based on laser light travelling back and forth along the arms. What matters is the total phase shift between the beams associated with the two arms, which is a sum over the shifts associated with the pulses. The quantization into bunches should be smoothed out by this summation process, and the outcome is the same as in GRT, since the average intensity must be the same irrespective of the value of hgr. Since all detection methods use interferometers, there would be no difference in the detection of gravitons from other sources.

  2. The quantum detection of heff gravitons - as opposed to classical detection - is expected to differ from that of ordinary gravitons. A dark graviton can be regarded as a bunch of n ordinary gravitons and thus has n times higher energy. Genuine quantum measurement would correspond to an absorption of this kind of giant graviton. Since the signal must be "visible", dark gravitons must transform to ordinary gravitons with the same energy in the detection. For a 35 Hz graviton the energy would have been GMm/v0h times the energy of an ordinary graviton with the same frequency. This would give an energy of 19 (c/v0) MeV: one would have gravitational gamma rays. The detection system should be quantum critical. The transformation of dark gravitons - with frequency scaled down by 1/n and energy increased correspondingly - would serve as a signature for darkness.

    Living systems in TGD Universe are quantum critical, and bio-photons are interpreted as dark photons with energies in visible and UV range but frequencies in EEG range and even below (see this). It can happen that only part of the dark graviton radiation is detected, and the radiation can remain completely undetected if the detecting system is not critical. One can also consider the possibility that dark gravitons first decay to a bunch of n ordinary gravitons. In this case, however, the detection of individual gravitons is impossible in practice.

A gamma ray pulse was detected .4 seconds after the merger

The Fermi Gamma-ray Burst Monitor detected, 0.4 seconds after the merger, a pulse of gamma rays with red-shifted energies of about 50 keV (see the posting of Lubos and the article from Fermi Gamma Ray Burst Monitor). At the peak of the gravitational pulse the gamma ray power would have been about one millionth of the gravitational radiation. Since gamma ray bursts do not occur too often, it is rather plausible that the pulse comes from the same source as the gravitational radiation. The simplest model for blackholes does not suggest this, but it is not difficult to develop more complex models involving magnetic fields.

Could this observation be seen as evidence for the assumption that dark gravitons are associated with magnetic flux tubes?

  1. The radiation would be dark cyclotron radiation generated at the magnetic flux tubes carrying the dark gravitational radiation, at cyclotron frequency fc= qB/m and its harmonics (q denotes the charge of the charge carrier and B the intensity of the magnetic field), with energy E=heffqB/m.
  2. If heff= hgr= GMm/v0 holds true, one has E= GMqB/v0, so that all particles with the same charge respond at the same frequency irrespective of their mass: this could be seen as a magnetic analog of Equivalence Principle. The energy 50 keV corresponds to frequency f∼ 5× 1018 Hz. For scaling purposes it is good to remember that the cyclotron frequency of electron in the magnetic field Bend=.2 Gauss (the value of the endogenous dark magnetic field in TGD inspired quantum biology) is fc=.6 MHz.

    From this, the magnetic field needed to give 50 keV as an ordinary cyclotron energy for electrons with the ordinary value of Planck constant would be Bord= (f/fc)Bend=.4 GT. If one takes into account a redshift of order v/c∼ .1 for the cosmic recession velocity at a distance of Gly, one would obtain a magnetic field of order 4 GT at the source. Magnetic fields with strengths of this order of magnitude have been assigned to neutron stars.

  3. On the other hand, if this energy corresponds to hgr= GMmec/v0, one has B= (h/hgr)Bord = (v0mP2/Mme)× Bord∼ (v0/c)× 10-11 T (c=1). This magnetic field is rather weak (fT is the bound for detectability) and can correspond only to a magnetic field at a flux tube near Earth. Interstellar magnetic fields between the arms of the Milky Way are of the order of 5× 10-10 T and are presumably weaker in intergalactic space.
  4. Note that the energy of the gamma rays is by an order of magnitude or two lower than that of the dark gravitons. This suggests that the annihilation of dark gamma rays could not have produced the dark gravitons via the gravitational coupling bilinear in collinear photons.
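The cyclotron estimates in items 1 and 2 above can be reproduced with a short computation, using standard values for the electron mass and charge (assumed here, not given in the text):

```python
# Hedged numerical check of the cyclotron estimates: f_c for an electron
# in the 0.2 Gauss "endogenous" field, the frequency of a 50 keV photon,
# and the ordinary-Planck-constant field B_ord = (f/f_c) B_end.
import math

e_charge = 1.602e-19   # C
m_e = 9.109e-31        # kg
h = 6.626e-34          # J s
B_end = 0.2e-4         # T (0.2 Gauss)

f_c = e_charge * B_end / (2 * math.pi * m_e)   # electron cyclotron frequency
E_gamma = 50e3 * e_charge                      # 50 keV in joules
f = E_gamma / h                                # photon frequency, ~1e19 Hz
B_ord = (f / f_c) * B_end                      # field giving 50 keV as cyclotron energy
print(f"f_c = {f_c:.2e} Hz, f = {f:.2e} Hz, B_ord = {B_ord:.2e} T")
```

This gives f_c ≈ .56 MHz and B_ord ≈ .4 GT, consistent with the numbers quoted above.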
One can of course cast off the chains of mundane realism and ask whether the cyclotron radiation coming from distant sources has its high energy due to a large value of hgr rather than a large value of the magnetic field at the source. The presence of magnetic fields would reflect itself also via the classical dynamics (that is, the frequency). In the recent case the cyclotron period would be of order (.03/v0) Gy, which is of the same order of magnitude as the time scale defined by the distance to the merger.

In the case of Sun the prediction for energy of cyclotron photons would be E=[v0(Sun)/v0] × [M(Sun)/M(BH)] × 50 keV ∼ [v0(Sun)/v0] keV. From v0(Sun)/c≈ 2-11 one obtains E=(c/v0)× .5 eV> .5 eV. Dark photons in living matter are proposed to correspond to hgr=heff and are proposed to transform to bio-photons with energies in visible and UV range (see this).
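The solar estimate above can be sketched numerically; the step [M(Sun)/M(BH)] × 50 keV ∼ 1 keV implicitly takes M(BH) to be a few tens of solar masses, which is an assumption here rather than a value stated in the text:

```python
# Sketch of the solar estimate: E ~ [v0(Sun)/v0] keV in units of (c/v0),
# with v0(Sun)/c = 2^-11 as quoted above.
v0_sun_over_c = 2 ** -11
E_eV = v0_sun_over_c * 1e3      # [v0(Sun)/v0] keV expressed in (c/v0) eV units
print(f"E ~ {E_eV:.2f} (c/v0) eV")
```

This reproduces the quoted E ≈ (c/v0) × .5 eV.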

Good dialectic would ask next whether both views about the gamma rays are actually correct. The "visible" cyclotron radiation with the standard value of Planck constant at gamma ray energies would be created in the ultra strong magnetic field of the blackhole, transformed to dark gamma rays with the same energy, and then travel to Earth along the flux tubes. In TGD Universe the transformation of ordinary photons to dark photons would occur routinely in living matter. One can of course ask whether this transformation takes place only at quantum criticality and whether the quantum critical period corresponds to the merger of blackholes.

The time lag was .4 seconds and the merger event lasted .2 seconds. If the gamma rays were ordinary photons, so that they and the dark gravitons would have travelled along different flux tubes, one can ask whether the propagation velocities differed by Δc/c∼ 10-17. Since the geodesics of the space-time surface are in general not geodesics of the imbedding space, signals moving with light velocity along a space-time sheet do not move with the maximal signal velocity in the imbedding space, and the time taken to travel from A to B depends on the space-time sheet. Could the later arrival time reflect slightly different signal velocities for photons and gravitons?
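The quoted Δc/c ∼ 10-17 follows from the .4 second lag once the light-travel distance of the source is inserted; the ∼1.3 Gly figure used below is an assumed value for the merger's distance, not stated in this passage:

```python
# Order-of-magnitude sketch for Delta c / c: a 0.4 s arrival lag over
# an assumed ~1.3 Gly light-travel distance to the merger.
lag_s = 0.4
Gly_s = 1e9 * 3.156e7           # light-travel time of one Gly, in seconds
travel_time = 1.3 * Gly_s       # assumed source distance
dc_over_c = lag_s / travel_time
print(f"Delta c/c ~ {dc_over_c:.1e}")
```

The result is ∼10-17, matching the figure in the text.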

For details see the chapter Quantum Astrophysics or the article LIGO and TGD.



To the index page