What's new in

Physics in Many-Sheeted Space-Time

Note: Newest contributions are at the top!



Year 2011



About the notion of conscious hologram

In an earlier posting, Comparison of Maxwellian and TGD views about classical gauge fields, I compared the Maxwellian and TGD based notions of classical fields. The comparison was restricted to the linear superposition of fields, which in the TGD framework is replaced by a linear superposition of the effects of classical fields: the space-time sheets representing the classical fields remain separate, and it is enough that their M4 projections intersect. At the classical level the summation of effects means the summation of the forces caused by the fields, due to the fact that particles have topological sum contacts to both space-time sheets. The notion of hologram relies crucially on the superposition of fields, and this forces a reformulation of the notion of hologram in the TGD framework.

In the TGD inspired theory of consciousness the idea of a living system as a conscious hologram is central. It is of course far from clear what this notion means. Since the notions of interference and superposition of fields are crucial for the description of the ordinary hologram, the proposed general description for the TGD counterpart of the superposition of fields is a natural starting point for a more precise formulation of the notion of conscious hologram. In the following only the notion of conscious hologram is discussed. Also the formulation of the notion of ordinary hologram in the TGD framework is an interesting challenge.

  1. Consider the ordinary hologram first. The reference wave and the reflected wave interfere and produce an interference pattern to which the substrate of the hologram reacts, so that its absorption coefficient is affected. When the substrate is illuminated with the conjugate of the reference wave, the original reflected wave is regenerated. The modification of the absorption coefficient is assumed to be proportional to the modulus squared of the sum of the reflected and reference waves. This implies that the wave reflected from the hologram is in a good approximation identical with the original reflected wave (see the sketch after this list).

  2. A conscious hologram would be dynamical rather than static. It would also be quantal: the quantum transitions of particles in the fields defined by the hologram would be responsible for the realization of the interference pattern as a conscious experience. The previous considerations actually leave only this option, since the interference of classical fields does not happen. The reference wave and the reflected wave now correspond to any field configurations. The charged particles having wormhole contacts to the space-time sheets representing the field configurations experience the sum of the fields involved, and this induces quantum jumps between the quantum states associated with the situation in which only the reference wave is present.

    This would induce a conscious experience representing an interference pattern. The reference wave can also correspond to a flux tube of the magnetic body carrying a static magnetic field and defining cyclotron states as stationary states. An external time dependent magnetic field can replace the reflected wave and induces cyclotron transitions. Also radiation fields represented by MEs can represent the reference wave and the reflected wave.

    If there is a need for the "reading" of the hologram, it would correspond to the addition of a space-time sheet carrying fields which in a good approximation have opposite sign and the same magnitude as those in the sheet representing the reference wave, so that the effect on the charged particles reduces to that of the "reflected wave". This step might be unnecessary since already the formation of the hologram would give rise to a conscious experience. The conscious holograms created when the hologram is formed and when the conjugate of the reference wave is added give rise to two different conscious representations. This might have something to do with holistic and reductionistic views about the same situation.

  3. One can imagine several realizations for the conscious hologram. It seems that the realization at the macroscopic level is essentially four-dimensional. By quantum holography it would reduce at the microscopic level to a hologram realized at the 3-D light-like surfaces at which the signature of the induced metric changes (generalized Feynman diagrams having also macroscopic size - anyons) or at the space-like 3-surfaces at the ends of space-time sheets at the two light-like boundaries of CD. The strong form of holography implied by the strong form of general coordinate invariance requires that holograms correspond to collections of partonic 2-surfaces in a given measurement resolution. This could be understood in the sense that the charged particles defining the substrate can be described mathematically in terms of the ends of the corresponding light-like 3-surfaces at the ends of CDs. The cyclotron transitions could be thought of as occurring for particles represented as partonic 2-surfaces topologically condensed at several space-time sheets.
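As a cross-check of the standard hologram arithmetic invoked in the first item, the following sketch (plain Python; the plane-wave forms and the linear-response assumption for the absorption coefficient are my illustrative choices, not from the text) records |reference + reflected|^2 and shows that illumination with the conjugate reference wave reconstructs the reflected wave, in the standard treatment as its phase conjugate, a distinction that does not matter for the argument above.

```python
import numpy as np

# 1-D toy hologram on a periodic substrate: integer wave numbers make the
# cross terms average out exactly.
N = 1024
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
k_ref, k_obj = 8, 13                    # assumed wave numbers (illustrative)
ref = np.exp(1j * k_ref * x)            # reference wave
obj = 0.3 * np.exp(1j * k_obj * x)      # weaker reflected wave

# Recording: absorption change proportional to |ref + obj|^2
#   = |ref|^2 + |obj|^2 + ref*conj(obj) + conj(ref)*obj.
t = np.abs(ref + obj) ** 2

# Reading with the conjugate reference: conj(ref)*t contains the term
# conj(ref)*ref*conj(obj) = conj(obj), the phase conjugate of the
# reflected wave, with coefficient |ref|^2 = 1.
readout = np.conj(ref) * t
target = np.conj(obj)
coeff = np.vdot(target, readout) / np.vdot(target, target)
print("conj(obj) component in readout:", coeff)   # = 1 (up to rounding)
```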

One can imagine several applications in TGD inspired quantum biology.

  1. One can develop a model for how certain aspects of sensory experience could be understood in terms of interference patterns for signals sent from the biological body to the magnetic body. The information about the relative position of the magnetic body and the biological body would be coded by the interference patterns giving rise to conscious sensory percepts. This information would represent geometric qualia giving basically information about distances and angles. There would be a magnetic flux tube representing the analog of the reference wave and a magnetic flux tube carrying the analog of the reflected wave, which could represent the effect of neural activity. When the signal changes with time, cyclotron transitions are induced and a conscious percept is generated. In principle there is no need to compensate for the reference wave, although also this is possible.

  2. The natural first guess is that EEG rhythms (and those of its fractal generalizations) represent reference waves and that the frequencies in question are either harmonics of cyclotron frequencies or linear combinations of these and the Josephson frequency assignable to the cell membrane (and possibly its harmonics). The modulation of the membrane potential (the membrane is regarded as a super-conductor) would induce modulations of the Josephson frequency and, if large enough, would generate nerve pulses. These modulations would define the counterpart of the reflected wave. The flux tubes representing the unperturbed magnetic field would represent reference waves.

  3. For instance, the motion of the biological body changes the signal at the space-time sheets carrying the signal, and this generates cyclotron transitions giving rise to a conscious experience. Perhaps the sensation of having a body is based on this mechanism. The signals could emerge directly from cells: it could be that this sensation corresponds to lower level selves rather than us. A second option is that nerve pulses arriving in the brain induce the signals sent to our magnetic body.

  4. The motion of the biological body relative to the magnetic body generates virtual sensory experience, which could be responsible for illusions like the train illusion and the unpleasant sensory experience of falling down from a cliff by just imagining it. OBEs could also be due to the virtual sensory experiences of the magnetic body. One interesting illusion results when one swims for a long time in a windy sea. When one returns to the shore, one has a rather long lasting experience of still being at sea. The magnetic body gradually learns to compensate for the motion of the sea so that the perception of the wavy motion is reduced. At the shore this compensation mechanism however continues to work. This mechanism represents an example of adaptation and could be a very general mechanism. Since also the magnetic body uses metabolic energy, this mechanism could have a justification in terms of metabolic economy.

    Also thinking as internal, silent speech might be assigned to the magnetic body and would represent those aspects of the sensory experience of ordinary speech which involve the quantum jumps at the magnetic body - the associated geometric qualia - but not the primary sensory percept. This speech would be internal speech since there would be no real sound signal, nor a virtual sound signal from brain to cochlea.

  5. A conscious hologram would make it possible to represent phase information. This information is especially important for hearing. The mere power spectrum is not enough since it is the same for speech and its time reversal. The cochlea performs an analysis of sounds to frequencies. It is not easy to imagine how this process could preserve the phase information associated with the Fourier components. It is believed that both the right and left cochlea are needed to abstract the phase difference between the signals arriving at the right and left ear, allowing one to deduce the direction of the source. Neural mechanisms for this have been proposed, but these mechanisms are not enough in the case of speech. Could there exist a separate holistic representation in which the sound wave as a whole generates a single signal interfering with the reference wave at the magnetic body and in this manner represents the phase as a conscious experience?

  6. Also the control and reference signals from the magnetic body to the biological body could create time dependent interference patterns giving rise to neural responses initiating motor actions and other responses. Basically the quantum interference should reduce the magnitude of the membrane resting potentials so that nerve pulses would be generated and give rise to motor action. A similar mechanism would be at work at the level of sensory receptors - at least in the retina. The generation of nerve pulses would mean a kind of emergency situation at the neuronal level. Frequency modulation of Josephson radiation would be the normal situation.

For background see the chapter General View About Physics in Many-Sheeted Space-Time: Part I.



Comparison of Maxwellian and TGD views about classical gauge fields

In TGD Universe gauge fields are replaced with topological field quanta. Examples are topological light rays, magnetic flux tubes and sheets, and electric flux quanta carrying both magnetic and electric fields. Flux quanta form a fractal hierarchy in the sense that there are flux quanta inside flux quanta. It is natural to assume quantization of Kähler magnetic flux. Braiding and reconnection are basic topological operations for flux quanta.

One important example is the description of the non-perturbative aspects of strong interactions in terms of reconnection of color magnetic flux quanta carrying magnetic monopole fluxes. These objects are string like structures, and one can indeed assign string world sheets to them. The transitions in which the thickness of a flux tube increases, so that flux conservation implies that part of the magnetic energy is liberated unless the length of the flux quantum increases, are central in TGD inspired cosmology and astrophysics. The magnetic energy of the flux quantum is interpreted as dark energy and the magnetic tension as a negative "pressure" causing accelerated expansion.

This picture is beautiful and extremely general but raises challenges. How to describe interference and linear superposition for classical gauge fields in terms of topologically quantized classical fields? How is the interference and superposition of Maxwellian magnetic fields realized in the situation in which the magnetic fields decompose to flux quanta? How to describe simple systems such as a solenoidal current generating a constant magnetic field using the language of flux quanta?

Superposition of fields in terms of flux quanta

The basic question concerns the elegant description of the superposition of classical fields in terms of topological field quanta. What does it mean that magnetic fields superpose?

  1. In Maxwell's linear theory the answer would be trivial, but not now. Linear superposition holds true only inside topological light rays for signals propagating in a fixed direction with light velocity and with the same local polarization. The easy solution would be to say that one considers small perturbations of the background space-time sheet and linearizes the theory. Linearization would apply also to the induced gauge fields and metric, and one would obtain linear superposition approximately. This does not look elegant. Rather, quantum classical correspondence requires a space-time counterpart for the expansion of quantum fields as a sum of modes in terms of topological field quanta. Topological field quanta should not lose their identity in the superposition.

  2. In the spirit of topological field quantization it would be nice to have a topological representation for the superposition and interference without any linearization. To make progress one must return to the roots and ask how the fields are operationally defined. One has a test particle, and it experiences a gauge force in the field. From the acceleration of the test particle the value of the field is deduced. What one observes is the superposition of gauge forces, not of gauge fields.

    1. Let us just assume that we have two space-time sheets representing field configurations to be effectively superposed. Suppose that they are "on top" of each other with respect to CP2 degrees of freedom so that their M4 volumes overlap. The points of the sheets representing the field values that would sum in Maxwell's theory are typically at a distance of the CP2 radius, about 10^4 Planck lengths. Wormhole contacts representing the interaction between the field configurations are formed. Hence the analog of linear superposition does not hold true exactly. For instance, amplitude modulation becomes possible. This is however not essential for the argument.

    2. The test particle could be taken to be a fermion which is simultaneously topologically condensed on both sheets. In other words, a fermionic CP2 type almost vacuum extremal touches both sheets, and wormhole throats, at which the signature of the induced metric changes, are formed. The fermion experiences the sum of the gauge forces from the two space-time sheets through its wormhole throats. From this one usually concludes that superposition holds true for the induced gauge fields. This assumption is however not true, and it is also unnecessary in the recent case. In the case of topological light rays the representation of modes in a given direction in terms of massless extremals makes it possible to realize the analog of the representation of a quantum field as a sum of modes. The representation does not depend on approximate linearity as in the case of quantum field theories and therefore removes a lot of fuzziness related to the quantum theory. In the TGD framework the bosonic action is indeed extremely non-linear.

  3. This view about linear superposition has interesting implications. In effective superposition the superposed field patterns do not lose their identity, which means that the information about the sources is not lost - this is true at least mathematically. This is nothing but quantum classical correspondence: it is the decomposition of radiation into quanta which allows one to conclude that the radiation arrives from a particular astrophysical object. It is also possible to have superposition of fields to a vanishing field in the Maxwellian sense while in the TGD sense both field patterns still exist. Linear superposition in the TGD sense might allow testing using time dependent magnetic fields. In the critical situation in which the magnetic field created by an AC current passes through zero, flux quanta have macroscopic size and the direction of the flux quantum changes to the opposite one.

Time varying magnetic fields described in terms of flux quanta

An interesting challenge is to describe time dependent fields in terms of topological field quanta which are in many respects static structures (for instance, the flux is constant). The magnetic fields created by time dependent currents serve as a good example from which one can generalize. In the simplest situation the magnetic field strength experiences a time dependent scaling. How to describe this scaling?

Consider first the scaling of the magnetic field strength in flux tube quantization.

  1. Intuitively it seems clear that the field decomposes into flux quanta, whose M4 projections can partially overlap. To get a connection to Maxwell's theory one can assume that the average field intensity is defined in terms of the flux of the magnetic field over a surface with area S. For simplicity consider a constant magnetic field so that one has B_ave S = Φ = nΦ_0, where Φ_0 is the quantized flux for a flux tube, assumed to have the minimum value. The integer n is proportional to the average magnetic field B_ave. B_ave must be reasonably near to the typical local value of the magnetic field, which manifests itself quantum mechanically as a cyclotron frequency.

  2. What happens in the scaling B → B/x? If the transversal area of a flux quantum is scaled up by x, the flux of the quantum is conserved. To get the total flux correctly, the number of flux quanta must scale down: n → n/x. One indeed has (n/x) × xS = nS. This implies that the total area associated with the flux quanta within the total area S is preserved in the scaling.

  3. The condition that the flux is an exact integer multiple of Φ_0 would pose additional conditions leading to the quantization of the magnetic flux if the total area can be regarded as fixed. This need not be true. The bookkeeping of the two previous items is illustrated by the sketch below.
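The following is a minimal numeric sketch of this bookkeeping (my own illustration; the numbers are arbitrary): scaling B → B/x while keeping the flux of each quantum at Φ_0 forces the area of a quantum up by x and the number of quanta down by x, so that the total area covered by the quanta stays constant.

```python
# Flux-quantum bookkeeping for the scaling B -> B/x (illustrative units).
phi0 = 1.0            # elementary flux quantum
B = 8.0               # initial average field B_ave
S = 16.0              # total area over which the flux is measured
n = B * S / phi0      # B_ave * S = n * phi0  ->  n = 128 here

for x in [1.0, 2.0, 4.0]:
    B_new = B / x                 # scaled field
    n_new = n / x                 # number of quanta scales down by x
    area_new = phi0 / B_new       # transversal area of one quantum scales up by x
    print(x, n_new, area_new, n_new * area_new)
# n_new * area_new = n * phi0 / B = S for every x: the total area
# occupied by flux quanta is preserved in the scaling.
```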

Consider as a first example the slowly varying magnetic field created by an alternating current in a cylindrical solenoid. There are flux tubes inside the cylindrical solenoid and return flux tubes outside it carrying the flux in the opposite direction. The flux tubes get thicker as the magnetic field weakens and shift from the interior of the solenoid outside. For some value x of the time dependent scaling B → B/x the radius of the elementary flux quantum Φ_0 reaches the radius of the solenoid. Quantum effects must become important and make possible the change of the sign of the elementary flux quantum. Perhaps a quantum jump turning the flux quantum around takes place. After this the size of the flux quantum begins to decrease as the magnitude of the magnetic field increases. At the maximum value of the field the size of the flux quantum is minimal.

This example generalizes to the magnetic field created by a linear alternating current. In this case the flux quanta are cylindrical flux sheets for which the magnetic field strength and thickness oscillate with time. Also in this case the maximal transversal area of the system defines a critical situation in which there is just a single flux sheet in the system carrying the elementary flux. This flux quantum changes its sign as the sign of the current changes.

For background see the chapter General View About Physics in Many-Sheeted Space-Time: Part I.



Inflation and TGD

The process leading to this posting was boosted by the irritation caused by the newest multiverse hype in New Scientist, which was commented on by Peter Woit. Also Lubos told about Brian Greene's The Fabric of Cosmos IV, which is similar multiverse hype with Guth, Linde, Vilenkin, and Susskind as stars, but also a single voice of criticism was allowed (David Gross, who could not hide his disgust).

The message of the New Scientist article was that the multiverse is now a generally accepted paradigm, that it follows unavoidably from modern physics, and that it has three strong pillars: dark energy, eternal inflation, and the string model landscape. Even LHC has demonstrated its correctness by finding no evidence for standard SUSY. That was the prediction of superstring models, but then someone realized that there had been someone predicting that the multiverse predicts no supersymmetry! As a matter of fact, every single prediction inspired by superstring models went wrong, there are good reasons to expect that Higgs will not be found, and standard SUSY has been excluded. Besides this there is an increasing amount of evidence for new physics not predicted by standard TOEs. And one should not forget neutrino super-luminality. All this shakes the foundations of both superstring theory, where a GUT with Higgs fields playing a key role is believed to be the low energy limit of the theory, and of inflationary cosmology. In inflationary scenarios Higgs like scalar fields carrying the vacuum energy give rise to radiation and therefore also to ordinary matter.

The three pillars of the multiverse become catastrophic weaknesses if the Higgs paradigm fails. Vacuum energy cannot correspond to Higgs, the scalar fields driving inflation are not there, and one cannot say anything about possible low energy limits of super string theories since even the basic language describing them is lost!

Maybe I am becoming an old angry man, but I must confess that this kind of hype is simply too much for me. Why do colleagues who know what the real situation is not react to this bullshit? Are they so lazy that they allow physics to degenerate into show business without bothering to do anything? Or does a culture of Omerta prevail, as some participant in Peter Woit's blog suggested? Even if a man has seen a crime take place, he is not allowed to reveal it. If he does, he suffers vendetta. I have experienced the academic equivalent of vendetta: not for this reason but for having the courage to think with my own brain. Maybe laziness is a more plausible explanation.

But I do not have any right to condemn my colleagues if I am myself too lazy to do anything. My moral duty is to tell that this hype is nothing but unashamed lying. On the other hand, digging through a heap of shit is really depressing. Is there any hope of learning anything? I refuse to spend time in the superstring landscape, but should I take the trouble of comparing eternal inflation with TGD?

In this mixed mood I decided to refresh my views about how TGD based cosmology differs from the inflationary scenario. The pleasant surprise was that this comparison, combined with new results about TGD inspired cosmology, provided fresh insights into the relationship of TGD and the standard approach and shows how TGD cures the lethal diseases of eternal inflation. Very roughly: the replacement of the energy of the scalar field with magnetic energy replaces eternal inflation with a fractal quantum critical cosmology, allowing one to see more sharply the TGD counterparts of inflation and accelerating expansion as special cases of criticality. Hence it was not wasted time after all.

Wikipedia gives a nice overall summary of inflationary cosmology, and I recommend it to the non-specialist physics reader as a way to refresh his or her memory.

1. Brief summary of the inflationary scenario

The inflationary scenario relies very heavily on rather mechanical unification recipes based on GUTs. The standard model gauge group is extended to a larger group. This symmetry group breaks down to the standard model gauge group at the GUT scale, which happens to correspond to the CP2 size scale. Leptons and quarks are put into the same multiplet of the gauge group, so that an enormous breaking of symmetries occurs, as is clear from the ratio of the top quark mass scale and the neutrino mass scale. These unifiers however want a simple model allowing one to calculate, so that neither aesthetics nor physics matters. The instability of the proton is one particular prediction. No decays of the proton in the predicted manner have been observed, but this has not troubled the gurus. As a matter of fact, even the Particle Data Tables tell that the proton is not stable! The lobbies of GUTs are masters of their profession!

One of the key features of the GUT approach is the prediction of Higgs like fields. They allow one to realize the symmetry breaking and to describe particle massivation. Higgs like scalar fields are also the key ingredient of the inflationary scenario, and inflation goes down the drain if Higgs is not found at LHC. It is looking more and more probable that this is indeed the case. Inflation has an endless variety of variants, and each suffers from some drawback. In this kind of situation one would expect that it is better to give up, but it has become a habit to say that inflation is more than a theory, it is a paradigm. When superstring models turned out to be a physical failure, superstring theorists did the same thing and claimed that superstring models are more like a calculus than a mere physical theory.

1.1 The problems that inflation was proposed to solve

The basic problems that inflation was proposed to solve are the magnetic monopole problem, the flatness problem, and the horizon problem. The cosmological principle is a formulation of the fact that the cosmic microwave radiation is found to be isotropic and homogeneous in an excellent approximation. There are fluctuations in the CMB believed to be Gaussian, and the prediction for the spectrum of these fluctuations is an important prediction of inflationary scenarios.

  1. Consider first the horizon problem. The physical state inside the horizon is not causally correlated with that outside it. If the observer today receives signals from a region of the past which is much larger than the horizon, he should find that the universe is not isotropic and homogeneous. In particular, the temperature of the microwave radiation should fluctuate wildly. This is not the case, and one should explain this.

    The basic idea is that the potential energy density of the scalar field implies exponential expansion in the sense that the "radius" of the Universe increases at an exponential rate with respect to cosmological time. This kind of Universe looks locally like de-Sitter Universe. This fast expansion smooths out any inhomogeneities and anisotropies inside the horizon. The Universe of the past observed by a given observer is contained within the horizon of the past, so that it looks isotropic and homogeneous.

  2. GUTs predict a high density of magnetic monopoles during the primordial period as singularities of non-abelian gauge fields. Magnetic monopoles have however not been detected, and one should be able to explain this. The idea is very simple. If the Universe undergoes an exponential expansion, the density of magnetic monopoles gets so diluted that they become effectively non-existent.

  3. The flatness problem means that the curvature scalar of 3-space, defined as a hyper-surface with a constant value of the cosmological time parameter (proper time in the local rest system), is vanishing in an excellent approximation. de-Sitter Universe indeed predicts flat 3-space for a critical mass density. The contribution of the known elementary particles to the mass density is however much below the critical mass density, so that one must postulate additional forms of energy. Dark matter and dark energy fit the bill. Dark energy is very much analogous to the vacuum energy of the Higgs like scalar fields in the inflationary scenario, but the energy scale of dark energy is by 27 orders of magnitude smaller than that of inflation, about 10^-3 eV.

1.2 The evolution of the inflationary models

The inflationary models gradually became more realistic.

  1. Alan Guth was the first to realize that the decay of the false (unstable) vacuum in the early universe could solve the problem posed by magnetic monopoles. What would happen would be the analog of super-cooling in thermodynamics. In super-cooling the phase transition to the stable thermodynamical phase does not occur at the critical temperature, and further cooling leads to a generation of bubbles of the stable phase which expand with light velocity.

    The unstable super-cooled phase would locally correspond to exponentially expanding de-Sitter cosmology with a non-vanishing cosmological constant and high energy density assignable to the scalar field. The exponential expansion would lead to a dilution of the magnetic monopoles and domain walls. The false vacuum corresponds to a value of Higgs field for which the symmetry is not broken but energy is far from minimum. Quantum tunneling would generate regions of true vacuum with a lower energy and expanding with a velocity of light. The natural hope would be that the energy of the false vacuum would generate radiation inducing reheating. Guth however realized that nucleation does not generate radiation. The collisions of bubbles do so but the rapid expansion masks this effect.

  2. A very attractive idea is that the energy of the scalar field transforms to radiation and produces in this manner what we identify as matter and radiation. To realize this dream the notion of slow-roll inflation was proposed. The idea was that the bubbles were not formed at all but that the scalar field gradually rolled down an almost flat hill. This gives rise to an exponential inflation in a good approximation. At the final stage the slope of the potential would become so steep that reheating would take place and the energy of the scalar field would transform to radiation. This requires a highly artificial shape for the potential energy. There is also a fine tuning problem: the predictions depend very sensitively on the details of the potential, so that strictly speaking there are no predictions anymore. The inflaton should also have a small mass and would represent a new kind of particle.

  3. The tiny quantum fluctuations of the inflaton field have been identified as the seed of all structures observed in the recent Universe. These density fluctuations make themselves visible also as fluctuations in the temperature of the cosmic microwave background, and these fluctuations have become an important field of study (WMAP).

  4. In the hybrid model of inflation there are two scalar fields. The first one gives rise to slow-roll inflation, and the second one puts an end to the inflationary period when the first one has reached a critical value, by decaying to radiation. One can of course imagine an endless number of speculative variants of inflation, and the Wikipedia article summarizes some of them.

  5. In eternal inflation the quantum fluctuations of the scalar field generate regions which expand faster than the surrounding regions and gradually begin to dominate. This means eternal inflation: a continual creation of Universes. This is the basic idea behind multiverse thinking. Again one must notice that scalar fields are essential: in their absence the whole vision falls down like a house of cards.

The basic criticism of Penrose against inflation is that it actually requires very specific initial conditions and that the idea that the uniformity of the early Universe results from a thermalization process is somehow fundamentally wrong. Of course, the necessity to assume a scalar field and a potential energy with a very weird shape, whose details affect dramatically the observed Universe, has also been criticized.

2. Comparison with TGD inspired cosmology

It is good to start by asking what are the empirical facts and how TGD can explain them.

2.1 What about magnetic monopoles in TGD Universe?

Also TGD predicts magnetic monopoles. CP2 has a non-trivial second homology, and the second geodesic sphere represents a non-trivial element of homology. The induced Kähler magnetic field can be a monopole field, and cosmic strings are objects for which the transversal section of the string carries a monopole flux. The very early cosmology is dominated by cosmic strings carrying magnetic monopole fluxes. The monopoles do not however disappear anywhere. Elementary particles themselves are string like objects carrying magnetic charges at their ends, identifiable as wormhole throats at which the signature of the induced metric changes. For fermions the second end of the string carries a neutrino pair neutralizing the weak isospin. Also color confinement could involve magnetic confinement. These monopoles are indeed seen: they are essential both for the screening of weak interactions and for color confinement!

2.2 The origin of the cosmological principle

The isotropy and homogeneity of the cosmic microwave radiation is a fact, as are also the fluctuations in its temperature as well as the anomalies in the fluctuation spectrum suggesting the presence of large scale structures. Inflationary scenarios predict that the fluctuations correspond to those of a nearly scale invariant Gaussian random field. The observed spectral index measuring the deviation from exact scaling invariance is consistent with the predictions of inflationary scenarios.

Isotropy and homogeneity reduce to what is known as the cosmological principle. In general relativity one has only local Lorentz invariance as an approximate symmetry. For Robertson-Walker cosmologies with sub-critical mass density one has Lorentz invariance, but this is due to the assumption of the cosmological principle - it is not a prediction of the theory. In inflationary scenarios the goal is to reduce the cosmological principle to thermodynamics, but the fine tuning problem is the fatal failure of this approach.

In the TGD framework the cosmological principle reduces to sub-manifold gravity in H = M4 × CP2, predicting a global Poincare invariance reducing to Lorentz invariance for causal diamonds. This represents an extremely important distinction between TGD and GRT. This is however not quite enough, since Poincare symmetries treat entire partonic 2-surfaces at the end of CD as points rather than acting on single points of space-time. More is required, and one expects that also now a finite horizon radius in the very early Universe would destroy the isotropy and homogeneity of the 3 K radiation. The solution of the problem is simple: the cosmic string dominated primordial cosmology has infinite horizon size, so that arbitrarily distant regions are correlated. Also the critical cosmology, which is determined apart from a parameter characterizing its duration by its imbeddability alone, has infinite horizon size. The same applies to the asymptotic cosmology, for which the curvature scalar is extremized.

The hierarchy of Planck constants and the fact that gravitational space-time sheets should possess a gigantic Planck constant suggest a quantum solution to the problem: quantum coherence in arbitrarily long length scales is present even in the recent day Universe. Whether and how these two views about isotropy and homogeneity are related by quantum classical correspondence is an interesting question to ponder in more detail.

2.3 Three-space is flat

The flatness of three-space is an empirical fact deducible from the spectrum of the microwave radiation. Flatness does not however imply inflation, which is a much stronger assumption involving the questionable scalar fields and the weirdly shaped potential requiring fine tuning. The already mentioned critical cosmology is fixed apart from the value of a single parameter characterizing its duration and would mean extremely powerful predictions, since mere imbeddability would fix the space-time dynamics almost completely.

Exponentially expanding cosmologies with critical mass density do not allow an imbedding to M4 × CP2. Cosmologies with critical or over-critical mass density and flat 3-space allow an imbedding, but the imbedding fails above some value of cosmic time. These imbeddings are very natural since the radial coordinate r corresponds to the coordinate r of the Lorentz invariant a = constant hyperboloid, so that the cosmological principle is satisfied.

Can one imbed an exponentially expanding sub-critical cosmology? This cosmology has the line element

ds^2 = dt^2 - ds_3^2 ,

ds_3^2 = sinh^2(t) dΩ_3^2 ,

where ds_3^2 is the metric of the a = constant hyperboloid of M4+.

  1. The simplest imbedding is as a vacuum extremal to M4 × S2, with S2 the homologically trivial geodesic sphere of CP2. The imbedding using standard coordinates (a,r,θ,φ) of M4+ and spherical coordinates (Θ,Φ) for S2 is to a geodesic circle (the simplest possibility):

    Φ = f(a) , Θ = π/2 .

  2. Φ = f(a) is fixed from the condition

    a = sinh(t) ,

    giving

    g_aa = (dt/da)^2 = 1/cosh^2(t) ,

    and from the condition for g_aa as a component of the induced metric tensor:

    g_aa = 1 - R^2 (df/da)^2 = (dt/da)^2 = 1/cosh^2(t) .

  3. This gives

    df/da = ± (1/R) × tanh(t) ,

    giving f(a) = ± (cosh(t)-1)/R (a symbolic check is given in the sketch below). Inflationary cosmology thus allows an imbedding, but this imbedding cannot have a flat 3-space and therefore cannot make sense in the TGD framework.
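The steps above can be verified symbolically (a sketch using sympy; R denotes the CP2 radius treated as a free positive symbol): with a = sinh(t) one has tanh(t) = a/(1+a^2)^{1/2}, and integrating df/da = tanh(t)/R indeed gives f = (cosh(t)-1)/R.

```python
import sympy as sp

a, R = sp.symbols('a R', positive=True)
t = sp.asinh(a)                                  # a = sinh(t)

# g_aa = (dt/da)^2 should equal 1/cosh^2(t)
g_aa = sp.diff(t, a) ** 2
print(sp.simplify(g_aa - 1 / sp.cosh(t) ** 2))   # -> 0

# The condition 1 - R^2 (df/da)^2 = g_aa gives df/da = tanh(t)/R
df_da = sp.sqrt(1 - g_aa) / R
print(sp.simplify(df_da - sp.tanh(t) / R))       # -> 0

# Integrating with f(0) = 0 gives f = (cosh(t) - 1)/R
s = sp.Symbol('s', positive=True)
f = sp.integrate(df_da.subs(a, s), (s, 0, a))
print(sp.simplify(f - (sp.cosh(t) - 1) / R))     # -> 0
```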

2.4 Replacement of the inflationary cosmology with critical cosmology

In the TGD framework inflationary cosmology is replaced with critical cosmology. The vacuum extremal representing critical cosmology has a 2-D CP2 projection - in the simplest situation a geodesic sphere. The dependence of Φ on r and of Θ on a is fixed from the condition that one obtains a flat 3-metric:

a^2/(1+r^2) + R^2 sin^2(Θ) (dΦ/dr)^2 = a^2 .

This gives

sin(Θ) = ± ka , dΦ/dr = ± (1/kR) × r/(1+r^2)^{1/2} .

The imbedding fails for |ka| > 1 and is unique apart from the parameter k characterizing the duration of the critical cosmology. The radius of the horizon is given by

r_H = ∫ (da/a) × [(1-R^2k^2)/(1-k^2a^2)]^{1/2}

and diverges logarithmically at the lower limit a → 0 (see the numeric check below). This tells that there are no horizons, and therefore the cosmological principle is realized. The infinite horizon radius could be seen as a space-time correlate for quantum criticality implying long range correlations and allowing one to realize the cosmological principle. Therefore the thermal realization of the cosmological principle would be replaced with a quantum realization in the TGD framework, predicting long range quantal correlations in all length scales. Obviously this realization is in a well-defined sense the diametrical opposite of the thermal realization. The dark matter hierarchy is expected to correspond to the microscopic realization of the cosmological principle generating the long range correlations.
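A quick numeric check of the divergence (my own sketch; the values R = 0.1 and k = 1 are arbitrary, chosen to satisfy R^2 k^2 < 1, and the upper limit stays below the imbedding bound |ka| = 1):

```python
import numpy as np
from scipy.integrate import quad

R, k = 0.1, 1.0   # illustrative values with R^2 k^2 < 1

def integrand(a):
    # (1/a) * [(1 - R^2 k^2)/(1 - k^2 a^2)]^(1/2)
    return np.sqrt((1 - R**2 * k**2) / (1 - k**2 * a**2)) / a

a_max = 0.9       # below the imbedding limit |ka| = 1
for a_min in [1e-2, 1e-4, 1e-6, 1e-8]:
    val, _ = quad(integrand, a_min, a_max)
    print(a_min, round(val, 3))
# The integral grows like log(1/a_min): the horizon radius diverges,
# so arbitrarily distant regions are causally correlated.
```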

This cosmology could describe the phase transition increasing the Planck constant associated with a magnetic flux tube, leading to its thickening. The magnetic flux would be conserved, and the magnetic energy of the thickened portion would be reduced via its partial transformation to radiation, giving rise to ordinary and dark matter.

2.5 Fractal hierarchy of cosmologies within cosmologies

Many-sheeted space-time leads to a fractal hierarchy of cosmologies within cosmologies. The zero energy realization is in terms of causal diamonds within causal diamonds, with a causal diamond (CD) identified as the intersection of future and past directed light-cones. The temporal distance between the tips of a CD is given as an integer multiple of the CP2 time in the most general case, and boosts of CDs are allowed. There are also other moduli associated with CDs, and a discretization of the moduli parameters is strongly suggestive.

Critical cosmology corresponds to a negative value of "pressure", so that it also gives rise to accelerating expansion. This suggests strongly that both the inflationary period and the accelerating expansion period, which occurs much later than the inflationary period, correspond to critical cosmologies differing from each other by a scaling. Continuous cosmic expansion is replaced with a sequence of discrete expansion phases in which the Planck constant assignable to a magnetic flux quantum increases and implies its expansion. This liberates magnetic energy as radiation, so that a continual creation of matter takes place in various scales.

This fractal hierarchy is the TGD counterpart of eternal inflation. The fractal hierarchy implies also that the TGD counterpart of the inflationary period is just a scaled up variant of critical cosmologies within critical cosmologies. Of course, also radiation and matter dominated phases as well as the asymptotic string dominated cosmology are expected to be present and correspond to cosmic evolutions within a given CD.

2.6 Vacuum energy density as magnetic energy of magnetic flux tubes and accelerating expansion

TGD allows also a microscopic view about cosmology based on the vision that the primordial period is dominated by cosmic strings which during cosmic evolution develop a 4-D M4 projection, meaning that the thickness of the M4 projection, defining the thickness of the magnetic flux tube, gradually increases. The magnetic tension corresponds to a negative pressure and can be seen as a microscopic cause of the accelerated expansion. Magnetic energy is in turn the counterpart of the vacuum energy assigned to the inflaton field. The gravitational Planck constant assignable to the flux tubes mediating the gravitational interaction is nowadays gigantic, and the flux tubes are thus in a macroscopic quantum phase. This explains the cosmological principle at the quantum level.

Phase transitions inducing the boiling of the magnetic energy to ordinary matter are possible. What happens is that the flux tube suffers a phase transition increasing its radius. This reduces the magnetic energy, so that part of the magnetic energy must transform to ordinary matter. This would give rise to the formation of stars and galaxies. This process is the TGD counterpart of the re-heating transforming the potential energy of the inflaton to radiation. The local expansion of the magnetic flux tube could be described in a good approximation by critical cosmology, since quantum criticality is in question.

One can of course ask whether inflationary cosmology could describe the transition period and critical cosmology could correspond only to the outcome. This does not look like a very attractive idea, since the CP2 projections of these cosmologies have dimension D = 1 and D = 2 respectively.

In TGD framework the fluctuations of the cosmic microwave background correspond to mass density gradients assignable to the magnetic flux tubes. An interesting question is whether the flux tubes could reveal themselves as a fractal network of linear structures in CMB. The prediction is that galaxies are like pearls in a necklace: smaller cosmic strings around long cosmic strings. The model for the formation of stars and galaxies gives a more detailed view about this.

2.7 What is the counterpart of cosmological constant in TGD framework?

In the TGD framework the cosmological constant emerges as one asks what might be the GRT limit of TGD. The space-time surface decomposes to regions with both Minkowskian and Euclidian signature of the induced metric, and the Euclidian regions have an interpretation as counterparts of generalized Feynman graphs. Also the GRT limit must allow space-time regions with Euclidian signature of the metric - in particular CP2 itself - and this requires a positive cosmological constant in these regions. The action principle is naturally Maxwell-Einstein action with a cosmological constant which is vanishing in Minkowskian regions and very large in the Euclidian regions of space-time. Both the Reissner-Nordström metric and CP2 are solutions of the field equations, with deformations of CP2 representing the GRT counterparts of Feynman graphs. The average value of the cosmological constant is very small and of the correct order of magnitude, since only the Euclidian regions contribute to the spatial average. This picture is consistent with the microscopic picture based on the identification of the density of magnetic energy as vacuum energy, since Euclidian particle like regions are created as magnetic energy transforms to radiation.

For details and background see the articles Do we really understand the solar system? and Inflation and TGD, and the chapter TGD and Astrophysics.



The origin of cosmic rays

The origin of cosmic rays remains one of the mysteries of astrophysics and cosmology. The recent finding of a super bubble emitting cosmic rays might cast some light on the problem.

1. What has been found?

The following is the abstract of the article published in Science.

The origin of Galactic cosmic rays is a century-long puzzle. Indirect evidence points to their acceleration by supernova shockwaves, but we know little of their escape from the shock and their evolution through the turbulent medium surrounding massive stars. Gamma rays can probe their spreading through the ambient gas and radiation fields. The Fermi Large Area Telescope (LAT) has observed the star-forming region of Cygnus X. The 1- to 100-gigaelectronvolt images reveal a 50-parsec-wide cocoon of freshly accelerated cosmic rays that flood the cavities carved by the stellar winds and ionization fronts from young stellar clusters. It provides an example to study the youth of cosmic rays in a superbubble environment before they merge into the older Galactic population.

The usual thinking is that cosmic rays are not born in states with ultrahigh energies but are boosted to high energies by some mechanism. For instance, supernova explosions could accelerate them. Shock waves could serve as an acceleration mechanism. Cosmic rays could also result from the decays of heavy dark matter particles.

The story began when astronomers detected a mysterious source of cosmic rays in the direction of the constellation Cygnus X. Supernovae happen often in dense clouds of gas and dust, where stars between 10 and 50 solar masses are born and die. If supernovae are responsible for the acceleration of cosmic rays, these regions could also generate cosmic rays. Cygnus X is therefore a natural candidate to study. It need not however be the source of cosmic rays, since magnetic fields could deflect the cosmic rays from their original direction. Therefore Isabelle Grenier and her colleagues decided to study, not cosmic rays as such, but gamma rays created when cosmic rays interact with the matter around them, since gamma rays are not deflected by magnetic fields. The Fermi gamma-ray space telescope was directed toward Cygnus X. This led to the discovery of a superbubble with a diameter of more than 100 light years. The superbubble contains a bright region which looks like a duck. The spectrum of these gamma rays implies that the cosmic rays are energetic and freshly accelerated, so that they must be close to their sources.

The important conclusions are that cosmic rays are created in regions in which stars are born and gain their energies through some acceleration mechanism. The standard identification of the acceleration mechanism is the shock waves created by supernovae, but one can imagine also other mechanisms.

2. Cosmic rays in TGD Universe?

In the TGD framework one can imagine several mechanisms producing cosmic rays. According to the vision discussed already earlier, both ordinary and dark matter would be produced from dark energy identified as Kähler magnetic energy, producing cosmic rays as a by-product. What causes the transformation of dark energy to matter was not discussed earlier, but a local phase transition increasing the value of the Planck constant of the magnetic flux tube could be the mechanism. A possible acceleration mechanism would be acceleration in an electric field along the magnetic flux tube. Another mechanism is a supernova explosion rapidly scaling up the size of the closed magnetic flux tubes associated with the star by an hbar increasing phase transition preserving the Kähler magnetic energy of the flux tube and accelerating the highly energetic dark matter at the flux tubes radially: some of the particles moving along the flux tubes would leak out and give rise to cosmic rays and associated gamma rays.

2.1 The mechanism transforming dark energy to dark matter and cosmic rays

Consider first the mechanism transforming dark energy to dark matter.

  1. The recent model for the formation of stars and also galaxies is based on the identification of magnetic flux tubes as carriers of mostly dark energy identified as Kähler magnetic energy, giving rise to a negative "pressure" as magnetic tension and explaining the accelerated expansion of the Universe. Stars and galaxies would be born as bubbles of ordinary matter generated inside magnetic flux tubes. Inside these bubbles dark energy would transform to dark and ordinary matter. Kähler magnetic flux tubes are characterized by the value of the Planck constant, and for the flux tubes mediating gravitational interactions its value is gigantic. For a star of mass M, its value for the flux tubes mediating self-gravitation would be hbar_gr = GM^2/v_0, v_0 < 1 (v_0 is a parameter having an interpretation as a velocity).

  2. One possible mechanism liberating Kähler magnetic energy as cosmic rays would be the increase of the Planck constant for the magnetic flux tube occurring locally and scaling up quantal distances. Assume that the radius of the flux tube is this kind of quantum distance. Suppose that the scaling hbar → r×hbar implies that the radius of the flux tube scales up as r^n, n = 1/2 or n = 1 (n = 1/2 turns out to be the sensible option). The Kähler magnetic field would scale as 1/r^{2n}. The magnetic flux would remain invariant, as it should, and the Kähler magnetic energy would be reduced as 1/r^{2n} (these scalings are checked in the sketch after this list). For both options Kähler magnetic energy would be liberated. The liberated Kähler magnetic energy must go somewhere, and the natural assumption is that it transforms to particles giving rise to the matter responsible for the formation of a star.

    Could these particles include also cosmic rays? This would conform with the observation that stellar nurseries could also be the birth places of cosmic rays. One must of course remember that there are many kinds of cosmic rays. For instance, this mechanism could produce ultra high energy cosmic rays having nothing to do with the cosmic rays in the 1-100 GeV range studied in the recent case.

  3. The simplest assumption is that the thickening of the magnetic flux tubes during cosmic evolution is based on phase transitions increasing the value of the Planck constant in a step-wise manner. This is not a new idea, and I have proposed that the entire cosmic expansion at the level of space-time sheets corresponds to such phase transitions. The increase of the Planck constant by a factor of two is a good guess, since it would increase the size scale by two. In fact, the Expanding Earth hypothesis, having no standard physics realization, finds a beautiful realization in this framework. Also the periods of accelerating expansion could be identified as these phase transition periods.

  4. For the values of the gravitational Planck constant assignable to the space-time sheets mediating gravitational interactions, the Planck length, scaling like r^{1/2}, would scale up to the black-hole horizon radius. For the n = 1/2 option the proposal would imply that magnetic flux tubes having M4 projection with a radius of order Planck length primordially would scale up to the black-hole horizon radius if the gravitational Planck constant has the value GM^2/v_0, v_0 < 1, assignable to a star. Obviously this evolutionary scenario is consistent with what is known about the relationship between the masses and radii of stars.
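The scalings used in the second item can be checked with a few lines (my own sketch; the units are arbitrary and n labels the two options mentioned above): for radius → r^n × radius and B → B/r^{2n} the flux is invariant while the magnetic energy per unit length falls as 1/r^{2n}.

```python
import math

def scaled_flux_tube(B0, radius0, r, n):
    """Scale hbar -> r*hbar: radius -> r^n * radius, B -> B / r^(2n)."""
    radius = radius0 * r**n
    B = B0 / r**(2 * n)
    flux = B * math.pi * radius**2                        # ~ B * area
    energy_per_length = 0.5 * B**2 * math.pi * radius**2  # ~ B^2 * area
    return flux, energy_per_length

B0, radius0 = 1.0, 1.0
flux0, e0 = scaled_flux_tube(B0, radius0, 1.0, 0.5)
for n in (0.5, 1.0):
    for r in (4.0, 16.0):
        flux, e = scaled_flux_tube(B0, radius0, r, n)
        print(n, r, flux / flux0, e / e0, 1 / r**(2 * n))
# flux/flux0 = 1 in every case, while e/e0 tracks 1/r^(2n): the
# liberated magnetic energy is what transforms to particles here.
```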

2.2 What is the precise mechanism transforming dark energy to matter?

What is the precise mechanism transforming the dark magnetic energy to ordinary or dark matter? This is not clear, but this mechanism could produce very heavy exotic particles not yet observed in the laboratory, which in turn decay to very energetic ordinary hadrons giving rise to the cosmic ray spectrum. I have considered a mechanism for the production of ultrahigh energy cosmic rays based on the decays of hadrons of scaled up copies of ordinary hadron physics. In this case no acceleration mechanism would be necessary. Cosmic rays lose their energy in interstellar space. If they correspond to a large value of the Planck constant, the situation would change and the rate of the energy loss could be very slow. The above described experimental finding about Cygnus X however suggests that acceleration takes place for the ordinary cosmic rays with relatively low energies. This of course does not exclude particle decays as the primary production mechanism of very high energy cosmic rays. In any case, dark magnetic energy transforming to matter gives rise to both stars and high energy cosmic rays in the TGD based proposal.

2.3 What is the acceleration mechanism?

How are cosmic rays created by this general process giving rise to the formation of stars?

  1. Cosmic rays could be identified as newly created matter leaking out from the system. Even in the absence of accelerating fields, the particles created in the boiling of dark energy to matter and moving along the magnetic flux tubes would move essentially like free particles, whereas in the orthogonal directions they would feel a 1/ρ gravitational force. For large values of hbar this could explain very high energy cosmic rays. The recent findings about the gamma ray spectrum however suggest that acceleration is involved for cosmic rays with energies 1-100 GeV.

  2. One possible alternative acceleration mechanism relies on the motion along magnetic flux tubes deformed in such a manner that there is an electric field orthogonal to the magnetic field, with the field lines of these fields rotating around the direction of the flux tube. The simplest imbeddings of constant magnetic fields allow deformations carrying also an electric field, and one can expect the existence of preferred extremals with a similar structure. The electric field would induce an acceleration along the flux tube. If the flux tube corresponds to a large non-standard value of the Planck constant, the dissipation rate would be low and the acceleration mechanism would be very effective.

    A similar mechanism might even explain the observations about ultrahigh energy electrons associated with lightning at the surface of Earth: they should not be there, because the dissipation in the atmosphere should not allow free acceleration in the radial electric field of Earth.

    Here one must be very cautious: the findings are based on a model in which gamma rays are generated in collisions of cosmic rays with matter. If cosmic rays travel along magnetic flux tubes with a gigantic value of the Planck constant, they should dissipate extremely slowly and no gamma rays would be generated. Hence the gamma rays must be produced by the collisions of cosmic rays which have leaked out from the magnetic flux tubes. If the flux tubes are closed (say associated with the star), the leakage must indeed take place if the cosmic rays are to travel to Earth.

  3. There could be a connection with supernovae, although it would not be based on shock waves. Also the supernova expansion could be accompanied by a phase transition increasing the value of the Planck constant. Suppose that the Kähler magnetic energy is conserved in the process. This is the case if the lengths of the magnetic flux tubes scale as r and their radii as r^{1/2}. The closed flux tubes associated with the supernova would expand, and the size scale of the flux tubes would increase by a factor r. The fast radial scaling of the flux tubes would accelerate the dark matter at the flux tubes radially.

    Cosmic rays having the ordinary value of the Planck constant could be created when some of the dark matter leaks out from the magnetic flux tubes as their expanding motion in the radial direction accelerates or slows down. High energy dark particles moving along a flux tube would leak out in the tangential direction. Gamma rays would be generated as the resulting particles interact with the environment. The energies of the cosmic rays would be the outcome of the acceleration process: only their leakage would be caused by it, so that the mechanism differs in a decisive manner from the mechanism involving shock waves.

  4. The energy scale of cosmic rays - let us take it to be about E = 100 GeV for definiteness - gives an order of magnitude estimate for the Planck constant of dark matter at the Kähler magnetic flux tubes if one assumes that supernovae are producing the cosmic rays. Assume that the electromagnetic field equals the induced Kähler field (the projection of the space-time surface to CP2 belongs to a homologically non-trivial geodesic sphere). Assume that E equals the cyclotron energy scale, given by E_c = hbar×eB/m_e in the non-relativistic situation and by E_c = (hbar×eB)^{1/2} in the relativistic situation. The situation is now relativistic for both the proton and the electron, and at this limit the cyclotron energy scale does not depend on the mass of the charged particle at all. This means that the same value of hbar produces the same energy for both the electron and the proton.

    1. The magnetic field of a pulsar can be estimated from the knowledge of how much the field lines are pulled together and from the conservation of magnetic flux: a rough estimate is B = 10^8 Tesla, and this will be used also now. This field is 2×10^12 B_E, where B_E = 0.5 Gauss is the nominal value of the Earth's magnetic field.

    2. The cyclotron frequency of the electron in the Earth's magnetic field is f_c(e) = 6×10^5 Hz in a good approximation and corresponds to the cyclotron energy E_c = 10^{-14} (f_c/Hz) eV from the approximate correspondence eV ↔ 10^{14} Hz true for E = hf. For the ordinary value of the Planck constant, the electron's cyclotron energy in the supernova magnetic field B_S = 10^8 Tesla would be E_c = 2×10^{-2} (f_c/Hz) eV, about 1.2×10^4 eV, and thus much below the energy scale E = 100 GeV.

    3. The required scaling hbar→ r×hbar of Planck constant is obtained from the condition Ec=E giving in the case of electron one can write

      r = (E/E_c)^2 × (B_E/B_S) × hbar eB_E/m_e^2 .

      The dimensionless parameter hbar eB_E/m_e^2 = 1.2×10^-14 follows from m_e = 0.5 MeV. The estimate gives r ∼ 2×10^12 (a numerical check follows after this list). Values of Planck constant of this order of magnitude and even larger ones appear in the TGD inspired model of the brain, but in that case the magnetic field is the Earth's magnetic field, and the large thickness of the flux tube makes it possible to satisfy the quantization of the magnetic flux in which the scaled up hbar defines the unit.

    To sum up, large values of Planck constant would be absolutely essential, making high energy cosmic rays possible, and just the presence of high energy cosmic rays could be seen as experimental support for the hierarchy of Planck constants. The acceleration mechanisms of cosmic rays are poorly understood and the TGD option predicts that there is no acceleration mechanism to search for.
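The order of magnitude estimate above is easy to check numerically. Below is a minimal sketch (Python), assuming standard SI values for the constants and the values B_S = 10^8 Tesla, B_E = 0.5 Gauss and E = 100 GeV used in the text; it reproduces both the dimensionless parameter hbar eB_E/m_e^2 and r ∼ 2×10^12.

```python
import numpy as np

hbar, e, c = 1.055e-34, 1.602e-19, 2.998e8   # SI units
me, eV = 9.109e-31, 1.602e-19

B_E = 0.5e-4        # Earth's magnetic field, Tesla (0.5 Gauss)
B_S = 1.0e8         # supernova flux tube field assumed in the text, Tesla
E = 100e9 * eV      # cosmic ray energy scale, 100 GeV in Joules

E_c = hbar * e * B_E / me             # non-relativistic electron cyclotron energy, J
lam = hbar * e * B_E / (me * c)**2    # dimensionless hbar*eB_E/m_e^2

r = (E / E_c)**2 * (B_E / B_S) * lam
print(f"hbar*eB_E/m_e^2 = {lam:.1e}")   # ~ 1e-14, close to the value in the text
print(f"r ~ {r:.1e}")                   # ~ 2e12, as in the text
```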

For details and background see the article Do we really understand the solar system? and the chapter TGD and Astrophysics.



Cold dark matter in difficulties

Cold dark matter scenario assumes that dark matter consists of exotic particles which have extremely weak interactions with ordinary matter and which clump together gravitationally. These concentrations of dark matter would grow and attract ordinary matter, eventually forming the galaxies.

Cold dark matter scenario has several problems.

  1. Computer simulations support the view that dark matter should be densely packed in galactic nuclei. This prediction is problematic since the constant velocity spectrum of distant stars rotating around the galactic nucleus requires that the mass of dark matter within a sphere of radius R is proportional to R, so that the density of dark matter would decrease as 1/r^2 (a numerical illustration follows after this list). This holds if one assumes that the distribution of dark matter is spherically symmetric.
  2. Observations show that in the inner parts of the galactic disk the velocity spectrum depends linearly on the radial distance (see this). The dark matter density should then be constant in good approximation (assuming spherical symmetry), whereas the cold dark matter model predicts a strong peaking of the mass density at the galactic center. This is known as the core/cusp problem.
  3. Cold dark matter scenario also predicts a large number of dwarf galaxies with a mass about one thousandth of that of the Milky Way. They are not observed. This is known as the missing satellites problem.
  4. Cold dark matter scenario predicts significant amounts of low angular momentum material which is not observed.
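To see why a constant velocity spectrum requires M(&lt;R) ∝ R, here is a minimal sketch (Python), assuming a hypothetical spherical halo with ρ(r) = ρ0 (r0/r)^2 and a made-up normalization; the enclosed mass then grows linearly with r and the rotation curve is flat.

```python
import numpy as np

G = 6.674e-11                      # m^3 kg^-1 s^-2

# Hypothetical halo with rho(r) = rho0*(r0/r)^2, so M(<r) = 4*pi*rho0*r0^2*r
rho0, r0 = 1.0e-21, 3.0e19         # made-up normalization (kg/m^3, m)

def v_circ(r):
    M_enc = 4.0 * np.pi * rho0 * r0**2 * r
    return np.sqrt(G * M_enc / r)  # independent of r: flat rotation curve

for r in (1e20, 5e20, 1e21):
    print(f"r = {r:.0e} m  ->  v = {v_circ(r)/1e3:.0f} km/s")
```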

Cold dark matter scenario is however in difficulties, as one learns from the Science Daily article Dark Matter Mystery Deepens: observational data about the structure of dark matter in dwarf galaxies is in conflict with this picture. New measurements of two dwarf galaxies tell that the dark matter distribution is smooth. Dwarf galaxies are believed to contain 99 per cent of dark matter and are therefore ideal for attempts to understand dark matter. Dwarf galaxies differ from ordinary ones in that their stars move like bees in a beehive instead of moving along nice circular orbits. The distribution of the dark matter was found to be uniform over a region with a diameter of several hundred light years, which corresponds to the size scale of the galactic nucleus. For comparison purposes note that the Milky Way has at its center a bar like structure with size between 3300 and 16000 light years. Notice also that a constant density core is highly suggestive also in ordinary galaxies (core/cusp problem), so that dwarf galaxies and ordinary galaxies need not be so different after all.

In TGD framework the simplest model for the galactic dark matter assumes that galaxies are like pearls in a necklace. The necklace would be a long magnetic flux tube carrying dark energy identified as magnetic energy, and galaxies would be bubbles inside the flux tube at which it has thickened locally. A similar model would apply to stars. The basic prediction is that the motion of stars along the flux tube is free apart from the gravitational force caused by the visible matter. The constant velocity spectrum for distant stars follows from the logarithmic gravitational potential of the magnetic flux tube: cylindrical symmetry would be absolutely essential and distinguishes the model from the cold dark matter scenario (a small numerical sketch follows).
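For the cylindrically symmetric option the flat rotation curve comes out even more directly. A minimal sketch (Python), assuming a hypothetical string tension T (mass per unit length, made-up value): Newtonian gravity gives for an infinite straight string the acceleration a = 2GT/ρ, corresponding to a logarithmic potential, so v^2 = a ρ = 2GT independently of the distance ρ.

```python
import numpy as np

G = 6.674e-11
T = 2.0e20          # hypothetical string tension, kg/m (made-up value)

# Logarithmic potential of a straight string: a(rho) = 2*G*T/rho,
# so the circular velocity v = sqrt(a*rho) = sqrt(2*G*T) is constant.
v = np.sqrt(2 * G * T)
print(f"flat rotation velocity: {v/1e3:.0f} km/s")   # ~ 160 km/s
```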

What can one say about the dwarf galaxies in TGD framework? The thickness of the flux tube is a good guess for the size scale in which the dark matter distribution is approximately constant: this for any galaxy (recall that dark and ordinary matter would have formed as dark energy transforms to matter). The scale of a hundred light years is roughly a factor 1/10 smaller than the size of the center of the Milky Way nucleus. The natural question is whether the dark matter distribution could be spherically symmetric and constant in this scale also for ordinary galaxies. If so, the cusp/core problem would disappear and ordinary galaxies and dwarf galaxies would not differ in an essential manner as far as dark matter is considered. The problem would remain essentially a problem of the cold dark matter scenario only.

For details and background see the chapter Cosmic strings.



ICARUS refutes OPERA: really?

Tommaso Dorigo managed to write the hype of his life about super-luminal neutrinos. This kind of accident is unavoidable and any blogger sooner or later becomes a victim of one. To my great surprise Tommaso described in a completely uncritical and hypeish manner a study by the ICARUS group in Gran Sasso and concluded that it definitely refutes the OPERA result. This is of course a wrong conclusion, based on the assumption that special and general relativity hold true as such and that neutrinos are genuinely superluminal.

Also Sascha Vongehr wrote about ICARUS as a reaction to Tommaso's surprising posting, but this was purposely written as half-joking hype claiming that ICARUS proves that neutrinos travel the first 18 meters with a velocity at least 10 times higher than c. Sascha also wrote a strong criticism of the current science establishment. The continual uncritical hyping is leading to the loss of the respectability of science and I cannot but share his views. Also I have written several times about the ethical and moral decline of the science community down to what resembles the feudal system of the middle ages, in which Big Boys have first night privilege to new ideas: something which I have myself had to experience many times.

What ICARUS did was to measure the energy distribution of the muons detected in Gran Sasso. This result is used to claim that the OPERA result is wrong. The measured energy distribution is compared with the distribution predicted assuming that the Cohen-Glashow interpretation is correct. This is an extremely important ad hoc assumption without which the ICARUS demonstration fails completely.

  1. Cohen and Glashow assume a genuine super-luminality and argue that this leads to the analog of Cherenkov radiation causing a loss of neutrino energy: 28.2 GeV at CERN would be reduced to an average of 12.1 GeV at Gran Sasso. From this model one can predict the energy distribution of muons in Gran Sasso.
  2. Figure 2 of the ICARUS preprint demonstrates that the distribution assuming no energy loss fits the measured energy distribution of muons rather well. The figure does not show the predicted super-luminal distribution, but the figure text tells that it would be much "leaner", which one can interpret as a poor fit.
  3. From this ICARUS concludes that neutrinos cannot have exceeded the light velocity. The experimental result of course tells only that the neutrinos did not lose energy: it says nothing about the neutrino velocity without additional assumptions.

At the risk of boring the reader I repeat: the fatal assumption is that a genuine super-luminality is in question, since from this assumption it indeed follows that the neutrinos should lose energy by Cherenkov radiation during their travel.

In TGD framework the situation is different (see this, this, this, and also the article). Neutrinos move in excellent approximation with a velocity equal to - actually slightly below - the maximal signal velocity for their space-time sheet, and without any energy loss. The maximal signal velocity is however higher for the space-time sheets carrying neutrinos than for those carrying photons - a basic implication of sub-manifold gravity. I have explained this in detail in previous postings and in the article.

The conclusion is that the ICARUS experiment supports the TGD based explanation of the OPERA result. Note however that at this stage TGD does not predict effective superluminality: it only allows and even slightly suggests it, and provides also a possible explanation for its energy independence and its dependence on length scale and particle. TGD suggests also new tests using relativistic electrons instead of neutrinos.

It is also important to realize that the apparent neutrino super-luminality - if true - provides only a single isolated piece of evidence for sub-manifold gravity. The view about space-time as a 4-surface permeates the whole of physics from Planck scale to cosmology, predicting correctly the particle spectrum and providing a unification of fundamental interactions. It is also in a key role in TGD inspired quantum biology and in the quantum consciousness theory inspired by TGD.

Let us sincerely hope that the conclusion of ICARUS will not be accepted as uncritically as Tommaso accepted it.

For details and background see the article Are neutrinos superluminal and the chapter TGD and GRT.



Why neutrinos propagate faster in short length scales?

Sascha Vongehr has written several interesting blog postings about superluminal neutrinos. The latest one is titled A million times the speed of light. I glue below my comment explaining how one can understand qualitatively why the maximal signal velocity at the space-time sheet along which a relativistic particle propagates is lower in long length scales.

The explanation involves besides the induced metric also the notion of induced gauge field (induced spinor connection): here brane theorists reproducing TGD predictions are bound to meet difficulties, and an instant independent discovery of the notions of induced gauge field and spinor structure is needed in order to proceed;-). Here is my comment in a somewhat extended form.

-------------------------------------------------------------------------------

Dear Sascha,

I would be critical about two points.

  1. I would take Poincare invariance and general coordinate invariance as a starting point. I am not sure whether your arguments are consistent with these requirements.
  2. The assumption that neutrinos slow down and have gigantic maximal signal velocities initially does not seem plausible to me. Just the dependence of the maximal signal velocity on length scale is enough to understand the difference between SN1987A and OPERA. What this means in the standard physics framework is however not easy to understand.

If one is ready to accept sub-manifold gravity a la TGD, this boils down to the identification of the space-time sheets carrying the neutrinos (or any relativistic particles) from point A to point B. This TGD prediction is about 25 years old: from Peter Woit's blog's comment section I learned that brane people are now proposing something similar. My prediction at viXra log and at my own blog was that this would happen within about a week: nice to learn that my blog has readers!

This predicts that the really maximal signal velocity (that for M4) is probably not very much higher than the light velocity in cosmic scales: Robertson-Walker cosmology predicts that the light velocity in cosmic scales is about 73 per cent of the really maximal one.

The challenge for the sub-manifold gravity approach is to understand the SN1987A-OPERA difference qualitatively. Why does a neutrino (or any relativistic particle) travel faster in short length scales?

  1. Suppose that this space-time sheet is a massless extremal topologically condensed at a magnetic flux tube thickened from a string like object X2× Y2 subset of M4× CP2 to a tube of finite thickness. The longer and less straight the tube, the slower the maximal signal velocity, since the light-like geodesic along it is longer in the induced metric (a time-like curve in M4× CP2). There is also rotation around the flux lines increasing the path length: see below.
  2. For a planar cosmic string (X2 is just a plane in M4) the maximal signal velocity would be as large as it can be, but it is expected to be reduced as the flux tube develops a 4-D M4 projection. In the thickening process the flux is conserved so that B scales as 1/S, S being the transversal area of the flux tube. The magnetic energy per unit length scales as 1/S, and energy conservation requires that the length of the flux tube scales up like S during cosmic expansion. Flux tubes become longer and thicker as time passes.
  3. The particle - even a neutrino!! - can rotate along the flux lines of the electroweak fields inside the flux tube and this makes the path longer. The thicker and longer the flux tube - the longer the path - the lower the maximal signal velocity. I emphasize that classical Z0 and W fields (and also gluon fields!) are a basic prediction of TGD distinguishing it from the standard model: again the notion of induced gauge field pops up!
  4. Classically the cyclotron radius is proportional to the cyclotron energy. For a straight flux tube there is free relativistic motion in the longitudinal degrees of freedom and cyclotron motion in the transversal degrees of freedom, and one obtains essentially harmonic oscillator like states with a degeneracy due to the presence of rotation giving rise to angular momentum as an additional quantum number. If the transversal motion is non-relativistic, the radii of the cyclotron orbits are proportional to a square root of integer. In Bohr orbitology one has quantization of the neutrino speeds: wave mechanically the same result is obtained in an average sense. Fermi statistics implies that the states are filled up to the Fermi energy so that several discrete effective light velocities are obtained. In the case of a relativistic electron the velocity spectrum would be of the form

    c_eff = L/T = [1 + n×(hbar eB/m)]^(-1/2) × c .

    Here L denotes the length of the flux tube and T the time taken by the motion along a helical orbit when the longitudinal motion is relativistic and the transversal motion non-relativistic. In this case the spectrum for c_eff is quasi-continuous (a numerical illustration follows after this comment). Note that for large values of hbar = n×hbar_0 (in TGD Universe) the quasicontinuity is lost and in principle the spectrum might allow the determination of the value of hbar.

  5. Neutrino is a mixture of right-handed and left-handed components, and the right-handed neutrino feels only gravitation whereas the left-handed neutrino feels the long range classical Z0 field. In any case, the neutrino as the particle having the weakest interactions should travel faster than the photon, and a relativistic electron should move slower than the photon. One must be however very cautious here. Also the energy of the relativistic particle matters.

    Here brane theorists trying to reproduce TGD predictions are in difficulties, since the notion of induced gauge field is required besides that of the induced metric. Also the geometrization of classical electro-weak gauge fields in terms of the spinor structure of the imbedding space is needed. It is almost impossible to avoid M4× CP2 and TGD.

    To sum up, this would be the qualitative mechanism explaining why the neutrinos travel faster in short scales. The model can also be made quantitative, since the cyclotron motion can be understood quantitatively once the field strength is known.

    -------------------------------------------------------------------------------------
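As a rough illustration of the discrete velocity spectrum in point 4 above, here is a minimal sketch (Python), assuming a made-up flux tube field B = 1 Tesla and reading hbar eB/m as the dimensionless combination hbar eB/(m_e c)^2 suggested by natural units:

```python
import numpy as np

hbar, e, me, c = 1.055e-34, 1.602e-19, 9.109e-31, 2.998e8

B = 1.0                              # hypothetical flux tube field, Tesla
lam = hbar * e * B / (me * c)**2     # dimensionless hbar*eB/m^2

# Discrete spectrum of effective signal velocities, one per cyclotron level n
for n in (0, 1, 10, 100):
    c_eff = c / np.sqrt(1.0 + n * lam)
    print(f"n = {n:3d}:  (c - c_eff)/c = {1 - c_eff/c:.2e}")
```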

    For details and background see the article Are neutrinos superluminal and the chapter TGD and GRT.



Cosmic evolution as transformation of dark energy to matter

The anomalous behavior of the equinox precession and the recent surprising findings about the heliosphere by NASA can be combined with the TGD inspired model for stars. The model relies on the heuristic idea that stars (as also galaxies) are like pearls in a necklace defined by long magnetic flux tubes carrying dark matter and a strong magnetic field responsible for dark energy, and possibly accompanied by the analog of the solar wind. The heliosphere would be like a bubble in the flow defined by the magnetic field inside the flux tube, inducing its local thickening. A possible interpretation is as a bubble of ordinary and dark matter in the flux tube containing dark energy: this would provide a beautiful overall view about the emergence of stars and their heliospheres as a phase transition transforming dark energy to dark and visible matter. Among other things, the magnetic walls surrounding the solar system would shield the solar system from cosmic rays. The bubble option is favored by the fact that Newtonian theory works so well inside the planetary system. The model suggests bound state precessing solutions without nutation as the first approximation, expected to be stable against dissipation. A small nutation around the equilibrium solution could explain the slow variation of the precession rate and can be treated as a small oscillatory perturbation around the non-nutating ground state. The variation could be also caused by external perturbations. What is amusing from the mathematical point of view is that the model is analytically solvable and that the solution involves elliptic functions just as the Newtonian two-body problem does.

The model suggests a universal fractal mechanism leading to the formation of astrophysical and even biological structures via the formation of bubbles of ordinary or dark matter inside magnetic flux tubes carrying dark energy identified as the magnetic energy of the flux tubes. In primordial cosmology these flux tubes would have been cosmic strings with an enormous mass density, which is however below the black hole limit for straight strings. Strongly entangled strings could form black holes if general relativistic criteria hold true in TGD.

One must be very critical concerning the model since in TGD framework the accelerated cosmic expansion has several alternative descriptions, which should be mutually consistent. It seems that these descriptions correspond to descriptions of one and the same thing in different length scales.

  1. The critical and over-critical cosmologies representable as four-surfaces in M4× CP2 are unique apart from their duration (see this). The critical cosmology corresponds to flat 3-space and would effectively replace inflationary cosmology in TGD framework; criticality would serve as a space-time correlate for quantum criticality in cosmological scales, natural if the hierarchy of Planck constants is allowed. The expansion is accelerating for the critical cosmology and is caused by a negative "pressure" basically due to the constraint force induced by the imbeddability condition, which is actually responsible for most of the explanatory power of TGD (say the geometrization of standard model gauge fields and quantum numbers).

  2. A more microscopic manner to understand the accelerated expansion would be in terms of cosmic strings. Cosmic strings (see this) expand during cosmic evolution to flux tubes and serve as the basic building bricks of TGD Universe. The magnetic tension along them generates a negative "pressure", which could explain the accelerated expansion. Dark energy would be magnetic energy.

    The proposed boiling of the flux tubes with bubbles representing galaxies, stars, ..., cells, etc. would serve as a universal mechanism generating ordinary and dark matter. The model should be consistent with the Bohr orbitology for the planetary systems (see this) in which the flux tubes mediating the gravitational interaction between star and planet have a gigantic Planck constant. This is the case if the magnetic flux tubes quite generally correspond to gigantic values of Planck constant of the form hbar_gr = GM_1M_2/v_0, v_0/c < 1, where M_1 and M_2 are the masses of the objects connected by the flux tube.

  3. An even more microscopic description of the accelerated expansion would be in terms of elementary particles. In TGD framework space-time decomposes into regions having both Minkowskian and Euclidian signatures of the induced metric (see this). The Euclidian regions are something totally new as compared to the more conventional theories and have an interpretation as space-time regions representing the lines of generalized Feynman diagrams.

    The simplest GRT limit of TGD relies on Einstein-Maxwell action with a non-vanishing cosmological constant in the Euclidian regions of space-time (see this): this allows both the Reissner-Nordström metric and CP2 as special solutions of the field equations. The cosmological constant is gigantic but associated only with the Euclidian regions representing particles having a typical size of order CP2 radius. The cosmological constant explaining the accelerated expansion at the GRT limit could correspond to the space-time average of the cosmological constant and would therefore be of the correct sign and order of magnitude (very small), since most of the space-time volume is Minkowskian.

    This picture can be consistent with the idea that the magnetic flux tubes, which have Minkowskian signature of the induced metric, are responsible for the effective cosmological constant if the magnetic energy inside the magnetic flux tubes transforms to elementary particles in a phase transition generating dark and ordinary matter from dark energy, and therefore gives rise to various visible astrophysical objects.

For details and background see the article Do we really understand the solar system? and the chapter TGD and Astrophysics.



Do we really understand the solar system?

The recent experimental findings have shown that our understanding of the solar system is surprisingly fragmentary. As a matter of fact, so fragmentary that even new physics might find a place in the description of phenomena like the precession of equinoxes (I am grateful to my friend Pertti Kärkkäinen for telling me about the problem) and the recent discoveries about the bullet like shape of the heliosphere and the strong magnetic fields near its boundary, bringing to mind incompressible fluid flow around an obstacle.

The TGD inspired model is based on the heuristic idea that stars are like pearls in a necklace defined by long magnetic flux tubes carrying dark energy and a strong magnetic field, possibly accompanied by the analog of the solar wind. The heliosphere would be like a bubble in the flow defined by the magnetic field in the flux tube, inducing its local thickening. A possible interpretation is as a bubble of ordinary and dark matter in the flux tube containing dark energy: this would provide a beautiful overall view about the emergence of stars and their heliospheres as a phase transition transforming dark energy to dark and visible matter. Among other things, the magnetic walls surrounding the solar system would shield the solar system from cosmic rays.

For details and background see the article Do we really understand the solar system? and the chapter TGD and Astrophysics.



Are neutrinos superluminal?

The OPERA collaboration at CERN has reported that the neutrinos travelling from CERN to Gran Sasso in Italy move with a super-luminal speed. There exists also earlier evidence for the super-luminality of neutrinos: for instance, the neutrinos from SN1987A arrived a few hours earlier than photons. A standard model description based on tachyonic neutrinos is formally possible but breaks causality and is unable to explain all the results. The TGD based explanation relies on sub-manifold geometry replacing abstract manifold geometry as the space-time geometry. The notion of many-sheeted space-time predicts this kind of effect plus many other effects for which evidence exists in the form of various anomalies which have not been taken seriously by the main stream theorists.

For details and background see the article Are neutrinos superluminal? and the chapter TGD and GRT.



Could TGD be an integrable theory?

Over the years evidence supporting the idea that TGD could be an integrable theory in some sense has accumulated. The challenge is to show that various ideas about what integrability means form pieces of a bigger coherent picture. Of course, some of the ideas are doomed to be only partially correct or simply wrong. Since it is not possible to know beforehand which ideas are wrong and which are right, the situation is very much like in experimental physics, and it is easy to claim (and has been and will be claimed) that all this argumentation is useless speculation. This is the price that must be paid for the luxury of genuine thinking.

Integrable theories allow one to solve nonlinear classical dynamics in terms of scattering data for a linear system. In TGD framework this translates to quantum classical correspondence. The solutions of the modified Dirac equation define the scattering data. The conjecture is that octonionic real-analyticity, with space-time surfaces identified as surfaces for which the imaginary part of the biquaternion representing the octonion vanishes, solves the field equations. This conjecture generalizes conformal invariance to its octonionic analog. If this conjecture is correct, the scattering data should define a real analytic function whose octonionic extension defines the space-time surface as a surface for which its imaginary part in the representation as a bi-quaternion vanishes. There are excellent hopes about this thanks to the reduction of the modified Dirac equation to geometric optics.

For details and background the reader can consult the article An attempt to understand preferred extremals of Kähler action and the chapter Basic Extremals of Kähler action.



Entropic gravity in difficulties?

Eric Verlinde's Entropic Gravity is one of those fashions of present-day theoretical physics which come and go (who still remembers Lisi's "Exceptionally simple theory of everything", which for a moment raised Lisi to the status of a potential follower of Einstein?). That this would happen was rather clear to me from the beginning and I expressed my views in several postings: see this, this, and this. The idea that gravitons are not there at all and that the gravitational force is a purely thermodynamical force looks nonsensical to me on purely mathematical grounds. But what about physics? Kobakhidze wrote a paper in which he demonstrated that neutron interferometry experiments disfavor the notion of entropic gravity. The neutron behaves like a quantal particle obeying the Schrödinger equation in the gravitational field of the Earth, and it is difficult to understand this if gravitation is an entropic force.

I wrote detailed comments about this in the second posting and proposed a different interpretation of the basic formulas for gravitational temperature and entropy based on zero energy ontology, which predicts that even elementary particles are at least mathematically analogous to thermodynamical objects. The temperature and entropy would be associated with the ensemble of gravitons assigned with the flux tubes mediating the gravitational interaction: the temperature behaves naturally as 1/r^2 in the absence of other graviton/heat sources, and the entropy is naturally proportional to the flux tube length and therefore to the radial distance r. This allows one to understand the formulas deduced by Sabine Hossenfelder, who has written one of the rather few clear expositions about entropic gravity. (Somehow it reflects the attitudes towards women in physics that her excellent contribution was not mentioned in the reference list of the Wikipedia article. Disgusting.) Entropic gravitons are of course quite a different thing than gravitation as an entropic force.

The question about the proper interpretation of the formulas was extremely rewarding since it also led to the question what the GRT limit of TGD could be. This led to a beautiful answer and in turn forced me to ask what black holes really are in TGD Universe. We have no empirical information about their interiors, so that the general relativistic answer can be taken only as one possibility, which is even plagued by mathematical difficulties. The blackhole horizon is quite concretely the door to the new physics, so one should be very open-minded here - we really do not know what is behind the door!

The TGD based answer was surprising: black holes in TGD Universe correspond to the regions of space-time with Euclidian signature of the induced metric. In particular, the lines of generalized Feynman diagrams are blackholes in this sense. This view would unify elementary particles and blackholes. This proposal also leads to a concrete proposal for how to understand the extremely small value of the cosmological constant as the average value of a cosmological constant which vanishes for Minkowskian regions but is large for Euclidian regions and determined by the CP2 size.

The first article of Kobakhidze appeared in arXiv already two years ago but was not noticed by bloggers (except me, but as a dissident I am of course not counted;-). Here the fact that I was asked to act as a referee helped considerably, although unfortunately I did not have time for this! The new article Once more: gravity is not an entropic force of Kobakhidze was however noticed by the media and also by physics bloggers.

Lubos came first. Lubos however had read the article carelessly (even its abstract) and went on to claim that M. Chaichian, M. Oksanen, and A. Tureanu state in their article that Kobakhidze's claim is wrong and that they support entropic gravity. This was of course not the case: the authors agreed with Kobakhidze about entropic gravity but argued that there was a mistake in his reasoning. In honor of Lubos one must say that he noticed the problems caused by the lack of quantum mechanical interference effects already much earlier.

Also Johannes Koelman wrote about the topic with inspiration coming from the popular web article Experiments Show Gravity Is Not an Emergent Phenomenon inspired by Kobakhidze's article.

In my opinion Verlinde's view is wrong, but it would be a pity if one did not try to explain the highly suggestive formulas for the entropy and temperature like parameters nicely abstracted by Sabine Hossenfelder from Verlinde's work. I have already described briefly my own interpretation inspired by zero energy ontology. In TGD framework it seems impossible to avoid the conclusion that also the mediators of other interactions are in thermal equilibrium at the corresponding space-time sheets and that the temperature is universally the Unruh temperature determined by the acceleration. Also the expression for the entropy can be deduced, as the following little argument shows.

What makes the situation so interesting is that the signs of both temperature and entropy are negative for repulsive interactions, suggesting thermodynamical instability. This leads to the question whether matter antimatter separation could relate to a reversal of the arrow of geometric time at the space-time sheets mediating repulsive long range interactions. This statement makes sense in zero energy ontology, where the arrow of time has a concrete mathematical content as a property of zero energy states. In the following I will consider the identification of the temperature and entropy assignable to the flux tubes mediating gravitational or other interactions. I was too lazy to deduce explicit formulas in the original version of the article about this topic, so I have now added the formulas to it.

Graviton temperature

Consider first the gravitonic temperature. The natural guess for the temperature parameter would be the Unruh temperature

T_gr = (hbar/2π) a ,

where a is the projection of the gravitational acceleration along the normal of the gravitational potential = constant surface. In the Newtonian limit it would be the acceleration associated with the relative coordinates, corresponding to the reduced mass and equal to a = G(m_1+m_2)/r^2.

One could identify T_gr also as the magnitude of the local gravitational acceleration. In this case the definition would be purely local. This is in accordance with the character of temperature as an intensive property.

The general relativistic objection against the generalization is that gravitation is not a genuine force: only a genuine acceleration due to other interactions than gravity should contribute to the Unruh temperature so that gravitonic Unruh temperature should vanish. On the other hand, any genuine force should give rise to an acceleration. The sign of the temperature parameter would be different for attractive and repulsive forces so that negative temperatures would become possible. Also the lack of general coordinate invariance is a heavy objection against the formula.

In TGD Universe the situation is different. In this case the definition of temperature as magnitude of local acceleration is more natural.

  1. The space-time surface is a sub-manifold of the imbedding space and one can talk about the acceleration of a point like particle in the imbedding space M4× CP2. This acceleration corresponds to the trace of the second fundamental form for the imbedding and is a completely well-defined and general coordinate invariant quantity which vanishes for the geodesics of the imbedding space. Since the acceleration is a purely geometric quantity, this temperature would be the same for flux sheets irrespective of whether they mediate gravitational or some other interactions, so that all kinds of virtual particles would be characterized by this same temperature.

  2. One could even generalize T_gr to a purely local position dependent parameter by identifying it as the magnitude of the second fundamental form at a given point of the space-time surface. This would mean that the temperature in question would have a purely geometric correlate. This temperature would be always non-negative. This purely local definition would also save from possible inconsistencies in the definition of temperature resulting from the assumption that its sign depends on whether the interaction is repulsive or attractive.

  3. The trace of the second fundamental form - call it H - and thus T_gr vanishes for minimal surfaces. Examples of minimal surfaces are cosmic strings, massless extremals, and CP2 vacuum extremals with an M4 projection which is a light-like geodesic. Vacuum extremals with at most 2-D Lagrangian CP2 projection have a non-vanishing H, and this is true also for their deformations defining the counterpart of GRT space-time. Also the deformations of cosmic strings with 2-D M4 projection to magnetic flux tubes with 4-D M4 projection are expected to be non-minimal surfaces. The same applies to the deformations of CP2 vacuum extremals near the region where the signature of the induced metric changes. The predicted cosmic string dominated phase of primordial cosmology would correspond to a vanishing gravitonic temperature. Also generic CP2 type vacuum extremals have a non-vanishing H.

  4. Massless extremals are an excellent macroscopic space-time correlate for gravitons. The massivation of gravitons is however strongly suggested by simple considerations encouraged by the twistorial picture. The wormhole throats connecting parallel MEs would define the basic building bricks of gravitons and would bring in a non-vanishing geometric temperature, an (extremely small but non-vanishing) graviton mass, and gravitonic entropy.

    1. The M4 projection of a CP2 type vacuum extremal is a random light-like curve rather than a geodesic of M4 (this gives rise to Virasoro conditions). The mass scale defined by the second fundamental form describing the acceleration is non-vanishing. I have indeed assigned to this scale the mixing of M4 and CP2 gamma matrices inducing the mixing of M4 chiralities which gives rise to massivation. The original proposal was that the trace of the second fundamental form could be identifiable as the classical counterpart of the Higgs field. One can speak of light-like randomness above a given length scale defined by the inverse of the length of the acceleration vector.

    2. This suggests a connection with p-adic mass calculations: the p-adic mass scale m_p is proportional to the acceleration and thus could be given by the geometric temperature: m_p = n R^-1 p^-1/2 ∼ hbar H = hbar a, where R ∼ 10^4 L_Pl is the CP2 radius and n some numerical constant of order unity. This would determine the mass scale of the particle and relate it to the momentum exchange along the corresponding CP2 type vacuum extremal. The local graviton mass scale at the flux tubes mediating the gravitational interaction would be essentially the geometric temperature.

    3. Interestingly, for photons at the flux tubes mediating Coulomb interactions in the hydrogen atom this mass scale would be of order

      hbar a ∼ e^2 hbar/[m_p n^4 a_0^2] ∼ 10^-5/n^4 eV ,

      which is of the same order of magnitude as the Lamb shift, which corresponds to a 10^-6 eV energy scale for the n=2 level of the hydrogen atom. Hence it might be possible to kill the hypothesis rather easily (a rough numerical check follows after this list).

    4. Note that the momentum exchange is space-like for the Coulomb interaction, and the trace H^k of the second fundamental form would be a space-like vector. It seems that one must define the mass scale as H = (-H^kH_k)^(1/2) to get a real quantity.

    5. This picture is in line with the view that also the bosons usually regarded as massless possess a small mass serving as an IR cutoff. This vision is inspired by zero energy ontology and twistorial considerations. The prediction that the Higgs is completely eaten by gauge bosons in the massivation is perhaps testable at LHC already during the year 2011.
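The order of magnitude in item 3 is easy to check. A minimal sketch (Python), assuming my reading of the formula: the acceleration is the Coulomb acceleration of the proton (mass m_p) at the Bohr orbit of radius n^2 a_0:

```python
import numpy as np

hbar, c, eV = 1.055e-34, 2.998e8, 1.602e-19
k_e2 = 2.307e-28     # e^2/(4*pi*eps0) in SI units, J*m
mp = 1.673e-27       # proton mass, kg
a0 = 5.292e-11       # Bohr radius, m

for n in (1, 2):
    a = k_e2 / (mp * (n**2 * a0)**2)   # Coulomb acceleration, m/s^2
    E = hbar * a / c / eV              # mass scale hbar*a in eV (with c=1 units)
    print(f"n = {n}:  hbar*a ~ {E:.1e} eV")
# n = 2 gives ~ 7e-6 eV, the same order as the Lamb shift scale for n = 2
```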

Remark: In the MOND theory of dark matter a critical value of acceleration is introduced. I do not personally believe in MOND, and TGD explains the galactic rotation curves without any modification of Newtonian dynamics in terms of dark matter assignable to cosmic strings containing galaxies around them like pearls in a necklace. In TGD framework the critical acceleration would be the acceleration below which the gravitational acceleration caused by the dark matter associated with the cosmic strings traversing the galactic plane orthogonally and behaving as 1/ρ overcomes the acceleration caused by the galactic matter and behaving as 1/ρ^2. Could this critical acceleration correspond to a critical temperature T_gr - presumably determined by an appropriate p-adic length scale and coming as a power 2^(-k/2) by the p-adic length scale hypothesis? Could a critical value of H perhaps characterize also a critical magnitude for the deformation from a minimal surface extremal? The critical acceleration in Milgrom's model is about 1.2×10^-10 m/s^2 and corresponds to a time scale of 10^12 years, which is of the order of the age of the Universe.

The formula contains the Planck constant, and the obvious question of an inhabitant of TGD Universe is whether the Planck constant can be identified with the ordinary Planck constant or with the effective Planck constant coming as an integer multiple of it (see this).

  1. For the ordinary value of hbar the gravitational Unruh temperature is extremely small. To make things more concrete one can express the Unruh temperature in the gravitational case in terms of the Schwarzschild radius r_S = 2GM at the Newtonian limit. This gives

    T_gr = (hbar/4π r_S) [(M+m)/M] (r_S/r)^2 .

    Even at the Schwarzschild radius the temperature corresponds to a Compton length of order 4π r_S for m << M.

  2. Suppose that the Planck constant is the gravitational Planck constant hbar_gr = GMm/v_0, where v_0 ≈ 2^-11 holds true for the inner planets in the solar system (see this). This would give

    T_gr = (m/8π v_0) [(M+m)/M] (r_S/r)^2 .

    The value is gigantic, so that one must assume that the temperature parameter corresponds to the minimum value of Planck constant (a numerical illustration of both options follows below). This conforms with the identification of the p-adic mass scale in terms of the geometric temperature.
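A minimal numerical sketch (Python) of the two options, assuming the Sun-Earth system at r = 1 AU and converting T_gr = (hbar/2π)a to Kelvin with the standard Unruh factor 1/(k_B c):

```python
import numpy as np

G, c, hbar, k_B = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23
M, m = 1.989e30, 5.972e24        # Sun and Earth masses, kg
r = 1.496e11                     # 1 AU, m

a = G * (M + m) / r**2           # Newtonian acceleration at 1 AU

T = hbar * a / (2 * np.pi * k_B * c)
print(f"ordinary hbar:      T_gr ~ {T:.1e} K")     # extremely small

hbar_gr = G * M * m / (2.0**-11 * c)               # hbar_gr = GMm/v_0, v_0 = 2^-11 c
T_gr = hbar_gr * a / (2 * np.pi * k_B * c)
print(f"gravitational hbar: T_gr ~ {T_gr:.1e} K")  # gigantic
```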

Gravitonic entropy

A good guess for the value of the gravitational entropy (the gravitonic entropy associated with the flux tube mediating the gravitational interaction) comes from the observation that it should be proportional to the flux tube length. The relationship dE = TdS suggests S ∝ φ_gr/T_gr as the first guess at the Newtonian limit. A better guess would be

S_gr = -V_gr/T_gr = [(M+m)/M] (mr/hbar) ,

where the replacement M → M+m appearing in the Newtonian equations of motion for the reduced mass has been performed to obtain symmetry with respect to the exchange of the masses.

The entropy would depend on the interaction mediated by the space-time sheet in question, which suggests the generalization

S = -V(r)/T_gr .

Here V(r) is the potential energy of the interaction. The sign of S depends on whether the interaction is attractive or repulsive and also on the sign of the temperature. For a repulsive interaction the entropy would be negative so that the state would be thermodynamically unstable in ordinary thermodynamics.

The integration of dE = TdS in the case of the Coulomb potential gives E = V(r) - V(0) for both options (a symbolic check follows). If the charge density near the origin is constant, one has V(r) proportional to r^2 in this region, implying V(0) = 0, so that one obtains the Coulombic interaction energy E = V(r). Hence the thermodynamical interpretation makes sense formally.
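The integration claim can be verified symbolically. A minimal sympy sketch, assuming V = -k/r and a temperature behaving as 1/r^2 away from the origin:

```python
import sympy as sp

r, k, c0 = sp.symbols('r k c0', positive=True)

V = -k / r       # Coulomb-like potential energy
T = c0 / r**2    # temperature parameter, T ~ 1/r^2
S = -V / T       # proposed entropy, S = -V/T, grows linearly in r

E = sp.integrate(T * sp.diff(S, r), r)   # integrate dE = T dS along r
print(sp.simplify(E - V))                # -> 0, i.e. E = V(r) up to a constant
```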

The challenge is to generalize the formula for the entropy in a Lorentz invariant and general coordinate invariant manner. Basically the challenge is to express the interaction energy in this manner. Entropy characterizes the entire flux tube and is therefore a non-local quantity; this justifies the use of the interaction energy in the formula. In principle the dynamics defined by the extremals of Kähler action predicts the dependence of the interaction energy on the Minkowskian length of the flux tube, which is well-defined in TGD Universe. Entropy should also be a scalar. This is achieved since the rest frame is fixed uniquely by the time direction defined by the time-like line connecting the tips of the CD: the interaction energy in the rest frame of the CD defines a scalar. Note that the sign of the entropy correlates with the sign of the interaction energy, so that the repulsive situation would be thermodynamically unstable; this suggests that matter antimatter asymmetry could relate to thermal instability.

See the article TGD inspired vision about entropic gravity. For background see the chapter TGD and GRT.



Possible role of Beltrami flows and symplectic invariance in the description of gauge and gravitational interactions

One of the most recent observations made by people working with twistors is the finding of Monteiro and O'Connell described in the preprint The Kinematic Algebra From the Self-Dual Sector. The claim is that one can obtain supergravity amplitudes by replacing the color factors with kinematic factors which formally obey a 2-D symplectic algebra defined in the plane spanned by the light-like momentum direction and a complexified variable in the plane defined by the polarizations. One could say that the momentum and polarization dependent kinematic factors are in exactly the same role as the factors coming from Yang-Mills couplings. Unfortunately, the symplectic algebra looks like a rather formal object since the first coordinate is a light-like coordinate and the second a complex transverse coordinate. It could make sense only in the complexification of Minkowski space.

In any case, this would suggest that the gravitational gauge group (to be distinguished from diffeomorphisms) is a symplectic group of some kind, having enormous representative power as we know from the fact that the symmetries of practically any physical system are realized in terms of symplectic transformations. According to the authors one can identify the Lie algebra of the symplectic group of the sphere with that of SU(N) at the large N limit in a suitable basis. What makes this interesting is that at the large N limit the non-planar diagrams, which are the problem of the twistor Grassmann approach, vanish: this is an old result of 't Hooft, which initiated the developments leading to the AdS/CFT correspondence.

The symplectic group of δM4+/-× CP2 defines the isometry algebra of WCW and I have proposed that the effective replacement of the gauge group with this group implies the vanishing of non-planar diagrams (see this). The extension of SYM to a theory including also gravitation in TGD framework could make Yangian symmetry exact, resolve the infrared divergences, and solve the problems caused by non-planar diagrams. It would also imply a stringy picture in finite measurement resolution. Also the construction of non-commutative homology and cohomology in TGD framework led to the lifting of Galois group algebras to their braided variants realized as symplectic flows, and to the conjecture that in finite measurement resolution the cohomology obtained in this manner represents WCW ("world of classical worlds") spinor fields, or at least something very essential about them (see this).

It is however difficult to understand how one could generalize the symplectic structure so that symplectic transformations involving the light-like coordinate and the complex coordinate of the partonic 2-surface would make sense. In fact, a more natural interpretation for the kinematic algebra would be in terms of volume preserving flows which are also Beltrami flows (see for instance this). This gives a connection with quantum TGD, since Beltrami flows define a basic dynamical symmetry for the preferred extremals of Kähler action in what might be called the Maxwellian phase.

  1. Classical TGD is defined by Kähler action, which is the analog of Maxwell action with the Maxwell field expressed as the projection of the CP2 Kähler form. The field equations are extremely non-linear and only the second, topological half of Maxwell equations is satisfied. The remaining equations state conservation laws for various isometry currents. Actually much more general conservation laws are obtained.

  2. As a special case one obtains solutions analogous to those of Maxwell equations, but there are also other objects such as CP2 type vacuum extremals providing correlates for elementary particles and string like objects: for these solutions it does not make sense to speak about QFT in Minkowski space-time. For the Maxwell like solutions linear superposition is lost, but a superposition holds true for solutions with the same local direction of polarization and massless four-momentum. This is a very quantal outcome (in accordance with quantum classical correspondence) since also in quantum measurement one obtains a final state with fixed polarization and momentum. So called massless extremals (topological light rays), analogous to wave guides containing a laser beam and its phase conjugate, are solutions of this kind. The solutions are very interesting since no dispersion occurs, so that a wave packet preserves its form and the radiation is precisely targeted.

  3. Maxwellian preferred extremals decompose in Minkowskian space-time regions to regions that can be regarded as classical space-time correlates for massless particles. Massless particles are characterized by a polarization direction and a light-like momentum direction. Now these directions can depend on position and are characterized by the gradients of two scalar functions Φ and Ψ. Φ defines the light-like momentum direction and the square of the gradient of Φ in the Minkowski metric must vanish. Ψ defines the polarization direction and its gradient is orthogonal to the gradient of Φ since the polarization is orthogonal to the momentum.

  4. The flow has the additional property that the coordinate associated with the flow lines integrates to a global coordinate. Beltrami flow is the term used by mathematicians. The Beltrami property means that the condition j ∧ dj = 0 is satisfied. In other words, the current is in the plane defined by its exterior derivative. The above representation obviously guarantees this. The Beltrami property allows one to assign to the flow an order parameter depending only on the parameter varying along the flow line.

    This is essential for the hydrodynamical interpretation of the preferred extremals, which relies on the idea that various conservation laws hold along flow lines. For instance, a super-conducting phase requires this kind of flow, and the velocity along the flow line is the gradient of the order parameter. The breakdown of super-conductivity would mean topologically the loss of the Beltrami flow property. One might say that the space-time sheets in TGD Universe represent analogs of supra flow, and this property is spoiled only by the finite size of the sheets. This strongly suggests that the space-time sheets correspond to perfect fluid flows with a very low viscosity to entropy ratio, and one application is to the observed perfect flow behavior of the quark gluon plasma.

  5. The current J = Φ∇Ψ has vanishing divergence if, besides the orthogonality of the gradients, the functions Ψ and Φ satisfy the massless d'Alembert equation. This is natural for massless field modes, and when these functions represent constant wave vector and polarization, also the d'Alembert equations are satisfied (a symbolic check follows below). One can actually add to ∇Ψ the gradient of an arbitrary function of Φ: this corresponds to U(1) gauge invariance and to the addition of a vector parallel to the light-like four-momentum to the polarization vector. One can also replace Φ by any function of Φ, so that one has an Abelian Lie algebra analogous to a U(1) gauge algebra restricted to functions depending on Φ only.
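The divergence-free property is easy to verify in the simplest real case. A minimal sympy sketch, assuming the plane-wave choice of the next paragraph: φ = k·m with light-like k = (1,0,0,1) and Ψ = ε·m with ε = (0,1,0,0), so that k·ε = 0:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = (t, x, y, z)
eta = sp.diag(1, -1, -1, -1)    # Minkowski metric, signature (+,-,-,-)

phi = t - z                     # phi = k.m for light-like k = (1,0,0,1)
Psi = x                         # Psi = eps.m for polarization eps = (0,1,0,0)
Phi = sp.cos(phi)

# Contravariant current J^mu = Phi * eta^{mu nu} d_nu Psi
J = [Phi * sum(eta[mu, nu] * sp.diff(Psi, X[nu]) for nu in range(4))
     for mu in range(4)]
print(sum(sp.diff(J[mu], X[mu]) for mu in range(4)))   # -> 0
```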

The general Beltrami flow gives as a special case the kinematic flow associated by Monteiro and O'Connell with plane waves. For an ordinary plane wave with constant direction of the momentum vector and the polarization vector one could take Φ = cos(φ), φ = k·m, and Ψ = ε·m. This would give a real flow. The kinematical factor in SYM diagrams corresponds to a complexified flow Φ = exp(iφ) and Ψ = φ + w, where w is a complex coordinate for the polarization plane or, more naturally, the complexification of the coordinate in the polarization direction. The flow is not unique since gauge invariance allows one to modify the φ term. The complexified flow is volume preserving only in the formal algebraic sense and satisfies the analog of the Beltrami condition only in Dolbeault cohomology, where d is identified as the complex exterior derivative (df = (df/dz)dz for holomorphic functions). In ordinary cohomology it fails. This formal complex flow of course does not define a real diffeomorphism at the space-time level: one should replace Minkowski space with its complexification to get a genuine flow.

The finding of Monteiro and O'Connell encourages one to think that the proposed more general Abelian algebra pops up also in non-Abelian YM theories. Discretization by braids would actually select a single polarization and momentum direction. If the volume preserving Beltrami flows characterize the basic building bricks of the radiation solutions of both general relativity and YM theories, it would not be surprising if the kinematic Lie algebra generators appeared in the vertices of YM theory and replaced the color factors in the transition from YM theory to general relativity. In TGD framework the construction of vertices at partonic two-surfaces would define the local kinematic factors as effectively constant ones.

For background see the chapter Basic Extremals of Kähler Action.



MOND and TGD

Sean Carroll writes about the breakdown of classical gravity in Cosmic Variance. Recall that the galactic dark matter problem arose with the observation that the velocity spectrum of distant stars is constant rather than falling off as 1/r^(1/2), as Newton's law predicts if most of the mass is in the galactic center.

The MOND theory and its variants predict that there is a critical acceleration below which Newtonian gravity fails. This would mean that Newtonian gravitation is modified at large distances. String models and also TGD predict just the opposite since in this regime General Relativity should be a good approximation.

  1. The 1/r^2 force would transform to a 1/r force at some critical acceleration of about a = 10^-10 m/s^2: this is a fraction of about 10^-11 of the gravitational acceleration at the Earth's surface.

  2. What Sean Carroll wrote about was an empirical study giving support for this kind of transition in the dynamics of stars at large distances, and therefore for the breakdown of Newtonian gravity in the manner suggested by MOND like theories.

In TGD framework a critical acceleration is predicted, but the recent experiment does not force one to modify Newton's laws. Since Big Science is like market economy in the sense that funding is more important than truth, the attempts to communicate the TGD based view about dark matter have turned out to be hopeless. A Serious Scientist does not read anything not written on silk paper.

  1. One manner to produce this spectrum is to assume a density of dark matter such that the mass inside a sphere of radius R is proportional to R at large distances. The decay products of ideal cosmic strings would predict this. The value of the string tension is predicted correctly by TGD using the constraint that p-adic mass calculations give the electron mass correctly.

  2. One could also assume that galaxies are distributed along a cosmic string like pearls in a necklace. The mass of the cosmic string would predict the correct value for the velocity of distant stars. In the ideal case there would be no dark matter outside these cosmic strings.

    1. The difference with respect to the first mechanism is that in this case the gravitational acceleration would vanish along the direction of the string, and the motion along it would be free. The prediction is that this kind of motion takes place along the observed linear structures formed by galaxies and also along larger structures.

    2. An attractive assumption is that dark matter corresponds to phases with a large value of Planck constant concentrated at magnetic flux tubes. Holography would suggest that the density of the magnetic energy is just the density of the matter condensed at the wormhole throats associated with the topologically condensed cosmic string.
    3. Cosmic evolution modifies the ideal cosmic strings: their Minkowski space projection gets gradually thicker and their energy density - magnetic energy - characterized by the string tension could be affected.

The TGD option differs from MOND in some respects and it is possible to test empirically which option is nearer to the truth.

  1. The transition at the same critical acceleration is predicted universally by this option for all systems - now stars - with a given mass scale, if they are distributed along cosmic strings like pearls in a necklace. The gravitational acceleration due to the necklace simply wins the gravitational acceleration due to the pearl. Fractality encourages one to think like this.

  2. The critical acceleration predicted by TGD depends on the mass scale as a ∝ GT^2/M, where T is the string tension and M is the mass of the object - now a star. Since the recent study considers only stars with solar mass, it does not allow one to choose between MOND and TGD, and Newton can continue to rest in peace in TGD Universe. Only a study using stars with different masses would allow one to compare the predictions of MOND and TGD and kill either option or both. A second test distinguishing between MOND and TGD is the prediction of large scale free motions by the TGD option.

The TGD option also explains other strange findings of cosmology.

  1. The basic prediction is large scale motions of dark matter along cosmic strings. The characteristic length and time scales of the dynamics are scaled up by the scaling factor of hbar. This could explain the observed large scale motion of galaxy clusters - the dark flow - assigned with dark matter, which is in conflict with the expectations of standard cosmology.

  2. Cosmic strings could also relate to the strange relativistic jet like structures meaning correlations between very distant objects. Universe would be a spaghetti of cosmic strings around which matter is concentrated.

  3. The TGD based model for the final state of a star actually predicts the presence of a string like object defining a preferred rotation axis. The beams of light emerging from supernovae would be preferentially directed along these lines - actually magnetic flux tubes. The same would apply to the gamma ray bursts from quasars, which would not be distributed evenly in all directions but would be like laser beams along cosmic strings.

For more about TGD based vision about cosmology and astrophysics see the chapter TGD and Astrophysics.



Entropic gravity in TGD framework

I discussed the entropic gravity of Verlinde some time ago in a rather critical spirit, but made also clear that quantum TGD in the framework of zero energy ontology could be called a square root of thermodynamics, so that thermodynamics - or its square root - should emerge at the level of the lines of generalized Feynman diagrams. The intolerable-to-me features of the entropic gravity idea are the claimed absence of gravitons and the nonsense talk about the emergence of dimensions while assuming at the same time the basic formulas of general relativity.

I returned to the topic later with a boost given by one of the few people in the Finnish academic establishment who have regarded me as a life form showing some indications of genuine intelligence. What demonstrates the power of a good idea is that just posing some naturally occurring questions led rapidly to a TGD inspired phenomenology of EG, allowing one to see what is good and what is bad in the EG hypothesis and also to see possible far reaching connections with apparently completely unrelated basic problems of present-day physics.

Consider first the phenomenology of EG in TGD framework.

  1. Gravitating bodies can be seen as sources of virtual and real gravitons propagating along flux tubes. The gravitons at the flux tubes are thermalized and thus characterized by temperature and entropy when the wavelength is much shorter than the distance between the source and the receiver. One can say that the massive object serves as a heat source. One could also say that the pair of bodies connected by the flux tubes serves as a heat source for the flux tubes, with the temperature determined by the reduced mass, so that there is a complete symmetry between the two bodies.

  2. The natural expressions for the gravitonic entropy of a flux tube - proportional to the length of the flux tube at a given "holographic screen" - and for the gravitonic temperature - proportional to the inverse of the distance squared in the absence of other heat sources, as follows from the standard Laplace equation - are consistent with their forms at the non-relativistic limit discussed by Sabine Hossenfelder in a very transparent manner (see the sketch after this list). In the general case, the stringy slicing for the preferred extremals of Kähler action provides the preferred coordinates in which the gravitational potential and the counterpart of the radial coordinate can be identified.

  3. EG generalizes to all interactions, but negative temperatures pose a severe problem. This in turn suggests a direct connection with matter-antimatter asymmetry. Could thermally stable matter and antimatter correspond in zero energy ontology to different arrows of geometric time and therefore appear in different space-time regions? I have asked this question also earlier, but with a motivation coming directly from the formalism of quantum TGD.
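
To make the scaling claims above concrete, below is a minimal numerical sketch, assuming - as in standard discussions of the non-relativistic limit of entropic gravity - an Unruh-type temperature for the Newtonian acceleration a = GM/r^2; the proportionality of the gravitonic entropy to the flux tube length is the TGD-specific input, and the constant sigma is a free parameter of the illustration, not something fixed by the text:

    import math

    G, c, hbar, kB = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23  # SI units

    def graviton_temperature(M, r):
        # Unruh-type temperature for a = G*M/r^2: the 1/r^2 scaling is the point.
        a = G * M / r**2
        return hbar * a / (2 * math.pi * c * kB)

    def flux_tube_entropy(length, sigma=1.0):
        # Gravitonic entropy taken proportional to the flux tube length;
        # sigma, the entropy per unit length, is left unspecified here.
        return sigma * length

    # The temperature drops by a factor of four when the distance doubles.
    M_sun, AU = 1.989e30, 1.496e11
    print(graviton_temperature(M_sun, AU), graviton_temperature(M_sun, 2 * AU))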

This approach leads to the question whether the mathematical formalism of quantum TGD could, when appropriately modified, make sense also in General Relativity. In particular, do the notions of zero energy ontology and causal diamond and the identification of generalized Feynman diagrams as space-time regions of Euclidian signature of the metric make sense? Does the Kähler geometry for the world of classical worlds (WCW), realizing holography in a strong sense, lead to a formulation of GRT as an almost topological QFT characterized by Chern-Simons action with a constraint depending on the metric?

  1. Einstein-Maxwell theory generalizes Kähler action, the conditions guaranteeing the reduction of the action to a 3-D "boundary term" are realized automatically by Einstein-Maxwell equations, and the weak form of electric-magnetic duality leads to Chern-Simons action.

  2. One distinction between GRT and TGD is the possibility in TGD of space-time regions of Euclidian signature of the induced metric, representing the lines of generalized Feynman diagrams. The deformations of CP2 type vacuum extremals with Euclidian signature of the induced metric represent these lines and replace black holes in TGD Universe. Black hole horizons are big particles and are suggested to possess a gigantic effective value of Planck constant for which the Schwarzschild radius is essentially the Compton length for the gravitational Planck constant, so that the black hole becomes indeed a particle in the quantum sense. Black holes represent dark matter in the TGD sense.

  3. CP2 type vacuum extremals are solutions of Einstein's equations with a unique value of the cosmological constant fixing the CP2 radius, and this constant can be non-vanishing only in regions of Euclidian signature. The average value of the cosmological constant would be proportional to the ratio of the three-volume of the Euclidian regions to the whole volume of 3-space and therefore very small. Could this be equivalent with the smallness of the actual cosmological constant? To answer the question one should understand the interaction between the Euclidian and Minkowskian regions. I have proposed alternative manners to understand the apparent cosmological constant in TGD Universe. Negative pressure could be understood in terms of the magnetic energy of magnetic flux tubes. On the other hand, the quantum critical cosmology replacing inflation in TGD framework, characterized by a single parameter - its duration - corresponds to "negative pressure". These explanations need not be mutually exclusive.

At the formal level the formalism for WCW Kähler geometry generalizes as such to an almost topological quantum field theory, but the conditions of mathematical existence are extremely powerful, and the conjecture is that existence requires the sub-manifold property.

  1. The number of physically allowed space-times is much larger in GRT than in the TGD framework, and this leads to space-times with over-critical and arbitrarily large mass density and to other problems plaguing GRT. M-theory exponentiates the problem and leads to the landscape misery. The natural conjecture is that one cannot do without assuming that physically acceptable metrics are representable as surfaces in M4× CP2.

  2. CP2 type regions give rise to electroweak quantum numbers and Minkowskian regions to four-momentum and spin. This almost gives the standard model quantum numbers just from the Einstein-Maxwell system! It is however far from clear whether one obtains both of them at the wormhole throats between the Minkowskian and Euclidian regions (perhaps from the representations of super-conformal algebras associated with light-like 3-surfaces by their geometric 2-dimensionality). Since both are needed, it seems that one must replace abstract geometry with sub-manifold geometry. Also electroweak spin is obtained naturally only if the spinors are induced spinors of the 8-D imbedding space rather than 4-D spinors, for which also the existence of spinor structure poses problems in the general case.

For more details see the chapter TGD and GRT of "Physics in Many-Sheeted Space-time". See also the article TGD inspired vision about entropic gravitation.



Can graviton have mass?

Both Sean Carroll and Lubos report that LIGO has not detected gravitational waves from black holes with masses in the range 25-100 solar masses. This conforms with theoretical predictions. Earlier searches for gravitational waves from supernovae also gave a null result: in this case the searches are already at the boundaries of resolution, so that one can start to worry.

The orbital decay of the Hulse-Taylor binary is consistent with the emission of gravitational waves at the predicted rate, so that it seems that gravitons are emitted. One can however ask whether gravitational waves might remain undetected for some reason.

Massive gravitons are the first possibility. For a nice discussion see the article of Goldhaber and Nieto, giving in their conclusions a table summarizing upper bounds on the graviton mass coming from various arguments involving model dependent assumptions. The problem is that it is not at all clear what a massive graviton means and whether a simple Yukawa like behavior (exponential damping) for the Newtonian gravitational potential is consistent with general coordinate invariance. In the case of massive photons one has a similar problem with gauge invariance. One can of course naively assume Yukawa like behavior for the Newtonian gravitational potential and derive lower bounds for the Compton wavelength of the graviton. The bound is given by λc > 100 Mpc (a parsec (pc) is about 3.26 light years).
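
For orientation, a Compton wavelength bound translates into a graviton mass bound via m_g·c^2 = hbar·c/λc. A minimal sketch, whose output (about 6×10^-32 eV for λc = 100 Mpc) is of course only as meaningful as the Yukawa assumption behind the bound:

    pc = 3.086e16              # parsec in meters
    hbar_c = 1.973e-7          # hbar*c in eV*m

    def graviton_mass_bound_eV(lambda_c_Mpc):
        # Yukawa-type bound: m_g*c^2 < hbar*c/lambda_c.
        return hbar_c / (lambda_c_Mpc * 1e6 * pc)

    print(graviton_mass_bound_eV(100))   # ~6e-32 eV for lambda_c > 100 Mpc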

A second bound comes from pulsar timing measurements. The photons emitted by the pulsar are assumed to surf in the sea of gravitational waves created by the pulsar. If gravitons are massive in the Yukawa sense, they arrive with velocities below the light velocity, and a dispersion of graviton and photon arrival times is predicted. This gives a much weaker lower bound λc > 1 pc. Note that the distance of the Hulse-Taylor binary is 6400 pc, so that a graviton mass still allowed by this weaker bound could explain the possible absence of gravitational waves from the Hulse-Taylor binary. There are also other bounds on the graviton mass, but all are plagued by model dependent assumptions.
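
A back-of-the-envelope version of the dispersion argument, assuming only special relativity with E = hbar·ω and m_g = hbar/(λc·c): a graviton of frequency ω propagates with the velocity

    v/c = (1 - (m_g·c^2/E)^2)^(1/2) ≈ 1 - (1/2)·(c/(λc·ω))^2,

so that over a distance D it lags behind a photon by

    Δt ≈ (D/2c)·(c/(λc·ω))^2.

The lower the frequency of the gravitational wave relative to c/λc, the larger the lag, which is what makes timing measurements sensitive to the graviton mass.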

Also in TGD framework one can imagine explanations for the possible absence of gravitational waves. I have discussed the possibility that gravitons are emitted as dark gravitons with a gigantic value of hbar, which eventually decay to bunches of ordinary gravitons, meaning that a continuous stream of gravitons is replaced with bursts, which would not be interpreted in terms of gravitons but as noise (see this).

One of the breakthroughs of the last year was related to the twistor approach to TGD in zero energy ontology (ZEO).

  1. This approach leads to the vision that all building blocks (light-like wormhole throats) of physical particles - including virtual particles and also string like objects - are massless. On mass shell particles are bound states of massless particles, but virtual states do not satisfy the bound state constraint, and because negative energies are possible, also space-like virtual momenta are possible.

  2. Massive physical particles are identified as bound states of massless wormhole throats: since the three-momenta can have different (as a special case, opposite) directions, the bound states of light-like wormhole throats can indeed be massive (see the kinematic note after this list).

  3. The masslessness of the fundamental objects saves from problems with gauge invariance and general coordinate invariance. It also makes it possible to apply the twistor formalism, implies the absence of UV divergences, and yields an enormous simplification of the generalized Feynman diagrammatics, since mass shell constraints are satisfied at the lines besides momentum conservation at the vertices.

  4. A simple argument forces one to conclude that all spin one and spin two particles - in particular the graviton - identified in terms of multi-wormhole throat states must have an arbitrarily small but non-vanishing mass. The resulting physical IR cutoff guarantees the absence of IR divergences. This allows one to preserve the exact Yangian symmetry of the M-matrix. One implication is that the photon eats the TGD counterpart of the neutral Higgs and that only the pseudoscalar counterpart of the Higgs survives. The scalar counterparts of gluons suffer the same fate whereas their pseudoscalar partners would survive.
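
That bound states of light-like throats can be massive is elementary kinematics. As a minimal check (with c = 1), consider two light-like momenta p_1 = E_1(1, n_1) and p_2 = E_2(1, n_2) with unit three-momentum directions n_1 and n_2:

    M^2 = (p_1 + p_2)^2 = 2p_1·p_2 = 2E_1E_2(1 - n_1·n_2).

This vanishes only for parallel three-momenta and is maximal for exactly opposite directions (n_1·n_2 = -1), in which case the state with E_1 = E_2 is at rest with mass 2E_1.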

Is the massivation of gauge bosons and gravitons in this sense consistent with the Yukawa type behavior?

  1. The first thing to notice is that this massivation would be essentially a non-local quantal effect, since the emitter and the receiver both emit and receive light-like momenta. Therefore the description of the massivation in terms of a Yukawa potential and ordinary QFT might well be impossible or at best only an approximation.

  2. If the massive gauge bosons (gravitons) correspond to a wormhole throat pair (a pair of such pairs) such that the three-momenta are light-like but in exactly opposite directions, no Yukawa type screening or velocity dispersion should take place.

  3. If the three-momenta are not exactly opposite, as is possible in quantum theory, Yukawa screening could take place, since the classical center of mass velocity calculated from the total momentum of a massive particle is smaller than the maximal signal velocity. The massivation of the intermediate gauge bosons and the fact that the Yukawa potential description works satisfactorily for them support this interpretation.

  4. If the space-time sheets mediating the gravitational interaction have gigantic values of the gravitational Planck constant, the Compton length of the graviton is scaled up dramatically, so that screening would be absent but velocity dispersion would remain. This leaves open the possibility that gravitons from the Hulse-Taylor binary could reveal the velocity dispersion if they are detected some day.

For details about large hbar gravitons see the chapter Quantum Astro-Physics. For the twistor approach to TGD see the chapter Yangian Symmetry, Twistors, and TGD of "Towards M-Matrix".



The challenge of six planets

NASA has published the first list of exoplanets found by the Kepler satellite. In particular, the NASA team led by Jack Lissauer reports the discovery of a system of six closely packed planets (see the article in Nature) around a Sunlike star christened Kepler-11a, located in the direction of the constellation Cygnus at a distance of about 2000 light years. The basic data about the six planets Kepler-11i, i=b,c,d,e,f,g, and the star Kepler-11a can be found in Wikipedia. Below I will refer to the star as Kepler-11 and to the planets with the labels i=b,c,d,e,f,g.

Lissauer regards it as quite possible that there are further planets at larger distances. The fact that the orbital radius of planet g is only .462 AU, together with what we know about the solar system, suggests that this could be the case. This leaves the door open for an Earth like planet.

The conclusions from the basic data

Let us list the basic data.

  1. The radius, mass, and surface temperature of Kepler-11 are very near to those of the Sun.

  2. The orbital radii using AU as the unit are given by
    (.091, .106, .159, .194, .250, .462).
    The orbital radii can be deduced quite accurately from the orbital periods by using Kepler's third law, stating that the squares of the periods are proportional to the cubes of the orbital radii (see the sketch after this list). The orbital periods of the five inner planets are between 10 and 47 days whereas g has a longer period of 118.37774 days (note the amazing accuracy). The orbital radii of e and f are .194 AU and .250 AU, so that the temperature is expected to be much higher than at Earth and life as we know it is not expected to be there. The radiative energy flux from Kepler-11, scaling as 1/r^2, would at these radii be 16-27 times that at Earth, so that the black body equilibrium temperature, scaling as 1/r^(1/2), would be roughly twice that at Earth. The fact that gas forms a considerable fraction of the planet's mass could however mean that this does not give a good estimate for the temperature of the planet.

  3. The mass estimates using Earth mass as the unit are
    (4.3, 13.5, 6.1, 8.4, 2.3, <300).
    There are considerable uncertainties involved here, of the order of a factor of two.

  4. The estimates for the radii of the planets using the radius of Earth as the unit are
    (1.97, 3.15, 3.43, 4.52, 2.61, 3.66).
    The uncertainties are about 20 per cent.

  5. From the radius and mass estimates one can conclude that the densities of the planets are considerably lower than that of Earth. The density of (e,f) is about (1/8,1/4) of that of Earth. The surface gravitation for e and f is roughly 1/2 of that at Earth. For g it would be the same as for Earth if g had a mass of roughly m ≈ 15 Earth masses. Since for g only the upper bound 300 is available, one can only say that its surface gravity is weaker than about 20g.
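
As a cross-check of the quoted radii, here is a minimal sketch deducing orbital radii from the periods via Kepler's third law, taking the stellar mass to be exactly solar (the text states only that it is very near to the solar value):

    import math

    G, M_sun, AU, day = 6.674e-11, 1.989e30, 1.496e11, 86400.0  # SI units

    def orbital_radius_AU(period_days, M=M_sun):
        # Kepler's third law: a^3 = G*M*T^2/(4*pi^2).
        T = period_days * day
        return (G * M * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0) / AU

    for T in (10.0, 47.0, 118.37774):
        print(T, round(orbital_radius_AU(T), 3))

The periods of 10 and 47 days reproduce 0.091 AU and 0.255 AU, bracketing the five inner planets, and the period of g gives 0.472 AU, reasonably close to the quoted .462 AU and consistent with a stellar mass slightly below the solar value.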

The basic conclusions are the following. One cannot exclude the possibility that the planetary system contains Earth like planets. Furthermore, the distribution of the orbital radii of the planets differs dramatically from that in the solar system.

How to understand the tight packing of the inner planets?

The striking aspect of the planetary system is how tightly packed it is: the ratio of the orbital radii of g and b is only about 5. This is a real puzzle for model builders, me included. TGD suggests three phenomenological approaches.

  1. The Titius-Bode law
    r(n) = r0 + r1·2^n
    is supported by the p-adic length scale hypothesis. Stars would have an onion-like structure consisting of spherical shells, with the inner and outer radii of a shell differing by a factor of two. The formation of the planetary system involves condensation of matter to planets at these spherical shells. The preferred extremals of Kähler action describing a stationary axially symmetric system correspond to spherical shells containing most of the matter. A rough model for the star would be in terms of this kind of spherical shells forming an onion-like structure and defining a hierarchy of space-time sheets topologically condensed on each other. The value of the parameter r0 could be vanishing in the initial situation, but subsequent gravitational dynamics could make it positive, reducing the ratio r(n)/r(n-1) from its value 2.

  2. Bohr orbitology is suggested by the proposal that the gravitonic space-time sheets assigned with a given planet-star pair correspond to a gigantic value of the gravitational Planck constant given by
    hbar_gr = GMm/v0,
    where v0 has dimensions of velocity and is actually equal to the orbital velocity for the lowest Bohr orbit. For the inner planets in the solar system one has v0/c ≈ 2^(-11).

    The physical picture is that visible matter concentrates around dark matter and in doing so makes the astroscopic quantum behavior visible. The model is extremely predictive since the spectrum of orbital radii would depend only on the mass of the star, and planetary systems would be much like atoms, with obvious implications for the probability of Earth like systems supporting life (a numerical check is sketched after this list). This model is consistent with the Titius-Bode model only if the Bohr orbitology is a late-comer in the planetary evolution.

  3. The third model is based on the same general assumptions as the second one but only assumes that dark matter in astrophysical length scales is associated with anyonic 2-surfaces (with light-like orbits in the induced metric, in accordance with holography) characterized by the value of the gravitational Planck constant. In this case the hydrogen atom inspired Bohr orbitology is just a first guess and cannot be taken too seriously. What would be important would be the genuinely quantal dynamics for the formation of the planetary system.
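
A minimal numerical check of the Bohr orbitology is the following sketch; the hydrogen-like radius formula r_n = n^2·GM/v0^2 used in it follows from hbar_gr = GMm/v0 (the planet mass m drops out) but is an input of the illustration rather than a quotation from the text:

    G, c, M_sun, AU = 6.674e-11, 2.998e8, 1.989e30, 1.496e11  # SI units

    def bohr_radius_AU(n, M=M_sun, v0=2.998e8 * 2**-11):
        # Gravitational Bohr radius r_n = n^2*G*M/v0^2 with v0/c = 2^(-11).
        return n * n * G * M / v0**2 / AU

    for n, name, r_obs in [(3, "Mercury", 0.39), (4, "Venus", 0.72), (5, "Earth", 1.00)]:
        print(name, round(bohr_radius_AU(n), 3), "vs", r_obs)

This reproduces the inner solar system reasonably well (0.373, 0.663, and 1.036 AU for n = 3, 4, 5). For Kepler-11 the n=1 radius .462/4 ≈ .116 AU then requires a v0 smaller than the solar value by a factor of about 4/7, i.e. a correspondingly larger hbar_gr.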

Can one interpret the radii in this framework in any reasonable manner?

  1. Titius-Bode predicts
    [r(n)-r(n-1)]/[r(n-1)-r(n-2)] = 2
    and works excellently for c, f, and g. For b, d and e the law fails. This suggests that the four inner planets b, c, d, e, whose orbital radii span a single 2-adic octave in good approximation (!), correspond to a single system which has split from a single planet or will fuse to a single planet in the distant future.

  2. Hydrogenic Bohr orbitology works only if g corresponds to the n=2 orbit. The n=1 orbit would have the radius .116 AU. From the proportionality r ∝ hbar_gr^2 ∝ 1/v0^2 one obtains that one must have

    R == v0^2(Sun)/v0^2(Kepler) = 3.04.

    A reasonable approximation results from v0(Sun)/v0(Kepler) = 7/4 (note that the values of the Planck constant are predicted to be integer multiples of the standard value), giving R = (7/4)^2 ≈ 3.06.

    Note that the Kepler-11 planets would correspond to orbits missing in the Sun's planetary system, for which one has n=3, 4, 5 for the inner planets Mercury, Venus, and Earth.

    One could argue that Bohr orbits result as the planets fuse to two planets at these radii. This picture is not consistent with the Titius-Bode law, which predicts three planets in the final situation unless the n=2 planet remains unrealized. By looking at the graphical representation of the orbital radii of the planetary system one has a tendency to say that b, c, d, e, and f form a single subsystem and could eventually collapse to a single planet. The ratio of the gravitational forces between g and f is larger than that between f and e for m(g) > 6mE, so that one can ask whether f could eventually be caught by g in this case. Also the fact that one has r(g)/r(f) < 2 mildly suggests this.

For background see the chapter TGD and Astrophysics.


