What's new in
Physics in Many-Sheeted Space-Time
Note: Newest contributions are at the top!
In an earlier posting, Comparison of Maxwellian and TGD views about classical gauge fields, I compared the Maxwellian and TGD based notions of classical fields. The comparison was restricted to the linear superposition of fields, which in the TGD framework is replaced by linear superposition of the effects of classical fields: the space-time sheets representing classical fields are indeed separate and it is enough that their M4 projections intersect. Summation of effects means at the classical level summation of the forces caused by them, due to the fact that particles have topological sum contacts to both space-time sheets. The notion of hologram relies crucially on the superposition of fields, and this forces a reformulation of the notion of hologram in the TGD framework.
In the TGD inspired theory of consciousness the idea of a living system as a conscious hologram is central. It is of course far from clear what this notion means. Since the notions of interference and superposition of fields are crucial for the description of the ordinary hologram, the proposed general description of the TGD counterpart of the superposition of fields is a natural starting point for a more precise formulation of the notion of conscious hologram. In the following only the notion of conscious hologram is discussed. Also the formulation of the notion of ordinary hologram in the TGD framework is an interesting challenge.
One can imagine several applications in TGD inspired quantum biology.
In TGD Universe gauge fields are replaced with topological field quanta. Examples are topological light rays, magnetic flux tubes and sheets, and electric flux quanta carrying both magnetic and electric fields. Flux quanta form a fractal hierarchy in the sense that there are flux quanta inside flux quanta. It is natural to assume quantization of Kähler magnetic flux. Braiding and reconnection are basic topological operations for flux quanta.
One important example is the description of non-perturbative aspects of strong interactions in terms of reconnection of color magnetic flux quanta carrying magnetic monopole fluxes. These objects are string like structures and one can indeed assign string world sheets to them. Transitions in which the thickness of a flux tube increases are central in TGD inspired cosmology and astrophysics: flux conservation then implies that part of the magnetic energy is liberated unless the length of the flux quantum increases. The magnetic energy of the flux quantum is interpreted as dark energy and the magnetic tension as a negative "pressure" causing accelerated expansion.
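The energy bookkeeping behind this claim can be illustrated with a minimal numerical sketch (the SI treatment of a thin tube with a homogeneous field is a simplifying assumption of this sketch, not part of TGD proper): with the flux Φ = B·πr² conserved, the magnetic energy of a tube of fixed length scales as 1/r², so thickening liberates energy unless the tube also lengthens.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (SI units)

def tube_energy(flux, radius, length):
    """Magnetic energy of a flux tube with conserved flux:
    E = (B^2 / 2 mu0) * volume, with B = flux / (pi r^2)."""
    B = flux / (math.pi * radius**2)
    volume = math.pi * radius**2 * length
    return B**2 / (2 * MU0) * volume

flux, L = 1e-3, 1.0              # illustrative values (Wb, m)
E1 = tube_energy(flux, 1e-3, L)  # original radius
E2 = tube_energy(flux, 2e-3, L)  # radius doubled, flux and length fixed
# Since E ~ 1/r^2 at fixed flux and length, the ratio is 1/4:
# three quarters of the magnetic energy must be liberated.
print(E2 / E1)
```

Doubling the radius at fixed flux and length thus quarters the magnetic energy; in the TGD picture the difference goes to radiation and matter.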
This picture is beautiful and extremely general but raises challenges. How to describe interference and linear superposition for classical gauge fields in terms of topologically quantized classical fields? How is the interference and superposition of Maxwellian magnetic fields realized in a situation where magnetic fields decompose into flux quanta? How to describe simple systems, such as a solenoidal current generating a constant magnetic field, using the language of flux quanta?
Superposition of fields in terms of flux quanta
The basic question concerns an elegant description of the superposition of classical fields in terms of topological field quanta. What does it mean for magnetic fields to superpose?
Time varying magnetic fields described in terms of flux quanta
An interesting challenge is to describe time dependent fields in terms of topological field quanta, which are in many respects static structures (for instance, the flux is constant). The magnetic fields created by time dependent currents serve as a good example from which one can generalize. In the simplest situation the magnetic field strength experiences a time dependent scaling. How to describe this scaling?
Consider first the scaling of the magnetic field strength in flux tube quantization.
Consider as the first example a slowly varying magnetic field created by an alternating current running in a cylindrical solenoid. There are flux tubes inside the solenoid and return flux tubes outside it carrying the flux in the opposite direction. The flux tubes get thicker as the magnetic field weakens and shift from the interior of the solenoid to its exterior. For some value x of the time dependent scaling B → B/x the elementary flux quantum Φ0 reaches the radius of the solenoid. Quantum effects must then become important and make possible a change of the sign of the elementary flux quantum: perhaps a quantum jump turning the flux quantum around takes place. After this the size of the flux quantum begins to decrease as the magnitude of the magnetic field increases, and at the maximum value of the field the size of the flux quantum is minimal.
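As a rough numerical illustration (using the superconductor value Φ0 = h/2e for the elementary flux quantum purely as a stand-in; the Kähler flux quantum of TGD need not have this value), one can estimate the critical field strength at which a single elementary flux quantum fills the whole solenoid cross section:

```python
import math

H = 6.62607015e-34          # Planck constant (J s)
E_CHARGE = 1.602176634e-19  # elementary charge (C)
PHI0 = H / (2 * E_CHARGE)   # flux quantum ~ 2.07e-15 Wb (superconductor convention, an assumption here)

def critical_field(solenoid_radius):
    """Field strength at which a single elementary flux quantum
    occupies the whole solenoid cross section: B * pi R^2 = PHI0."""
    return PHI0 / (math.pi * solenoid_radius**2)

# For B(t) = B0 |sin(w t)| the flux quantum radius r(t) = sqrt(PHI0 / (pi B(t)))
# grows as the field weakens and reaches the solenoid radius when B = critical_field(R).
R_sol = 0.05  # 5 cm solenoid, illustrative
print(critical_field(R_sol))
```

Below this field strength the elementary quantum no longer fits inside the solenoid, which is the regime where the sign-changing quantum jump described above would become relevant.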
This example generalizes to the magnetic field created by a linear alternating current. In this case the flux quanta are cylindrical flux sheets whose magnetic field strength and thickness oscillate with time. Also in this case the maximal transversal area of the system defines a critical situation in which there is just a single flux sheet carrying an elementary flux. This flux quantum changes its sign as the sign of the current changes.
For background see the chapter General View About Physics in Many-Sheeted Space-Time : Part I.
The process leading to this posting was boosted by the irritation caused by the newest multiverse hype in New Scientist, commented on by Peter Woit. Also Lubos told about Brian Greene's The Fabric of Cosmos IV, which is similar multiverse hype with Guth, Linde, Vilenkin, and Susskind as stars, although a single voice of criticism was allowed (David Gross, who could not hide his disgust).
The message of the New Scientist article was that the multiverse is now a generally accepted paradigm, that it follows unavoidably from modern physics, and that it has three strong pillars: dark energy, eternal inflation, and the string model landscape. Even LHC has demonstrated its correctness by finding no evidence for standard SUSY. That was the prediction of superstring models, but then someone realized that there had been someone predicting that the multiverse predicts no supersymmetry! As a matter of fact, every single prediction inspired by superstring models has gone wrong, there are good reasons to expect that Higgs will not be found, and standard SUSY has been excluded. Besides this there is an increasing amount of evidence for new physics not predicted by standard TOEs. And one should not forget neutrino super-luminality. All this shakes the foundations both of superstring theory, where GUT is believed to be the low energy limit of the theory with Higgs fields playing a key role, and of inflationary scenarios, in which Higgs like scalar fields carrying the vacuum energy give rise to radiation and therefore also to ordinary matter.
The three pillars of the multiverse become catastrophic weaknesses if the Higgs paradigm fails. Vacuum energy cannot correspond to Higgs, the scalar fields driving inflation are not there, and one cannot say anything about possible low energy limits of super string theories since even the basic language describing them is lost!
Maybe I am becoming an angry old man, but I must confess that this kind of hype is simply too much for me. Why do colleagues who know what the real situation is not react to this bullshit? Are they so lazy that they allow physics to degenerate into show business without bothering to do anything? Or does a culture of omerta prevail, as some participant in Peter Woit's blog suggested? Even if a man has seen a crime take place, he is not allowed to reveal it. If he does, he suffers vendetta. I have experienced the academic equivalent of vendetta: not for this reason but for having the courage to think with my own brain. Maybe laziness is a more plausible explanation.
But I do not have any right to condemn my colleagues if I am myself too lazy to do anything. My moral duty is to tell that this hype is nothing but unashamed lying. On the other hand, the digging of a heap of shit is really depressing. Is there any hope of learning anything? I refuse to spend time in the superstring landscape, but should I take the trouble of comparing eternal inflation with TGD?
In this mixed mood I decided to refresh my views about how TGD based cosmology differs from the inflationary scenario. The pleasant surprise was that this comparison, combined with new results about TGD inspired cosmology, provided fresh insights into the relationship between TGD and the standard approach and showed how TGD cures the lethal diseases of eternal inflation. Very roughly: the replacement of the energy of the scalar field with magnetic energy replaces eternal inflation with a fractal quantum critical cosmology, allowing one to see more sharply the TGD counterparts of inflation and accelerating expansion as special cases of criticality. Hence it was not wasted time after all.
Wikipedia gives a nice overall summary of inflationary cosmology and I recommend it to the non-specialist physics reader as a way to refresh his or her memory.
1. Brief summary of the inflationary scenario
The inflationary scenario relies very heavily on rather mechanical unification recipes based on GUTs. The standard model gauge group is extended to a larger group, which breaks down to the standard model gauge group at the GUT scale, which happens to correspond to the CP2 size scale. Leptons and quarks are put into the same multiplet of the gauge group, so that an enormous breaking of symmetries occurs, as is clear from the ratio of the top quark mass scale and the neutrino mass scale. These unifiers however want a simple model allowing calculation, so that neither aesthetics nor physics matters. The instability of the proton is one particular prediction. No proton decays of the predicted kind have been observed, but this has not troubled the gurus. As a matter of fact, even the Particle Data Tables tell that the proton is not stable! The lobbies of GUTs are masters of their profession!
One of the key features of the GUT approach is the prediction of Higgs like fields. They make it possible to realize the symmetry breaking and describe particle massivation. Higgs like scalar fields are also the key ingredient of the inflationary scenario, and inflation goes down the drain if Higgs is not found at LHC. It looks more and more probable that this is indeed the case. Inflation has an endless variety of variants and each suffers from some drawback. In this kind of situation one would expect that it is better to give up, but it has become a habit to say that inflation is more than a theory, it is a paradigm. When superstring models turned out to be a physical failure, their proponents did the same thing and claimed that superstring models are more like a calculus than a mere physical theory.
1.1 The problems that inflation was proposed to solve
The basic problems that inflation was proposed to solve are the magnetic monopole problem, the flatness problem, and the horizon problem. The cosmological principle is a formulation of the fact that the cosmic microwave radiation is found to be isotropic and homogeneous to an excellent approximation. There are fluctuations in the CMB, believed to be Gaussian, and the prediction for the spectrum of these fluctuations is an important prediction of inflationary scenarios.
1.2 The evolution of the inflationary models
The inflationary models gradually developed into more realistic ones.
The basic criticism of Penrose against inflation is that it actually requires very specific initial conditions and that the idea that the uniformity of the early Universe results from a thermalization process is somehow fundamentally wrong. Of course, the necessity to assume a scalar field and a potential energy with a very weird shape, whose details dramatically affect the observed Universe, has also been criticized.
2. Comparison with TGD inspired cosmology
It is good to start by asking what are the empirical facts and how TGD can explain them.
2.1 What about magnetic monopoles in TGD Universe?
Also TGD predicts magnetic monopoles. CP2 has a non-trivial second homology and the second geodesic sphere represents a non-trivial element of the homology. The induced Kähler magnetic field can be a monopole field, and cosmic strings are objects for which the transversal section of the string carries a monopole flux. The very early cosmology is dominated by cosmic strings carrying magnetic monopole fluxes. The monopoles do not however disappear anywhere. Elementary particles themselves are string like objects carrying magnetic charges at their ends, identifiable as wormhole throats at which the signature of the induced metric changes. For fermions the second end of the string carries a neutrino pair neutralizing the weak isospin. Also color confinement could involve magnetic confinement. These monopoles are indeed seen: they are essential for both the screening of weak interactions and for color confinement!
2.2. The origin of cosmological principle
The isotropy and homogeneity of the cosmic microwave radiation is a fact, as are the fluctuations in its temperature as well as the anomalies in the fluctuation spectrum suggesting the presence of large scale structures. Inflationary scenarios predict that the fluctuations correspond to those of a nearly scale invariant Gaussian random field. The observed spectral index measuring the deviation from exact scaling invariance is consistent with the predictions of inflationary scenarios.
Isotropy and homogeneity reduce to what is known as the cosmological principle. In general relativity one has only local Lorentz invariance as an approximate symmetry. For Robertson-Walker cosmologies with sub-critical mass density one has Lorentz invariance, but this is due to the assumption of the cosmological principle - it is not a prediction of the theory. In inflationary scenarios the goal is to reduce the cosmological principle to thermodynamics, but the fine tuning problem is the fatal failure of this approach.
In the TGD framework the cosmological principle reduces to sub-manifold gravity in H = M4 × CP2, predicting a global Poincare invariance reducing to Lorentz invariance for the causal diamonds. This represents an extremely important distinction between TGD and GRT. This is however not quite enough, since it predicts that Poincare symmetries treat entire partonic 2-surfaces at the end of the CD as points rather than acting on a single point of space-time. More is required, and one expects that also now a finite horizon radius in the very early Universe would destroy the isotropy and homogeneity of the 3 K radiation. The solution of the problem is simple: the cosmic string dominated primordial cosmology has infinite horizon size, so that arbitrarily distant regions are correlated. Also the critical cosmology, which is determined by its imbeddability apart from the parameter fixing its duration, has infinite horizon size. The same applies to the asymptotic cosmology, for which the curvature scalar is extremized.
The hierarchy of Planck constants and the fact that gravitational space-time sheets should possess a gigantic Planck constant suggest a quantum solution to the problem: quantum coherence in arbitrarily long length scales is present even in the recent day Universe. Whether and how these two views about isotropy and homogeneity are related by quantum classical correspondence is an interesting question to ponder in more detail.
2.3 Three-space is flat
The flatness of three-space is an empirical fact deduced from the spectrum of the microwave radiation. Flatness does not however imply inflation, which is a much stronger assumption involving the questionable scalar fields and the weirdly shaped potential requiring fine tuning. The already mentioned critical cosmology is fixed apart from the value of a single parameter characterizing its duration, and would mean extremely powerful predictions, since just the imbeddability would fix the space-time dynamics almost completely.
Exponentially expanding cosmologies with critical mass density do not allow imbedding to M4 × CP2. Cosmologies with critical or over-critical mass density and flat 3-space allow imbedding, but the imbedding fails above some value of cosmic time. These imbeddings are very natural, since the radial coordinate r corresponds to the radial coordinate of the Lorentz invariant a = constant hyperboloid, so that the cosmological principle is satisfied.
Can one imbed an exponentially expanding sub-critical cosmology? This cosmology has the line element

ds² = dt² − sinh²(t) ds₃² ,

where ds₃² is the metric of the a = constant hyperboloid of M⁴₊.
2.4 Replacement of the inflationary cosmology with critical cosmology
In the TGD framework inflationary cosmology is replaced with critical cosmology. The vacuum extremal representing the critical cosmology has a 2-D CP2 projection - in the simplest situation a geodesic sphere. The dependence of Φ on r and of Θ on a is fixed by the condition that one obtains a flat 3-metric:
a²/(1+r²) + R² sin²(Θ) (dΦ/dr)² = a² ,
sin(Θ) = ± ka ,   dΦ/dr = ± (1/kR) × r/√(1+r²) .
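The flatness condition can be checked numerically. In the following small sketch (which assumes, purely as a reading of the signs, that the CP2 term contributes with a positive sign to the spatial metric component), inserting sin(Θ) = ka and dΦ/dr = (1/kR) r/√(1+r²) into the radial metric component gives a² for every r:

```python
import math

def grr(a, r, k, R):
    """Radial component of the induced 3-metric for the ansatz
    sin(Theta) = k*a, dPhi/dr = (1/(k*R)) * r / sqrt(1 + r^2).
    Flatness requires grr = a^2 independently of r."""
    sin_theta = k * a
    dphi_dr = (1.0 / (k * R)) * r / math.sqrt(1 + r**2)
    return a**2 / (1 + r**2) + R**2 * sin_theta**2 * dphi_dr**2

a, k, R = 0.5, 0.8, 1.3  # arbitrary test values with |k*a| < 1
for r in (0.1, 1.0, 10.0):
    print(grr(a, r, k, R))  # equals a**2 = 0.25 for every r
```

Algebraically the R and k factors cancel in the CP2 term, leaving a² r²/(1+r²), which combines with the hyperboloid term a²/(1+r²) to give exactly a².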
The imbedding fails for |ka| >1 and is unique apart from the parameter k characterizing the duration of the critical cosmology. The radius of the horizon is given by
r_H = ∫ (da/a) √[(1 − R²k²)/(1 − k²a²)]
and diverges at the lower limit a = 0. This tells that there are no horizons and that the cosmological principle is therefore realized. The infinite horizon radius can be seen as a space-time correlate for quantum criticality, implying long range correlations and allowing the realization of the cosmological principle. The thermal realization of the cosmological principle would therefore be replaced with a quantum realization in the TGD framework, predicting long range quantal correlations in all length scales. Obviously this realization is in a well-defined sense the diametrical opposite of the thermal realization. The dark matter hierarchy is expected to correspond to the microscopic realization of the cosmological principle generating the long range correlations.
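The logarithmic divergence at the lower limit can be seen numerically. A quick sketch (with arbitrary illustrative values of k and R) integrates in the variable u = ln a, where the integrand is bounded, and shows that each reduction of the lower limit by a factor of 100 adds roughly √(1 − R²k²)·ln(100) to the integral:

```python
import math

def horizon_integral(a_min, a_max, k=0.5, R=0.1, steps=10000):
    """Midpoint-rule integral of (1/a) * sqrt((1 - R^2 k^2)/(1 - k^2 a^2)) da,
    computed in u = ln(a) so that the 1/a singularity becomes a constant."""
    u_min, u_max = math.log(a_min), math.log(a_max)
    du = (u_max - u_min) / steps
    total = 0.0
    for i in range(steps):
        a = math.exp(u_min + (i + 0.5) * du)
        total += math.sqrt((1 - R**2 * k**2) / (1 - k**2 * a**2)) * du
    return total

# The value keeps growing without bound as the lower limit tends to zero:
for a_min in (1e-2, 1e-4, 1e-6):
    print(horizon_integral(a_min, 1.0))
```

The unbounded growth as a_min → 0 is the no-horizon property used in the text.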
This cosmology could describe the phase transition increasing the Planck constant associated with a magnetic flux tube, leading to its thickening. Magnetic flux would be conserved, and the magnetic energy of the thickened portion would be reduced via its partial transformation to radiation, giving rise to ordinary and dark matter.
2.5 Fractal hierarchy of cosmologies within cosmologies
Many-sheeted space-time leads to a fractal hierarchy of cosmologies within cosmologies. The zero energy realization is in terms of causal diamonds within causal diamonds, with a causal diamond (CD) identified as the intersection of future and past directed light-cones. The temporal distance between the tips of a CD is given as an integer multiple of the CP2 time in the most general case, and boosts of CDs are allowed. There are also other moduli associated with a CD, and a discretization of the moduli parameters is strongly suggestive.
Critical cosmology corresponds to a negative value of "pressure", so that it also gives rise to accelerating expansion. This strongly suggests that both the inflationary period and the much later accelerating expansion period correspond to critical cosmologies differing from each other by a scaling. Continuous cosmic expansion is replaced with a sequence of discrete expansion phases in which the Planck constant assignable to a magnetic flux quantum increases and implies its expansion. This liberates magnetic energy as radiation, so that a continual creation of matter takes place in various scales.
This fractal hierarchy is the TGD counterpart of eternal inflation. It also implies that the TGD counterpart of the inflationary period is just a scaled up variant of critical cosmologies within critical cosmologies. Of course, also radiation and matter dominated phases as well as the asymptotic string dominated cosmology are expected to be present and correspond to cosmic evolutions within a given CD.
2.6 Vacuum energy density as magnetic energy of magnetic flux tubes and accelerating expansion
TGD also allows a microscopic view about cosmology, based on the vision that the primordial period is dominated by cosmic strings which during cosmic evolution develop a 4-D M4 projection, meaning that the thickness of the M4 projection, defining the thickness of the magnetic flux tube, gradually increases. The magnetic tension corresponds to a negative pressure and can be seen as a microscopic cause of the accelerated expansion. Magnetic energy is in turn the counterpart of the vacuum energy assigned to the inflaton field. The gravitational Planck constant assignable to the flux tubes mediating the gravitational interaction nowadays is gigantic, and they are thus in a macroscopic quantum phase. This explains the cosmological principle at the quantum level.
Phase transitions inducing the boiling of the magnetic energy to ordinary matter are possible. What happens is that the flux tube suffers a phase transition increasing its radius. This reduces the magnetic energy, so that part of the magnetic energy must transform to ordinary matter. This would give rise to the formation of stars and galaxies. The process is the TGD counterpart of the re-heating transforming the potential energy of the inflaton to radiation. The local expansion of the magnetic flux tube could be described in a good approximation by critical cosmology, since quantum criticality is in question.
One can of course ask whether inflationary cosmology could describe the transition period, with critical cosmology corresponding only to the outcome. This does not look like a very attractive idea, since the CP2 projections of these cosmologies have dimensions D = 1 and D = 2 respectively.
In TGD framework the fluctuations of the cosmic microwave background correspond to mass density gradients assignable to the magnetic flux tubes. An interesting question is whether the flux tubes could reveal themselves as a fractal network of linear structures in CMB. The prediction is that galaxies are like pearls in a necklace: smaller cosmic strings around long cosmic strings. The model for the formation of stars and galaxies gives a more detailed view about this.
2.7 What is the counterpart of cosmological constant in TGD framework?
In the TGD framework the cosmological constant emerges as one asks what the GRT limit of TGD might be. The space-time surface decomposes into regions with both Minkowskian and Euclidian signature of the induced metric, and the Euclidian regions have an interpretation as counterparts of generalized Feynman graphs. Also the GRT limit must allow space-time regions with Euclidian signature of the metric - in particular CP2 itself - and this requires a positive cosmological constant in these regions. The action principle is naturally Maxwell-Einstein action with a cosmological constant which is vanishing in the Minkowskian regions and very large in the Euclidian regions of space-time. Both the Reissner-Nordström metric and CP2 are solutions of the field equations, with deformations of CP2 representing the GRT counterparts of Feynman graphs. The average value of the cosmological constant is very small and of the correct order of magnitude, since only the Euclidian regions contribute to the spatial average. This picture is consistent with the microscopic picture based on the identification of the density of magnetic energy as vacuum energy, since Euclidian particle like regions are created as magnetic energy transforms to radiation.
The origin of cosmic rays remains one of the mysteries of astrophysics and cosmology. The recent finding of a superbubble emitting cosmic rays might cast some light on the problem.
1. What has been found?
The following is the abstract of the article published in Science.
The origin of Galactic cosmic rays is a century-long puzzle. Indirect evidence points to their acceleration by supernova shockwaves, but we know little of their escape from the shock and their evolution through the turbulent medium surrounding massive stars. Gamma rays can probe their spreading through the ambient gas and radiation fields. The Fermi Large Area Telescope (LAT) has observed the star-forming region of Cygnus X. The 1- to 100-gigaelectronvolt images reveal a 50-parsec-wide cocoon of freshly accelerated cosmic rays that flood the cavities carved by the stellar winds and ionization fronts from young stellar clusters. It provides an example to study the youth of cosmic rays in a superbubble environment before they merge into the older Galactic population.

The usual thinking is that cosmic rays are not born in states with ultrahigh energies but are boosted to high energies by some mechanism. For instance, supernova explosions could accelerate them, with shock waves serving as the acceleration mechanism. Cosmic rays could also result from the decays of heavy dark matter particles.
The story began when astronomers detected a mysterious source of cosmic rays in the direction of the constellation Cygnus X. Supernovae happen often in dense clouds of gas and dust, where stars between 10 and 50 solar masses are born and die. If supernovae are responsible for the acceleration of cosmic rays, these regions should also generate cosmic rays, so Cygnus X is a natural candidate to study. It need not however be the source of the detected cosmic rays, since magnetic fields could have deflected them from their original direction. Therefore Isabelle Grenier and her colleagues decided to study not the cosmic rays as such but the gamma rays created when cosmic rays interact with the matter around them, since gamma rays are not deflected by magnetic fields. The Fermi gamma-ray space telescope was directed toward Cygnus X. This led to the discovery of a superbubble with a diameter of more than 100 light years. The superbubble contains a bright region which looks like a duck. The spectrum of these gamma rays implies that the cosmic rays are energetic and freshly accelerated, so that they must be close to their sources.
The important conclusions are that cosmic rays are created in regions in which stars are born and gain their energies by some acceleration mechanism. The standard identification of the acceleration mechanism is the shock waves created by supernovas, but one can imagine also other mechanisms.
2. Cosmic rays in TGD Universe?
In the TGD framework one can imagine several mechanisms producing cosmic rays. According to the vision discussed already earlier, both ordinary and dark matter would be produced from dark energy identified as Kähler magnetic energy, producing cosmic rays as a by-product. What causes the transformation of dark energy to matter was not discussed earlier, but a local phase transition increasing the value of the Planck constant of the magnetic flux tube could be the mechanism. A possible acceleration mechanism would be acceleration in an electric field along the magnetic flux tube. Another mechanism is a supernova explosion rapidly scaling up the size of the closed magnetic flux tubes associated with the star by an hbar increasing phase transition preserving the Kähler magnetic energy of the flux tube, and accelerating the highly energetic dark matter at the flux tubes radially: some of the particles moving along the flux tubes would leak out and give rise to cosmic rays and the associated gamma rays.
2.1. The mechanism transforming dark energy to dark matter and cosmic rays
Consider first the mechanism transforming dark energy to dark matter.
2.2 What is the precise mechanism transforming dark energy to matter?
What is the precise mechanism transforming the dark magnetic energy to ordinary or dark matter? This is not clear, but this mechanism could produce very heavy exotic particles not yet observed in the laboratory, which in turn decay to very energetic ordinary hadrons giving rise to the cosmic ray spectrum. I have considered a mechanism for the production of ultrahigh energy cosmic rays based on the decays of hadrons of scaled up copies of ordinary hadron physics. In this case no acceleration mechanism would be necessary. Cosmic rays lose their energy in interstellar space. If they correspond to a large value of the Planck constant, the situation would change and the rate of the energy loss could be very slow. The above described experimental finding about Cygnus X however suggests that acceleration takes place for ordinary cosmic rays with relatively low energies. This of course does not exclude particle decays as the primary production mechanism of very high energy cosmic rays. In any case, in the TGD based proposal dark magnetic energy transforming to matter gives rise to both stars and high energy cosmic rays.
2.3. What is the acceleration mechanism?
How are cosmic rays created by this general process giving rise to the formation of stars?
The cold dark matter scenario assumes that dark matter consists of exotic particles having extremely weak interactions with ordinary matter, which clump together gravitationally. These concentrations of dark matter would grow and attract ordinary matter, eventually forming the galaxies.
Cold dark matter scenario has several problems.
The cold dark matter scenario is however in difficulties, as one learns from the Science Daily article Dark Matter Mystery Deepens. Observational data about the structure of dark matter in dwarf galaxies is in conflict with this picture. New measurements of two dwarf galaxies tell that the dark matter distribution is smooth. Dwarf galaxies are believed to consist of 99 per cent dark matter and are therefore ideal for attempts to understand dark matter. Dwarf galaxies differ from ordinary ones in that the stars inside them move like bees in a beehive instead of moving along nice circular orbits. The distribution of the dark matter was found to be uniform over a region with a diameter of several hundred light years, which corresponds to the size scale of a galactic nucleus. For comparison, note that the Milky Way has at its center a bar like structure with a size between 3300 and 16000 light years. Notice also that a constant density core is highly suggestive also in ordinary galaxies (the core/cusp problem), so that dwarf galaxies and ordinary galaxies need not be so different after all.
In the TGD framework the simplest model for the galactic dark matter assumes that galaxies are like pearls in a necklace. The necklace would be a long magnetic flux tube carrying dark energy identified as magnetic energy, and the galaxies would be bubbles inside the flux tube, which would have thickened locally. A similar model would apply to stars. The basic prediction is that the motion of stars along the flux tube is free apart from the gravitational force caused by the visible matter. A constant velocity spectrum for distant stars follows from the logarithmic gravitational potential of the magnetic flux tube; the cylindrical symmetry is absolutely essential and distinguishes the model from the cold dark matter scenario.
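The constant velocity spectrum follows from elementary Newtonian gravity of a straight string-like object. The following minimal sketch (the tension value is a made-up illustrative number, and treating the visible matter as a central point mass is a simplifying assumption) shows the flat rotation curve:

```python
import math

G = 6.674e-11  # Newton's constant (SI units)

def circular_velocity(rho, tension, m_visible=0.0):
    """Circular velocity at distance rho from a straight flux tube of
    linear mass density `tension`: the logarithmic potential gives the
    radial acceleration g = 2*G*tension/rho, so v = sqrt(rho*g) is
    independent of rho. A point-like visible mass adds a Keplerian term."""
    g = 2 * G * tension / rho + G * m_visible / rho**2
    return math.sqrt(rho * g)

TENSION = 1e20  # illustrative linear mass density (kg/m), not a TGD prediction
LY = 9.46e15    # light year in meters
for rho in (1e4 * LY, 1e5 * LY, 1e6 * LY):
    print(circular_velocity(rho, TENSION))  # same value at every distance
```

Far from the visible matter the Keplerian term dies off as 1/ρ and the velocity flattens to √(2G·tension), which is the flat rotation curve the model predicts.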
What can one say about the dwarf galaxies in the TGD framework? The thickness of the flux tube is a good guess for the size scale in which the dark matter distribution is approximately constant: this holds for any galaxy (recall that dark and ordinary matter would have formed as dark energy transforms to matter). The scale of a hundred light years is roughly by a factor of 1/10 smaller than the size of the center of the Milky Way nucleus. The natural question is whether the dark matter distribution could be spherically symmetric and constant in this scale also for ordinary galaxies. If so, the cusp/core problem would disappear, and ordinary galaxies and dwarf galaxies would not differ in an essential manner as far as dark matter is considered. The problem would remain essentially a problem of the cold dark matter scenario.
For details and background see the chapter Cosmic strings.
Tommaso Dorigo managed to write the hype of his life about super-luminal neutrinos. This kind of accident is unavoidable, and any blogger sooner or later becomes a victim of one. To my great surprise Tommaso described in a completely uncritical and hypeish manner a study by the ICARUS group in Gran Sasso and concluded that it definitely refutes the OPERA result. This is of course a wrong conclusion, based on the assumption that special and general relativity hold true as such and that the neutrinos are genuinely superluminal.
Also Sascha Vongehr wrote about ICARUS as a reaction to Tommaso's surprising posting, but this was purposely written as half-joking hype claiming that ICARUS proves that neutrinos travel the first 18 meters with a velocity at least 10 times higher than c. Sascha also wrote a strong criticism of the present science establishment. The continual uncritical hyping is leading to the loss of the respectability of science, and I cannot but share his views. Also I have written several times about the ethical and moral decline of the science community down to something resembling the feudal system of the Middle Ages, in which the Big Boys have first-night privilege to new ideas: something which I have myself had to experience many times.
What ICARUS did was to measure the energy distribution of the muons detected in Gran Sasso. This result is used to claim that the OPERA result is wrong. The measured energy distribution is compared with the distribution predicted assuming that the Cohen-Glashow interpretation is correct. This is an extremely important ad hoc assumption without which the ICARUS demonstration fails completely.
At the risk of boring the reader I repeat: the fatal assumption is that a genuine superluminality is in question. The probably correct conclusion from this assumption indeed is that the neutrinos would lose their energy during their travel by Cherenkov radiation.
In the TGD framework the situation is different (see this, this, this, and also the article). Neutrinos move, in excellent approximation, with a velocity equal to the maximal signal velocity (strictly speaking, slightly below it) and without any energy loss. The maximal signal velocity is however higher for the space-time sheets carrying neutrinos than for those carrying photons: a basic implication of sub-manifold gravity. I have explained this in detail in previous postings and in the article.
The conclusion is that the ICARUS experiment supports the TGD based explanation of the OPERA result. Note however that at this stage TGD does not predict effective superluminality but only allows and even slightly suggests it, and provides a possible explanation for its energy independence and its dependence on length scale and particle. TGD also suggests new tests using relativistic electrons instead of neutrinos.
It is also important to realize that the apparent neutrino superluminality, if true, provides only a single isolated piece of evidence for sub-manifold gravity. The view about space-time as a 4-surface permeates the whole of physics from the Planck scale to cosmology, predicting correctly the particle spectrum and providing a unification of fundamental interactions; it is also in a key role in TGD inspired quantum biology and in the quantum consciousness theory inspired by TGD.
Let us sincerely hope that the conclusion of ICARUS will not be accepted as uncritically as Tommaso accepted it.
Sascha Vongehr has written several interesting blog postings about superluminal neutrinos. The latest one is titled A million times the speed of light. I glue below my comment explaining how one can understand qualitatively why the maximal signal velocity at the space-time sheet along which a relativistic particle propagates is lower in long length scales.
The explanation involves besides the induced metric also the notion of induced gauge field (induced spinor connection): here brane theorists reproducing TGD predictions are bound to meet difficulties, and an instant independent discovery of the notions of induced gauge field and induced spinor structure is needed in order to proceed;-). Here is my comment in somewhat extended form.
I would be critical about two points.
If one is ready to accept sub-manifold gravity a la TGD, this boils down to the identification of the space-time sheets carrying the neutrinos (or any relativistic particles traveling from point A to point B). This TGD prediction is about 25 years old: from the comment section of Peter Woit's blog I learned that brane people are now proposing something similar: my prediction at viXra log and my own blog was that this would happen within about a week: nice to learn that my blog has readers!
This predicts that the really maximal signal velocity (that for M4) is probably not very much higher than the light velocity in cosmic scales; Robertson-Walker cosmology predicts that the light velocity in cosmic scales is about 73 percent of the really maximal one.
The challenge for the sub-manifold gravity approach is to understand the SN1987A-OPERA difference qualitatively. Why does a neutrino (or any relativistic particle) travel faster in short length scales?
The anomalous behavior of the equinox precession and the recent surprising findings about the heliosphere by NASA can be combined with the TGD inspired model for stars. The model relies on the heuristic idea that stars (as also galaxies) are like pearls in a necklace defined by long magnetic flux tubes carrying dark matter and a strong magnetic field responsible for dark energy, possibly accompanied by the analog of the solar wind. The heliosphere would be like a bubble in the flow defined by the magnetic field inside the flux tube, inducing its local thickening. A possible interpretation is as a bubble of ordinary and dark matter in the flux tube containing dark energy: this would provide a beautiful overall view about the emergence of stars and their heliospheres as a phase transition transforming dark energy to dark and visible matter. Among other things, the magnetic walls surrounding the solar system would shield it from cosmic rays. The bubble option is favored by the fact that Newtonian theory works so well inside the planetary system. The model suggests bound state precessing solutions without nutation as the first approximation, expected to be stable against dissipation. A small nutation around the equilibrium solution could explain the slow variation of the precession rate and can be treated as a small oscillatory perturbation around the non-nutating ground state. The variation could also be caused by external perturbations. What is amusing from the mathematical point of view is that the model is analytically solvable and that the solution involves elliptic functions, just as the Newtonian two-body problem does.
The model suggests a universal fractal mechanism leading to the formation of astrophysical and even biological structures via the formation of bubbles of ordinary or dark matter inside magnetic flux tubes carrying dark energy identified as the magnetic energy of the flux tubes. In primordial cosmology these flux tubes would have been cosmic strings with enormous mass density, which is however below the black hole limit for straight strings. Strongly entangled strings could form black holes if the general relativistic criteria hold true in TGD.
One must be very critical concerning the model since in the TGD framework the accelerated cosmic expansion has several alternative descriptions, which should be mutually consistent. It seems that these descriptions correspond to descriptions of one and the same thing in different length scales.
The recent experimental findings have shown that our understanding of the solar system is surprisingly fragmentary. As a matter of fact, so fragmentary that even new physics might find a place in the description of phenomena like the precession of equinoxes (I am grateful to my friend Pertti Kärkkäinen for telling me about the problem) and the recent discoveries about the bullet-like shape of the heliosphere and the strong magnetic fields near its boundary, bringing in mind incompressible fluid flow around an obstacle.
The TGD inspired model is based on the heuristic idea that stars are like pearls in a necklace defined by long magnetic flux tubes carrying dark energy and a strong magnetic field, possibly accompanied by the analog of the solar wind. The heliosphere would be like a bubble in the flow defined by the magnetic field in the flux tube, inducing its local thickening. A possible interpretation is as a bubble of ordinary and dark matter in the flux tube containing dark energy: this would provide a beautiful overall view about the emergence of stars and their heliospheres as a phase transition transforming dark energy to dark and visible matter. Among other things, the magnetic walls surrounding the solar system would shield it from cosmic rays.
The OPERA collaboration at CERN has reported that neutrinos travelling from CERN to Gran Sasso in Italy move with a superluminal speed. There exists also earlier evidence for the superluminality of neutrinos: for instance, the neutrinos from SN1987A arrived a few hours earlier than the photons. The standard model based on tachyonic neutrinos is formally possible but breaks causality and is unable to explain all the results. The TGD based explanation relies on sub-manifold geometry replacing abstract manifold geometry as the space-time geometry. The notion of many-sheeted space-time predicts this kind of effect plus many other effects for which evidence exists in the form of various anomalies which have not been taken seriously by mainstream theorists.
During the years, evidence supporting the idea that TGD could be an integrable theory in some sense has accumulated. The challenge is to show that various ideas about what integrability means form pieces of a bigger coherent picture. Of course, some of the ideas are doomed to be only partially correct or simply wrong. Since it is not possible to know beforehand which ideas are wrong and which are right, the situation is very much like that in experimental physics, and it is easy to claim (and has been and will be claimed) that all this argumentation is useless speculation. This is the price that must be paid for the luxury of genuine thinking.
Integrable theories allow one to solve the nonlinear classical dynamics in terms of scattering data for a linear system. In the TGD framework this translates to quantum classical correspondence. The solutions of the modified Dirac equation define the scattering data. The conjecture is that octonionic real-analyticity, with space-time surfaces identified as surfaces for which the imaginary part of the octonion in its representation as a bi-quaternion vanishes, solves the field equations. This conjecture generalizes conformal invariance to its octonionic analog. If the conjecture is correct, the scattering data should define a real analytic function whose octonionic extension defines the space-time surface as the surface for which its imaginary part in the representation as a bi-quaternion vanishes. There are excellent hopes about this thanks to the reduction of the modified Dirac equation to geometric optics.
For details and background the reader can consult the article An attempt to understand preferred extremals of Kähler action and the chapter Basic Extremals of Kähler action.
Eric Verlinde's entropic gravity is one of those fashions of present-day theoretical physics which come and go (who still remembers Lisi's "Exceptionally simple theory of everything", which for a moment raised Lisi to the status of a potential follower of Einstein?). That this would happen was rather clear to me from the beginning, and I expressed my views in several postings: see this, this, and this. The idea that gravitons are not there at all and that the gravitational force is a purely thermodynamical force looks nonsensical to me on purely mathematical grounds. But what about physics? Kobakhidze wrote a paper in which he demonstrated that neutron interferometry experiments disfavor the notion of entropic gravity. The neutron behaves like a quantal particle obeying the Schrödinger equation in the gravitational field of the Earth, and it is difficult to understand this if gravitation is an entropic force.
I wrote detailed comments about this in the second posting and proposed a different interpretation of the basic formulas for gravitational temperature and entropy, based on zero energy ontology, which predicts that even elementary particles are at least mathematically analogous to thermodynamical objects. The temperature and entropy would be associated with the ensemble of gravitons assigned to the flux tubes mediating the gravitational interaction: the temperature behaves naturally as 1/r^2 in the absence of other graviton/heat sources, and the entropy is naturally proportional to the flux tube length and therefore to the radial distance r. This allows one to understand the formulas deduced by Sabine Hossenfelder, who has written one of the rather few clear expositions about entropic gravity (somehow it reflects the attitudes towards women in physics that her excellent contribution was not mentioned in the reference list of the Wikipedia article. Disgusting.). Entropic gravitons are of course quite a different thing than gravitation as an entropic force.
The question about the proper interpretation of the formulas was extremely rewarding since it also led me to ask what the GRT limit of TGD could be. This led to a beautiful answer and in turn forced me to ask what black holes really are in the TGD Universe. We have no empirical information about their interiors, so the general relativistic answer can be taken only as one possibility, which is even plagued by mathematical difficulties. The black hole horizon is quite concretely the door to new physics, so one should be very open-minded here: we really do not know what is behind the door!
The TGD based answer was surprising: black holes in the TGD Universe correspond to regions of space-time with Euclidian signature of the induced metric. In particular, the lines of generalized Feynman diagrams are black holes in this sense. This view would unify elementary particles and black holes. This proposal also leads to a concrete proposal for how to understand the extremely small value of the cosmological constant as an average value: the cosmological constant vanishes for Minkowskian regions but is large for Euclidian regions, where it is determined by the CP2 size.
The first article of Kobakhidze appeared in arXiv already two years ago but was not noticed by bloggers (except me, but as a dissident I am of course not counted;-). Here the fact that I was asked to act as a referee helped considerably, although unfortunately I did not have time for it. The new article Once more: gravity is not an entropic force of Kobakhidze was however noticed by the media and also by physics bloggers.
Lubos came first. Lubos had however read the article carelessly (even its abstract) and went on to claim that M. Chaichian, M. Oksanen, and A. Tureanu state in their article that Kobakhidze's claim is wrong and that they support entropic gravity. This was of course not the case: the authors agreed with Kobakhidze about entropic gravity but argued that there was a mistake in his reasoning. To Lubos's credit one must say that he noticed the problems caused by the lack of quantum mechanical interference effects already much earlier.
Also Johannes Koelman wrote about the topic, with inspiration coming from the popular web article Experiments Show Gravity Is Not an Emergent Phenomenon, itself inspired by Kobakhidze's article.
In my opinion Verlinde's view is wrong, but it would be a pity not to try to explain the highly suggestive formulas for the entropy- and temperature-like parameters nicely abstracted by Sabine Hossenfelder from Verlinde's work. I have already described briefly my own interpretation inspired by zero energy ontology. In the TGD framework it seems impossible to avoid the conclusion that also the mediators of other interactions are in thermal equilibrium at the corresponding space-time sheets and that the temperature is universally the Unruh temperature determined by the acceleration. Also the expression for the entropy can be deduced, as the following little argument shows.
What makes the situation so interesting is that the signs of both the temperature and the entropy are negative for repulsive interactions, suggesting thermodynamical instability. This leads to the question whether matter antimatter separation could relate to a reversal of the arrow of geometric time at the space-time sheets mediating repulsive long range interactions. This statement makes sense in zero energy ontology, where the arrow of time has a concrete mathematical content as a property of zero energy states. In the following I will consider the identification of the temperature and entropy assignable to the flux tubes mediating gravitational or other interactions. I was too lazy to deduce explicit formulas in the original version of the article about this topic and have now added the formulas to it.
Consider first the gravitonic temperature. The natural guess for the temperature parameter would be the Unruh temperature
Tgr= (hbar/2π) a ,
where a is the projection of the gravitational acceleration along the normal of the surface of constant gravitational potential. In the Newtonian limit it would be the acceleration associated with the relative coordinates, corresponding to the reduced mass and equal to a= G(m1+m2)/r^2.
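As a numeric illustration (my addition; with SI factors restored the Unruh formula reads T = hbar·a/(2π·c·k_B)): for the Sun-Earth system at r = 1 AU the acceleration a = G(M+m)/r^2 gives a fantastically small temperature.

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
hbar = 1.0546e-34    # J s
c = 2.998e8          # m/s
k_B = 1.381e-23      # J/K
M_sun = 1.989e30     # kg
m_earth = 5.972e24   # kg
r = 1.496e11         # 1 AU in m

# Newtonian acceleration of the relative coordinate (reduced-mass problem)
a = G * (M_sun + m_earth) / r**2
# Unruh temperature with SI factors restored
T_gr = hbar * a / (2 * math.pi * c * k_B)
print(f"a    = {a:.3e} m/s^2")    # ~5.9e-3 m/s^2
print(f"T_gr = {T_gr:.3e} K")     # ~2.4e-23 K, utterly negligible
```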
One could identify Tgr also as the magnitude of the gravitational acceleration. In this case the definition would be purely local, in accordance with the character of temperature as an intensive property.
The general relativistic objection against this generalization is that gravitation is not a genuine force: only genuine acceleration due to interactions other than gravity should contribute to the Unruh temperature, so that the gravitonic Unruh temperature should vanish. On the other hand, any genuine force should give rise to an acceleration. The sign of the temperature parameter would be different for attractive and repulsive forces, so that negative temperatures would become possible. Also the lack of general coordinate invariance is a heavy objection against the formula.
In TGD Universe the situation is different. In this case the definition of temperature as magnitude of local acceleration is more natural.
Remark: In the MOND theory of dark matter a critical value of acceleration is introduced. I do not personally believe in MOND: TGD explains the galactic rotation curves without any modification of Newtonian dynamics, in terms of dark matter assignable to cosmic strings containing the galaxies around them like pearls in a necklace. In the TGD framework the critical acceleration would be the acceleration below which the gravitational acceleration caused by the dark matter associated with the cosmic strings traversing the galactic plane orthogonally, behaving as 1/ρ, overcomes the acceleration caused by the galactic matter, behaving as 1/ρ^2. Could this critical acceleration correspond to a critical temperature Tgr, presumably determined by an appropriate p-adic length scale and coming as a power 2^(-k/2) by the p-adic length scale hypothesis? Could a critical value of H perhaps characterize also a critical magnitude for the deformation from a minimal surface extremal? The critical acceleration in Milgrom's model is about 1.2×10^-10 m/s^2 and corresponds to a time scale of about 10^11 years, of the order of the age of the Universe.
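The time-scale claim is easy to check numerically (my addition): c/a_0 for Milgrom's a_0 = 1.2×10^-10 m/s^2 comes out at roughly 8×10^10 years, within an order of magnitude of the age of the Universe.

```python
c = 2.998e8                 # speed of light, m/s
a0 = 1.2e-10                # Milgrom's critical acceleration, m/s^2
year = 3.156e7              # seconds per year

t = c / a0                  # time scale associated with a0
print(f"c/a0 = {t:.2e} s = {t/year:.1e} years")   # ~2.5e18 s ~ 7.9e10 years
```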
The formula contains the Planck constant, and the obvious question of an inhabitant of the TGD Universe is whether the Planck constant can be identified with the ordinary Planck constant or with the effective Planck constant coming as an integer multiple of it (see this).
A good guess for the value of the gravitational entropy (the gravitonic entropy associated with the flux tube mediating the gravitational interaction) comes from the observation that it should be proportional to the flux tube length. The relationship dE= TdS suggests S ∝ φgr/Tgr as a first guess in the Newtonian limit. A better guess would be
Sgr= -Vgr/Tgr= [Mm/(M+m)] (2π r/hbar) ,
The replacement M→ M+m appearing in the Newtonian equations of motion for the reduced mass has been performed to obtain symmetry with respect to the exchange of the masses.
The entropy would depend on the interaction mediated by the space-time sheet in question, which suggests the generalization

S= -V(r)/T .

Here V(r) is the potential energy of the interaction. The sign of S depends on whether the interaction is attractive or repulsive and also on the sign of the temperature. For a repulsive interaction the entropy would be negative, so that the state would be thermodynamically unstable in ordinary thermodynamics.
The integration of dE= TdS in the case of the Coulomb potential gives E= V(r)-V(0) for both options. If the charge density near the origin is constant, V(r) is proportional to r^2 in this region, implying V(0)=0, so that one obtains the Coulombic interaction energy E=V(r). Hence the thermodynamical interpretation makes sense formally.
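The formal consistency claimed above can be verified symbolically (my addition, assuming an attractive Coulomb-like potential energy V = -k/r, the Unruh-type temperature T = (hbar/2π)·a with a = |dV/dr|/μ, and the entropy ansatz S = -V/T): one finds T·dS/dr = dV/dr, so integrating dE = T dS indeed gives E = V(r) - V(0).

```python
import sympy as sp

r, k, mu, hbar = sp.symbols('r k mu hbar', positive=True)

V = -k / r                          # attractive Coulomb-like potential energy
a = sp.diff(V, r) / mu              # = k/(mu r^2): acceleration magnitude
T = hbar / (2 * sp.pi) * a          # Unruh-type temperature parameter
S = -V / T                          # entropy ansatz S = -V/T, here 2*pi*mu*r/hbar

# dE = T dS should reproduce dV, i.e. E(r) = V(r) - V(0) after integration
lhs = sp.simplify(T * sp.diff(S, r) - sp.diff(V, r))
print(lhs)   # -> 0
```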
The challenge is to generalize the formula for the entropy in a Lorentz invariant and general coordinate invariant manner. Basically the challenge is to express the interaction energy in this manner. The entropy characterizes the entire flux tube and is therefore a non-local quantity. This justifies the use of the interaction energy in the formula. In principle the dynamics defined by the extremals of Kähler action predicts the dependence of the interaction energy on the Minkowskian length of the flux tube, which is well-defined in the TGD Universe. The entropy should also be a scalar. This is achieved since the rest frame is fixed uniquely by the time direction defined by the time-like line connecting the tips of the CD: the interaction energy in the rest frame of the CD defines a scalar. Note that the sign of the entropy correlates with the sign of the interaction energy, so that the repulsive situation would be thermodynamically unstable; this suggests that matter antimatter asymmetry could relate to thermal instability.
Possible role of Beltrami flows and symplectic invariance in the description of gauge and gravitational interactions
One of the most recent observations made by people working with twistors is the finding of Monteiro and O'Connell described in the preprint The Kinematic Algebra From the Self-Dual Sector. The claim is that one can obtain supergravity amplitudes by replacing the color factors with kinematic factors which formally obey a 2-D symplectic algebra defined by the light-like momentum direction and a complexified variable in the plane defined by the polarizations. One could say that the momentum and polarization dependent kinematic factors are in exactly the same role as the factors coming from Yang-Mills couplings. Unfortunately, the symplectic algebra looks like a rather formal object since the first coordinate is a light-like coordinate and the second a complex transverse coordinate. It could make sense only in the complexification of Minkowski space.
In any case, this would suggest that the gravitational gauge group (to be distinguished from diffeomorphisms) is a symplectic group of some kind, having enormous representative power as we know from the fact that the symmetries of practically any physical system are realized in terms of symplectic transformations. According to the authors, one can identify the Lie algebra of the symplectic group of the sphere with that of SU(N) at the large N limit in a suitable basis. What makes this interesting is that at the large N limit the non-planar diagrams, which are the problem of the twistor Grassmann approach, vanish: this is an old result of 't Hooft, which initiated the developments leading to the AdS/CFT correspondence.
The symplectic group of δM4+/- × CP2 is the isometry algebra of WCW, and I have proposed that the effective replacement of the gauge group with this group implies the vanishing of non-planar diagrams (see this). The extension of SYM to a theory containing also gravitation in the TGD framework could make the Yangian symmetry exact, resolve the infrared divergences, and solve the problems caused by non-planar diagrams. It would also imply a stringy picture in finite measurement resolution. Also the construction of non-commutative homology and cohomology in the TGD framework led to the lifting of Galois group algebras to their braided variants realized as symplectic flows, and to the conjecture that in finite measurement resolution the cohomology obtained in this manner represents WCW ("world of classical worlds") spinor fields, or at least something very essential about them (see this).
It is however difficult to understand how one could generalize the symplectic structure so that symplectic transformations involving the light-like coordinate and the complex coordinate of the partonic 2-surface would make sense. In fact, a more natural interpretation for the kinematic algebra would be in terms of volume preserving flows which are also Beltrami flows (see for instance this). This gives a connection with quantum TGD, since Beltrami flows define a basic dynamical symmetry for the preferred extremals of Kähler action, which might be called the Maxwellian phase.
The general Beltrami flow gives as a special case the kinematic flow associated by Monteiro and O'Connell with plane waves. For an ordinary plane wave with constant direction of the momentum vector and polarization vector one could take Φ = cos(φ), φ = k·m and Ψ = ε·m. This would give a real flow. The kinematic factor in SYM diagrams corresponds to a complexified flow Φ = exp(iφ) and Ψ = φ + w, where w is the complex coordinate for the polarization plane or, more naturally, the complexification of the coordinate in the polarization direction. The flow is not unique since gauge invariance allows one to modify the φ term. The complexified flow is volume preserving only in the formal algebraic sense and satisfies the analog of the Beltrami condition only in Dolbeault cohomology, where d is identified as the complex exterior derivative (df = (∂f/∂z)dz for holomorphic functions). In ordinary cohomology it fails. This formal complex flow of course does not define a real diffeomorphism at the space-time level: one should replace Minkowski space with its complexification to get a genuine flow.
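The volume preserving property of the real plane-wave flow is easy to verify symbolically (my addition): for the flow field j = Φ∇Ψ with Φ = cos(k·m) and Ψ = ε·m, the divergence ∇Φ·∇Ψ + Φ∇²Ψ vanishes exactly when the polarization ε is transverse to the momentum. The concrete choice of k along z and ε along x below is an assumed representative, not from the text.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')

# light-like momentum k = (1, 0, 0, 1): phase k·m = t - z
Phi = sp.cos(t - z)
# transverse polarization eps = (0, 1, 0, 0): eps·m = x
Psi = x

# spatial flow field j = Phi * grad(Psi)
j = [Phi * sp.diff(Psi, v) for v in (x, y, z)]
div = sum(sp.diff(ji, v) for ji, v in zip(j, (x, y, z)))
print(sp.simplify(div))   # -> 0: the flow is volume preserving
```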
The finding of Monteiro and O'Connell encourages one to think that the proposed more general Abelian algebra pops up also in non-Abelian YM theories. Discretization by braids would actually select a single polarization and momentum direction. If volume preserving Beltrami flows characterize the basic building bricks of the radiation solutions of both general relativity and YM theories, it would not be surprising if the kinematic Lie algebra generators appeared in the vertices of the YM theory and replaced the color factors in the transition from YM theory to general relativity. In the TGD framework the construction of vertices at partonic two-surfaces would define local kinematic factors as effectively constant ones.
For background see the chapter Basic Extremals of Kähler Action.
Sean Carroll writes about the breakdown of classical gravity in Cosmic Variance. Recall that the galactic dark matter problem arose with the observation that the velocity spectrum of distant stars is constant rather than behaving as 1/sqrt(r), as Newton's law predicts when most of the mass is in the galactic center.
The MOND theory and its variants predict that there is a critical acceleration below which Newtonian gravity fails. This would mean that Newtonian gravitation is modified at large distances. String models and also TGD predict just the opposite since in this regime General Relativity should be a good approximation.
In the TGD framework a critical acceleration is predicted, but the recent experiment does not force one to modify Newton's laws. Since Big Science is like a market economy in the sense that funding is more important than truth, the attempts to communicate the TGD based view about dark matter have turned out to be hopeless. A Serious Scientist does not read anything not written on silk paper.
TGD option explains also other strange findings of cosmology.
For more about TGD based vision about cosmology and astrophysics see the chapter TGD and Astrophysics.
I discussed the entropic gravity of Verlinde some time ago in a rather critical spirit but made also clear that quantum TGD in the framework of zero energy ontology could be called a square root of thermodynamics, so that thermodynamics, or its square root, should emerge at the level of the lines of generalized Feynman diagrams. The intolerable-to-me features of the entropic gravity idea are the claimed absence of gravitons and the nonsense talk about the emergence of dimensions while assuming at the same time the basic formulas of general relativity.
I returned to the topic later with a boost given by one of the few people in the Finnish academic establishment who have regarded me as a life form showing some indications of genuine intelligence. What demonstrates the power of a good idea is that just posing some naturally occurring questions led rapidly to a TGD inspired phenomenology of EG, allowing one to see what is good and what is bad in the EG hypothesis and also to see possible far reaching connections with apparently completely unrelated basic problems of present-day physics.
Consider first the phenomenology of EG in TGD framework.
This approach leads to the question whether the mathematical formalism of quantum TGD could make sense also in General Relativity when appropriately modified. In particular, do the notions of zero energy ontology and causal diamond and the identification of generalized Feynman diagrams as space-time regions with Euclidian signature of the metric make sense? Does the Kähler geometry for the world of classical worlds, realizing holography in a strong sense, lead to a formulation of GRT as an almost topological QFT characterized by Chern-Simons action with a constraint depending on the metric?
At the formal level the formalism of WCW Kähler geometry generalizes as such to an almost topological quantum field theory, but the conditions of mathematical existence are extremely powerful, and the conjecture is that they require the sub-manifold property.
Both Sean Carroll and Lubos report that LIGO has not detected gravitational waves from black holes with masses in the range 25-100 solar masses. This conforms with theoretical predictions. Earlier searches from supernovae also give a null result: in this case the searches are already at the boundaries of resolution, so that one can start to worry.
The reduction of the orbital period of the Hulse-Taylor binary is consistent with the emission of gravitational waves at the predicted rate, so that it seems that gravitons are emitted. One can however ask whether gravitational waves might remain undetected for some reason.
Massive gravitons are the first possibility. For a nice discussion see the article of Goldhaber and Nieto, giving in its conclusions a table summarizing upper bounds on the graviton mass coming from various arguments involving model dependent assumptions. The problem is that it is not at all clear what a massive graviton means and whether a simple Yukawa-like behavior (exponential damping) of the Newtonian gravitational potential is consistent with general coordinate invariance. In the case of massive photons one has a similar problem with gauge invariance. One can of course naively assume Yukawa-like behavior for the Newtonian gravitational potential and derive lower bounds for the Compton wavelength of gravitons. The bound is given by λc > 100 Mpc (a parsec (pc) is about 3.26 light years).
A second bound comes from pulsar timing measurements. The photons emitted by the pulsar are assumed to surf in the sea of gravitational waves created by the pulsar. If gravitons are massive in the Yukawa sense, they move with velocities below the light velocity, and a dispersion of both graviton and photon arrival times is predicted. This gives a much weaker lower bound λc > 1 pc. Note that the distance of the Hulse-Taylor binary is 6400 pc, so that this upper bound for the graviton mass could explain the possible absence of gravitational waves from the Hulse-Taylor binary. There are also other bounds on the graviton mass, but all are plagued by model dependent assumptions.
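The two Compton-wavelength bounds translate into graviton mass bounds as follows (my addition; m = hbar/(λ_c·c), quoted as the rest energy mc² in eV):

```python
hbar = 1.0546e-34        # J s
c = 2.998e8              # m/s
eV = 1.602e-19           # J per electron volt
pc = 3.086e16            # one parsec in meters

def mass_bound_eV(lambda_c_m):
    """Upper bound on the graviton rest energy mc^2 (in eV) for a Yukawa
    range, i.e. Compton wavelength, of at least lambda_c_m meters."""
    return hbar * c / lambda_c_m / eV

print(f"lambda_c > 100 Mpc : mc^2 < {mass_bound_eV(100e6 * pc):.1e} eV")  # ~6e-32 eV
print(f"lambda_c > 1 pc    : mc^2 < {mass_bound_eV(pc):.1e} eV")          # ~6e-24 eV
```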
Also in the TGD framework one can imagine explanations for the possible absence of gravitational waves. I have discussed the possibility that gravitons are emitted as dark gravitons with a gigantic value of hbar, which decay eventually to bunches of ordinary gravitons, meaning that a continuous stream of gravitons is replaced with bursts which would not be interpreted in terms of gravitons but as noise (see this).
One of the breakthroughs of the last year was related to the twistor approach to TGD in zero energy ontology (ZEO).
Is the massivation of gauge bosons and gravitons in this sense consistent with the Yukawa type behavior?
NASA has published the first list of exoplanets found by the Kepler satellite. In particular, the NASA team led by Jack Lissauer reports the discovery of a system of six closely packed planets (see the article in Nature) around a Sun-like star christened Kepler-11, located in the direction of the constellation Cygnus at a distance of about 2000 light years. The basic data about the six planets Kepler-11i, i=b,c,d,e,f,g and the star Kepler-11 can be found in Wikipedia. Below I will refer to the star as Kepler-11 and to the planets by the labels i=b,c,d,e,f,g.
Lissauer regards it as quite possible that there are further planets at larger distances. The fact that the orbital radius of planet g is only .462 AU, together with what we know about the solar system, suggests that this could be the case. This leaves the door open for an Earth-like planet.
The conclusions from the basic data
Let us list the basic data.
The basic conclusions are the following. One cannot exclude the possibility that the planetary system could contain Earth-like planets. Furthermore, the distribution of the orbital radii of the planets differs dramatically from that in the solar system.
How to understand the tight packing of the inner planets?
The striking aspect of the planetary system is how tightly packed it is. The ratio of the orbital radii of g and b is about 5. This is a real puzzle for model builders, me included. TGD suggests three phenomenological approaches.
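To make the packing concrete (my addition; the b and g orbital radii appear in the text, the intermediate values are approximate semi-major axes assumed from the discovery data):

```python
# Approximate semi-major axes (AU) of Kepler-11b..g; b and g appear in the
# text, the rest are assumed from the discovery paper's data.
radii = {"b": 0.091, "c": 0.106, "d": 0.159, "e": 0.194, "f": 0.250, "g": 0.462}

ratio = radii["g"] / radii["b"]
print(f"g/b orbital radius ratio ~ {ratio:.2f}")   # ~5.1, "about 5" as stated

# For comparison: Mercury's orbit (0.39 AU) alone would contain the whole
# b..f region, which is packed inside 0.25 AU.
inner = [name for name, r in radii.items() if r < 0.39]
print("planets inside Mercury's orbit:", inner)
```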
Can one interpret the radii in this framework in any reasonable manner?
For background see the chapter TGD and Astrophysics.