What's new in

Topological Geometrodynamics: an Overview

Note: Newest contributions are at the top!



Year 2017



Are stars born in pairs?

Stars seem to be born in pairs! For a popular article see this. The research article "Embedded Binaries and Their Dense Cores" is here.

For instance, our nearest neighbor, Alpha Centauri, is a triplet system. An explanation for this has been sought for a long time. Does star capture occur, leading to binaries or triplets? Or does the reverse process occur, in which a binary splits up into single stars? There has even been a search for a companion of the Sun, christened Nemesis.

The new assertion is based on a radio survey of a giant molecular cloud filled with recently formed sunlike stars (with ages less than 4 million years) in the constellation Perseus, a star nursery located 600 ly from us in the Milky Way. All singles and twins with separations above 15 AU were counted.

The proposed mathematical model was able to explain the observations only if all sunlike stars are born as wide binaries. "Wide" means that the mutual distance is more than 500 AU, where AU is the distance of Earth from the Sun. After birth the systems would shrink or split within about a million years. It was found that wide binaries were not only very young but also tended to be aligned along the long axes of an egg-shaped dense core. Older systems did not have this tendency. For instance, triplets could form as a binary captures a single star.

The theory says nothing about why the stars should be born as binaries and what the birth mechanism could be. Could TGD say anything interesting about how the binaries are formed?

  1. The TGD based model for galaxies leads to the proposal that the region in which dark matter has constant density corresponds to a very knotted and possibly thickened portion of a cosmic string, or to a closed, very knotted string associated with a long cosmic string. There would be an intersection of separate cosmic strings, or a self-intersection of a single cosmic string, giving rise to a galactic blackhole from which dark matter emerges and transforms to ordinary matter. Star formation would take place in this region, which is 2-3 times larger than the optical region.
  2. Could an analogous mechanism be at work in star formation? Suppose that there is a cosmic string in the galactic plane and that it has two nearby non-intersecting portions roughly parallel to each other. Deform one of them slightly and locally so that it forms intersections with the other. The minimal number of stable intersections is 2, and in the general case the number is even. A single intersection, corresponding to mere touching, is a topologically unstable situation. If the intersections give rise to dark blackholes, which later generate stars, one would have an explanation for why stars are formed as twin pairs.
This would also explain why the blackholes possibly detected by LIGO are so massive (there is still an ongoing debate about this): they would not yet have produced ordinary stars, a process in which part of the dark matter and dark energy of cosmic strings transforms to ordinary matter.
  1. Suppose that these blackhole like objects are indeed intersections of two portions of cosmic string(s). The intersections have a gravitational interaction and could move along the second cosmic string towards each other and eventually collide.
  2. More concretely, one can imagine a straight horizontal stationary string A (at the x-axis, y=0, in (x,y)-coordinates) and a folded string B with the shape of an inverted vertical parabola (y = -ax^2 + y_0(t), a > 0) moving downwards. In other words, y_0(t) decreases with time. The strings A and B have two nearby intersections at x_± = ±(y_0(t)/a)^(1/2). Their distance decreases with time, and eventually the intersection points fuse together at y_0(t) = 0, giving rise to the fusion of the two blackhole like entities to a single one.
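The shrinking pair of intersections in this toy model is easy to follow numerically. A minimal sketch (the function name and parameter values are illustrative, not from the text):

```python
import math

def intersections(a, y0):
    """Intersection points of the descending parabola y = -a*x**2 + y0(t)
    with the stationary string at y = 0: x_(+/-) = +/- sqrt(y0/a)."""
    if y0 < 0:
        return []              # the strings no longer intersect
    if y0 == 0:
        return [0.0]           # the two intersections have just fused
    x = math.sqrt(y0 / a)
    return [-x, x]

# As y0(t) decreases towards zero the two intersection points approach
# each other and fuse, modelling the merger of the two blackhole like entities.
for y0 in (4.0, 1.0, 0.25, 0.0):
    print(y0, intersections(1.0, y0))
```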
See the chapter TGD and astrophysics or the article TGD view about universal galactic rotation curves for spiral galaxies.



New view about blackhole like objects and galaxy formation

One of the topics of discussion was the results related to supermassive blackholes at the centers of galaxies. Gareth gave a link to an article telling about correlations between the supermassive blackhole in the galactic center and the evolution of the galaxy itself.

  1. The size of the blackhole like object - that is, its mass if a blackhole in the GRT sense is in question - correlates with the constant rotation velocity of distant stars for spiral galaxies.
  2. The masses of the blackhole and the galactic bulge are in a constant relation: the mass ratio is about 700.
  3. A further finding is that the galactic blackholes of very old stars are much more massive than the idea of a galactic blackhole gradually growing by "eating" surrounding stars would suggest. Unfortunately, I did not find a link to this article due to the strange FB episode.
This looks strange if one believes in the standard dogma that the galactic blackhole started to form relatively late. What comes to mind is a rather unorthodox idea. What if the large blackhole like entity was there from the beginning and gradually lost its mass? In the TGD framework this could make sense!
  1. In the TGD Universe galaxies are like pearls in a necklace defined by a long cosmic string. This explains the flat rotational spectrum and predicts essentially free motion along the string, related perhaps to coherent motions in very long length scales. This also explains the old observation that galaxies form filament like structures, and the correlations between the spin directions of galaxies along the same filament, since one expects that the spin is locally parallel to the filament. A filament can of course change its direction locally, so that the change of the direction of rotation gives information about the filament shape.
  2. The channelling of the gravitational flux in the radial direction orthogonal to the string makes the gravitational force very long ranged (1/ρ, ρ the transversal distance, instead of 1/r^2) and also stronger, and predicts a flat rotational spectrum. This model of dark matter differs dramatically from the fashionable halo model and involves only the string tension as a parameter, unlike the halo model.

    The observed rigid body rotation within a radius 2-3 times the optical radius (the region inside which most stars are) can be understood if the long cosmic string is either strongly knotted or has a closed galactic string around it. The knotted portion would form a highly knotted spaghetti like structure giving an approximately constant mass density. Stars would be associated with the knotted structure as sub-knots. Light beams from supernovas could travel along the string going through the star. Maybe even planets might be associated with thickened strings! One can also imagine intersections of long cosmic strings, and the Milky Way could contain such.

  3. The galactic blackhole like object could correspond to a self-intersection of the long cosmic string or of the closed galactic cosmic string bound to it. There could be several intersections. They would contain both dark matter and dark energy in the TGD sense, located inside the string. Matter-antimatter asymmetry would mean that there is slightly more antimatter inside the string and slightly more matter outside it. The twistor lift of TGD predicts the needed new kind of CP breaking. What is new is that the galactic blackhole like objects would be present from the beginning and lose their dark mass gradually. The time evolution would be opposite to what it has usually been thought to be!

    Most of the energy of the cosmic string would be magnetic energy identifiable as dark energy. During the cosmic evolution various perturbations would force the cosmic string to gradually thicken so that its M^4 projection ceases to be pointlike. The magnetic monopole flux is conserved (BS = constant, S the transversal area), which forces the magnetic energy per unit length - the string tension - to be reduced like 1/S. The lost energy becomes ordinary matter: the energy of the inflaton field would be replaced with dark magnetic energy, and the TGD counterpart of the inflationary period would be the transition from the cosmic string dominated period to the radiation dominated cosmology, and also the emergence of space-time in the GRT sense.

    The primordial cosmic string dominated phase would consist of cosmic strings in M^4×CP_2. The explanation for the constancy of the CMB temperature would suggest quantum coherence even in cosmic scales, made possible by the hierarchy of dark matters labelled by the values of Planck constant h_eff/h = n. Maybe the characterization as a superfluid rather than a gas, discussed with Gareth, is a more precise manner to say it. What would be fantastic is that these primordial structures would be directly visible nowadays.

  4. The dark matter particles emanating from the dark supermassive blackhole would transform gradually to ordinary matter so that the galaxy would be formed. This would explain the correlation of the bulge size with the mass (and size) of the blackhole, both correlating with the string tension. The rotational velocity of distant stars also correlates with the string tension, so that the strange correlation between the velocity of distant stars and the size of the galactic blackhole is implied by a common cause.

    This also explains the appearance of Fermi bubbles. Fermi bubbles are formed when dark particles from the blackhole scatter off dark matter and partially transform to ordinary cosmic rays, producing dark photons which partially transform to visible photons. This occurs only within the region where the spaghetti like structure containing dark matter inside the cosmic string exists. Fermi bubbles indeed have the same size as this region.

  5. While writing this I realized that also the galactic bar (2/3 of spiral galaxies have one) should be understood. This is difficult if there is nothing breaking the rotational symmetry around the long cosmic string. The situation changes if one has a portion of cosmic string along the plane of the galaxy.

    There is indeed evidence for the second straight string portion: in the Milky Way there are mini-galaxies rotating in a plane forming roughly a 60 degree angle with respect to the galactic plane, and the presence of two cosmic string portions roughly orthogonal to each other could explain this (see this). The galactic blackhole could be associated with the intersection of the string portions. The horizontal string portion could be part of the long cosmic string, a separate closed cosmic string, or even another long cosmic string. One can imagine two basic options for the formation of the bar.

    1. The first option is that the galactic bar is formed around the straight portion of the string. The gravitational force orthogonal to the string portion would create the bar. The ordinary matter in rigid body rotation would be accelerated while approaching the bar and would then slow down and dissipate part of its energy in the process. The slowed down stars would, after a further rotation of π, tend to get stuck around the string portion, forming bound states with it and starting to rotate around it: a kind of galactic traffic jam. Bars would be asymptotic outcomes of the galactic dynamics. Recent studies have confirmed the idea that bars are signs of full maturity as the "formative years" end (see this).
    2. The second option is that the bar is formed as the dark matter inside the bar is transformed to ordinary matter as the string portion thickens and loses dark energy, identified as Kähler magnetic energy, by a process analogous to the decay of inflaton vacuum energy. Bars would be transients in the evolution of galaxies rather than final outcomes. This option is not consistent with the idea that only the galactic blackhole serves as the source of dark matter transforming to ordinary matter.
  6. The pearls in a necklace model also explains why elliptic galaxies have a declining rotational velocity. They correspond to "free" closed strings which have not formed bound states with long cosmic strings, which would transform them to spiral galaxies. The recently found 10 billion year old galaxies with declining rotational velocity could correspond to elliptical galaxies of this kind.

    One can also imagine the analog of ionization. The bound state of a closed cosmic string and a long cosmic string decays, and the spiral galaxy starts to decay under the centrifugal force, no longer balanced by the gravitational force of the long cosmic string, and would transform to an elliptic galaxy. Also the central bulge would start to increase in size.

    It would also lose its central blackhole if it is associated with the long cosmic string. I am grateful to Gareth for giving a link to a popular article telling about this kind of elliptic galaxy with a very large size of one million light years, without a central blackhole and with an unusually large bulge region.
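The bookkeeping behind item 3 - conserved monopole flux forcing the tension to drop as the tube thickens - can be sketched in a few lines (arbitrary units; `Phi` denotes the conserved flux, an illustrative parametrization rather than anything from the text):

```python
def string_tension(S, Phi=1.0):
    """Energy per unit length of a flux tube with transversal area S.
    Conserved monopole flux Phi = B*S gives B = Phi/S; the magnetic
    energy density scales as B**2, so tension ~ B**2 * S ~ Phi**2/S."""
    B = Phi / S
    return B ** 2 * S

# Doubling the transversal area halves the tension: the lost magnetic
# (dark) energy is what would transform to ordinary matter.
for S in (1.0, 2.0, 4.0):
    print(S, string_tension(S))
```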

This view about galactic blackholes also suggests a profound revision of the GRT based view of the formation of blackholes. Note that in TGD one must of course speak about blackhole like objects differing from their GRT counterparts inside the Schwarzschild radius, and also outside it in microscopic scales (gravitational flux is mediated by magnetic flux tubes carrying dark particles). Perhaps also ordinary blackholes were once intersections of dark cosmic strings containing dark matter which gradually produces the stellar matter! If so, old blackholes would be more massive than the young ones.
  1. This new thinking conforms with the findings of LIGO. All three stellar blackhole pairs have been more massive than expected by more than an order of magnitude. There are also indications that the members of the second merging blackhole pair did not have parallel spin directions. This does not fit with the idea that a twin pair of stars was in question. It is very difficult to understand how two blackholes which do not form a bound system could find each other. A similar problem is encountered in bio-catalysis: how do two biomolecules manage to find each other in the molecular crowd? The solution to both problems is very similar.
  2. TGD suggests that the collision could have occurred when two blackholes travelling along strings, or along portions of the same knotted string, arrived from different directions. The gravitational attraction between the strings would have helped to generate the intersection, and the strings would have guided the blackholes together. In the biological context even a phase transition reducing the Planck constant of the flux tube connecting the molecules could occur and bring the molecules together.

See the chapter TGD and astrophysics or the article TGD view about universal galactic rotation curves for spiral galaxies.



Third gravitational wave detection by LIGO collaboration

The news about the third gravitational wave detection managed to direct the attention of at least some of us away from the doings of Donald J. Trump. Also the New York Times told about the gravitational wave detection by LIGO, the Laser Interferometer Gravitational-Wave Observatory. The gravitational waves are estimated to have been created in a blackhole merger at a distance of 3 billion light years. The results are published in the article "Observation of a 50-Solar-Mass Binary Black Hole Coalescence at Redshift 0.2" in Phys Rev Lett.

Two blackholes with masses 19 M(Sun) and 31 M(Sun) merged to a single blackhole with a mass of 49 M(Sun), meaning that roughly one solar mass was transformed to gravitational radiation. During the climax of the merger, they were emitting more energy in the form of gravitational waves than all the stars in the observable universe.

The colliding blackholes were very massive in all three events. There should be some explanation for this. An explanation considered in the article is that the stars giving rise to the blackholes were rather primitive, containing light elements, and this would have allowed large masses. The transformation to blackholes could have occurred directly without an intervening supernova phase. There is indeed a quite recent finding showing the disappearance of a very heavy star with 25 solar masses, suggesting that direct blackhole formation without a supernova explosion is possible for heavy stars.

It is interesting to take a fresh look at these blackhole like entities in the TGD framework. This however requires a brief summary of the formation of galaxies and stars in the TGD Universe (see this and this).

  1. The simplest possibility allowed by TGD is that galaxies as pearls in a necklace are knots (or spaghetti like substructures) in long cosmic strings. This does not exclude the original identification as closed strings around a long cosmic string. These loops must however be knotted. The galactic super-blackhole could correspond to a self-intersection of the long cosmic string. This view is forced by the experimental finding that for mini spirals there is a volume containing an essentially constant density of dark matter, with a radius 2-3 times larger than that of the volume containing most stars of the galaxy. This region would contain a galactic knot.

    The important conclusion is that stars would be subknots of these galactic knots, as indeed proposed earlier. Part of the magnetic energy would decay to ordinary matter, giving rise to the visible part of the star as the cosmic string thickens. This conforms with the finding that the region in which the dark matter density seems to be constant has a size a few times larger than the region containing the stars (the size scale is a few kpc).

  2. The light beams from supernovas would most naturally arrive along the flux tubes, being bound to helical orbits rotating around them. Primordial cosmic strings would appear as stars, galaxies, linear structures of galaxies, even elementary particles, hadrons, nuclei, and biomolecules: all these structures would be magnetic flux tubes, possibly knotted and linked. The space-time of GRT as a small deformation of M^4 would have emerged from the cosmic string dominated phase via the TGD counterpart of the inflationary period. The signatures of the primordial cosmic string dominated period would be directly visible in all scales! We would be seeing the incredibly simple truth, but our theories would prevent us from becoming aware of what we are seeing!
The crucial question concerns the dark matter fraction of the star.
  1. The fraction depends on the thickness of the deformed cosmic string, which originally has a 1-D projection to E^3 ⊂ M^4. If the Kähler magnetic energy dominates, the energy per length of a thickened flux tube is proportional to 1/S, S the area of the M^4 projection, and thus decreases rapidly with thickening. The thickness of the flux tube would be at minimum about the CP_2 size scale of 10^4 Planck lengths. If S is large enough, the contribution of the cosmic string to the mass of the star is smaller than that of the visible matter created in the thickening.
  2. What about very primitive stars - say those associated with the LIGO mergers? The proportion of visible matter in the star should gradually increase as the flux tube thickens. Could the detected blackhole fusions correspond to fusions of dark matter stars rather than of Einsteinian blackholes? If the radius of the objects satisfies r_S = 2GM, blackhole like entities are in question also in TGD. The space-time sheet assignable to a blackhole according to TGD has however two horizons. The first horizon would be the counterpart of the usual Schwarzschild horizon. At the second horizon the signature of the induced metric would become Euclidian - this is possible only in TGD. The cosmic string would be topologically condensed at this space-time sheet.
  3. Could most of the matter be dark even in the case of the Sun? What can we really say about the portion of ordinary matter inside the Sun? The total rate of nuclear fusion in the solar core depends on the density of ordinary matter, and one can argue that the existing model does not allow a considerable reduction of the portion of ordinary matter.

    There is however also another option - dark fusion - which would be at work in the TGD based model of cold fusion (see this) (low energy nuclear reactions (LENR) is a less misleading term) and also in TGD inspired biology (there is evidence for bio-fusion) as the Pollack effect (see this), in which part of the protons go to a dark phase at magnetic flux tubes to form dark nuclear strings, creating a negatively charged exclusion zone. Dark fusion would give rise to dark proton sequences at magnetic flux tubes, decaying by dark beta emission to beta stable nuclei and later to ordinary nuclei, releasing the nuclear binding energy.

    Dark fusion could explain the generation of elements heavier than iron, not possible in stellar cores (see this). The standard model assumes that they are formed in supernova explosions by the so called r-process, but empirical data do not support this hypothesis. In the TGD Universe dark fusion could occur outside stellar interiors.

  4. But if heavier elements are formed via dark fusion, why could the same not be true for the lighter elements? The TGD based model of atomic nuclei represents the nucleus as a string like object, or several of them, possibly linked and knotted. Thickened cosmic strings again! Nucleons would be connected by meson like bonds with a quark and an antiquark at their ends.

    This raises a heretic question: could also ordinary nuclear fusion rely on a similar mechanism? Standard nuclear physics relies on potential models approximating nucleons as point like particles: this is of course the only thing that the nuclear physicists of the past could imagine as children of their time. Should the entire nuclear physics be formulated in terms of the many-sheeted space-time concept and flux tubes? I have proposed this kind of formulation a long time ago (see this). What would distinguish between ordinary and dark fusion would be the value of h_eff = n × h.
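For orientation, the Schwarzschild radii r_S = 2GM/c^2 invoked in item 2 are easy to evaluate for the masses quoted for the LIGO event (standard constants; a back-of-the-envelope check, not part of the original text):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(m_solar):
    """r_S = 2GM/c^2 for a mass given in solar masses, in meters."""
    return 2 * G * m_solar * M_SUN / c ** 2

for m in (19, 31, 49):
    print(f"{m} M(Sun): r_S ~ {schwarzschild_radius(m) / 1e3:.0f} km")
```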

After this prelude it is possible to speculate about blackholes in the spirit of TGD.
  1. Also the interiors of blackholes would contain dark knots and have a magnetic structure. This predicts unexpected features such as magnetic moments, not possible for GRT blackholes. Also the matter inside the blackhole would be dark (the TGD based explanation for Fermi bubbles assumes this, see this). Already the model for the first LIGO event explained the unexpected gamma ray bursts in terms of the twisting of rotating flux tubes, an effect analogous to what causes sunspots: twisting and finally reconnection.
  2. One must also ask whether the LIGO blackholes are actually dark stars with a very small amount of ordinary matter. If the radius is indeed equal to the Schwarzschild radius r_S = 2GM and the mass is really what it is estimated to be, rather than being systematically smaller, then the interpretation as TGD counterparts of blackholes makes sense. If the mass is considerably smaller, the radius would be correspondingly larger, and one would not have a genuine blackhole. I do not however take this option too seriously.
  3. What about collisions of blackholes? Could they correspond to two knots moving along the same string in opposite directions and colliding? Or two cosmic strings intersecting and forming a cosmic crossroad with the second blackhole in the crossing? Or a self-intersection of a single cosmic string? In any case, a cosmic traffic accident would be in question.

    The second LIGO event gave hints that the spin directions of the colliding blackholes were not the same. This does not conform with the assumption that a binary blackhole system was in question. Since the spin direction would naturally be that of the long cosmic string, this suggests that a traffic accident in a cosmic crossroad, defined by an intersection or self-intersection, created the merger. Note that intersections tend to occur (think of moving strings in 3-D space) and could be stabilized by gravitational attraction: two string world sheets in a 4-D space-time surface have stable intersections, just like strings in a plane, unless they reconnect.

See the chapter Quantum astrophysics or the article LIGO and TGD.



TGD view about universal galactic rotation curves for spiral galaxies

The observed universality of rotation curves for spiral galaxies is a challenge for the TGD inspired model of galaxy formation. In TGD universality reduces to the scaling invariance of the rotation curves, natural since the TGD Universe is quantum critical. The study of mini spiral galaxies supports the conclusion that they have a dark matter core with a radius of a few parsecs - 2-3 times the optical radius. This is a problem for the halo models. The simplest TGD based explanation is that galaxies correspond to knots, or even spaghetti like tangles, of long dark strings defining a kind of necklace containing galaxies as pearls. The model also suggests that the dark matter core gives rise to the Fermi bubbles. Dark cosmic ray protons from the supermassive galactic blackhole containing dark matter would scatter from dark matter, and some fraction of the produced dark photons would transform to ordinary ones. This would take place only inside the dark matter sphere, and the double sphere structure would be due to the fact that cosmic rays would not proceed far in the galactic plane.
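The contrast between the string picture and a point mass can be made concrete. In the model described here the gravitational acceleration falls off like 1/ρ with the transversal distance ρ, so the circular velocity is independent of distance, while a point mass gives the declining Keplerian curve. A sketch in illustrative units (writing the proportionality constant as 2GT, an assumed parametrization in terms of the string tension T, not a formula from the text):

```python
import math

def v_point(r, GM=1.0):
    """Keplerian velocity around a point mass: v = sqrt(GM/r)."""
    return math.sqrt(GM / r)

def v_string(r, GT=1.0):
    """Circular velocity in a 1/rho gravitational field sourced by a
    straight string: v**2 = 2*G*T, independent of r -> flat curve."""
    return math.sqrt(2 * GT)

for r in (1.0, 4.0, 16.0):
    print(f"r={r:5.1f}  point: {v_point(r):.3f}  string: {v_string(r):.3f}")
```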

See the chapter Astrophysics in TGD or the article TGD view about universal galactic rotation curves for spiral galaxies.



Galactic blackholes as a test for TGD view about formation of galaxies?

Galactic blackholes (or blackhole like entities) could serve as a test for the proposal. Galactic blackholes are supermassive, having masses measured in billions of solar masses. These blackhole like entities are thought to grow rapidly as matter falls into them. In this process light is emitted, making the blackhole a quasar (see this), one of the most luminous objects in the Universe.

The TGD based model predicts that the seed of a galaxy would be formed in the reconnection of cosmic strings and consist of dark matter. If galaxies are formed in this manner, the blackhole like entity formed at the reconnection point would get its mass from the cosmic strings as dark mass, and the visible galactic mass would result from dark matter "boiling" to ordinary particles (as in the decay of the inflaton field to particles). Matter from the cosmic strings could flow to the reconnection point, and a fraction of antimatter would remain inside the cosmic string as dark matter.

During the "boiling" period intense radiation is generated, which leads one to ask whether an interpretation as the formation of a quasar makes sense. The flow of matter would be from the blackhole like object rather than into it, as in the ordinary model of a quasar. Quasar like objects could of course be created also by the standard mechanism as ordinary matter starts to fall into the galactic dark blackhole and transforms to dark matter. This would occur much later than the formation of the galactic blackhole like objects and the galaxies around them.

Now three odd-ball quasars have been discovered in the early universe (13 billion years in the past, less than a billion years after the Big Bang) by Eilers et al (see this). The authors conclude that the most compelling scenario is that these quasars have been shining only about 10^5 years. This time is not enough to build the mass that they have. This challenges the standard mechanism for the formation of galactic blackholes. What about the situation in the TGD Universe? Could the odd-ball quasars be quasars in the usual sense of the word, created as ordinary matter starts to fall into the galactic dark matter blackhole and transforms to dark matter? A quantum phase transition would be involved.

See the chapter Breaking of CP, P, and T in cosmological scales in TGD Universe or the article with the same title.



Getting even more quantitative about CP violation

The twistor lift of TGD forces the introduction of the analog of the Kähler form for M^4, call it J. J is a covariantly constant self-dual 2-form, whose square is the negative of the metric. There is a moduli space for these Kähler forms, parametrized by the directions of the constant and parallel magnetic and electric fields defined by J. J partially characterizes the causal diamond (CD): hence the notation J(CD). It can be interpreted as a geometric correlate for the fixing of the quantization axes of energy (rest system) and spin.

The Kähler form defines a classical U(1) gauge field, and there are excellent reasons to expect that it gives rise to U(1) quanta coupling to the difference B-L of baryon and lepton numbers. There is a coupling strength α_1 associated with this interaction. The first guess that it could be just the Kähler coupling strength leads to unphysical predictions: α_1 must be much smaller. Here I do not yet completely understand the situation. One can however check whether the simplest guess is consistent with the empirical inputs from the CP breaking of mesons and the antimatter asymmetry. This turns out to be the case.

One must specify the value of α_1 and the scaling factor transforming J(CD), which as the tensor square root of the metric has dimension length squared, to a dimensionless U(1) gauge field F = J(CD)/S. This leads to a series of questions.

How to fix the scaling parameter S?

  1. The scaling parameter relating J(CD) and F is fixed by flux quantization, implying that the flux of J(CD) over the twistor sphere S^2 of the twistor space M^4 × S^2 equals the area of S^2. The gauge field is obtained as F = J/S, where S = 4π R^2(S^2) is the area of S^2.
  2. Note that in Minkowski coordinates the length dimension is by convention shifted from the metric to the linear Minkowski coordinates, so that the magnetic field B_1 has the dimension of inverse length squared and corresponds to J(CD)/(S L^2), where L is naturally taken to be the size scale of the CD defining the unit length in Minkowski coordinates. The U(1) magnetic flux would be the signed area using L^2 as a unit.
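The two points above can be condensed into formulas (a restatement in the text's own notation, not an addition to it):

```latex
F \;=\; \frac{J(CD)}{S}, \qquad S \;=\; 4\pi R^2(S^2), \qquad
B_1 \;\sim\; \frac{J(CD)}{S\,L^2}, \qquad
\Phi_{U(1)} \;=\; \frac{\text{signed area}}{L^2} .
```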
How does R(S^2) relate to the Planck length l_P? l_P is either the radius l_P = R(S^2) of the twistor sphere S^2 of the twistor space T = M^4 × S^2, or the circumference l_P = 2π R(S^2) of a geodesic of S^2. The circumference is the more natural identification since it can be measured within the Riemann geometry of S^2, whereas the operational definition of the radius requires an imbedding to Euclidian 3-space.

How can one fix the value of the U(1) coupling strength α_1? As a guideline one can use the CP breaking in the K and B meson systems and the parameter characterizing the matter-antimatter asymmetry.

  1. The recent experimental estimate for the so called Jarlskog parameter characterizing the CP breaking in the kaon system is J ≈ 3.0×10^-5. For B mesons the CP breaking is about 50 times larger than for kaons, and it is clear that the Jarlskog invariant does not distinguish between different mesons, so that it is better to talk about orders of magnitude only.
  2. The matter-antimatter asymmetry is characterized by the number r = n_B/n_γ ∼ 10^-10, telling the ratio of the baryon density after annihilation to the original density. There is about one baryon per 10 billion photons of CMB left in the recent Universe.
Consider now the identification of α_1.
  1. Since the action is obtained by dimensional reduction from the 6-D Kähler action, one could argue that α_1 = α_K. This proposal leads to unphysical predictions in atomic physics, since the neutron-electron U(1) interaction scales up binding energies dramatically.

    The U(1) part of the action can however be regarded as a small perturbation characterized by the parameter ε = R^2(S^2)/R^2(CP_2), the ratio of the areas of the twistor spheres of T(M^4) and T(CP_2). One can argue that since the relative magnitude of the U(1) term and the ordinary Kähler action is given by ε, one has α_1 = ε × α_K, so that the coupling constant evolutions of α_1 and α_K would be identical.

  2. ε indeed serves in the role of a coupling strength at the classical level. α_K disappears from the classical field equations at the space-time level and appears only in the conditions for the super-symplectic algebra, but ε appears in the field equations, since the Kähler form J resp. the CP_2 Kähler form is proportional to R^2(S^2) resp. R^2(CP_2) times the corresponding U(1) gauge field. R(S^2) appears in the definition of the 2-bein for S^2 and therefore in the modified gamma matrices and the modified Dirac equation. Therefore ε^(1/2) = R(S^2)/R(CP_2) appears in the modified Dirac equation, as required by the CP breaking manifesting itself in the CKM matrix.

    Number theoretic universality (NTU) for the field equations in the regions where the volume term and Kähler action couple to each other demands that ε and ε^(1/2) are rational numbers, hopefully as simple as possible. Otherwise there is no hope of obtaining extremals with the parameters of the polynomials appearing in the solution in an arbitrary extension of rationals, and NTU is lost. Transcendental values of ε are definitely excluded. The most stringent condition ε=1 is also unphysical. ε = 2^(2r) is favoured number theoretically.

Concerning the estimate for ε it is best to use the constraints coming from p-adic mass calculations.
  1. p-Adic mass calculations predict the electron mass as

    me = (hbar/R(CP2)) (5+Y)^(1/2) .

    Expressing me in terms of the Planck mass mP and assuming Y=0 (Y ∈ (0,1)) gives an estimate for lP/R(CP2) as

    lP/R(CP2) ≈ 2.0×10^-4 .

  2. From lP = 2π R(S2) one obtains an estimate for ε, α1, and g1 = (4πα1)^(1/2), assuming αK ≈ α ≈ 1/137 in the electron length scale.

    ε = 2^-30 ≈ 1.0×10^-9 ,

    α1 = ε αK ≈ 6.8×10^-12 ,

    g1 = (4πα1)^(1/2) ≈ 9.24×10^-6 .

There are two options corresponding to lP = R(S2) and lP = 2π R(S2). Only the length of a geodesic of S2 has a meaning in the Riemann geometry of S2, whereas the radius of S2 has an operational meaning only if S2 is imbedded in E3. Hence lP = 2π R(S2) is the more plausible option.

For ε = 2^-30 the value of lP^2/R^2(CP2) is lP^2/R^2(CP2) = (2π)^2 × R^2(S2)/R^2(CP2) ≈ 3.7×10^-8. lP/R(S2) would be a transcendental number but since it would not be a fundamental constant but appear only at the QFT-GRT limit of TGD, this would not be a problem.
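The arithmetic above is easy to cross-check. The following is my own check (not from the source), assuming αK ≈ 1/137 as stated in the text:

```python
# Cross-check (my arithmetic, not part of the source) of the numbers quoted
# above for the option lP = 2*pi*R(S2) with alpha_K ~ 1/137.
import math

alpha_K = 1/137
eps = 2**-30                         # eps = R^2(S2)/R^2(CP2), approx 1.0e-9
alpha_1 = eps*alpha_K                # approx 6.8e-12
g_1 = math.sqrt(4*math.pi*alpha_1)   # approx 9.24e-6
lP2_over_R2 = (2*math.pi)**2*eps     # lP^2/R^2(CP2), approx 3.7e-8
print(alpha_1, g_1, lP2_over_R2)
```

Running this reproduces all three quoted values to the stated precision.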

One can make order of magnitude estimates for the Jarlskog parameter J and the fraction r = n(B)/n(γ). It is however not clear whether one should use ε or α1 as the basis of the estimate.

  1. The estimate based on ε gives

    J ∼ ε^(1/2) ≈ 3.2×10^-5 ,

    r ∼ ε ≈ 1.0×10^-9 .

    The estimate for J happens to be very near to the recent experimental value J ≈ 3.0×10^-5. The estimate for r is by an order of magnitude larger than the empirical value.

  2. The estimate based on α1 gives

    J ∼ g1 ≈ 0.92×10^-5 ,

    r ∼ α1 ≈ 0.68×10^-11 .

    The estimate for J is excellent but the estimate for r is more than an order of magnitude smaller than the empirical value. One explanation is that αK has a discrete coupling constant evolution and increases in short scales, and could have been considerably larger in the scale characterizing the situation in which the matter-antimatter asymmetry was generated.
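The two options can be tabulated side by side (again my own arithmetic, under the same assumptions ε = 2^-30, αK ≈ 1/137), for comparison with the quoted empirical values J ∼ 3.0×10^-5 and r ∼ 10^-10:

```python
# The two order-of-magnitude options above, evaluated numerically.
import math

eps = 2**-30
alpha_1 = eps/137                   # alpha_1 = eps*alpha_K with alpha_K ~ 1/137
g_1 = math.sqrt(4*math.pi*alpha_1)

J_from_eps, r_from_eps = math.sqrt(eps), eps   # option 1: based on eps
J_from_g, r_from_alpha = g_1, alpha_1          # option 2: based on alpha_1
print(J_from_eps, r_from_eps, J_from_g, r_from_alpha)
```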

Atomic nuclei have baryon number equal to the sum B = Z+N of proton and neutron numbers, and for neutral atoms the net U(1) charge corresponds to B = N. Only the hydrogen atom would also be U(1) neutral. The dramatic prediction of the U(1) force is that neutrinos might not be so weakly interacting as has been thought. If the quanta of the U(1) force are not massive, a new long range force is in question. U(1) quanta could become massive via U(1) super-conductivity causing a Meissner effect. As found, the U(1) part of the action can however be regarded as a small perturbation characterized by the parameter ε = R^2(S2)/R^2(CP2), so that one has α1 = ε×αK.

The quantal U(1) force must also be consistent with atomic physics. The value of the parameter α1 consistent with the size of the CP breaking of K mesons and with the matter-antimatter asymmetry is α1 = εαK = 2^-30 αK.

  1. Electrons and baryons would have an attractive interaction, which effectively transforms the em charge Z of the atom to Zeff = rZ, r = 1+(N/Z)ε1, ε1 = α1/α = ε×αK/α ≈ ε for αK ≈ α, predicted to hold true in the electron length scale. The parameter

    s = (1+(N/Z)ε)^2 - 1 = 2(N/Z)ε + (N/Z)^2 ε^2

    would characterize the isotope dependent relative shift of the binding energy scale.

    The comparison of the binding energies of hydrogen isotopes could provide stringent bounds on the value of α1. For the lP = 2π R(S2) option one would have α1 = 2^-30 αK ≈ 0.68×10^-11 and s ≈ 1.4×10^-10. s is by an order of magnitude smaller than the α^4 ≈ 2.9×10^-9 corrections from QED (see this). The predicted differences between the binding energy scales of isotopes of hydrogen might allow testing of the proposal.

  2. The U(1) charge B = N would be neutralized by the neutrinos of the cosmic background. Could this occur even at the level of a single atom, or does one have a plasma-like state? The ground state binding energy of neutrino atoms would be α1^2 mν/2 ∼ 10^-24 eV for mν = 0.1 eV! This is many, many orders of magnitude below the thermal energy of the cosmic neutrino background, estimated to be about 1.95×10^-4 eV (see this). The Bohr radius would be hbar/(α1 mν) ∼ 10^6 meters, of the same order of magnitude as the Earth radius. Matter should be U(1) plasma. U(1) superconductor would be the second option.
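The two neutrino-atom estimates can be reproduced by restoring hbar×c. This is my own sketch of the arithmetic, with mν = 0.1 eV as assumed above:

```python
# Ground state binding energy alpha_1^2*m_nu/2 and Bohr radius
# hbar/(alpha_1*m_nu) for the "neutrino atom", using hbar*c = 197.327e-9 eV*m.
alpha_1 = 2**-30/137     # = eps*alpha_K as estimated above
m_nu = 0.1               # neutrino mass in eV (assumption made in the text)
hbar_c = 197.327e-9      # eV*m

E_bind = alpha_1**2*m_nu/2        # ~2e-24 eV, far below the 1.95e-4 eV thermal energy
a_bohr = hbar_c/(alpha_1*m_nu)    # ~3e5 m, within an order of magnitude of Earth radius
print(E_bind, a_bohr)
```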
See the new chapter Breaking of CP, P, and T in cosmological scales in TGD Universe of "Physics in Many-Sheeted Space-time" or the article with the same title.



Breaking of CP, P, and T in cosmological scales in TGD Universe

The twistor lift of TGD forces the analog of the Kähler form for M4. The covariantly constant self-dual Kähler form J(CD) depends on the causal diamond (CD) of M4 and defines a rest frame and a spin quantization axis. This implies a violation of CP, P, and T. By introducing a moduli space for the Kähler forms one avoids the loss of Poincare invariance. The natural question is whether J(CD) could relate to the CP breaking for K and B type mesons, to the matter-antimatter asymmetry, and to the large scale parity breaking suggested by CMB data.

The simplest guess for the coupling strength of the U(1) interaction associated with J(CD) predicts the correct order of magnitude for the CP violation of the K meson and for the matter-antimatter asymmetry, and inspires a more detailed discussion. A general mechanism for the generation of matter asymmetry is proposed, and a model for the formation of disk- and elliptic galaxies is considered. The matter-antimatter asymmetry would be apparent in the sense that the CP asymmetry would force matter-antimatter separation: antimatter would reside as dark matter (in the TGD sense) inside magnetic flux tubes and matter outside them. Also the angular momenta of dark matter and matter would compensate each other.

See the new chapter Breaking of CP, P, and T in cosmological scales in TGD Universe or the article with the same title.



Further support for TGD view about galactic dark matter

The discoveries related to galaxies and dark matter emerge at an accelerating pace, and from the TGD point of view it seems that the puzzle of galactic dark matter is now solved.

The newest finding is described in the popular article This Gigantic Ring of Galaxies Could Bring Einstein's Gravity Into Question. What has been found is that in a local group of 54 galaxies, with Milky Way and Andromeda near its center, the other dwarf galaxies recede outwards in a ring. The local group lies in good approximation in a plane, and the situation is said to look like a spinning umbrella from which water droplets fly radially outwards.

The authors of the article Anisotropic Distribution of High Velocity Galaxies in the Local Group argue that the finding can be understood if Milky Way and Andromeda had a nearly head-on collision about 10 billion years ago. Milky Way and Andromeda would have lost the radially moving dwarf galaxies in this collision during the rapid acceleration turning the direction of motion of both. A Coulomb collision is a good analog.

There are however problems. The velocities of the dwarfs are quite too high, and the colliding Milky Way and Andromeda would have fused together by the friction caused by the dark matter halo.

What does TGD say? In TGD galactic dark matter (actually also dark energy) is at cosmic strings thickened to magnetic flux tubes, with galaxies like pearls along a necklace. The finding could perhaps be explained if the galaxies in the same plane make a near hit and generate the dwarf galaxies in the collision by the spinning umbrella mechanism.

In TGD Universe dark matter is at cosmic strings, and this automatically predicts a constant velocity distribution. The friction created by a dark matter halo is absent, and scattering in the proposed manner could be possible. The scattering event could be basically a scattering of approximately parallel cosmic strings, with Milky Way and Andromeda each forming one pearl in their respective cosmic necklaces.

But were Milky Way and Andromeda already associated with cosmic strings at that time? The time would be about 10 billion years ago. One cannot exclude this possibility. Note however that the binding to strings might have helped to avoid the fusion. The recent finding about the effective absence of dark matter about 10 billion years ago - velocity distributions decline at large distances - suggests that galaxies formed bound states with cosmic strings only later. This would be like the formation of neutral atoms from ions as energies are no longer too high! How fast things develop becomes clear from the fact that I posted the TGD explanation to my blog yesterday and replaced it with a corrected version this morning!

See the chapter TGD and Astrophysics of "Physics in Many-Sheeted Space-time" or the article TGD interpretation for the new discovery about galactic dark matter.



Velocity curves of galaxies decline in the early Universe

Sabine Hossenfelder gave a link to a popular article "Declining Rotation Curves at High Redshift" (see this) telling about a new strange finding about galactic dark matter. The rotation curves are declining in the early Universe, meaning distances of about 10 billion light years (see this). In other words, the rotation velocity of distant stars decreases with radius rather than approaching a constant - as if dark matter were absent and galaxies were baryon dominated. This challenges the halo model of dark matter. For illustrations of the rotation curves see the article. Of course, the conclusions of the article are uncertain.

Some time ago also a finding about the correlation of baryonic mass density with the density of dark matter emerged: the ScienceDaily article "In rotating galaxies, distribution of normal matter precisely determines gravitational acceleration" can be found here. The original article can be found in arXiv.org (see this). The TGD explanation (see this) involves only the string tension of cosmic strings and predicts the dependence of the density of baryonic matter on the distance from the center of the galaxy.

In standard cosmology, based on single-sheeted GRT space-time, large redshifts mean very early cosmology at the counterpart of a single space-time sheet, and the findings are very difficult to understand. What about the interpretation of the results in the TGD framework? Let us first summarize the basic assumptions behind TGD inspired cosmology and the view about galactic dark matter.

  1. The basic difference between TGD based and standard cosmology is that many-sheeted space-time brings in fractality and length scale dependence. In zero energy ontology (ZEO) one must specify in what length scale the measurements are carried out. This means specifying the causal diamond (CD), parameterized by moduli including its size. The larger the size of CD, the longer the scale of the physics involved. This is of course not new for quantum field theorists. It is however news for cosmologists. The twistorial lift of TGD allows one to formulate the vision quantitatively.
  2. TGD view resolves the paradox due to the huge value of the cosmological constant in very small scales. Kähler action and volume energy compensate each other, so that the effective cosmological constant decreases like the inverse of the p-adic length scale squared. The effective cosmological constant suffers a huge reduction in cosmic scales and solves the greatest (the "most gigantic" would be a better attribute) quantitative discrepancy that physics has ever encountered. The smaller value of the Hubble constant in long length scales finds also an explanation (see this). The acceleration of the cosmic expansion due to the effective cosmological constant decreases in long scales.
  3. In TGD Universe galaxies are located along cosmic strings, which have thickened to magnetic flux tubes, like pearls in a necklace. The string tension of cosmic strings is proportional to the effective cosmological constant. There is no dark matter halo: dark matter and energy are at the magnetic flux tubes and automatically give rise to a constant velocity spectrum for distant stars of galaxies, determined solely by the string tension. The model allows one also to understand the above mentioned finding about the correlation of baryonic and dark matter densities (see this).
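The claim that the string tension alone fixes a flat rotation curve can be illustrated with the Newtonian field of an idealized straight string. This is my own sketch: the factor 2 below is the standard Newtonian 2-D result and convention dependent; the text itself only uses v ∝ (TG)^(1/2).

```python
# For a straight string of mass per unit length T the Newtonian transversal
# acceleration is g = 2*G*T/rho, so circular orbits satisfy v^2 = 2*G*T:
# the velocity is independent of the distance rho, i.e. a flat rotation curve.
import math

def v_rot(GT):
    """Rotation velocity in units of c for dimensionless tension G*T/c^2."""
    return math.sqrt(2*GT)

# Inverting the galactic value v ~ c/1000 gives the order of magnitude of G*T:
GT_gal = (1e-3)**2/2      # = 5e-7
print(GT_gal, v_rot(GT_gal))
```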
What could be the explanation for the new findings about galactic dark matter?
  1. The idea of the first day is that the string tension of cosmic strings depends on the scale of observation, and this means that the asymptotic velocity of stars decreases in long length scales. The asymptotic velocity would be constant but smaller than for galaxies in smaller scales. The velocity graphs show that in the velocity range considered the velocity decreases. One cannot of course exclude the possibility that the velocity is asymptotically constant.

    The grave objection is that the scale in question is the galactic scale and the same for all galaxies irrespective of distance: the scale characterizes the object rather than its distance from the observer. Fractality suggests a hierarchy of string like structures such that the string tension in long scales decreases and the asymptotic velocity associated with them decreases with the scale.

  2. The idea of the next day is that the galaxies at very early times have not yet formed bound states with cosmic strings, so that the velocities of stars are determined solely by the baryonic matter and approach zero at large distances. Only later do the galaxies condense around cosmic strings - somewhat like water droplets around a blade of grass. The formation of these gravitationally bound states would be analogous to the formation of bound states of ions and electrons below the ionization temperature, or to the formation of hadrons from quarks, but taking place in a much longer scale. The early galaxies are indeed baryon dominated, and the decline of the rotation velocities would be real.
See the chapter TGD and Astrophysics or the article TGD interpretation for the new discovery about galactic dark matter.



What about actual realization of Lorentz invariant synchronization?

The clocks distributed at the hyperboloids of the light-cone assignable to CD can in principle be synchronized in a Lorentz invariant manner (see this). But what about the actual Lorentz invariant synchronization of the clocks? Could TGD say something non-trivial about this problem? I received an interesting link relating to this (see this). The proposed theory deals with the fundamental uncertainty of clock time due to quantum-gravitational effects. There are of course several uncertainties involved, since a quantum theory of gravity does not (officially) exist yet!

  1. An operationalistic definition of time is adopted in the spirit of the empiricist tradition. Einstein was also an empiricist and talked about networks of synchronized clocks. Nowadays particle physicists do not talk much about them. Symmetry based thinking dominates, and Special Relativity is taken as a postulate about symmetries.
  2. In quantum gravity the situation becomes even more complex. If the quantization attempt tries to realize quantum states as superpositions of 3-geometries, one loses time totally. If GRT space-time is taken to be a small deformation of Minkowski space, one has a path integral, and classical solutions of Einstein's equations define the background.

    The difficult problem is the identification of the Minkowski coordinates unless one regards GRT as QFT in Minkowski space. In astrophysical scales one must consider solutions of Einstein's equations representing astrophysical objects. For the basic solutions of Einstein's equations the identification of the Minkowski coordinates is obvious, but in the general case, such as a many-particle system, this is no longer so. This is a serious obstacle for the interpretation of the classical limit of GRT and for its application to planetary systems.

What about the situation in TGD? The particle physicist inside me trusts symmetry based thinking and has been somewhat reluctant to fill space-time with clocks, but I am ready to start the job if necessary! Since I am lazy, I of course hope that Nature might have done this already, and the following argument suggests that this might be the case!
  1. Quantum states can be regarded as superpositions of space-time surfaces inside causal diamond of imbedding space H= M4× CP2 in quantum TGD. This raises the question how one can define universal time coordinate for them. Some kind of absolute time seems to be necessary.
  2. In TGD the introduction of zero energy ontology (ZEO) and causal diamonds (CDs) as perceptive fields of conscious entities certainly brings in something new, which might help. CD is the intersection of future and past directed light-cones analogous to a big bang followed by big crunch. This is however only analogy since CD represents only perceptive field not the entire Universe.

    The imbeddability of space-time to CD× CP2 ⊂ H = M4× CP2 allows the light-cone proper time a, a^2 = t^2-r^2, near either CD boundary as a universal time coordinate, "cosmic time". At a = constant hyperboloids Lorentz invariant synchronization is possible. The coordinate a is a kind of absolute time near a given boundary of CD, representing the perceptive field of a particular conscious observer, and serves as a common time for all space-time surfaces in the superposition. Newton would not have been so wrong after all.

    Also the adelic vision involving number theoretic arguments selects a as a unique time coordinate. In the p-adic sectors of the adele, number theoretic universality (NTU) forces discretization, since the coordinates of the hyperboloid consist of a hyperbolic angle and ordinary angles. p-Adically one can realize neither the angles nor their hyperbolic counterparts. This demands discretization in terms of roots of unity (phases) and roots of e (exponentials of hyperbolic angles) inducing a finite-D extension of p-adic number fields, in accordance with the finiteness of cognition. a as a Lorentz invariant would be a genuine p-adic coordinate, which can in principle be continuous in the p-adic sense. Measurement resolution however discretizes also a.

    This discretization leads to tesselations of the a = constant hyperboloid having an interpretation in terms of a cognitive representation in the intersection of the real and various p-adic variants of the space-time surface, with points having coordinates in the extension of rationals involved. There are two choices for a. The correct choice corresponds to the passive boundary of CD, unaffected in state function reductions.

  3. Clearly, the vision about space-time as a 4-surface of H and NTU show their predictive power. Even more, adelic physics itself might solve the problem of Lorentz invariant synchronization in terms of a clock network assignable to the nodes of the tesselation!

    Suppose that the tesselation defines a clock network. What could synchronization mean? Certainly strong correlations between the nodes of the network. Could the correlation be due to maximal quantum entanglement (maximal at least in the p-adic sense), so that the network of clocks would behave like a single quantum clock? A Bose-Einstein condensate of clocks, as one might say? Could quantum entanglement in astrophysical scales, predicted by TGD via the hgr = heff = n×h hypothesis, help to establish synchronized clock networks even in astrophysical scales? Could Nature guarantee Lorentz invariant synchronization automatically?

    What would be needed is not only a 3-D lattice but also oscillatory behaviour in time. This is more or less a time crystal (see this and this)! Time crystal like states have been observed, but they require a feed of energy, in contrast to what Wilczek proposed. In TGD Universe this would be due to the need to generate large heff/h = n phases, since the energy of states with n increases with n. In biological systems this requires metabolic energy feed. Can one imagine even a cosmic 4-D lattice for which there would be an analog of metabolic energy feed?

    I already have a model for tensor networks, and also here a appears naturally (see this). Tensor networks would correspond at the imbedding space level to tesselations of the hyperboloid t^2-r^2 = a^2, analogous to 3-D lattices but with recession velocity taking the role of quantized position for the points of the lattice. They would induce tesselations of the space-time surface: the space-time surface would go through the points of the tesselation (having also a CP2 counterpart). The number of these tesselations is huge. Clocks would be at the nodes of these lattice like structures. Maximal entanglement would be a key feature of this network. It would make the clocks at the nodes one big cosmic clock.

    If astrophysical objects serving as clocks tend to be at the nodes of the tesselation, a quantization of cosmic redshifts is predicted! What is fascinating is that there is evidence for this: for the TGD based model see this and this! Maybe the dark matter fraction of the Universe has taken care of the Lorentz invariant synchronization, so that we need not worry about it!

See the chapter More about TGD inspired cosmology.



Is Lorentz invariant synchronization of clocks possible?

I participated in an FB discussion with several anti-Einsteinians. As a referee I have expressed my opinion about numerous articles claiming that Einstein's special or general relativity contains a fatal error not noticed by anyone before. I have tried to tell that colleagues are extremely eager to find a mistake in the work of a colleague (unless they can silence the colleague), so that logical errors can be safely excluded: if something goes wrong, it is at the level of basic postulates. In vain.

Once I had a long email discussion with a professor of logic who claimed to have found a logical mistake in the deduction of the time dilation formula. It was easy to find that he thought in terms of Newtonian space-time, and this was of course in conflict with the relativistic view. The logical error was his, not Einstein's. I tried to tell this. In vain again.

This time I was asked to explain what goes wrong in the 2-page article of Stephen Crothers (see this). This article was a good example of one's own logical error projected onto Einstein. The author assumed, besides the basic formulas for the Lorentz transformation, also a synchronization of clocks so that they show the same time everywhere (about how this is achieved see this).

Even more: Crothers assumes that Einstein assumed this synchronization to be Lorentz invariant. Lorentz invariant synchronization of clocks is however not possible for the linear time coordinate of Minkowski space, as also Crothers demonstrates. Einstein was wrong! Or was he? No: Einstein of course did not assume Lorentz invariant synchronization!

The assumption that the synchronization of a clock network is invariant under Lorentz transformations is of course in conflict with SR. In a Lorentz boosted system the clocks are not in synchrony. This expresses just Einstein's basic idea about the relativity of simultaneity. The basic message of Einstein is misunderstood - the Newtonian notion of absolute time again!

The basic predictions of SR - time dilation and Lorentz contraction - do not depend on the model of synchronization of clocks. Time dilation and Lorentz contraction follow from basic geometry of Minkowskian space-time extremely easily.

Draw a system K and a system K' moving with constant velocity with respect to K. The t'- and x'-axes of K' make an angle smaller than π/2 with each other and lie in the first quadrant.

  1. Assume first that K corresponds to the rest system of the particle. You see that the projection of the segment (0,t') of the t'-axis to the t-axis is shorter than the segment (0,t'): time dilation.
  2. Take K to be the system of a stationary observer. Project the segment L=(0,x') of the x'-axis to a segment on the x-axis. It is shorter than L: Lorentz contraction.
There is therefore no need to build synchronized networks of clocks to deduce time dilation and Lorentz contraction. They follow from Minkowskian geometry.
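The two effects can be captured numerically with the standard Lorentz transformation formulas (my own illustration; c = 1 and v = 0.6 is just a convenient example value):

```python
# Time dilation and Lorentz contraction for relative velocity v (c = 1):
# one unit of proper time corresponds to coordinate time t = gamma in K,
# and a rod of rest length L0 = 1 measures L0/gamma in K.
import math

v = 0.6
gamma = 1/math.sqrt(1 - v*v)   # = 1.25 for v = 0.6

t = gamma*1.0      # one unit of proper time dilates to 1.25 coordinate units
L = 1.0/gamma      # a unit rod contracts to 0.8
print(gamma, t, L)
```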

This however raises a question: is it possible to find a system in which synchronization is possible in a Lorentz invariant manner? The quantity a^2 = t^2-x^2 defines the proper time coordinate a along timelike geodesics as a Lorentz invariant time coordinate of the light-cone. a = constant hyper-surfaces are now hyperboloids. If you have a synchronized network of clocks, its Lorentz boost is also synchronized. General coordinate invariance of course allows this choice of time coordinate.
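That a = constant surfaces survive boosts, while t = constant planes do not, is a one-liner to verify numerically (my own sketch, c = 1):

```python
# a^2 = t^2 - x^2 is invariant under boosts, so clocks synchronized along
# an a = constant hyperboloid stay synchronized for every boosted observer.
import math

def boost(t, x, v):
    g = 1/math.sqrt(1 - v*v)
    return g*(t - v*x), g*(x - v*t)

t, x = 3.0, 1.0
for v in (0.1, 0.5, 0.9):
    tb, xb = boost(t, x, v)
    assert abs((tb*tb - xb*xb) - (t*t - x*x)) < 1e-9   # a^2 preserved
    # the t = constant plane is NOT preserved: tb differs from t in general
print("a^2 =", t*t - x*x)
```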

For Robertson-Walker cosmologies with sub-critical mass density the time coordinate a is Lorentz invariant, so that one can have Lorentz invariant synchronization of clocks. General Coordinate Invariance allows infinitely many choices of time coordinate, and the condition of Lorentz invariant synchronization fixes the time coordinate to cosmic time (or a function of it, to be precise). To my opinion this is a rather interesting fact.

What about TGD? In TGD space-time is a 4-D surface in H=M4×CP2. a^2 = t^2-r^2 defines a Lorentz invariant time coordinate a in the future light-cone M4+ ⊂ M4, which can be used as a time coordinate also for space-time surfaces.

Robertson-Walker cosmologies can be imbedded as 4-surfaces to H=M4×CP2. The empty cosmology would be just the light-cone M4+ imbedded in H by putting CP2 coordinates constant. If CP2 coordinates depend on the M4+ proper time a, one obtains more general expanding RW cosmologies. One can also have critical and over-critical cosmologies, for which Lorentz transformations are not isometries of the a = constant section. Also in this case clocks are synchronized in a Lorentz invariant manner. The duration of these cosmologies is finite: the mass density diverges after a finite time.

See the chapter More about TGD inspired cosmology.



Bullet cluster, cold dark matter, and MOND

Sabine Hossenfelder wrote about the Bullet Cluster. Usually the Bullet Cluster is seen to favor dark matter and disfavor MOND, a theory introducing a modification of Newtonian gravity. Sabine Hossenfelder saw it differently.

Cold dark matter model (ΛCDM) and MOND are two competing mainstream models explaining the constant velocity spectrum of stars in galaxies.

  1. ΛCDM assumes that dark matter forms a spherical halo around the galaxy and that its density profile is such that it gives the observed velocity spectrum of distant stars. The problem of the model is that the dark matter distribution can have many shapes, and it is not easy to understand why an approximately constant velocity spectrum is obtained. Also the attempts to find dark matter particles, identified as some exoticons, have failed one after another. The recent finding that the velocity spectrum of distant stars around galaxies correlates strongly with the density of baryonic matter also challenges this model: it is difficult to believe that the halo would have so universal a baryonic mass density.
  2. MOND does not assume dark matter but makes an ad hoc modification of gravitational force for small accelerations. The problem of MOND is that it is indeed an ad hoc modification and it is not easy to see how to make it consistent with general relativity: it is difficult to do cosmology using MOND. For small accelerations (small space-time curvatures) one would expect Newtonian theory to be an excellent approximation.
Consider now how the Bullet Cluster relates to these two options. The Bullet Cluster is a pair of galaxy clusters which has emerged from a collision (see the figure). There exists data at optical wavelengths about stars. Stars experience only a small gravitational slowing down and are expected to go through the collision region rather fast. Data from X-ray measurements give information about the intergalactic gas associated with the clusters. This gas interacts electromagnetically, is slowed down much more, and remains in the collision region for a longer time. The red regions in the figure correspond to the gas. Gravitational lensing in turn gives information about space-time curvature, and the two lensing regions are farthest away from the collision center. These regions are blue and would naturally correspond to dark matter in the ΛCDM model. Both models have severe problems.
  1. In the cold dark matter model the event would require too high a relative velocity for the colliding clusters - about c/100. The probability for this kind of collision in the cold dark matter model is predicted to be very low - about 6.4×10^-6. Something seems to be wrong with the ΛCDM model.
  2. In MOND the needed relative collision velocities are argued to be much more frequent. Bee however forgot to mention that in MOND the lensing is expected to be associated with the X-ray region (hot gas in the center of the figure) rather than with the blue regions disjoint from it. This observation is a very severe blow against the MOND model.
The logical conclusion is that there indeed seems to be dark matter there, but it is something different from cold dark matter. What could it be?

What could be the interpretation in TGD?

  1. In TGD galaxies are associated with cosmic strings or more general string like objects, like pearls along a necklace: that this is the case has been known for decades but for some reason mysterious to me has not been used as a guideline in dark matter models. Maybe it is very difficult to see things from a bigger perspective than that of galaxies.

    The flux tubes carry Kähler magnetic energy, dark energy, and dark matter in the TGD sense, having heff = n×h. The galactic matter experiences a transversal 1/ρ gravitational force predicting a constant velocity spectrum for distant stars when baryonic matter is neglected. Note that one avoids a model for the profile of the halo altogether. The motion of the galaxy along the flux tube is free apart from the forces caused by the galaxy itself. The presence of baryonic matter implies that the velocity increases slowly with distance up to some critical radius. From the recent findings correlating the observed velocity spectrum with the density of baryonic matter one can deduce the density of baryonic matter (see this). A possible interpretation is as remnants of a cosmic string like object produced in its decay to ordinary matter, completely analogous to the decay of the vacuum energy of the inflaton field to matter in inflation theory.

    The order of magnitude for the velocity vgal of distant stars in galaxies is about vgal ∼ c/1000. In the absence of baryonic matter it is predicted to be constant and to satisfy v ∝ (TG)^(1/2), with T the string tension and G Newton's constant (c=1). T in turn is proportional to 1/R^2, where R is the CP2 radius. The maximal velocity is obtained for cosmic strings. For the magnetic flux tubes resulting when cosmic strings develop a 4-D M4 projection, the string tension T and thus vgal are reduced. One obtains larger velocities if there are several parallel flux tubes forming a gravitationally bound state so that the tensions add.

  2. By fractality also galaxy clusters are expected to form similar linear structures. Concerning the interpretation of the Bullet Cluster one can imagine two options.
    1. The two colliding clusters could belong to the same string like object and move in opposite directions along it. In this case gravitational lensing would be most naturally associated with the flux tube and there would be single linear blue region instead of the two blue spots of the figure.
    2. The clusters could also belong to different flux tubes, which pass by each other and induce the collision of clusters and the gas associated with them. If the flux tubes are more or less parallel and orthogonal to the plane of the figure, the gravitational lensing would be from the two string like objects and two disjoint blue spots would appear in the figure. This option conforms with the figure.

  3. The collision velocity would correspond to the relative velocity of the flux tubes. Can one say anything about the needed collision velocities? The naive first guess of a dimensional analyst is that the rotation velocity vgal ∝ (TG)^(1/2) determining the galactic rotation spectrum also determines the typical relative velocity between galaxies. Here T would be the string tension of the flux tubes containing galaxy clusters along them. T would gradually decrease during the cosmic evolution as the flux tubes get thicker and the magnetic energy density is reduced. The velocity v ∼ c/100 suggested by the ΛCDM model is 10 times larger than the velocity c/1000 for distant stars in galaxies. By fractality a similar view would apply to galaxy clusters assigned to flux tubes. Cluster flux tubes containing clusters along them could correspond to bound states of parallel galactic flux tubes containing galaxies along them.
  4. The simplest model for the collision of flux tubes treats them as parallel rigid strings so that a dimensional reduction to D=2 occurs. The gravitational potential is logarithmic: V= Klog(ρ). One can use the conservation laws of angular momentum and energy to solve the equations of motion just as in the 3-D central force problem. The initial and final angular momentum per unit mass equals j= v0a, where a is the impact parameter and v0 the initial velocity. The initial energy per unit mass equals e= v02/2 and is the same in the final state. The conservation law for e gives e= v2/2+Klog(ρ) = v02/2. The conservation law for angular momentum reads j= vρsin(φ)=v0a and gives v= j/ρsin(φ). The velocity is given by v2=(dρ/dt)2+ ρ2(dφ/dt)2, which together with the conservation laws leads to a first order differential equation for dρ/dt.

    Since the potential is logarithmic, there is rather small variation of energy in the collision so that the clusters interact rather weakly. This could produce the same effect as larger relative collision velocity in ΛCDM model with kinetic energy dominating over gravitational potential.
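    The weak interaction claimed above can be illustrated numerically. The sketch below is not from the text: it integrates the planar motion of one rigid string in the potential V(ρ) = Klog(ρ) per unit mass with a velocity Verlet scheme; the values of K, v0 and the impact parameter are illustrative assumptions. The conserved energy e = v2/2 + Klog(ρ) stays constant and the asymptotic speed is nearly unchanged, as claimed.

```python
from math import hypot, log

# Illustrative parameters (assumptions, not taken from the text), with c = 1:
K = 1e-7     # effective logarithmic coupling, assumed weak
v0 = 0.01    # initial relative velocity ~ c/100
b = 1.0      # impact parameter a

def acc(x, y):
    # Radial force per unit mass for V = K*log(rho): magnitude K/rho, towards origin.
    r2 = x * x + y * y
    return -K * x / r2, -K * y / r2

def integrate(x, y, vx, vy, dt=0.05, steps=40000):
    """Velocity-Verlet integration of planar motion in V = K log(rho)."""
    ax, ay = acc(x, y)
    for _ in range(steps):
        x += vx * dt + 0.5 * ax * dt * dt
        y += vy * dt + 0.5 * ay * dt * dt
        ax2, ay2 = acc(x, y)
        vx += 0.5 * (ax + ax2) * dt
        vy += 0.5 * (ay + ay2) * dt
        ax, ay = ax2, ay2
    return x, y, vx, vy

# Start far to the left, moving in the +x direction, offset by b.
x, y, vx, vy = integrate(-10.0, b, v0, 0.0)

e0 = 0.5 * v0**2 + K * log(hypot(-10.0, b))   # conserved energy per unit mass
e1 = 0.5 * (vx * vx + vy * vy) + K * log(hypot(x, y))
print(abs(e1 - e0))         # integrator's energy drift: ~ 0
print(hypot(vx, vy) / v0)   # final/initial speed: ~ 1, i.e. a weak encounter
```

    Because the potential varies only logarithmically, the speed after the encounter differs from v0 by far less than a tenth of a percent for these parameters, consistent with the clusters interacting only weakly.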

See the chapter TGD and astrophysics.



Questions about TGD

In FB I was asked a question about general aspects of TGD. It was impossible to answer the question with a few lines and I decided to write a blog posting. I am sorry for typos in the hastily written text. A more detailed article Can one apply Occam’s razor as a general purpose debunking argument to TGD? tries to emphasize the simplicity of the basic principles of TGD and of the resulting theory.

A. In what aspects TGD extends other theory/theories of physics?

I will replace "extends" with "modifies" since TGD also simplifies in many respects. I shall restrict the considerations to the ontological level, which in my view is the really important level.

  1. Space-time level is where TGD started from. Space-time as an abstract 4-geometry is replaced with space-time as a 4-surface in M4× CP2. In GRT space-time is a small deformation of Minkowski space.

    In TGD both the Relativity Principle (RP) of Special Relativity (SRT) and the General Coordinate Invariance (GCI) and Equivalence Principle (EP) of General Relativity hold true. In GRT RP is given up, which leads to the loss of conservation laws since Noether's theorem cannot be applied anymore: this is what led to the idea about space-time as a surface in H. Strong form of holography (SH) is a further principle reducing to the strong form of GCI (SGCI).

  2. TGD as a physical theory extends to a theory of consciousness and cognition. The observer, instead of remaining external to the Universe, becomes part of the physical system - the notion of self - and quantum measurement theory, which is the black sheep of quantum theory, extends to a theory of consciousness and also of cognition relying on p-adic physics as a correlate for cognition. Also quantum biology becomes part of fundamental physics, and consciousness and life are seen as basic elements of physical existence rather than something limited to the brain.

    One important aspect is a new view about time: experienced time and geometric time are not one and the same thing anymore, although they are closely related. ZEO explains how the experienced flow of time and its direction emerge. The prediction is that both arrows of time are possible and that this plays a central role in living matter.

  3. p-Adic physics is a new element and an excellent candidate for a correlate of cognition. For instance, imagination could be understood in terms of the non-determinism of p-adic partial differential equations for the p-adic variants of space-time surfaces. p-Adic physics and the fusion of real and various p-adic physics to adelic physics provide a fusion of the physics of matter with that of cognition in the TGD inspired theory of cognition. This means a dramatic extension of ordinary physics. Number Theoretical Universality states that in a certain sense the various p-adic physics and real physics can be seen as extensions of a physics based on algebraic extensions of rationals (and also those generated by roots of e inducing finite-D extensions of p-adics).
  4. Zero energy ontology (ZEO), in which so called causal diamonds (CDs, analogs of Penrose diagrams) can be seen as being forced by a very simple condition: the volume action forced by the twistorial lift of TGD must be finite. CD would represent the perceptive field defined by a finite volume of the imbedding space H=M4× CP2.

    ZEO implies that conservation laws, formulated only in the scale of a given CD, no longer select just a single solution of field equations as in classical theory. Strictly speaking, theories are impossible to test in the old classical ontology. In ZEO testing is possible via a sequence of state function reductions giving information about zero energy states.

    In principle a transition between any two zero energy states - analogous to events specified by their initial and final states - is possible, but Negentropy Maximization Principle (NMP) as the basic variational principle of state function reduction and of consciousness restricts the possibilities by forcing the generation of negentropy: the notion of negentropy requires p-adic physics.

    Zero energy states are quantum superpositions of classical time evolutions for 3-surfaces, and classical physics becomes an exact part of quantum physics: in QFTs this is only the outcome of the stationary phase approximation. The path integral is replaced with a well-defined functional integral - not over all possible space-time surfaces but over pairs of 3-surfaces at the ends of space-time at the opposite boundaries of CD.

    ZEO leads to a theory of consciousness as quantum measurement theory in which the observer ceases to be an outsider to the physical world. One also gets rid of the basic problem caused by the conflict between the non-determinism of state function reduction and the determinism of the unitary evolution. This is obviously an extension of ordinary physics.

  5. The hierarchy of Planck constants represents also an extension of quantum mechanics at the QFT limit. At the fundamental level one actually has the standard value of h, but at the QFT limit one has an effective Planck constant heff =n× h, n=1,2,...: this generalizes quantum theory. The scaling of h has a simple topological interpretation: the space-time surface becomes an n-fold covering of itself and the action becomes an n-multiple of the original, which can be interpreted in terms of heff=n×h.

    The most important applications are to biology, where quantum coherence could be understood in terms of a large value of heff/h. The large n phases resemble the large N limit of gauge theories with gauge couplings behaving as α ∝ 1/N, used as a kind of mathematical trick. Also gravitation is involved: heff is associated with the flux tubes mediating various interactions (analogous to wormholes in the ER-EPR correspondence). In particular, one can speak about hgr, which Nottale introduced originally, and heff= hgr plays a key role in quantum biology according to TGD.

B. In what sense TGD is simplification/extension of existing theory?

  1. Classical level: Space-time as a 4-surface of H means a huge reduction in degrees of freedom. There are only 4 field like variables - suitably chosen 4 coordinates of H=M4× CP2. All classical gauge fields and the gravitational field are fixed by the surface dynamics. There are no primary gauge fields or gravitational fields nor any other fields in the TGD Universe; they appear only at the QFT limit.

    The GRT limit would mean that the many-sheeted space-time is replaced by a single slightly curved region of M4. A test particle - a small particle like 3-surface - touching the sheets simultaneously experiences the sum of the gravitational and gauge forces. It is natural to assume that this superposition corresponds at the QFT limit to the sum of the deviations of the induced metrics of the space-time sheets from the flat metric and to the sum of the induced gauge potentials. These sums would define the fields of the standard model + GRT. At the fundamental level effects rather than fields would superpose. This is absolutely essential for the possibility of reducing the huge number of field like degrees of freedom. One can obviously speak of the emergence of various fields.

    A further simplification is that only preferred extremals are allowed, for which the data coding for them is reduced by SH to 2-D string world sheets and partonic 2-surfaces. TGD is almost like string model, but space-time surfaces are necessary for understanding the fact that experiments must be analyzed using classical 4-D physics. Things are extremely simple at the level of a single space-time sheet.

    Complexity emerges from many-sheetedness. From these simple basic building bricks - minimal surface extremals of Kähler action (note the extremal property with respect to both Kähler action and volume term, strongly suggested by the number theoretical vision, plus the analogs of Super Virasoro conditions on initial data) - one can engineer space-time surfaces with arbitrarily complex topology - in all length scales. An extension of the existing space-time concept emerges: extremely simple locally, extremely complex globally, with topological information added to the Maxwellian notion of fields (topological field quantization allowing one to talk about the field identity of a system/field body/magnetic body).

    Another new element is the possibility of space-time regions with Euclidian signature of the induced metric. These regions correspond to the 4-D "lines" of general scattering diagrams. Scattering diagrams have an interpretation in terms of space-time geometry and topology.

  2. The construction of quantum TGD using canonical quantization or path integral formalism failed completely for Kähler action because of its huge vacuum degeneracy. Even in the presence of the volume term perturbation theory fails completely due to extreme non-linearity. This led to the notion of the world of classical worlds (WCW) - roughly the space of 3-surfaces, essentially pairs of 3-surfaces at the boundaries of a given CD connected by preferred extremals of the action realizing SH and SGCI.

    The key principle is the geometrization of the entire quantum theory, not only of the classical fields geometrized by the space-time as surface vision. This requires a geometrization of hermitian conjugation and a geometric representation of the imaginary unit. Kähler geometry for WCW makes this possible and is fixed once the Kähler function defining the Kähler metric is known. The Kähler action for a preferred extremal defining the space-time surface as an analog of Bohr orbit was the first guess, but the twistor lift forced the addition of a volume term having an interpretation in terms of a cosmological constant.

    Already the geometrization of loop spaces demonstrated that the geometry - if it exists - must have maximal symmetries (isometries). There are excellent reasons to expect that this is true also in D=3. Physics would be unique from the requirement of its mathematical existence!

  3. WCW also has a spinor structure. Spinors correspond to fermionic Fock states built using the oscillator operators assignable to the induced spinor fields - free spinor fields. WCW gamma matrices are linear combinations of these oscillator operators, and Fermi statistics reduces to spinor geometry.

  4. There is no quantization in the TGD framework at the level of WCW. The construction of quantum states and the S-matrix reduces to group theory thanks to the huge symmetries of WCW. Zero energy states of the Universe (or CD) therefore correspond formally to classical WCW spinor fields satisfying the WCW Dirac equation, analogous to Super Virasoro conditions, and defining representations for the Yangian generalization of the isometries of WCW (the so called super-symplectic group). In ZEO states are analogous to pairs of initial and final states, and the entanglement coefficients between the positive and negative energy parts of zero energy states, expected to be fixed by Yangian symmetry, define the scattering matrix and have a purely group theoretic interpretation. If this is true, the entire dynamics would reduce to group theory in ZEO.

C. What is the hypothetical applicability of the extension - in energies, sizes, masses etc?

TGD is a unified theory and is meant to apply in all scales. Usually unifications rely on reductionistic philosophy and try to reduce physics to Planck scale. Also superstring models tried this and failed: what happens at long length scales was completely unpredictable (landscape catastrophe).

The many-sheeted space-time however forces one to adopt a fractal view. The Universe would be analogous to a Mandelbrot fractal down to CP2 scale. This predicts scaled variants of, say, hadron physics and electroweak physics. The p-adic length scale hypothesis and the hierarchy of phases of matter with heff=n×h interpreted as dark matter give a quantitative realization of this view.

  1. p-Adic physics shows itself also at the level of real physics. One ends up with the vision that particle mass squared has a thermal origin: the p-adic variant of particle mass squared is the thermal mass squared given by p-adic thermodynamics, mappable to the real mass squared by what I call canonical identification. The p-adic length scale hypothesis states that the preferred p-adic primes characterizing elementary particles correspond to primes near a power of 2: p≈ 2k. The p-adic length scale is proportional to p1/2.

    This hypothesis is testable and it turns out that one can predict particle masses rather accurately. This is highly non-trivial since the sensitivity to the integer k is exponential. So called Mersenne primes turn out to be especially favoured. This part of the theory was originally inspired by the regularities of the particle mass spectrum. I have developed arguments for why the crucial p-adic length scale hypothesis - actually its generalization - should hold true. A possible interpretation is that particles provide cognitive representations of themselves by p-adic thermodynamics.

  2. The p-adic length scale hypothesis leads also to the idea that particles could appear as differently p-adically scaled up variants. For instance, ordinary hadrons, to which one can assign the Mersenne prime M107=2107-1, could have fractally scaled variants. M89 and MG,107 (Gaussian Mersenne) would be two examples, and there are indications at LHC for these scaled up variants of hadron physics. These fractal copies of hadron physics and also of electroweak physics would correspond to an extension of the standard model.
  3. The dark matter hierarchy predicts zoomed up copies of various particles. The simplest assumption is that masses are not changed in the zooming up. One can however consider that the binding energy scale scales non-trivially. The dark phases would emerge at quantum criticality and give rise to the associated long range correlations (quantum length scales are typically scaled up by heff/h=n).
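As a concrete illustration of the number theoretic notions above (a hedged sketch, not code from the text): Mersenne primes Mk = 2k-1 are the extreme case of primes near powers of 2, the p-adic length scale scales as p1/2≈ 2k/2, and the canonical identification maps a p-adic style expansion Σ xkpk to the real number Σ xkp-k. A Miller-Rabin test never misclassifies an actual prime, so it reliably confirms the Mersenne primes used here.

```python
def is_prime(n):
    """Miller-Rabin with fixed witnesses; never misreports an actual prime."""
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def canonical_identification(n, p):
    """Map the digit expansion sum(x_k p^k) of n to the real sum(x_k p^-k)."""
    result, weight = 0.0, 1.0
    while n > 0:
        result += (n % p) * weight
        weight /= p
        n //= p
    return result

M89, M107 = 2**89 - 1, 2**107 - 1
print(is_prime(M89), is_prime(M107))   # True True: both are Mersenne primes
# Ratio of p-adic length scales L ∝ p^(1/2) for the two hadron physics variants:
print(round((M107 / M89) ** 0.5))      # 512 = 2^((107-89)/2)
print(canonical_identification(6, 2))  # 6 = 110 in base 2 maps to 1/2 + 1/4 = 0.75
```

The length scale ratio shows concretely why the sensitivity to k is exponential: each step in k scales the p-adic length scale by a factor 21/2.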

D. What is the leading correction/contribution to physical effects due to TGD onto particles, interactions, gravitation, cosmology?

  1. Concerning particles I already mentioned the key predictions.
    1. The existence of scaled variants of various particles and of entire branches of physics. The fundamental quantum numbers are just the standard model quantum numbers coded by CP2 geometry.

    2. Particle families have a topological description, meaning that space-time topology would be an essential element of particle physics. The genus of the partonic 2-surface (the number of handles attached to a sphere) is g=0,1,2,... and would give rise to family replication. g≤2 partonic 2-surfaces always have a global conformal symmetry Z2, and this suggests that they give rise to elementary particles identifiable as bound states of g handles. For g>2 this symmetry is absent in the generic case, which suggests that such surfaces can be regarded as many-handle states with a mass continuum rather than elementary particles. 2-D anyonic systems could represent an example of this.
    3. A hierarchy of dynamical symmetries as remnants of the super-symplectic symmetry suggests itself. The super-symplectic algebra possesses an infinite hierarchy of isomorphic sub-algebras with conformal weights being n-multiples of those for the full algebra (a fractal structure possessed also by ordinary conformal algebras). The hypothesis is that the sub-algebra specified by n and its commutator with the full algebra annihilate physical states and that the corresponding classical Noether charges vanish. This would imply that the super-symplectic algebra reduces to a finite-D Kac-Moody algebra acting as dynamical symmetries. A connection with the ADE hierarchy of Kac-Moody algebras suggests itself. This would predict new physics; condensed matter physics comes to mind.
    4. The number theoretic vision suggests that Galois groups for the algebraic extensions of rationals act as dynamical symmetry groups. They would act on the algebraic discretizations of 3-surfaces and space-time surfaces necessary to realize number theoretical universality. This would be completely new physics.
  2. Interactions would be mediated at the QFT limit by standard model gauge fields and gravitons. The QFT limit however loses all information about many-sheetedness, and there would be anomalies reflecting this information loss. In many-sheeted space-time light can propagate along several paths, and the time taken to travel along a light-like geodesic from A to B depends on the space-time sheet since the sheet is curved and warped. Neutrinos and gamma rays from SN1987A arriving at different times would represent a possible example of this. It is quite possible that the outer boundaries of even macroscopic objects correspond to boundaries between Euclidian and Minkowskian regions of the space-time sheet of the object.

    The failure of QFTs to describe bound states - say the hydrogen atom - could be a second example: many-sheetedness and the identification of a bound state as a single connected surface formed by proton and electron would be essential, and is taken into account in the wave mechanical description but not in the QFT description.

  3. Concerning gravitation the basic outcome is that by the number theoretical vision all preferred extremals are extremals of both Kähler action and volume term. This is true for all known extremals (what happens if one introduces the analog of Kähler form in M4 is an open question).

    Minimal surfaces carrying no Kähler field would be the basic model for a gravitating system. The minimal surface equations are a non-linear generalization of the d'Alembert equation with gravitational self-coupling via the induced metric. In the static case one has the analog of the Laplace equation of Newtonian gravity. One obtains the analog of gravitational radiation as "massless extremals" and also the analog of a spherically symmetric stationary metric.

    Blackholes would be modified. Besides the Schwarzschild horizon, which would differ from its GRT version, there would be a horizon where the signature of the induced metric changes. This would give rise to a layer structure at the surface of the blackhole.
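    The static Newtonian limit mentioned above can be checked with a few lines (a sketch under the standard assumption that the vacuum solution of the Laplace equation is φ(r) = -GM/r; nothing here is TGD-specific): the radial Laplacian (1/r2) d/dr (r2 dφ/dr) of this potential vanishes outside the source.

```python
# Finite-difference check that phi(r) = -GM/r is harmonic (radial Laplacian ~ 0).
GM = 1.0

def phi(r):
    return -GM / r

def radial_laplacian(f, r, h=1e-4):
    """(1/r^2) d/dr (r^2 df/dr) via central differences."""
    df = lambda x: (f(x + h) - f(x - h)) / (2 * h)
    g = lambda x: x * x * df(x)
    return (g(r + h) - g(r - h)) / (2 * h) / (r * r)

print(abs(radial_laplacian(phi, 2.0)))   # ~ 0 up to discretization error
```

    As a sanity check, a non-harmonic test function such as f(r) = r2 gives the non-zero value 6 for the same operator.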

  4. Concerning cosmology the hypothesis has been that RW cosmologies at the QFT limit can be modelled as vacuum extremals of Kähler action. This is admittedly an ad hoc assumption inspired by the idea that one has an infinitely long p-adic length scale, so that the cosmological constant - behaving like 1/p as a function of the p-adic length scale assignable to the volume term in the action - vanishes and leaves only the Kähler action. This would predict that the cosmology with critical mass density is specified by a single parameter - its duration - as is also the over-critical cosmology. Only sub-critical cosmologies have infinite duration.

    One can look at the situation also at the fundamental level. The addition of the volume term implies that the only RW cosmology realizable as a minimal surface is the future light-cone of M4: an empty cosmology, which predicts a non-trivial but slightly too small redshift just due to the fact that linear Minkowski time is replaced with the light-cone proper time, constant on the hyperboloids of M4+. Locally these space-time surfaces are however deformed by the addition of topologically condensed 3-surfaces representing matter. This gives rise to an additional gravitational redshift and the net cosmological redshift. This also explains why astrophysical objects do not participate in the cosmic expansion but only comove. They would have finite size and almost Minkowski metric.

    The gravitational redshift would be basically a kinematical effect. The energy and momentum of photons arriving from the source would be conserved, but the tangent space of the observer would be Lorentz-boosted with respect to that of the source and this would cause the redshift.
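    The kinematics of such a tangent-space boost is just the special relativistic Doppler effect (standard SR, nothing TGD-specific is assumed in this sketch): a boost by recession velocity β redshifts photon energy by 1+z = ((1+β)/(1-β))1/2.

```python
from math import sqrt

def redshift(beta):
    """Relativistic Doppler redshift for recession velocity beta (c = 1)."""
    return sqrt((1 + beta) / (1 - beta)) - 1

# For small boosts this reduces to the naive z ~ beta:
print(redshift(0.001))   # ~0.001
print(redshift(0.1))     # ~0.1055: the relativistic correction is visible
```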

    The very early cosmology could be seen as a gas of arbitrarily long cosmic strings in H (or M4) with 2-D M4 projection. The horizon would be infinite, and TGD strongly suggests that large values of heff make possible long range quantum correlations. The phase transition leading to the generation of space-time sheets with 4-D M4 projection would generate the many-sheeted space-time giving rise to GRT space-time at the QFT limit. This phase transition would be the counterpart of the inflationary period, and radiation would be generated in the decay of cosmic string energy to particles.

See the new chapter Can one apply Occam's razor as a general purpose debunking argument to TGD? or article with the same title.



To the ../index page