What's new in

Topological Geometrodynamics: an Overview

Note: Newest contributions are at the top!



Year 2018



Is it possible to determine experimentally whether gravitation is a quantal interaction?

Marletto and Vedral have proposed (thanks to Ulla for the link) an interesting method for testing whether gravitation is a quantal interaction (see this). I tried to understand what the proposal involves and how it translates to TGD language.

  1. If the gravitational field is quantal, it makes entanglement between two states possible. This is the intuitive idea, but what does it mean in the TGD picture? Feynman interpreted this as entanglement of the gravitational field of an object with the state of the object. If the object is in a state which is a superposition of states localized at two different points xi, the classical gravitational fields φgr differ and one has a superposition of states with different locations

    | I> = ∑i=1,2 | m at xi> | φgr,xi> ≡ | L> + | R> .

  2. Put two such de-localized states with masses m1 and m2 at some distance d to get the state | I1>| I2>, | Ii> = | L>i + | R>i. The 4 component pairs of the states interact gravitationally, and since the gravitational fields differ for different component pairs, the components develop different phases: one obtains an entangled state.

    The gravitational field would entangle the masses. If one integrates over the degrees of freedom associated with the gravitational field, one obtains a density matrix, and this density matrix is not pure if the gravitational field is quantal in the sense that it entangles with the particle position.

    That gravitation is able to entangle the masses would be a proof of the quantum nature of the gravitational field. This is however not easy to detect. If gravitation only serves as a parameter in the interaction Hamiltonian of the two masses, entanglement can be generated, but this does not prove that the gravitational interaction is quantal. It is required that the only interaction between the systems is gravitational, so that other interactions do not generate entanglement. Certainly, one should use masses having no em charges.

  3. In the TGD framework the view of Feynman is natural. One has a superposition of space-time surfaces representing this situation. The gravitational field of a particle is associated with the magnetic body of the particle represented as a 4-surface, and the superposition corresponds to a de-localized quantum state in the "world of classical worlds" (WCW), with xi representing particular WCW coordinates.
I am not a specialist in quantum information theory nor a quantum gravity experimentalist, so in the following I must proceed with fingers crossed and can only hope that I have understood correctly. To my best understanding, the general idea of the experiment is to use an interferometer to detect the phase differences generated by the gravitational interaction and inducing the entanglement - not for photons but for gravitationally interacting masses m1 and m2, assumed to be in a quantum coherent state describable by a wave function analogous to an em field. It is assumed that the gravitational interaction can be described classically, and this is also the case in TGD by quantum-classical correspondence.
  1. The authors think quantum information theoretically and reduce everything to qubits. The de-localization of a mass to a superposition of two positions corresponds to a qubit analogous to the spin or polarization of a photon.
  2. One must use an analog of an interferometer to measure the phase difference between the different values of this "polarization".

    A normal interferometer is a flattened square-like arrangement. Photons in superpositions of different spin states enter a beam splitter at the lower-left corner of the interferometer, which divides the beam into two beams with different polarizations: horizontal (H) and vertical (V). The vertical (horizontal) beam enters a mirror which reflects it into a horizontal (vertical) beam. One obtains paths V-H and H-V, which meet at a transparent mirror located at the upper-right corner of the interferometer and interfere.

    There is a detector D0 resp. D1 detecting the component of light that passed through the fourth mirror in the vertical resp. horizontal direction. The firing of D1 would select the H-V path and the firing of D0 the V-H path. This would thus tell along which path (V-H or H-V) the photon arrived. The interference and thus also the detection probabilities depend on the phases of the beams generated during the travel: this is important.

  3. If I have understood correctly, this picture of the interferometer must be generalized. The photon is replaced by a mass m in a quantum state which is a superposition of two states with polarizations corresponding to the two different positions. Beam splitting would mean that the components of the state of mass m localized at positions x1 and x2 travel along different routes. The wave functions must be reflected at the first mirror on each path and transmitted through the mirror at the upper-right corner. The detectors Di measure along which path the mass state arrived and localize the mass state at either position. The probabilities for the positions depend on the phase difference generated along the paths. I can only hope that I have understood correctly: in any case the notions of mirror and transparent mirror in principle make sense also for solutions of the Schrödinger equation.
  4. One must however have two interferometers, one for each mass. The masses m1 and m2 interact quantum gravitationally and the phases generated for different polarization states differ. The phase is generated by the gravitational interaction. The authors estimate that the phases generated along the paths are of the form

    Φi = [m1m2G/ℏ di] Δ t .

    Δt = L/v is the time taken to traverse the path of length L with velocity v. d1 is the smaller distance between the upper path for the lower mass m2 and the lower path for the upper mass m1; d2 is the distance between the upper path for the upper mass m1 and the lower path for m2. See Figure 1 of the article.

What does one need for the experiment?
  1. One should have de-localization of massive objects. In atomic scales this is possible. If one has heff/h > 1, one could also have a zoomed-up scale of de-localization, and this might be very relevant. The fountain effect of superfluidity comes to mind.
  2. The gravitational fields created by atomic objects are extremely weak, and this is an obvious problem. Gm1m2 for atomic mass scales is extremely small, since the Planck mass mP is something like 10^19 proton masses and atomic masses are of order 10-100 proton masses.

    One should have objects with masses not far from the Planck mass to make Gm1m2 large enough. The authors suggest using condensed matter objects having masses of order m ∼ 10^-12 kg, which is about 10^15 proton masses or 10^-4 Planck masses. The authors state that current technology allows de-localization of masses of this scale at two points. The distance d between the objects would be of order micron.

  3. For masses larger than the Planck mass one could have difficulties, since the quantum gravitational perturbation series need not converge for Gm1m2/ℏ > 1 (say). For the proposed mass scales this would not be a problem. A rough numerical estimate of the phase for these parameter values is sketched below.
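A rough numerical sketch (plain Python, standard SI constants) of the phase formula Φi = Gm1m2Δt/(ℏ di) for the order-of-magnitude values quoted above - m ∼ 10^-12 kg, d ∼ 1 micron - together with the Δt ∼ 10^-6 s mentioned further below. The inputs are illustrative, not values fixed by the proposal itself.

```python
G = 6.674e-11      # m^3 kg^-1 s^-2
hbar = 1.055e-34   # J s

m1 = m2 = 1e-12    # kg, the condensed matter mass scale suggested by the authors
d = 1e-6           # m, distance between the paths (of order a micron)
dt = 1e-6          # s, time spent in the interferometer (illustrative)

phi = G * m1 * m2 * dt / (hbar * d)
print(f"Phi ~ {phi:.2f} rad")   # of order unity for these inputs
```
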
What can one say about the situation in TGD framework?
  1. In the TGD framework the gravitational Planck constant ℏgr = Gm1m2/v0, assignable to the flux tubes mediating the interaction between m1 and m2 as macroscopic quantum systems, could enter the game and could in the extreme case reduce the value of the gravitational fine structure constant from Gm1m2/4πℏ to Gm1m2/4πℏeff = β0/4π, β0 = v0/c < 1. This would make the perturbation series convergent even for macroscopic masses behaving like quantal objects. The physically motivated proposal is β0 ∼ 2^-11. This would zoom up the quantum coherence length scales by hgr/h.
  2. What can one say in the TGD framework about the values of the phases Φ?
    1. For ℏ → ℏeff one would have

      Φi = [Gm1m2/ℏeff di] Δ t .

      For ℏ → ℏeff the phase differences would be reduced for given Δt. On the other hand, the quantum gravitational coherence time is expected to increase like heff, so that the values of the phase differences would not change if Δt is increased correspondingly. The time of 10^-6 seconds could be scaled up, but this would require increasing the total length L of the interferometer arms and/or slowing down the velocity v.

    2. For ℏeff=ℏgr this would give a universal prediction having no dependence on G or masses mi

      Φi = [v0Δ t/di] = [v0/v] [L/di] .

      If the Planck length is actually equal to the CP2 length R ∼ 10^3.5 (GNℏ)^1/2, one would have GN = R2/ℏeff with ℏeff ∼ 10^7 ℏ. One can consider both smaller and larger values of G, and for larger values the phase difference would be larger. For this option one would obtain 1/ℏeff^2 scaling for Φ. Also for this option the prediction for the phase difference is universal for heff = hgr.

    3. What is important is that the universality could be tested by varying the masses mi. This would however require that the masses mi behave gravitationally as coherent quantum systems. It is however possible that the largest systems behaving quantum coherently correspond to much smaller masses. The mass-independence of the prediction is illustrated by the small sketch below.
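A minimal check (plain Python) of the mass-independence claimed above: substituting ℏeff = ℏgr = Gm1m2/v0 into Φ = Gm1m2Δt/(ℏeff d) leaves Φ = v0Δt/d, with no dependence on G or on the masses. The value β0 = v0/c ∼ 2^-11 is the one quoted above; the masses and geometry are arbitrary illustrative inputs.

```python
G = 6.674e-11
c = 2.998e8
v0 = c * 2.0**-11            # beta0 ~ 2^-11 as quoted in the text

def phase(m1, m2, d, dt):
    hbar_gr = G * m1 * m2 / v0        # hbar_eff = hbar_gr = G*m1*m2/v0
    return G * m1 * m2 * dt / (hbar_gr * d)

# two very different mass pairs give exactly the same phase
print(phase(1e-12, 1e-12, d=1e-6, dt=1e-6))
print(phase(1.0, 5.0, d=1e-6, dt=1e-6))
print(v0 * 1e-6 / 1e-6)               # the universal value v0*dt/d
```
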
See the chapter About the Nottale's formula for hgr and the possibility that Planck length lP and CP2 length R are identical giving G= R2/ℏeff or the article Is the hierarchy of Planck constants behind the reported variation of Newton's constant?.



Did LIGO observe a non-standard value of G and are galactic blackholes really supermassive?

I have talked (see this) about the possibility that the Planck length lP is actually the CP2 length R, which is scaled up by a factor of order 10^3.5 from the standard Planck length. The basic formula for Newton's constant G would be a generalization of the standard formula, giving G = R2/ℏeff. There would be only one fundamental length scale in TGD, as the original idea indeed was. ℏeff at the "standard" flux tubes mediating the gravitational interaction (gravitons) would be larger than h by a factor of about n ∼ 10^6-10^7.

Also other values of heff are possible. The mysterious small variations of G known for a long time could be understood as variations of n by some factor. The fountain effect of superfluidity could correspond to a value of heff/h0 = n at the gravitational flux tubes increased from its standard value by some integer factor. The value of G would be reduced and would allow particles to reach greater heights already classically. In the Podkletnov effect n would increase by some factor and g would be reduced by a few per cent. A larger value of heff would also induce a larger de-localization height.

Also smaller values are possible, and in fact in condensed matter scales it is quite possible that n is rather small. Gravitation would be stronger but very difficult to detect in these scales. Neutrons in the gravitational field of Earth might provide a possible test. The general rule would be that the smaller the scale of the dark matter dynamics, the larger the value of G, with the maximum value Gmax = R2/ℏ0, ℏ = 6ℏ0.

Are the blackholes detected by LIGO really so massive?

LIGO (see this) has so far observed 3 fusions of blackholes giving rise to gravitational waves. For the TGD view about the findings of LIGO see this and this. The colliding blackholes were deduced to have unexpectedly large masses: something like 10-40 solar masses, which is regarded as rather strange.

Could it be that the masses were actually of the order of a solar mass and G was larger by this factor and heff smaller by this factor?! The masses of the colliding blackholes could be of the order of a solar mass and G would be larger than its normal value - say by a factor in the range [10,50]. If so, the LIGO observations would represent the first evidence for the TGD view about quantum gravitation, which is very different from the superstring based view. The fourth fusion was for neutron stars rather than blackholes, and the stars had masses of the order of a solar mass.

This idea works if the physics of a gravitating system depends only on G(M+m). That the classical dynamics depends on G(M+m) only follows from Equivalence Principle. But is this true also for gravitational radiation?

  1. If the power of gravitational radiation distinguishes between different values of M+m when G(M+m) is kept constant, the idea is dead. This seems to be the case. Dependence on G(M+m) only would lead to a contradiction in the limit where M+m approaches zero with G(M+m) fixed: the energy emitted per single period of rotation would be larger than M+m. The natural expectation is that the radiated power per cycle and per mass M+m depends on G(M+m) only, as a dimensionless quantity.
  2. From arXiv one can find an article (see this) in which the energy per unit solid angle and frequency radiated in a collision of blackholes is estimated; the outcome is proportional to E2G(M+m)2, where E is the energy of the colliding blackhole.

    The result is proportional to the mass squared measured in units of Planck mass squared, as one might indeed naively expect, since GM2 is analogous to the total gravitational charge squared measured using the Planck mass.

    The proportionality to E2 comes from the condition that the dimensions come out correctly. Therefore scaling G upwards would reduce the mass, and the power of gravitational radiation would be reduced like M+m. The power per unit mass depends on G(M+m) only. Gravitational radiation thus allows one to distinguish between two systems with the same Schwarzschild radius, although the classical dynamics does not allow this.

  3. One can express the classical gravitational energy E as a gravitational potential energy proportional to GM/R. This gives dependence on GM only, as Equivalence Principle for the classical dynamics requires, and for the collisions of blackholes R is measured using GM as a natural unit.
Remark: The calculation uses the notion of energy, which in general relativity is precisely defined only for stationary solutions. Radiation spoils the stationarity. The calculation of the radiation power in GRT is to some degree artwork, feeding in the classical conservation laws in the post-Newtonian approximation although they are lost in GRT. In the TGD framework the conservation laws are not lost and hold true at the level of M4×CP2.

What about supermassive galactic blackholes?

What about the supermassive blackholes in the centers of galaxies: are they really super-massive or is G super-large? The mass of the Milky Way super-massive blackhole is in the range 10^5-10^9 solar masses. The geometric mean is 10^7 solar masses, of the order of the standard value n = R2/(GNℏ) ∼ 10^7. Could one think that this blackhole actually has a mass in the range 1-100 solar masses and is assignable to an intersection of the galactic cosmic string with itself? How galactic blackholes are formed is not well understood. Now this problem would disappear: galactic blackholes would be there from the beginning!

The general conclusion is that only gravitational radiation allows one to distinguish between different masses M+m for a given G(M+m) in a system consisting of two masses, so that classically the opposite scalings of G and M+m define a symmetry.
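A small consistency check of this scaling argument, using the standard quadrupole formula for the power radiated by a circular binary, P = (32/5) G^4 (m1m2)^2 (m1+m2)/(c^5 r^5) (textbook GRT, not TGD-specific): under G → xG, m → m/x, which keeps G(M+m) and hence the classical orbit fixed, the power drops like 1/x, i.e. like M+m, as argued above. The masses and separation below are illustrative.

```python
G = 6.674e-11
c = 2.998e8
Msun = 1.989e30

def power(G, m1, m2, r):
    # quadrupole radiated power of a circular binary with separation r
    return 32.0 / 5.0 * G**4 * (m1 * m2)**2 * (m1 + m2) / (c**5 * r**5)

m1 = m2 = 30 * Msun       # LIGO-like masses, illustrative
r = 1.0e6                 # separation in meters, illustrative

for x in (1, 10, 50):
    print(x, power(x * G, m1 / x, m2 / x, r))   # scales as 1/x, i.e. like M+m
```
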

See the chapter About the Nottale's formula for hgr and the possibility that Planck length lP and CP2 length R are identical giving G= R2/ℏeff of "Physics in many-sheeted space-time" or the article Is the hierarchy of Planck constants behind the reported variation of Newton's constant?.



Galois groups and genes

The question about possible variations of Geff (see this) led again to the old observation that sub-groups of the Galois group could be analogous to conserved genes in that they could be conserved in number theoretic evolution. Small variations, such as a small variation of a Galois subgroup serving as an analog of a gene, would change G only a little bit; for instance, the dimension of the Galois subgroup would change slightly. There are also big variations of G in which a new sub-group can emerge.

The analogy between subgroups of Galois groups and genes also goes in the other direction. I proposed a long time ago that genes (or maybe even DNA codons) could be labelled by heff/h = n. This would mean that genes (or even codons) are labelled by the Galois group of a Galois extension (see this) of rationals with dimension n defining the number of sheets of the space-time surface as a covering space. This could give a concrete dynamical and geometric meaning for the notion of gene, and it might some day be possible to understand why a given gene correlates with a particular function. This is of course one of the big problems of biology.

One should have some kind of procedure giving rise to hierarchies of Galois groups assignable to genes. One would also like to assign to letters, codons and genes an extension of rationals and its Galois group. The natural starting point would be a sequence of so-called intermediate Galois extensions EH leading from rationals or some extension K of rationals to the final extension E. A Galois extension has the property that if a polynomial with coefficients in K has a single root in E, also the other roots are in E, meaning that the polynomial with coefficients in K factorizes into a product of linear polynomials. For Galois extensions the defining polynomials are irreducible, so that they do not reduce to a product of polynomials.

Any sub-group H ⊂ Gal(E/K) leaves the intermediate extension EH invariant element-wise as a sub-field of E (see this). Any subgroup H ⊂ Gal(E/K) defines an intermediate extension EH, and subgroups H1 ⊂ H2 ⊂ ... define a hierarchy of extensions EH1 > EH2 > EH3 ... with decreasing dimension. The subgroups H are normal - in other words Gal(E) leaves them invariant and Gal(E)/H is a group. The order |H| is the dimension of E as an extension of EH. This is a highly non-trivial piece of information. The dimension of E factorizes into a product ∏i |Hi| of dimensions for a sequence of groups Hi.

Could a sequence of DNA letters/codons somehow define a sequence of extensions? Could one assign to a given letter/codon a definite group Hi so that a sequence of letters/codons would correspond to a product of some kind for these groups, or should one be satisfied only with the assignment of a standard kind of extension to a letter/codon?

Irreducible polynomials define Galois extensions, and one should understand what happens to an irreducible polynomial of an extension EH in a further extension to E. The degree increases by a factor, which is the dimension of E/EH and also the order of H. Is there a standard manner to construct irreducible extensions of this kind?

  1. What comes to the mathematically uneducated mind of a physicist is the functional composition Pm(Pn(x)) of polynomials assignable to sub-units (letters/codons/genes) with coefficients in K as an algebraic counterpart for the product of sub-units. Pm(Pn(x)) would be a polynomial of degree m× n in K and a polynomial of degree m in EH, and one could assign to a given gene a fixed polynomial obtained as an iterated function composition. Intuitively it seems clear that in the generic case Pm(Pn(x)) does not decompose into a product of lower order polynomials. One could also use polynomials assignable to codons or letters as basic units. Also the polynomials of genes could be fused in the same manner.
  2. If this indeed gives a Galois extension, the dimension m of the intermediate extension should be the same as the order of its Galois group. Composition would be non-commutative but associative, as the physical picture demands. The longer the gene, the higher the algebraic complexity would be. Could functional composition define the rule for which extensions and Galois groups correspond to genes? Very naively, functional composition in the mathematical sense would correspond to composition of functions in the biological sense. (A small computational illustration of composing irreducible polynomials follows this list.)
  3. This picture would conform with M8-M4×CP2 correspondence (see this), in which the construction of space-time surfaces at the level of M8 reduces to the construction of zero loci of polynomials of octonions with rational coefficients. DNA letters, codons, and genes would correspond to polynomials of this kind.
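A small computational illustration (Python with sympy, assuming it is installed) of the composition idea above: composing two irreducible polynomials with rational coefficients and checking that the composite has degree m× n and remains irreducible, so that it again defines an extension of rationals. The particular polynomials are illustrative choices, not ones singled out by the genetic code.

```python
from sympy import symbols, Poly, compose

x = symbols('x')
P3 = x**3 - 2      # a degree-3 "unit", irreducible over Q (Eisenstein at p = 2)
P2 = x**2 - 2      # a degree-2 "unit", irreducible over Q

comp = compose(P3, P2, x)       # P3(P2(x)) = (x**2 - 2)**3 - 2
p = Poly(comp, x)
print(p.degree())               # 6: degrees multiply under composition
print(p.is_irreducible)         # True: the composite defines a degree-6 extension
```
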
Could one say anything about the Galois groups of DNA letters?
  1. Since n = heff/h serves as a kind of quantum IQ, and since molecular structures consisting of a large number of particles are very complex, one could argue that n for DNA or its dark variant realized as dark proton sequences can be rather large and depend on the evolutionary level of the organism and even the type of cell (neuron vs. soma cell). On the other hand, one could argue that in some sense DNA, which is often thought of as an information processor, could be analogous to an integrable quantum field theory and be solvable in some sense. Notice also that one can start from a background defined by a given extension K of rationals and consider polynomials with coefficients in K. Under some conditions the situation could be like that for rationals.
  2. The simplest guess would be that the 4 DNA letters correspond to the 4 non-trivial finite groups with smallest possible orders: the cyclic groups Z2, Z3 with orders 2 and 3 plus the 2 finite groups of order 4 (see the table of finite groups in this). The groups of order 4 are the cyclic group Z4 and the Klein group Z2⊕ Z2 acting as the symmetry group of a rectangle that is not a square - its elements have square equal to the unit element. All these 4 groups are Abelian.
  3. On the other hand, polynomial equations of degree not larger than 4 can be solved exactly in the sense that one can write their roots in terms of radicals. Could there exist some kind of connection between the number 4 of DNA letters and the 4 polynomial degrees less than 5 for which the roots allow closed expressions in terms of radicals, as Galois found? Could the polynomials obtained by repeated functional composition of the polynomials of DNA letters also have this solvability property?

    This could be the case! Galois theory states that the roots of a polynomial are solvable in terms of radicals if and only if its Galois group is solvable, meaning that it can be constructed from abelian groups using abelian extensions (see this).

    Solvability translates to the statement that the group allows a so-called sub-normal series 1<G0<G1 ...<Gk=G such that Gj-1 is a normal subgroup of Gj and Gj/Gj-1 is an abelian group: it is essential that the series extends to G. An equivalent condition is that the derived series G→ G(1) → G(2) → ...→ 1, in which the (j+1):th group is the commutator group of the j:th group, ends in the trivial group.

    If one constructs the iterated polynomials by using only the 4 polynomials with Abelian Galois groups, the intuition of a physicist suggests that the solvability condition is guaranteed!

  4. The Wikipedia article also informs us that a finite group is solvable if and only if its composition series has only factors which are cyclic groups of prime order. Abelian groups are trivially solvable, nilpotent groups are solvable, and p-groups (having order which is a power of a prime) are solvable; moreover, all finite p-groups are nilpotent. This might relate to the importance of primes and their powers in TGD.

    Every group with order less than 60 is solvable. A fourth order polynomial can have at most S4 with 24 elements as its Galois group and is thus solvable. A fifth order polynomial can have the smallest non-solvable group, the alternating group A5 with 60 elements, as its Galois group and in this case is not solvable. Sn is not solvable for n>4, and Sn as a Galois group is favored by its special properties (see this). It would thus seem that solvable polynomials are exceptions (see the small check after this list).

    A5 acts as the group of icosahedral orientation preserving isometries (rotations). The icosahedron and a tetrahedron glued to it along one triangular face play a key role in the TGD inspired model of bio-harmony and of the genetic code (see this and this). The gluing of the tetrahedron increases the number of codons from 60 to 64. The gluing of the tetrahedron to the icosahedron also reduces the isometry group to the rotations leaving the common face fixed and makes it solvable: could this explain why the ugly looking gluing of a tetrahedron to the icosahedron is needed? Could the smallest solvable groups and the smallest non-solvable group be crucial for understanding the number theory of the genetic code?
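A small check (Python with sympy's permutation groups, assuming it is installed) of the solvability facts used above: S4, the largest Galois group of a quartic, is solvable, while A5 and S5 - the groups relevant for quintics - are not.

```python
from sympy.combinatorics.named_groups import SymmetricGroup, AlternatingGroup

print(SymmetricGroup(4).is_solvable)    # True: quartics are solvable by radicals
print(AlternatingGroup(5).is_solvable)  # False: the smallest non-solvable group (order 60)
print(SymmetricGroup(5).is_solvable)    # False: generic quintics are not solvable
```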

An interesting question inspired by M8-H duality (see this) is whether solvability could be posed on octonionic polynomials as a condition guaranteeing that TGD is an integrable theory in the number theoretical sense, or whether it perhaps follows from the conditions posed on the octonionic polynomials. Space-time surfaces in M8 would correspond to zero loci of real/imaginary parts (in the quaternionic sense) of octonionic polynomials obtained from rational polynomials by analytic continuation. Could solvability relate to the condition guaranteeing M8-H duality, boiling down to the condition that the tangent spaces of the space-time surface are labelled by points of CP2? This requires that the tangent or normal space is associative (quaternionic) and that it contains a fixed complex sub-space of octonions or, perhaps more generally, that there exists an integrable distribution of complex subspaces of octonions defining an analog of string world sheet.

See the chapter About the Nottale's formula for hgr and the possibility that Planck length lP and CP2 length R are identical giving G= R2/ℏeff or the article Is the hierarchy of Planck constants behind the reported variation of Newton's constant?.



Is the hierarchy of Planck constants behind the reported variation of Newton's constant?

It has been known for a long time that measurements of G give differing results, with differences between measurements larger than the measurement accuracy (see this and this). This suggests that some new physics might be involved. In the TGD framework the hierarchy of Planck constants heff = nh0, h = 6h0, together with the condition that the theory contains the CP2 size scale R as the only fundamental length scale, suggests the possibility that Newton's constant is given by G = R2/ℏeff, where R replaces the Planck length (lP = (ℏG)^1/2 → lP = R) and ℏeff/h is in the range 10^6-10^7.

The spectrum of Newton's constant is consistent with Newton's equations if the scaling of ℏeff inducing the scaling of G is accompanied by an opposite scaling of the M4 coordinates in M4×CP2: the dark matter hierarchy would correspond to a discrete hierarchy of scales given by the breaking of scale invariance. In the special case heff = hgr = GMm/v0 the quantum critical dynamics has the gravitational fine structure constant (v0/c)/4π as coupling constant, and it has no dependence on the value of G or the masses M and m.

In this article I consider a possible interpretation, in terms of varying heff, for the finding of a Chinese research group measuring two different values of G differing by 47 ppm. Also a model for the fountain effect of superfluidity as a de-localization of the wave function, with the maximal height of the vertical orbit increased by the change of the gravitational acceleration g at the surface of Earth induced by a change of heff, is discussed. Also the Podkletnov effect is considered. TGD inspired theory of consciousness allows one to speculate about levitation experiences possibly induced by the modification of Geff at the flux tubes for some part of the magnetic body accompanying the biological body in TGD based quantum biology.

See the chapter About the Nottale's formula for hgr and the possibility that Planck length lP and CP2 length R are identical giving G= R2/ℏeff or the article Is the hierarchy of Planck constants behind the reported variation of Newton's constant?.



How could the Planck length actually be equal to the much larger CP2 radius?!

The following argument states that the Planck length lP equals the CP2 radius R: lP = R, and that Newton's constant can be identified as G = R2/ℏeff. This idea, looking nonsensical at first glance, was inspired by an FB discussion with Stephen Paul King.

First some background.

  1. I believed for a long time that the Planck length lP would be the CP2 length scale R multiplied by a numerical constant of order 10^-3.5. Quantum criticality would have fixed the value of lP and therefore G = lP2/ℏ.
  2. The twistor lift of TGD led to the conclusion that the Planck length lP is essentially the radius of the twistor sphere of M4, so that in TGD the situation seemed to be settled, since lP would be a purely geometric parameter rather than a genuine coupling constant. But it is not! One should be able to understand the ratio lP/R, and here quantum criticality, which should determine only the values of genuine coupling parameters, does not seem to help.

    Remark: M4 has a twistor space in the usual conformal sense with the metric determined only apart from a conformal factor, and in the geometric sense as M4×S2: these two twistor spaces are part of a double fibering.

Could the CP2 radius R be the radius of the M4 twistor sphere, and could one say that the Planck length lP is actually equal to R: lP = R? One might get G = lP2/ℏ from G = R2/ℏeff!
  1. It is indeed important to notice that one has G = lP2/ℏ. In TGD ℏ is replaced with a spectrum of ℏeff = nℏ0, where ℏ = 6ℏ0 is a good guess. At the flux tubes mediating gravitational interactions one has

    ℏeff = ℏgr = GMm/v0 ,

    where v0 is a parameter with dimensions of velocity. I recently proposed a concrete physical interpretation for v0 (see this). The value v0/c = 2^-12 is suggestive on the basis of the proposed applications, but the parameter can in principle depend on the system considered.

  2. Could one consider the possibility that the twistor sphere radius for M4 equals the CP2 radius R: lP = R after all? This would allow one to circumvent the introduction of the Planck length as a new fundamental length and would mean a partial return to the original picture. One would have lP = R and G = R2/ℏeff. ℏeff/ℏ would be of order 10^7-10^8!
The problem is that ℏeff varies within large limits, so that also G would vary. This does not seem to make sense at all. Or does it?!

To get some perspective, consider first the phase transition replacing ℏ, and more generally ℏeff,i, with ℏeff,f = ℏgr.

  1. The fine structure constant is what matters in electrodynamics. For a pair of interacting systems with charges Z1 and Z2 one has the coupling strength Z1Z2e2/4πℏ = Z1Z2α, α ≈ 1/137.
  2. One can also define a gravitational fine structure constant αgr. Only αgr should matter in quantum gravitational scattering amplitudes. αgr would be given by

    αgr= GMm/4πℏgr= v0/4π .

    v0/4π would appear as a small expansion parameter in the scattering amplitudes. This in fact suggests that v0 is analogous to α and is a universal coupling constant, which could however be subject to discrete number theoretic coupling constant evolution.

  3. The proposed physical interpretation is that a phase transition ℏeff,i → ℏeff,f = ℏgr at the flux tubes mediating the gravitational interaction between M and m occurs if the perturbation series in αgr = GMm/4πℏ fails to converge (Mm ∼ mPl2 is the naive first guess for the critical value). Nature would be theoretician friendly and increase heff, reducing αgr so that the perturbation series converges again. (A small numerical sketch of this criterion follows this list.)

    Number theoretically this means an increase of algebraic complexity as the dimension n = heff/h0 of the extension of rationals involved increases from ni to nf, and the number n of sheets in the covering defined by the space-time surface increases correspondingly. Also the scale of the sheets would increase by the ratio nf/ni.

    This phase transition can also occur for gauge interactions. For electromagnetism the criterion is that Z1Z2α is so large that perturbation theory fails. The replacement ℏ → Z1Z2e2/v0 makes v0/4π the coupling strength. The phase transition could occur for atoms having Z ≥ 137, which are indeed problematic for the Dirac equation. For color interactions the criterion would mean that v0/4π becomes the coupling strength of color interactions when αs is above some critical value. Hadronization would naturally correspond to the emergence of this phase.

    One can raise interesting questions. Is v0 (presumably depending on the extension of rationals) a completely universal coupling strength characterizing any quantum critical system, independently of the interaction making it critical? Can for instance gravitation and electromagnetism be mediated by the same flux tubes? I have assumed that this is not the case. If it were the case, one could have for GMm < mPl2 a situation in which the effective coupling strength is of the form (GMm/Z1Z2e2)(v0/4π).
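A small numerical sketch (plain Python, SI units) of the criterion discussed in the list above: the ordinary gravitational coupling αgr = GMm/(4πℏc) exceeds unity roughly when Mm exceeds the Planck mass squared, while the replacement ℏ → ℏgr = GMm/v0 turns it into the universal value β0/4π. The value v0/c = 2^-12 is the one suggested above; the two 1 kg masses are an illustrative choice.

```python
import math

G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34
m_Pl = math.sqrt(hbar * c / G)                  # Planck mass ~ 2.2e-8 kg

M = m = 1.0                                     # kg, far above the Planck mass
alpha_gr = G * M * m / (4 * math.pi * hbar * c)
print(alpha_gr)                                 # >> 1: perturbation series fails

beta0 = 2.0**-12
hbar_gr = G * M * m / (beta0 * c)               # hbar_eff = hbar_gr
alpha_gr_dark = G * M * m / (4 * math.pi * hbar_gr * c)
print(alpha_gr_dark, beta0 / (4 * math.pi))     # both equal beta0/(4*pi) ~ 1.9e-5
```
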

The possibility of the proposed phase transition has rather dramatic implications for both quantum and classical gravitation.
  1. Consider first quantum gravitation. v0 does not depend on the value of G at all! The dependence of G on ℏeff could therefore be allowed, and one could have lP = R. At the quantum level the scattering amplitudes would not depend on G but on v0. I was happy to have found a small expansion parameter v0, but did not realize the enormous importance of the independence from G!

    Quantum gravitation would be like any gauge interaction with a dimensionless coupling, which is even small! This might relate closely to the speculated TGD counterpart of AdS/CFT duality between gauge theories and gravitational theories.

  2. But what about classical gravitation? Here G should appear. What could the proportionality of the classical gravitational force to 1/ℏeff mean? The invariance of Newton's equation

    dv/dt =-GM r/r3

    under heff→ xheff would be achieved by scaling v → v/x and t → t/x. Note that these transformations have a general coordinate invariant meaning as transformations of the coordinates of M4 in M4×CP2. This scaling means zooming up the size of the space-time sheet by x, which is indeed expected to happen in heff→ xheff!

What is so intriguing is that this connects to an old problem that I pondered a lot during the period 1980-1990 as I attempted to construct approximate spherically symmetric stationary solutions to the field equations for Kähler action. Naive arguments based on the asymptotic behavior of the solution ansatz suggested that one should have G = R2/ℏ. For a long time I indeed assumed R = lP, but p-adic mass calculations and work with cosmic strings forced me to conclude that this cannot be the case. The mystery was how G = R2/ℏ could be normalized to G = lP2/ℏ: the solution of the mystery is ℏ → ℏeff, as I have now - decades later - realized!

See the chapter About the Nottale's formula for hgr and the possibility that Planck length lP and CP2 length R are identical giving G= R2/ℏeff or the article About the physical interpretation of the velocity parameter in the formula for the gravitational Planck constant.



Large scale fluctuations in the metagalactic ionizing background near redshift six

I learned about a very interesting result related to early cosmology and challenging the standard cosmology. The result is described in the popular article "Early opaque universe linked to galaxy scarcity" (see this). The original article "Evidence for Large-scale Fluctuations in the Metagalactic Ionizing Background Near Redshift Six" by Becker et al is published in Astrophysical Journal (see this).

The abstract of the article is the following.

" The observed scatter in intergalactic Lyα opacity at z ≤ 6 requires large-scale fluctuations in the neutral fraction of the intergalactic medium (IGM) after the expected end of reionization. Post-reionization models that explain this scatter invoke fluctuations in either the ionizing ultraviolet background (UVB) or IGM temperature. These models make very different predictions, however, for the relationship between Lyα opacity and local density. Here, we test these models using Lyα-emitting galaxies (LAEs) to trace the density field surrounding the longest and most opaque known Lyα trough at z < 6. Using deep Subaru Hyper Suprime-Cam narrowband imaging, we find a highly significant deficit of z ≈ 5.7 LAEs within 20 h^-1 Mpc of the trough. The results are consistent with a model in which the scatter in Lyα opacity near z ∼ 6 is driven by large-scale UVB fluctuations, and disfavor a scenario in which the scatter is primarily driven by variations in IGM temperature. UVB fluctuations at this epoch present a boundary condition for reionization models, and may help shed light on the nature of the ionizing sources. "

The basic conclusion is that the opaque regions of the early Universe about 12.5 billion years ago (redshift z ∼ 6) correspond to a small number of galaxies. This is in contrast to standard model expectations. Opacity is due to the absorption of radiation by atoms, and the UV radiation generated by galaxies ionizes atoms and makes the Universe transparent. In standard cosmology the radiation would arrive from a rather large region. The formation of galaxies is estimated to have begun .5 Gy after the Big Bang, but there is evidence for galaxies already .2 Gy after the Big Bang (see this). Since the region studied corresponds to a temporal distance of about 12.5 Gly and the age of the Universe is around 13.7 Gy, UV radiation from a region of size about 1 Gly should have reached the intergalactic regions and caused the ionization.

The second conclusion is that there are large fluctuations in the opacity. What is suggested is that either the intensity of the UV radiation or the density of the intergalactic gas fluctuates. The fluctuations in the intensity of UV radiation could be understood if the radiation from the galaxies propagated only to a finite distance at early times. Why this should be the case is difficult to understand in standard cosmology.

Could TGD provide the explanation?

  1. In the TGD framework galaxies would have been born as cosmic strings thickened to flux tubes. This causes a reduction of the string tension as energy per unit length. The liberated dark energy and matter transformed to ordinary matter and radiation. Space-time emerges as thickened magnetic flux tubes. Galaxies would correspond to knots of cosmic strings and stars to their sub-knots.
  2. If the UV light emerging from the galaxies did not get far away from the galaxies, the ionization of the intergalactic gas did not occur, and these regions became opaque if the distance to the nearest galaxies exceeded a critical value.
  3. Why would the UV radiation at that time have been unable to leave some region surrounding the galaxies? The notion of many-sheeted space-time suggests a solution. The simplest space-time sheets are 2-sheeted structures if one does not allow space-time to have boundaries. The members of the pair are glued together along their common boundary. The radiation would have left this surface only partially: partial reflection should occur, in which the radiation propagating along the first member of the pair is reflected into a signal propagating along the second member. This model could explain the large fluctuations in the opacity as fluctuations in the density of galaxies.
  4. The cosmic expansion, occurring in the TGD framework in a jerk-wise manner as rapid phase transitions, would have expanded the galactic space-time sheets, and in the recent Universe this confinement of UV radiation would not occur: intergalactic space would be homogeneously ionized and transparent.
The echo phenomenon could be a completely general characteristic of the many-sheeted space-time.
  1. The popular article "Evidence in several Gamma Ray Bursts of events where time appears to repeat backwards" (see this) tells about the article "Smoke and Mirrors: Signal-to-Noise and Time-Reversed Structures in Gamma-Ray Burst Pulse Light Curve" of Hakkila et al (see this). The study of gamma ray bursts (GRBs) occurring in the very early Universe at distances of a few billion light years (smaller than for the opacity measurements by an order of magnitude) has shown that the GRB pulses have complex structures, suggesting that the radiation is partially reflected back at some distance and then back again in the core region. The duration of these pulses varies from 1 ms to 200 s. Could also this phenomenon be caused by the finite size of the space-time sheets assignable to the object creating GRBs?
  2. There is also evidence for blackhole echoes, which could represent an example of a similar phenomenon. Sabine Hossenfelder (see this) tells about the new evidence for echoes in the GW170817 event observed by LIGO, reported by Niayesh Afshordi, professor of astrophysics at Perimeter Institute, in the article "Echoes from the Abyss: A highly spinning black hole remnant for the binary neutron star merger GW170817" (see this). The earlier 2.5 sigma evidence has grown into 4.2 sigma evidence; 5 sigma is regarded as the criterion for discovery. For TGD based comments see this.
See the chapter TGD and Astrophysics or the article Some new strange effects associated with galaxies.





Conformal cyclic cosmology of Penrose and zero energy ontology based cosmology

Penrose has proposed an interesting cyclic cosmology (see this and this) in which two subsequent cosmologies are glued together along a conformal boundary. The metric of the next cosmology is related to that of the previous one by a conformal scaling factor, which approaches zero at the 3-D conformal boundary. The physical origin of this kind of distance scaling is difficult to understand. The prediction is the existence of concentric circles of cosmic size interpretable as a kind of memory of previous cosmic cycles.

In the TGD framework zero energy ontology (ZEO) inspired theory of consciousness suggests an analogous sequence of cosmologies. Now the cycles would correspond to the life cycles of a conscious entity of cosmic size having a causal diamond (CD) as its imbedding space correlate. The arrow of geometric time is defined as the time direction in which the temporal distance between the ends of the CD increases in a sequence of state function reductions leaving the passive boundary of the CD unaffected and having an interpretation as weak measurements. The arrow of time changes in "big" state function reductions changing the roles of the boundaries of the CD and meaning the death and re-incarnation of the self with an opposite arrow of time. Penrose's gluing procedure would be replaced with a "big" state function reduction in the TGD framework. This proposal is discussed in some detail, together with the possibility that also now concentric low variance circles in the CMB could carry memories about the previous life cycles of the cosmos. This picture applies to all levels in the hierarchy of cosmologies (hierarchy of selves), giving rise to a kind of Russian doll cosmology.

See the chapter TGD based cosmology or the article Conformal cyclic cosmology of Penrose and zero energy ontology based cosmology.



About the physical interpretation of the velocity parameter in the formula for the gravitational Planck constant

Nottale's formula for the gravitational Planck constant ℏgr = GMm/v0 involves a parameter v0 with dimensions of velocity. I have worked out the quantum interpretation of the formula, but the physical origin of v0 - or equivalently of the dimensionless parameter β0 = v0/c (to be used in the sequel) appearing in the formula - has hitherto remained open. In the following a possible interpretation based on the many-sheeted space-time concept, many-sheeted cosmology, and zero energy ontology (ZEO) is discussed.

A generalization of the Hubble formula β = L/LH for the cosmic recession velocity, where LH = c/H is the Hubble length and L is the radial distance to the object, is suggestive. This interpretation would suggest that some kind of expansion is present. The fact however is that stars, planetary systems, and planets do not seem to participate in the cosmic expansion. In the TGD framework this is interpreted in terms of quantal jerk-wise expansion taking place as relatively rapid expansions analogous to atomic transitions or quantum phase transitions. The TGD based variant of the Expanding Earth model assumes that during the Cambrian explosion the radius of Earth expanded by a factor 2.

There are two measures for the size of the system. The M4 size LM4 is identifiable as the maximum of the radial M4 distance from the tip of the CD associated with the center of mass of the system along the light-like geodesic at the boundary of the CD. The system also has a size Lind defined in terms of the induced metric of the space-time surface, which is space-like at the boundary of the CD. One has Lind < LM4. The identification β0 = LM4/LH < 1 does not allow the identification LH = LM4. LH would however naturally correspond to the size of the magnetic body of the system, in turn identifiable as the size of the CD.

One can deduce an estimate for β0 by approximating the space-time surface near the light-cone boundary as a Robertson-Walker cosmology, and expressing the mass density ρ defined as ρ = M/VM4, where VM4 = (4π/3)LM43 is the M4 volume of the system. ρ can be expressed as a fraction ε2 of the critical mass density ρcr = 3H2/8πG. This leads to the formula β0 = [rS/LM4]^1/2 × (1/ε), where rS is the Schwarzschild radius.
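A minimal sketch (plain Python) of this estimate, β0 = [rS/LM4]^1/2 × (1/ε) with rS = 2GM/c2. The values of LM4 and ε used below are placeholders only; in the text they are fixed by the detailed models for the planetary system and for Earth.

```python
import math

G = 6.674e-11
c = 2.998e8

def beta0(M, L_M4, eps):
    r_S = 2 * G * M / c**2        # Schwarzschild radius of the mass M
    return math.sqrt(r_S / L_M4) / eps

M_sun = 1.989e30
# hypothetical inputs: L_M4 ~ 1 AU, eps ~ 1
print(beta0(M_sun, L_M4=1.5e11, eps=1.0))
```
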

This formula is tested for the planetary system and Earth. The dark matter assignable to Earth can be identified as the innermost part of the inner core, with a volume which is .01 per cent of the volume of Earth. Also the consistency of Bohr quantization for dark and ordinary matter is discussed and leads to a number theoretical condition on the ratio of the ordinary and dark masses.

See the chapter About the Nottale's formula for hgr and the possibility that Planck length lP and CP2 length R are identical giving G= R2/ℏeff or the article About the physical interpretation of the velocity parameter in the formula for the gravitational Planck constant.



Solution of Hubble constant discrepancy from the length scale dependence of cosmological constant

The discrepancy between the two determinations of the Hubble constant has led to the suggestion that new physics might be involved (see this).

  1. The Planck observatory deduces the Hubble constant H giving the expansion rate of the Universe from CMB data from something like 360,000 y after the Big Bang, that is from the properties of the cosmos in long length scales. Riess's team deduces H from data in short length scales by starting from the galactic length scale, identifying standard candles (Cepheid variables), using these to deduce a distance ladder, and deducing the recent value of H(t) from the redshifts.
  2. The result from short length scales is 73.5 km/s/Mpc and from long scales 67.0 km/s/Mpc deduced from CMB data. In short length scales the Universe appears to expand faster. These results differ from each other by more than the stated uncertainties allow. Note that the ratio of the values is about 1.1. There is only a 10 per cent discrepancy, but this leads to conjectures about new physics: cosmology has become a rather precise science!
TGD could provide this new physics. I have considered this problem already earlier but have not found a really satisfactory understanding. The following represents a new attempt in this respect.
  1. The notions of length scale and fractality are central in TGD inspired cosmology. Many-sheeted space-time forces one to consider space-time always in some length scale, and p-adic length scales define the length scale hierarchy, closely related to the hierarchy of Planck constants heff/h0 = n related to dark matter in the TGD sense. Parameters such as the Hubble constant depend on the length scale, and the measured values differ because the measurements are carried out in different length scales.
  2. The new physics should relate to some deep problem of recent day cosmology. The cosmological constant Λ certainly fits the bill. By theoretical arguments Λ should be huge, making it even impossible to speak about the recent day cosmology. In the recent day cosmology Λ is incredibly small.
  3. TGD predicts a hierarchy of space-time sheets characterized by p-adic length scales L(k), so that the cosmological constant Λ depends on the p-adic length scale L(k) as Λ ∝ 1/GL(k)2, where p ≈ 2k is the p-adic prime characterizing the size scale of the space-time sheet defining the sub-cosmology. The p-adic length scale evolution of the Universe involves a sequence of phase transitions increasing the value of L(k). Long scales L(k) correspond to much smaller values of Λ.
  4. The vacuum energy contribution to the mass density proportional to Λ goes like 1/L2(k), roughly like 1/a2, where a is the light-cone proper time defining the "radius" a = R(t) of the Universe in the Robertson-Walker metric ds2 = dt2 - R2(t)dΩ2. As a consequence, at long length scales the contribution of Λ to the mass density decreases rather rapidly.

    One must however compare this contribution to the density ρ of ordinary matter. During the radiation dominated phase ρ goes like 1/a4 (from T ∝ 1/a), and for small values of a radiation dominates over the vacuum energy. During the matter dominated phase one has ρ ∝ 1/a3 and also now matter dominates. During the predicted cosmic string dominated asymptotic phase one has ρ ∝ 1/a2, and the vacuum energy density gives a contribution which is due to Kähler magnetic energy and could be comparable to or even larger than the dark energy due to the volume term in the action.

  5. The mass density is the sum ρm + ρd of the densities of matter and dark energy. One has ρm ∝ H2. Λ ∝ 1/L2(k) implies that the contribution of dark energy in long length scales is considerably smaller than in the recent cosmology. In the Planck determination of H it is however assumed that the cosmological constant is indeed constant. The value of H in long length scales is under-estimated, so that also the standard model extrapolation from long to short length scales gives too low a value of H. This is what the discrepancy between the determinations of H performed in two different length scales indeed demonstrates. (A crude numerical illustration follows this list.)
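One crude way to illustrate the size of the effect (plain Python, using the standard flat-cosmology relation H2 ∝ ρm + ρΛ with Ωm ≈ 0.3, ΩΛ ≈ 0.7; these inputs and the whole estimate are illustrative, not the TGD derivation): how much the effective dark energy density in long length scales would have to be reduced relative to short scales to account for the measured ratio of the two H values.

```python
H_short, H_long = 73.5, 67.0      # km/s/Mpc, the two measured values
Om, OL = 0.3, 0.7                 # assumed present-day density fractions

ratio = (H_long / H_short)**2     # ~ 0.83
f = (ratio - Om) / OL             # fraction of dark energy surviving in long scales
print(f"H ratio: {H_long / H_short:.3f}, required dark energy fraction: {f:.2f}")
```
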
A couple of remarks are in order.
  1. The twistor lift of TGD suggests an alternative parameterization of the vacuum energy density as ρvac = 1/L4(k1), where k1 is roughly the square root of k. This gives rise to a pair of short and long p-adic length scales. The order of magnitude for 1/L(k1) is roughly the same as that of the CMB temperature T: 1/L(k1) ∼ T. Clearly, the parameters 1/T and R correspond to a pair of p-adic length scales. The fraction of dark energy density becomes smaller during the cosmic evolution identified as length scale evolution, with the largest scales corresponding to the earliest times. During the matter dominated era the mass density going like 1/a3 would dominate over dark energy for small enough values of a. The asymptotic cosmology should be cosmic string dominated, predicting 1/GT2(k). This does not lead to a contradiction, since the Kähler magnetic contribution rather than that due to the cosmological constant dominates.
  2. There are two kinds of cosmic strings: for the first type only the volume action is non-vanishing, and for the second type both Kähler and volume action are non-vanishing, but the contribution of the volume action decreases as a function of the length scale.
See the chapter More about TGD inspired cosmology or the article New insights about quantum criticality for twistor lift inspired by analogy with ordinary criticality .



CMB cold spot as a problem of the inflationary cosmology

The existence of a large cold spot in the CMB is a serious problem for the inflationary cosmology. Its explanation as an apparent cold spot due to the Sachs-Wolfe effect, caused by the gravitational redshift of arriving CMB photons in so-called super voids along the line of sight, has been subjected to severe criticism. The TGD based explanation as a region with genuinely lower temperature and average density relies on the view about primordial cosmology as a cosmic string dominated period, during which it is not possible to speak about space-time in the sense of general relativity, and on the analog of the inflationary period mediating a transition to the radiation dominated cosmology in which space-time in the sense of general relativity exists. Fluctuations in the time at which this transition period ended would induce genuine fluctuations in CMB temperature and density. This picture would also explain the existence of super voids.

See the chapter TGD inspired cosmology or the article CMB cold spot as a problem of the inflationary cosmology .



Did you think that star formation is understood?

In Cosmos Magazine there is an interesting article about the work of a team of astronomers led by Fatemeh Tabatabaei published in Nature Astronomy.

The problem is the following. In the usual scenario for star formation the stars would have formed almost instantaneously, and star formation would not continue significantly anymore. Stars with the age of our Sun however exist and star formation is still taking place: more than half of the galaxies are forming stars. So-called starburst galaxies do this very actively. The standard story is that since stars explode as supernovae, the debris from supernovae condenses into stars of later generations. Something like this certainly occurs, but it does not seem to be the whole story.

Remark: It seems incredible that astrophysics would still have unsolved problems at this level. Over the years I have learned that the standard reductionistic paradigm is full of holes.

The notion of star formation quenching has been introduced: it would slow down the formation of stars. It is known that quenched galaxies mostly have a super-massive blackhole in their center and that quenching starts at the centers of galaxies. Quenching would preserve star forming material for future generations of stars.

To study this process the team of astronomers led by Tabatabaei turned their attention to NGC 1097, located at a distance of 45 million light years. It is still forming stars in its central regions but shows signs of quenching and has a super-massive blackhole in its center. What was found is that large magnetic fields, probably enhanced by the central blackhole, affect the gas clouds that would normally collapse into stars, thereby inhibiting their collapse. These forces can even break big clouds into smaller ones, she says, ultimately leading to the formation of smaller stars.

This is highly interesting from the TGD point of view. I have already considered a TGD based model for star formation (see this). In the simplest TGD based model galaxies are formed as knots of long cosmic strings. Stars in turn would be formed as sub-knots of these galactic knots. There is also an alternative vision in which knots are just closed flux tubes bound to long strings containing galaxies like pearls in a necklace. These closed flux tubes could emerge from the long string by reconnection and form elliptic galaxies. The signature would be a non-flat velocity spectrum of distant stars. Also in the case of stars a similar reconnection process splitting the star off as a sub-knot of the galactic string can be imagined.

If stars are sub-knots of the galactic knots representing the galaxies, the formation of a star would correspond to the formation of a knot. This would involve a reconnection process in which some portions of the knot go "through each other". This is the manner in which knots are reduced to the trivial knot in the knot cobordism used to construct knot invariants in knot theory (see this). Now it would work in the opposite direction: to build knots.

This process is rather violent and would initiate star formation, with the dark matter from the cosmic string forming the star. The process would continue forever and would avoid the instantaneous transformation of matter into stars as in the standard model. At a deeper level star formation would be induced by a process taking place at the level of dark matter for the magnetic flux tubes: a similar vision applies in TGD inspired biology. One could perhaps see these knots as seeds of a phase transition like process leading to the formation of a star. This reconnection process could take place also in the formation of spiral galaxies. In the Milky Way there are indeed indications of a reconnection process, which could be related to the formation of the Milky Way as a knot.

The role of strong magnetic fields, supposed to be amplified by the galactic blackhole, is believed to be essential in quenching. They would be associated with dark flux tubes, possibly as return fluxes at ordinary space-time sheets carrying visible matter (flux lines must be closed). These magnetic fields would somehow prevent the collapse of gas clouds into stars. They could also induce a splitting of a gas cloud into smaller clouds. The ratio of mass to magnetic flux for the clouds is studied, and the clouds are found to be magnetically critical or stable against collapse to the core regions needed for the formation of a star. The star formation efficiency of the clouds drops with increasing magnetic field strength.

Star formation would begin when the magnetic field strength drops below a critical value. If reconnection plays a role in the process, this would suggest that reconnection is probable for magnetic field strengths below the critical value. Since the thickness of the magnetic flux tube associated with its M4 projection increases when the magnetic field strength decreases, one can argue that the reconnection probability increases, so that star formation becomes more probable. The development of the galactic blackhole would amplify the magnetic fields. During the cosmic evolution the flux tubes would thicken so that also the field strength would be reduced, and eventually star formation would begin if the needed gas clouds are present. In distant regions the thickness of flux tube loops can be argued to be larger, since the p-adic length scale in question is longer and the magnetic field strength is expected to scale like the inverse of the p-adic length scale squared (also a larger value of heff/h = n would imply this). This would explain star formation in distant regions. This is just what the observations tell.

A natural model for the galactic blackhole is as a highly wound portion of cosmic string. The blackhole Schwarzschild radius would be R = 2GM, and the mass due to the dark energy of the string (there would be also a dark matter contribution) would be M ≈ TL, where the string tension is roughly T ≈ 2^-11 in natural units (GT/c^2 ≈ 2^-11). This would give the estimate L ≈ 2^10 R for the length of the wound-up string.
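
As a sanity check of the arithmetic, here is a minimal Python sketch. It evaluates the Schwarzschild radius and the corresponding string length L = 2^10 R for a blackhole of 4× 10^6 solar masses (roughly the mass of Sgr A*, used only as an illustrative input); the tension value GT/c^2 ≈ 2^-11 is the one assumed above.

    G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8            # speed of light, m/s
    M_sun = 1.989e30       # solar mass, kg

    M = 4e6 * M_sun        # illustrative galactic blackhole mass, roughly that of Sgr A*
    R = 2 * G * M / c**2   # Schwarzschild radius (R = 2GM in units c=1)

    GT = 2**-11            # dimensionless string tension G*T/c^2 assumed in the text
    L = R / (2 * GT)       # from M = T*L and R = 2GM/c^2, giving L = 2^10 * R

    print(f"R ~ {R:.2e} m, L ~ {L:.2e} m ({L/9.461e15:.1e} ly)")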

See the chapter TGD and astrophysics or the article Five new strange effects associated with galaxies .



Four new strange effects associated with galaxies

Dark matter in the TGD sense corresponds to heff/h=n phases of ordinary matter associated with magnetic flux tubes carrying monopole flux. These flux tubes are n-sheeted covering spaces, and n corresponds to the dimension of the extension of rationals in which the Galois group acts. The evidence for this interpretation of dark matter is accumulating. Here I discuss the 4 latest galactic anomalies supporting the proposed view.

  1. The standard view about galactic dark matter strongly suggests that the stars moving around so-called low surface brightness galaxies should not have a flat velocity spectrum. The surprise has been that they have. It is demonstrated that this provides an additional piece of support for the TGD view about dark matter and dark energy, which assigns them to cosmic strings having galaxies as knots along them.
  2. The so-called 21-cm anomaly, meaning that there is unexpected absorption of this line, could be due to the transfer of energy from the gas to dark matter leading to a cooling of the gas. This requires an em interaction of the ordinary matter with dark matter, but the allowed value of the electric charge must be much smaller than elementary particle charges. In the TGD Universe the interaction would be mediated by an ordinary photon transforming to a dark photon, implying that the em charge of the dark matter particle is effectively reduced.
  3. The unexpected migration of stars from the Milky Way halo would, in the pearls-in-necklace model for galaxies, be due to a cosmic traffic accident: a head-on collision with a galaxy arriving along the cosmic string containing both the Milky Way and the arriving galaxy. The gravitational attraction of the arriving galaxy would strip part of the stars from the galactic plane, and distributions of stripped stars located symmetrically at the two sides of the galactic plane would be formed.

  4. A further observation is that the rotation period of a galaxy, identified as the period of rotation at the edge of the galaxy, seems to be universal. In the TGD Universe the period could be assigned to dark matter. The model allows one to build a more detailed picture about the interaction of ordinary matter and dark matter identified as a knot in a long string containing galaxies as knots. This knot would have loop-like protuberances extending up to the edge of the galaxy and even beyond it. In a region of radius r of a few kpc the dark matter knot behaves like a rigid body and rotates with a velocity vmax slightly higher than the velocity vrot of distant stars. The angular rotation velocity of the flux loops extending to larger distances slows down with distance from its value ωmax at ρ=r to ωrot=vrot/R at ρ=R - roughly by a factor r/R. If stars are associated with sub-knots of the galactic knot and have decayed partially (mostly) to ordinary matter, the rotational velocities of stars and dark matter are the same, and one can understand the peculiar features of the velocity spectrum.
See the chapter TGD and astrophysics or the article Four new strange effects associated with galaxies .



TGD based explanation for why the rotation periods of galaxies are same

I learned in FB about a very interesting finding about the angular rotation velocities of stars near the edges of galactic disks (see this). The rotation period is about one giga-year. The discovery was made by a team led by professor Gerhardt Meurer from the UWA node of the International Centre for Radio Astronomy Research (ICRAR). Also a population of older stars was found at the edges, besides young stars and interstellar gas. The expectation was that older stars would not be present.

The rotation periods are claimed to be, within reasonable accuracy, the same for all spiral galaxies irrespective of their size. The constant velocity spectrum for distant stars implies ω ∝ 1/r for r>R. It is important to identify the value of the radius R of the edge of the visible part of the galaxy precisely. I understood that outside the edge stars are not formed. According to Wikipedia, the size R of the Milky Way is in the range (1-1.8)× 10^5 ly and the velocity of distant stars is v=240 km/s. This gives T ∼ R/v ∼ .23 Gy, which is by a factor 1/4 smaller than the proposed universal period of T=1 Gy at the edge. It is clear that the value of T is sensitive to the identification of the edge, and one can challenge the identification Redge=4× R.
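
The arithmetic behind the estimate T ∼ R/v is easy to check. The following minimal Python sketch uses only the R and v values quoted above and gives T in the range .12-.23 Gy, so the quoted .23 Gy corresponds to the upper end of the size range.

    ly = 9.461e15                 # meters per light year
    Gy = 3.156e7 * 1e9            # seconds per giga-year

    v = 240e3                     # rotation velocity of distant stars, m/s
    for R_ly in (1.0e5, 1.8e5):   # quoted range for the Milky Way radius, in ly
        T = (R_ly * ly / v) / Gy  # T ~ R/v as in the text (no 2*pi factor included)
        print(f"R = {R_ly:.1e} ly -> T ~ {T:.2f} Gy")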

In the following I will consider two TGD inspired arguments. The first argument is classical, developed by studying the velocity spectrum of stars for the Milky Way, and leads to a rough view about the dynamics of dark matter. The second argument is quantal and introduces the notion of the gravitational Planck constant hbargr and the quantization of angular momentum in multiples of hbargr. It allows one to predict the value of T and to deduce a relationship between the rotation period T and the average surface gravity of the galactic disk.

In the attempt to understand how T could be universal in the TGD framework, it is best to look at the velocity spectrum of the Milky Way depicted in the Wikipedia article about the Milky Way (see this).

  1. The illustration shows that the v(ρ) has maximum at around r=1 kpc. The maximum corresponds in reasonable approximation to vmax= 250 km/s, which is only 4 per cent above the asymptotic velocity vrot=240 km/s for distant stars as deduced from the figure.

    Can this be an accident? This would suggest that the stars move under the gravitational force of galactic string alone apart from a small contribution from self-gravitation! The dominating force could be due to the straight portions of galactic string determining also the velocity vrot of distant stars.

    It is known that there is also a rigid body part of the dark matter having radius r ∼ 1 kpc (3.3 × 10^3 ly) for the Milky Way, constant density, and rotating with a constant angular velocity ωdark to be identified as ωvis at r. The rigid body part could be associated with a separate closed string or correspond to a knot of a long cosmic string giving rise to most of the galactic dark matter.

    Remark: The existence of the rigid body part is a serious problem for the dark matter halo approach, known as the core-cusp problem.

    For ρ<r stars could correspond to sub-knots of a knotted galactic string and vrot would correspond to the rotation velocity of dark matter at r when self-gravitation of the knotty structure is neglected. Taking it into account would increase vrot by 4 per cent to vmax. One would have ωdark= vmax/r.

  2. The universal rotation period of a galaxy, call it T∼ 1 Gy, is assigned to the edge of the galaxy and calculated as T= Redge/v(Redge). The first guess is that the radius of the edge is Redge=R, where R∈ (1-1.8)× 10^5 ly (30-54 kpc) is the radius of the Milky Way. For v(R)= vrot∼ 240 km/s one has T∼ .225 Gy, which is by a factor 1/4 smaller than T=1 Gy. Taking the estimate T=1 Gy at face value one should have Redge=4R.

  3. The velocity spectrum of stars for the Milky Way is such that the rotation period Tvis=ρ/vvis(ρ) is quite generally considerably shorter than T=1 Gy. The discrepancy is from 1 to 2 orders of magnitude. vvis(ρ) varies by only 17 per cent at most, has two minima (200 km/s and 210 km/s), and eventually approaches vrot=240 km/s.

    The simplest option is that the rotation velocity v(ρ) of dark matter in the range [r,R] is in the first approximation the same as that of the visible matter and roughly constant. The angular rotation ω would then decrease roughly like r/ρ from ωmax to ωrot=2π/T: for the Milky Way this would mean a reduction by a factor of order 10^-2. One could understand the slowing down of the rotation if the dark matter above ρ>r corresponds to long - say U-shaped, as TGD inspired quantum biology suggests - non-rigid loops emanating from the rigid body part. The non-rigidity would be due to the thickening of the flux tube reducing the contribution of the Kähler magnetic energy to the string tension - the volume contribution would be extremely small by the smallness of the cosmological constant like parameter multiplying it.

    If the stars form sub-knots of the galactic knot, the rotational velocities of the dark matter flux loops and the visible matter are the same. This would explain why the spectrum of velocities is so different from that predicted by Kepler's law for visible matter, as the illustration of the Wikipedia article shows (see this). A second - less plausible - option is that visible matter corresponds to closed flux loops moving in the gravitational field of the cosmic string and its knotty part, possibly de-reconnected (or "evaporated") from the flux loops.

    What about the situation for ρ>R? Are stars sub-knots of the galactic knot having loops extending beyond ρ=R? If one assumes that the differentially rotating dark matter loops extend only up to ρ=R, one ends up with a difficulty, since vvis(ρ) would be determined by Kepler's law above ρ=R and would approach vrot from above rather than from below. This problem is circumvented if the loops can extend also to distances longer than R.

  4. The asymptotic constant rotation velocity vrot of the visible matter at ρ>R is in good approximation proportional to the square root of the string tension Ts defining the density per unit length of the dark matter and dark energy of the string: vrot= (2GTs)^1/2 follows from Kepler's law in the gravitational field of the string (see the numerical sketch after this list). In the article R is identified as the size of the galactic disk containing stars and gas.
  5. The universality of T (no dependence on the size R of the galaxy) is guaranteed if the ratio R/r is universal for a given string tension Ts. This would correspond to scaling invariance. In my opinion one can however challenge the idea about the universality of T since its identification is far from obvious. Rather, the period at r would be universal if the angular velocity ω and perhaps also r are universal in the sense that they depend only on the string tension Ts of the galactic string.
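
For orientation, a minimal Python sketch evaluates the string tension Ts from the relation vrot = (2GTs)^1/2 of item 4 with vrot = 240 km/s; nothing beyond these two inputs is assumed.

    G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
    v_rot = 240e3            # asymptotic rotation velocity of distant stars, m/s

    # Kepler's law in the 1/rho gravitational field of a straight string: v_rot^2 = 2*G*Ts
    Ts = v_rot**2 / (2 * G)  # string tension, i.e. mass per unit length, in kg/m
    print(f"Ts ~ {Ts:.2e} kg/m")
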
The above argument is purely classical. One can consider the situation also quantally.
  1. The notion of gravitational Planck constant hgr introduced first by Nottale is central in TGD, where dark matter corresponds to a hierarchy of Planck constants heff=n × h. One would have

    hbargr= GMm/v0

    for the magnetic flux tubes connecting masses M and m and carrying dark matter. For flux loops from M back to M one would have

    hbargr= GM^2/v0 .

    v0 is a parameter with dimensions of velocity. The first guess is v0 = vrot, where vrot corresponds to the rotation velocity of distant stars - roughly vrot= (4/5)× 10^-3 c. Distant stars would be associated with the knots of the flux tubes emanating from the rigid body part of dark matter, and T= R/vrot ≈ .25 Gy is obtained in the case of the Milky Way. The universality of r/R guaranteeing the universality of T would reduce to the universality of v0.

  2. Assume the quantization of the dark angular momentum in units of hbargr for the galaxy. Using L = Iω, where I= MR^2/2 is the moment of inertia of the disk, this gives

    MR^2ω/2= L = m×hbargr = m×GM^2/v0

    giving

    ω= 2m×hbargr/(MR^2) = 2m×GM/(R^2v0)= m× 2πggal/v0 , m=1,2,... ,

    where ggal= GM/(πR^2) is the surface gravity of the galactic disk.

    If the average surface mass density of the galactic disk and the value of m do not depend on galaxy, one would obtain constant ω as observed (m=1 is the first guess but also other values can be considered).

  3. For the rotation period one obtains

    T= 2π/ω = v0/(m×ggal) , m=1,2,...

    Does the prediction make sense for the Milky Way? M= 10^12 MSun represents a lower bound for the mass of the Milky Way (see this). The upper bound is roughly a factor 2 larger. For M= 10^12 MSun the average surface gravity ggal of the Milky Way would be approximately ggal ≈ 10^-10 g for R= 10^5 ly and a factor 1/4 smaller for R= 2× 10^5 ly. Here g= 10 m/s^2 is the acceleration of gravity at the surface of the Earth. m=1 corresponds to the maximal period.

    For the upper bound M= 1.5× 10^12 MSun of the Milky Way mass (see this) and the larger radius R= 2× 10^5 ly one obtains T ≈ .23× 10^9/m years using v0= vrot(R/r), R= 180r and vrot= 240 km/s.

  4. One can criticize this argument since the rigid body approximation fails. Taking into account the dependence v= vrot R/ρ, the integral defining the total angular momentum, 2π (M/π R^2) ∫ v(ρ) ρ^2 dρ, equals Mω R^2 rather than Mω R^2/2, so that the value of ω is reduced by a factor 1/2 and the value of T increases by a factor 2 to T= .46/m Gy, which is rather near to the claimed value of 1 Gy.
To sum up, the quantization argument combined with the classical argument discussed first allows one to relate the value of T to the average surface gravity of the galactic disk and to predict its value reasonably well. The algebra leading from the quantization condition to T is checked symbolically in the sketch below.
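
The following minimal sympy sketch assumes only the relations written above (I = MR^2/2, angular momentum m×hbargr with hbargr = GM^2/v0, and ggal = GM/(πR^2)) and verifies that they imply ω = 2π m ggal/v0 and T = v0/(m ggal).

    import sympy as sp

    G, M, R, v0, m, w = sp.symbols('G M R v0 m omega', positive=True)

    hbar_gr = G * M**2 / v0                   # hbargr for flux loops from M back to M
    I_disk = M * R**2 / 2                     # moment of inertia of a uniform disk
    w_sol = sp.solve(sp.Eq(I_disk * w, m * hbar_gr), w)[0]    # L = I*omega = m*hbargr

    g_gal = G * M / (sp.pi * R**2)            # average surface gravity of the disk
    print(sp.simplify(w_sol - 2 * sp.pi * m * g_gal / v0))    # 0: omega = 2*pi*m*ggal/v0
    print(sp.simplify(2 * sp.pi / w_sol - v0 / (m * g_gal)))  # 0: T = 2*pi/omega = v0/(m*ggal)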

See the chapter TGD and astrophysics or the article Four new strange effects associated with galaxies .



Strange finding about galactic halo as a possible further support for TGD based model of galaxies

A team led by Maria Bergemann from the Max Planck Institute for Astronomy in Heidelberg has studied a small population of stars in the halo of the Milky Way (MW) and found its chemical composition to closely match that of the Galactic disk. This similarity provides compelling evidence that these stars have originated from within the disc rather than from merged dwarf galaxies (see this). The reason for this stellar migration is thought to be theoretically proposed oscillations of the MW disk as a whole, induced by the tidal interaction of the MW with a passing massive satellite galaxy.

One can divide the stars in the MW into the stars in the galactic disk and those in the galactic halo. The halo has gigantic structures consisting of clouds and streams of stars rotating around the center of the MW. These structures have been identified as a kind of debris thought to reflect the violent past of the MW involving collisions with smaller galaxies.

The scientists investigated 14 stars located in two different structures in the Galactic halo, the Triangulum-Andromeda (Tri-And) and the A13 stellar over-densities, which lie at opposite sides of the Galactic disc plane. Earlier studies of the motion of these two diffuse structures revealed that they are kinematically associated and could relate to the Monoceros Ring, a ring-like structure that twists around the Galaxy. The position of the two stellar over-densities could be determined as each lying about 5 kiloparsec (14000 ly) above and below the Galactic plane. Chemical analysis of the stars, made possible by their spectral lines, demonstrated that they must originate from the MW itself, which was a complete surprise.

The proposed model for the findings is in terms of vertical vibrations of the galactic disk analogous to those of a drum membrane. In particular the fact that the structures are above and below the Monoceros Ring supports this idea. The vibrations would be induced by the gravitational interactions of the ordinary and dark matter of the galactic halo with a passing satellite galaxy. The picture of the article (see this) illustrates what the pattern of these vertical vibrations would look like according to simulations.

In the TGD framework this model is modified since the dark matter halo is replaced with a cosmic string. Due to the absence of the dark matter halo, the motion along the cosmic string is free apart from the gravitational attraction caused by the galactic disk. The cosmic string forces the migrated stars to rotate around it in a plane parallel to the galactic plane, and the stars studied indeed belong to ring-like structures: the prediction is that these rings rotate around the axis of the galaxy.

One can argue that for stars very far from the galactic plane - say in a dwarf galaxy - the halo model of dark matter suggests that the orbital plane is arbitrary but goes through the galactic center, since the spherically symmetric dark matter halo dominates the mass density. TGD would predict that the orbital plane is parallel to the galactic plane.

Are the oscillations of the galactic plane necessary in TGD framework?

  1. The large size and the ring shape of the migrated structures suggest that oscillations of the disk could have caused them. The model for the oscillations of the MW disk would be essentially that for a local interaction of a membrane (characterized by a tension) with its own gravitational field and with the gravitational field of the passing galaxy G. Some stars would be stripped off from the membrane during the oscillations.
  2. If the stars are local knots in a big knot (the galaxy) formed by a long flux tube, as the TGD based model for galaxy formation suggests, one can ask whether reconnections of the flux tube could take place and split off from the flux tube the ring-like structures with which the migrating stars are associated. This would reduce the situation to the single particle level, and it is interesting to see whether this kind of model might work. One can also ask whether the stripping could be induced by the interaction with G without considerable oscillations of the MW.
The simplest toy model for the interaction of the MW with G would be the following: I have proposed this model of cosmic traffic accidents already earlier. Also the fusion of blackholes could be made probable if the blackholes are associated with the same cosmic string (stars would be sub-knots of galactic knots).
  1. G moves past the MW and strips off stars and possibly also larger structures from the MW: denote this kind of structure by O. Since the stripped objects at both sides of the MW are at the same distance, it seems that the only plausible direction of motion of G is along the cosmic string along which the galaxies are like pearls in a necklace. G would go through the MW! If the model works, it gives support for the TGD view about galaxies.

    One can of course worry about the dramatic implications of head-on collisions of galaxies, but it is interesting to see whether the model might work at all. On the other hand, one can ask whether the galactic blackhole of the MW could have been created in the collision, possibly via the fusion of the blackhole associated with G with that of the MW, in analogy with the fusion of blackholes detected by LIGO.

  2. A reasonable approximation is that the motions of G and the MW are not considerably affected in the collision. The MW is stationary and G arrives with a constant velocity v along the axis of the cosmic string above the MW plane. In the region between the galactic planes of G and MW the constant accelerations caused by G and MW have opposite directions, so that one has

    gtot= gG -gMW between the galactic planes and above MW plane

    gtot= -gG+gMW between the galactic planes and below MW plane ,

    gtot= -gG- gMW above both galactic planes ,

    gtot= gG+ gMW below both galactic planes .

    The situation is completely symmetric under reflection with respect to the galactic plane if one assumes that the situation in the galactic plane is not affected considerably. Therefore it is enough to look at what happens above the MW plane.

  3. If G is more massive, one can say that it attracts the material in the MW and can induce an oscillatory wave motion, whose amplitude could however be small. This would induce the reconnections of the cosmic string stripping objects O from the MW, and O would experience an upwards acceleration gtot= gG - gMW towards G (note that O also rotates around the cosmic string). After O has passed by G, it continues its motion in the vertical direction, experiences a deceleration gtot= -gG - gMW, and eventually begins to fall back towards the MW.

    One can parameterize the acceleration caused by G as gG = (1+x)× gMW, x>0, so that the acceleration felt by O in the middle region between the planes is gtot= gG - gMW = x× gMW. Above the planes of both G and MW the acceleration is gtot= -(2+x)× gMW.

  4. Denote by T the moment when O and G pass each other. One can express the vertical height h and velocity v of O in the two regions above the MW plane as

    h(t)= [(gG-gMW)/2] t^2 , v(t)= (gG-gMW) t for t<T ,

    h(t)= -[(gG+gMW)/2] (t-T)^2 + v(T)(t-T) + h(T) , v(T)= (gG-gMW) T ,

    h(T) = [(gG-gMW)/2] T^2 for t>T .

    Note that the time parameter T tells how long it takes for O to reach G after it has been stripped off from the MW. A naive estimate for the value of T is the time scale in which the gravitational field of the galactic disk begins to look like that of a point mass. (The piecewise trajectory defined by these formulas is evaluated numerically in the sketch after this list.)

    This would suggest that h(T) is of the order of the radius R of MW so that one would have using gG= (1+x)gMW

    T ∼ (1/x)^1/2 (2R/gMW)^1/2 .

  5. The direction of motion of O changes at v(Tmax)=0. One has

    Tmax= [2gG/(gG+gMW)] T ,

    hmax= -[(gG+gMW)/2] (Tmax-T)^2 + v(T)(Tmax-T) + h(T) .

  6. For t>Tmax one has

    h(t)= -[(gG+gMW)/2] (t-Tmax)^2 + hmax ,

    hmax= [(gG+gMW)/2] (Tmax-T)^2 + h(T) .

    Expressing hmax in terms of T and the parameter x = (gG - gMW)/gMW one has

    hmax= y(x) gMW T^2/2 ,

    y(x)= x(5x+4)/[2(2+x)] ≈ x for small values of x .

  7. If one assumes that hmax>hnow, where hnow ∼ 1.2× 10^5 ly is the present height of the objects considered, one obtains an estimate for the time T from hmax>hnow giving

    T> [2(2+x)/x(5x+4)]^1/2 T0 , T0= (2hnow/gMW)^1/2 .

    Note that Tmax<2T holds true.
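
The piecewise constant-acceleration trajectory of items 4-6 can be evaluated numerically. The following minimal Python sketch uses the simplified form hmax = h(T) + v(T)^2/[2(gG+gMW)] together with, purely as illustrative inputs, the values gMW ≈ 2× 10^-10 g and x = .1 appearing in the estimates below.

    import math

    ly = 9.461e15                      # meters per light year
    yr = 3.156e7                       # seconds per year

    g_MW = 2e-10 * 10.0                # surface gravity of the MW disk, m/s^2 (text: ~2e-10 g, g = 10 m/s^2)
    x = 0.1                            # (gG - gMW)/gMW, upper bound suggested in the text
    g_G = (1 + x) * g_MW               # acceleration caused by the passing galaxy G
    R = 1.8e5 * ly                     # MW radius, upper end of the quoted range

    # Item 4: the naive estimate h(T) ~ R gives T ~ (2R/(x*g_MW))^(1/2)
    T = math.sqrt(2 * R / (x * g_MW))
    v_T = (g_G - g_MW) * T             # vertical velocity of O when it passes G
    h_T = 0.5 * (g_G - g_MW) * T**2    # height of O at that moment (~R by construction)

    # Items 5-6: turning point of the stripped object O
    T_max = 2 * g_G * T / (g_G + g_MW)
    h_max = h_T + v_T**2 / (2 * (g_G + g_MW))

    print(f"T ~ {T/yr:.2e} yr, Tmax ~ {T_max/yr:.2e} yr")
    print(f"h(T) ~ {h_T/ly:.2e} ly, hmax ~ {h_max/ly:.2e} ly")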

It is interesting to see whether the model really works.
  1. It is easy to find (one can check the numerical factors here) that gMW can be expressed in the limit of an infinitely large galactic disk as

    gMW= 2π G (dM/dS)= 2GM/R^2 ,

    where R is the radius of the galactic disk and dM/dS= M/(π R^2) is the mass of the galactic disk per unit area. This expression is analogous to g= GME/RE^2 at the surface of the Earth.

  2. One can express the estimate in terms of the acceleration g= 10 m/s^2 as

    gMW≈ 2g (RE/R)^2 (M/ME) .

    Using the lower bound R= 10^5 ly for the MW radius, the MW mass M ∼ 10^12 MSun, MSun/ME= 3× 10^6 and RE ≈ 6× 10^6 m, one obtains gMW ∼ 2× 10^-10 g.

  3. Using the estimate for gMW one obtains T> [2(2+x)/(x(5x+4))]^1/2 T0 with

    T0 ∼ 3× 10^9 years .

    The estimate T ∼ (1/x)^1/2 (2R/gMW)^1/2 proposed above gives T > (1/x)^1/2 × 10^8 years. The fraction of ordinary mass from the total mass is roughly 10 per cent of the contribution of the dark energy and dark particles associated with the cosmic string. Therefore x<.1 is a reasonable upper bound for the parameter x characterizing the mass difference of G and MW. For x ≈ .1 one obtains T in the range 1-10 Gy.

See the chapter TGD and astrophysics or the article Four new strange effects associated with galaxies .



Dark matter and 21-cm line of hydrogen

Dark matter in the TGD sense corresponds to heff/h=n phases of ordinary matter associated with magnetic flux tubes. These flux tubes would be n-sheeted covering spaces, and n would correspond to the dimension of the extension of rationals in which the Galois group acts. The evidence for this interpretation of dark matter is accumulating. Here I discuss one of the latest anomalies - the 21-cm anomaly.

Sabine Hossenfelder told about an article discussing the possible interpretation of the so-called 21-cm anomaly associated with the hyperfine transition of the hydrogen atom and observed by the EDGES collaboration.

The EDGES Collaboration has recently reported the detection of a stronger-than-expected absorption feature in the global 21-cm spectrum, centered at a frequency corresponding to a redshift of z ≈ 17. This observation has been interpreted as evidence that the gas was cooled during this era as a result of scattering with dark matter. In this study, we explore this possibility, applying constraints from the cosmic microwave background, light element abundances, Supernova 1987A, and a variety of laboratory experiments. After taking these constraints into account, we find that the vast majority of the parameter space capable of generating the observed 21-cm signal is ruled out. The only range of models that remains viable is that in which a small fraction, ≈ 0.3-2 per cent, of the dark matter consists of particles with a mass of ≈ 10-80 MeV and which couple to the photon through a small electric charge, ε ≈ 10-6-10-4. Furthermore, in order to avoid being overproduced in the early universe, such models must be supplemented with an additional depletion mechanism, such as annihilations through a Lμ-Lτ gauge boson or annihilations to a pair of rapidly decaying hidden sector scalars.

What has been found is an unexpectedly strong absorption feature in the 21-cm spectrum: the redshift is about z ≈ 17, which corresponds to a distance of about 2.27× 10^11 ly. The dark matter interpretation would be in terms of the scattering of the baryons of the gas from dark matter at a lower temperature. The anomalous absorption of the 21-cm line could be explained by the cooling of the gas caused by the flow of energy to a colder medium consisting of dark matter. If I understood correctly, this would generate a temperature difference between the background radiation and the gas and a consequent energy flow to the gas inducing the anomaly.
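
As a small cross-check (not from the article itself), the rest-frame frequency of the 21-cm hyperfine line is about 1420 MHz, so at z ≈ 17 the absorption is observed near 79 MHz; a one-line Python check:

    nu_rest = 1420.4e6   # rest-frame frequency of the hydrogen 21-cm line, Hz
    z = 17               # redshift of the EDGES absorption feature
    print(f"observed frequency ~ {nu_rest / (1 + z) / 1e6:.0f} MHz")   # ~79 MHz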

The article excludes a large amount of the parameter space able to generate the observed signal. The idea is that the baryons of the gas interact with dark matter, the interaction being mediated by photons. The small em charge of the new particle is needed to make it "dark enough". My conviction is that tinkering with the quantization of electromagnetic charge is only a symptom of how desperate the situation concerning the interpretation of dark matter in terms of some exotic particles is. Genuinely new physics is involved, and the old recipes of particle physicists do not work.

In the TGD framework the dark matter at a lower temperature would consist of heff/h=n phases of ordinary matter residing at magnetic flux tubes. This kind of energy transfer between ordinary and dark matter is a general signature of dark matter in the TGD sense, and there are indications from some experiments relating to primordial life forms for this kind of energy flow in the lab scale (see this).

The ordinary photon line appearing in the Feynman diagram describing the exchange of a photon would be replaced with a photon line containing a vertex in which the photon transforms to a dark photon. The coupling in the vertex - call it m^2 - would have dimensions of mass squared. This would transform the coupling e^2 associated with the photon exchange to e^2 m^2/p^2, where p^2 is the photon's virtual mass squared. The slow rate for the transformation of the ordinary photon to a dark photon could be seen as an effective reduction of the electromagnetic charge of the dark matter particle from its quantized value.

Remark: In biological systems dark cyclotron photons would transform to ordinary photons and would be interpreted as bio-photons with energies in visible and UV.

To sum up, the importance of this finding is that it supports the view about dark matter as ordinary particles in a new phase. There are electromagnetic interactions but the transformation of ordinary photons to dark photons slows down the process and makes these exotic phases effectively dark.

See the chapter TGD and astrophysics or the article Four new strange effects associated with galaxies .



Low surface brightness galaxies as additional support for pearls-in-necklace model for galaxies

Sabine Hossenfelder had an inspiring post about the problems of the halo dark matter scenario. My attention was caught by the title "Shut up and simulate". It was really to the point. People first stopped thinking, then calculating, and now they just simulate. Perhaps AI will replace them at the next step.

While reading I realized that Sabine mentioned a further strong piece of support for the TGD view about galaxies as knots along cosmic strings, which create a cylindrically symmetric gravitational field orthogonal to the string rather than a spherically symmetric field as in halo models. The string tension determines the rotation velocity of distant stars, predicted to be constant up to arbitrarily long distances (the finite size of the space-time sheet of course brings in a cutoff length).

To express it concisely: Sabine told about galaxies which have low surface brightness. In the halo model the density of both the matter and the dark matter halo should be low for these galaxies, so that the velocity of distant stars should decrease and lead to a breakdown of the so-called Tully-Fisher relation. It doesn't. This is the message that the observational astrophysicist Stacy McGaugh is trying to convey in his blog, which Sabine's post was mostly about.

I am not specialist in the field of astrophysics and it was nice to read the post and refresh my views about the problem of galactic dark matter.

  1. The Tully-Fisher relation (TFR) is an empirically well-established relation between the brightness of a galaxy and the velocity of its outermost stars. The luminosity L equals the apparent brightness (received power per unit area) of the galaxy multiplied by the area 4π d^2 of a sphere with radius equal to the distance d of the observed galaxy. The luminosity of the galaxy is also proportional to the mass M of the galaxy. TFR says that the luminosity of a spiral galaxy - or equivalently its mass - is proportional to a power of the emission line width, which is determined by the spectrum of rotation velocities of stars in the spiral galaxy. Apparent brightness and line width can be measured, and from these one can deduce the distance d of the galaxy (see the sketch below): this is really elegant.
  2. It is easy to believe that the line width is determined by the rotation velocity of the galaxy, which is primarily determined by the mass of the dark matter halo. The observation that the rotational velocity is roughly constant for the distant stars of spiral galaxies - rather than decreasing like 1/ρ^1/2 as Kepler's law for the visible mass would predict - led to the hypothesis that there is a dark matter halo around the galaxy. By fitting the density of the dark matter properly, one obtains a constant velocity. A flat velocity spectrum implies that the line width is the same for distant stars as for stars near the galactic center.

    To explain this in the halo model, one ends up with a complex model for the interactions of dark matter and ordinary matter, and here simulations are the only manner to deduce the predictions. As Sabine tells, the simulations typically take months and involve a huge amount of bits.

  3. Since the dark matter halo is finite, the rotation velocity should decrease at large enough distances like 1/R^1/2, R the distance from the center of the galaxy. If one has a very dilute galaxy - a so-called low surface brightness galaxy, which is very dim - the rotational velocities of distant stars should be smaller, and therefore also their contribution to the average line width assignable to the galaxy. TFR is not expected to hold true anymore. The surprising finding is that it does!
The conclusion seems to be that there is something very badly wrong with the halo model.
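
A minimal Python sketch of the distance determination described in item 1; the Tully-Fisher-like calibration L = A v^4 and the numerical inputs are purely illustrative assumptions, not taken from the text.

    import math

    A = 6.0e15            # hypothetical calibration constant in L = A*v^4, W/(m/s)^4
    v = 200e3             # rotation velocity deduced from the measured line width, m/s
    b = 1.0e-12           # measured apparent brightness, W/m^2 (illustrative)

    L = A * v**4                          # luminosity from the Tully-Fisher-like relation
    d = math.sqrt(L / (4 * math.pi * b))  # distance from L = 4*pi*d^2 * b
    print(f"L ~ {L:.1e} W, d ~ {d/9.461e15:.1e} ly")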

Halo model of dark matter has also other problems.

  1. Too many dwarf galaxies tend to be predicted.
  2. There is also the so-called cusp problem: the density peak at the center of the galaxy tends to be too high. Observationally the density seems to be roughly constant in the central region, which behaves like a rotating rigid body.
The excuses for the failures claim that the physics of normal matter is not well enough understood: the feedback from the physics of ordinary matter is believed to solve the problems. Sabine lists some possibilities.
  1. There is the pressure generated when stars go supernovae, which can prevent the formation of the density peak. The simulations however show that practically 100 per cent of energy liberated in the formation of supernovas should go to the creation of pressure preventing the development of the density peak.
  2. One can also claim that the dynamics of interstellar gas is not properly understood.
  3. Also the accretion and ejection of matter by supermassive black holes, which are at the center of most galaxies could reduce the density peak.
One can of course tinker with the parameters of the model and introduce new ones to get what one wants. This is why simulations are always successful!
  1. For instance, one can increase the relative portion of dark matter to overcome the problems, but one ends up with fine tuning. The finding that TFR holds true also for low surface brightness galaxies makes the challenge really difficult. Mere parameter fitting is not enough: one should also identify the underlying dynamical processes producing the required feedback, and this has turned out to be difficult.
  2. What strongly speaks against the feedback from the ordinary matter is that the observed outcome is the same irrespective of how the galaxies were formed: directly or through mergers of other galaxies. This weak dependence on the dynamics of ordinary matter strongly suggests that stellar feedback is not the correct manner to overcome the problem.
One can look at the situation also in TGD framework.
  1. In pearls-in-necklace model galaxies are knots of long cosmic strings (see this, this, and this). Knots have constant density and this conforms with the observation: the cusp problem disappears.
  2. The long string creates a gravitational field orthogonal to it and proportional to 1/ρ, ρ the orthogonal distance from the string. This cylindrically symmetric field creates correlations in much longer scales than the gravitational field of a spherical halo, which at long distances is proportional to 1/r^2, where r is the distance from the center of the galaxy.

    The pearls-in-necklace model automatically predicts a constant velocity spectrum at arbitrarily long(!) distances. The velocity spectrum is independent of the details of the distribution of the visible matter and is proportional to the square root of the string tension. The velocity spectrum is almost totally independent of the ordinary matter, as the example of low surface brightness galaxies also demonstrates. Also the history of the formation of the galaxy matters very little.

  3. From TFR one can conclude that the mass of the spiral galaxy (proportional to the luminosity and hence to the line width) is also proportional to the string tension. Since the galactic mass varies, also the string tension must vary. This is indeed predicted. The string tension is essentially the energy per unit length of the thickened cosmic string and would characterize the contributions of dark matter in the TGD sense (phases of ordinary matter with large heff/h=n) as well as dark energy, which contains both the Kähler magnetic energy and a constant term proportional to the 3-volume of the flux tube.

    Cosmology suggests that the string thickness increases with time: this would reduce the Kähler magnetic contribution to the string tension but increase the contribution proportional to the 3-volume. There is also the coefficient of the volume term (essentially the formal counterpart of the cosmological constant), which depends on the p-adic length scale like the inverse of the p-adic length scale squared, L(k) ∝ 2^(k/2), where k must be a positive integer characterizing the size scale involved (this is something totally new and solves the cosmological constant problem) (see this). It is difficult to say which contribution dominates.

  4. Dwarf galaxies would require a small string tension; since dwarf galaxies are not observed in excess, strings with a small string tension should be rather rare.
If this picture is correct, the standard views about dark matter are completely wrong, to put it bluntly. Dark matter corresponds to heff/h=n phases of ordinary matter rather than some exotic particle(s) having effectively only gravitational interactions, and there is no dark matter halo. TGD excludes also MOND. Dark energy and dark matter reside at the thickened cosmic strings, which belong to the simplest extremals of the action principle of TGD (see this and this). It should be emphasized that flux tubes are not ad hoc objects introduced to understand the galactic velocity spectrum: they are a basic prediction of TGD, present in all scales by the fractality of the TGD Universe, and fundamental also for the TGD view about biology and neuroscience.

Maybe it would be a good idea to start to think again. Using brains instead of computers is also a much more cost-effective option: I have been thinking intensely for four decades, and this hasn't cost society a single coin! Recommended!

See the chapter TGD and astrophysics. For TGD based model of galaxies see for instance this .



A further blow against dark matter halo paradigm

The following is a comment to a FB posting by Sabine Hossenfelder giving a link to the most recent finding challenging the dark matter halo paradigm. The article titled "A whirling plane of satellite galaxies around Centaurus A challenges cold dark matter cosmology", published in Science, can be found also in the arXiv.

The halo model for dark matter continually encounters lethal problems, as I have repeatedly tried to tell in my blog postings and articles. But still this model continues to add items to the curriculum vitae of the specialists - presumably as long as the funding continues. Bad ideas never die.

The halo model predicts that the dwarf galaxies around massive galaxies like the Milky Way should move randomly. The newest fatal blow comes from the observation that dwarf galaxies move along neat circular orbits in the galactic plane of Centaurus A.

Just like the TGD based pearls-in-necklace model of galaxies as knots (pearls) of long cosmic strings predicts! The long cosmic string creates a gravitational field in the transversal direction and the dwarf galaxies move along nearly circular orbits. The motion along the long cosmic string would be free motion and would give rise to streams. The prediction is that at large distances the rotational velocities approach a constant, just as in the case of distant stars.

See the chapter TGD and astrophysics. For TGD based model of galaxies see for instance this .


