What's new in Topological Geometrodynamics: an Overview

Note: Newest contributions are at the top!
Year 2018 
Is it possible to determine experimentally whether gravitation is quantal interaction?
Marletto and Vedral have proposed (thanks to Ulla for the link) an interesting method for measuring whether gravitation is a quantal interaction (see this). I tried to understand what the proposal suggests and how it translates to TGD language.

Did LIGO observe nonstandard value of G and are galactic blackholes really supermassive?
I have talked (see this) about the possibility that Planck length l_{P} is actually CP_{2} length R, which is scaled up by a factor of order 10^{3.5} from the standard Planck length. The basic formula for Newton's constant G would be a generalization of the standard formula to give G= R^{2}/ℏ_{eff}. There would be only one fundamental scale in TGD, as the original idea indeed was. ℏ_{eff} at "standard" flux tubes mediating gravitational interaction (gravitons) would be larger than h by a factor of about n∼ 10^{6}-10^{7}. Also other values of h_{eff} are possible. The mysterious small variations of G known for a long time could be understood as variations of some factors of n. The fountain effect in superfluidity could correspond to a value of h_{eff}/h_{0}=n at gravitational flux tubes increased from the standard value by some integer factor. The value of G would be reduced and would allow particles to reach greater heights already classically. In the Podkletnov effect some factor of n would increase and g would be reduced by a few per cent. A larger value of h_{eff} would also induce a larger delocalization height. Also smaller values are possible, and in fact, in condensed matter scales it is quite possible that n is rather small. Gravitation would be stronger but very difficult to detect in these scales. A neutron in the gravitational field of Earth might provide a possible test. The general rule would be that the smaller the scale of dark matter dynamics, the larger the value of G, with the maximum value G_{max}= R^{2}/h_{0}, h=6h_{0}.

Are the blackholes detected by LIGO really so massive? LIGO (see this) has hitherto observed 3 fusions of black holes giving rise to gravitational waves. For the TGD view about the findings of LIGO see this and this. The colliding blackholes were deduced to have unexpectedly large masses: something like 10-40 solar masses, which is regarded as rather strange.
Could it be that the masses were actually of the order of the solar mass and G was actually larger by this factor and h_{eff} smaller by this factor?! The masses of the colliding blackholes could be of the order of the solar mass, and G would be larger than its normal value, say by a factor in the range [10,50]. If so, the LIGO observations would represent the first evidence for the TGD view about quantum gravitation, which is very different from the superstring based view. The fourth fusion was for neutron stars rather than black holes, and the stars had masses of the order of the solar mass. This idea works if the physics of a gravitating system depends only on G(M+m). That the classical dynamics depends only on G(M+m) follows from the Equivalence Principle. But is this true also for gravitational radiation?
What about supermassive galactic black holes in the centers of galaxies: are they really supermassive, or is G superlarge? The mass of the Milky Way supermassive blackhole is in the range 10^{5}-10^{9} solar masses. The geometric mean is 10^{7} solar masses, of the order of the standard value n = R^{2}/G_{N} ∼ 10^{7}. Could one think that this blackhole actually has a mass in the range 1-100 solar masses, assignable to an intersection of the galactic cosmic string with itself? How galactic blackholes are formed is not well understood. Now this problem would disappear: galactic blackholes would be there from the beginning!

The general conclusion is that only gravitational radiation allows one to distinguish between different masses (M+m) for given G(M+m) in a system consisting of two masses, so that classically the opposite scalings of G and M constitute a symmetry. See the chapter About the Nottale's formula for h_{gr} and the possibility that Planck length l_{P} and CP_{2} length R are identical giving G= R^{2}/ℏ_{eff} of "Physics in many-sheeted space-time" or the article Is the hierarchy of Planck constants behind the reported variation of Newton's constant?.
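Since the claim that classical dynamics depends only on G(M+m) carries the whole argument, here is a minimal numeric illustration (standard Newtonian mechanics, not TGD-specific): in the relative two-body problem the acceleration is -G(M+m)r/|r|^3, so scaling G up and the masses down by the same factor leaves, for example, the Kepler period unchanged. All numerical values are illustrative.

```python
import math

def kepler_period(a, G, M, m):
    """Period of the relative two-body orbit with semi-major axis a."""
    return 2.0 * math.pi * math.sqrt(a**3 / (G * (M + m)))

G_N = 6.674e-11                    # Newton's constant, SI units
a = 1.0e9                          # semi-major axis in metres (illustrative)
M, m = 30 * 2.0e30, 30 * 2.0e30    # two "30 solar mass" objects

# Scale G up by k and both masses down by k: G(M+m) is invariant.
k = 30.0
T1 = kepler_period(a, G_N, M, m)
T2 = kepler_period(a, k * G_N, M / k, m / k)
print(T1, T2)                      # the two periods agree
```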

Galois groups and genes
The question about possible variations of G_{eff} (see this) led again to the old observation that subgroups of the Galois group could be analogous to conserved genes in that they could be conserved in number theoretic evolution. In small variations, such as a variation of a Galois subgroup, the analogs of genes would change G only a little bit. For instance, the dimension of the Galois subgroup would change slightly. There are also big variations of G in which a new subgroup can emerge.

The analogy between subgroups of Galois groups and genes goes also in the other direction. I have proposed a long time ago that genes (or maybe even DNA codons) could be labelled by h_{eff}/h=n. This would mean that genes (or even codons) are labelled by a Galois group of a Galois extension (see this) of rationals with dimension n defining the number of sheets of the space-time surface as a covering space. This could give a concrete dynamical and geometric meaning for the notion of gene, and it might be possible some day to understand why a given gene correlates with a particular function. This is of course one of the big problems of biology.

One should have some kind of procedure giving rise to hierarchies of Galois groups assignable to genes. One would also like to assign to letter, codon, and gene an extension of rationals and its Galois group. The natural starting point would be a sequence of so-called intermediate Galois extensions E^{H} leading from rationals or some extension K of rationals to the final extension E. A Galois extension has the property that if a polynomial with coefficients in K has a single root in E, also the other roots are in E, meaning that the polynomial with coefficients in K factorizes into a product of linear polynomials. For Galois extensions the defining polynomials are irreducible so that they do not reduce to a product of polynomials. Any subgroup H⊂ Gal(E/K) leaves the intermediate extension E^{H} invariant elementwise as a subfield of E (see this).
Any subgroup H⊂ Gal(E/K) defines an intermediate extension E^{H}, and subgroups H_{1}⊂ H_{2}⊂... define a hierarchy of extensions E^{H_1}⊃E^{H_2}⊃E^{H_3}... with decreasing dimension. The subgroups H considered here are normal, in other words Gal(E) leaves them invariant and Gal(E)/H is a group. The order of H is the dimension of E as an extension of E^{H}. This is a highly nontrivial piece of information. The dimension of E factorizes into a product ∏_{i} |H_{i}| of dimensions for a sequence of groups H_{i}.

Could a sequence of DNA letters/codons somehow define a sequence of extensions? Could one assign to a given letter/codon a definite group H_{i} so that a sequence of letters/codons would correspond to a product of some kind for these groups, or should one be satisfied only with the assignment of a standard kind of extension to a letter/codon? Irreducible polynomials define Galois extensions, and one should understand what happens to an irreducible polynomial of an extension E^{H} in a further extension to E. The degree of E^{H} increases by a factor which is the dimension of E/E^{H} and also the dimension of H. Is there a standard manner to construct irreducible extensions of this kind?
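A concrete toy example of such a subgroup hierarchy, using the symmetric group S3 (the Galois group of a generic cubic) rather than anything gene-specific, illustrates how the dimension of the extension factorizes along a chain of subgroups {e} ⊂ A3 ⊂ S3:

```python
from itertools import permutations

def sign(p):
    """Parity of a permutation given as a tuple."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def compose(p, q):
    """Apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

S3 = set(permutations(range(3)))      # Galois group of a generic cubic, order 6
A3 = {p for p in S3 if sign(p) == 1}  # normal subgroup of even permutations
E = {tuple(range(3))}                 # trivial subgroup

# A3 is closed under composition, i.e. really a subgroup
assert all(compose(p, q) in A3 for p in A3 for q in A3)

# The chain {e} ⊂ A3 ⊂ S3 corresponds to a tower of intermediate fields;
# the dimension of the full extension factorizes into the chain indices.
orders = [len(H) for H in (E, A3, S3)]
indices = [orders[i + 1] // orders[i] for i in range(2)]
print(orders, indices)   # [1, 3, 6] [3, 2], and 6 = 3 * 2
```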
See the chapter About the Nottale's formula for h_{gr} and the possibility that Planck length l_{P} and CP_{2} length R are identical giving G= R^{2}/ℏ_{eff} or the article Is the hierarchy of Planck constants behind the reported variation of Newton's constant?. 
Is the hierarchy of Planck constants behind the reported variation of Newton's constant?
It has been known for a long time that measurements of G give differing results, with differences between measurements larger than the measurement accuracy (see this and this). This suggests that there might be some new physics involved. In the TGD framework the hierarchy of Planck constants h_{eff}=nh_{0}, h=6h_{0}, together with the condition that the theory contains the CP_{2} size scale R as the only fundamental length scale, suggests the possibility that Newton's constant is given by G= R^{2}/ℏ_{eff}, where R replaces the Planck length (l_{P}= (ℏ G)^{1/2} → l_{P}=R) and ℏ_{eff}/ℏ is in the range 10^{6}-10^{7}. The spectrum of Newton's constant is consistent with Newton's equations if the scaling of ℏ_{eff} inducing the scaling of G is accompanied by an opposite scaling of M^{4} coordinates in M^{4}× CP_{2}: the dark matter hierarchy would correspond to a discrete hierarchy of scales given by breaking of scale invariance. In the special case h_{eff}=h_{gr}=GMm/v_{0} the quantum critical dynamics has the gravitational fine structure constant (v_{0}/c)/4π as coupling constant, and it has no dependence on the value of G or the masses M and m.

In this article I consider a possible interpretation for the finding of a Chinese research group measuring two different values of G differing by 47 ppm in terms of varying h_{eff}. Also a model for the fountain effect of superfluidity as delocalization of the wave function, with an increase of the maximal height of the vertical orbit due to a change of the gravitational acceleration g at the surface of Earth induced by a change of h_{eff} due to superfluidity, is discussed. Also the Podkletnov effect is considered. TGD inspired theory of consciousness allows one to speculate about levitation experiences possibly induced by the modification of G_{eff} at the flux tubes for some part of the magnetic body accompanying the biological body in TGD based quantum biology.
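The numbers in the identification above can be checked with a short script. The only inputs beyond standard constants are the assumptions stated in the text: n = ℏ_eff/ℏ ∼ 10^7 and R = sqrt(n) l_P ∼ 10^{3.5} l_P, so that G = R^2 c^3/ℏ_eff reproduces the measured Newton's constant.

```python
import math

hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s
G_N = 6.67430e-11        # m^3 kg^-1 s^-2

l_P = math.sqrt(hbar * G_N / c**3)   # ordinary Planck length, ~1.6e-35 m

# TGD-style identification sketched in the text (assumed values):
n = 1.0e7                 # hbar_eff / hbar
R = l_P * math.sqrt(n)    # candidate CP2 length scale, ~10^3.5 * l_P
hbar_eff = n * hbar
G_reconstructed = R**2 * c**3 / hbar_eff

print(R / l_P)                 # ~3162, i.e. 10^3.5
print(G_reconstructed / G_N)   # 1.0: Newton's constant is recovered
```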
See the chapter About the Nottale's formula for h_{gr} and the possibility that Planck length l_{P} and CP_{2} length R are identical giving G= R^{2}/ℏ_{eff} or the article Is the hierarchy of Planck constants behind the reported variation of Newton's constant?. 
How could Planck length be actually equal to much larger CP_{2} radius?!
The following argument states that the Planck length l_{P} equals the CP_{2} radius R: l_{P}=R, and that Newton's constant can be identified as G= R^{2}/ℏ_{eff}. This idea, looking nonsensical at first glance, was inspired by an FB discussion with Stephen Paul King. First some background.
To get some perspective, consider first the phase transition replacing hbar and more generally hbar_{eff,i} with hbar_{eff,f}=h_{gr} .
See the chapter About the Nottale's formula for h_{gr} and the possibility that Planck length l_{P} and CP_{2} length R are identical giving G= R^{2}/ℏ_{eff} or the article About the physical interpretation of the velocity parameter in the formula for the gravitational Planck constant. 
Large scale fluctuations in metagalactic ionizing background for redshift six

I learned about a very interesting result related to early cosmology and challenging the standard cosmology. The result is described in the popular article "Early opaque universe linked to galaxy scarcity" (see this). The original article "Evidence for Large-scale Fluctuations in the Metagalactic Ionizing Background Near Redshift Six" of Becker et al is published in Astrophysical Journal (see this). The abstract of the article is the following.

"The observed scatter in intergalactic Lyα opacity at z ≤ 6 requires large-scale fluctuations in the neutral fraction of the intergalactic medium (IGM) after the expected end of reionization. Post-reionization models that explain this scatter invoke fluctuations in either the ionizing ultraviolet background (UVB) or IGM temperature. These models make very different predictions, however, for the relationship between Lyα opacity and local density. Here, we test these models using Lyα-emitting galaxies (LAEs) to trace the density field surrounding the longest and most opaque known Lyα trough at z < 6. Using deep Subaru Hyper Suprime-Cam narrow-band imaging, we find a highly significant deficit of z ≈ 5.7 LAEs within 20 h^{-1} Mpc of the trough. The results are consistent with a model in which the scatter in Lyα opacity near z ∼ 6 is driven by large-scale UVB fluctuations, and disfavor a scenario in which the scatter is primarily driven by variations in IGM temperature. UVB fluctuations at this epoch present a boundary condition for reionization models, and may help shed light on the nature of the ionizing sources."

The basic conclusion is that the opaque regions of the early Universe about 12.5 billion years ago (redshift z∼ 6) correspond to a small number of galaxies. This is in contrast to standard model expectations. Opacity is due to the absorption of radiation by atoms; the UV radiation generated by galaxies ionizes atoms and makes the Universe transparent.
In standard cosmology the radiation would arrive from a rather large region. The formation of galaxies is estimated to have begun 0.5 Gy after the Big Bang, but there is evidence for galaxies already 0.2 Gy after the Big Bang (see this). Since the region studied corresponds to a temporal distance of about 12.5 Gly and the age of the Universe is around 13.7 Gy, UV radiation from a region of size about 1 Gly should have reached the intergalactic regions and caused the ionization. A second conclusion is that there are large fluctuations in the opacity. What is suggested is that either the intensity of the UV radiation or the density of intergalactic gas fluctuates. The fluctuations in the intensity of UV radiation could be understood if the radiation from the galaxies propagated only to a finite distance at early times. Why this should be the case is difficult to understand in standard cosmology. Could TGD provide the explanation?

Conformal cyclic cosmology of Penrose and zero energy ontology based cosmology
Penrose has proposed an interesting cyclic cosmology (see this and this) in which two subsequent cosmologies are glued together along a conformal boundary. The metric of the next cosmology is related to that of the previous one by a conformal scaling factor, which approaches zero at the 3-D conformal boundary. The physical origin of this kind of distance scaling is difficult to understand. The prediction is the existence of concentric circles of cosmic size interpretable as a kind of memory about previous cosmic cycles.

In the TGD framework zero energy ontology (ZEO) inspired theory of consciousness suggests an analogous sequence of cosmologies. Now the cycles would correspond to life cycles of a conscious entity of cosmic size having a causal diamond (CD) as imbedding space correlate. The arrow of geometric time is defined as the time direction in which the temporal distance between the ends of the CD increases in a sequence of state function reductions leaving the passive boundary of the CD unaffected and having an interpretation as weak measurements. The arrow of time changes in "big" state function reductions changing the roles of the boundaries of the CD and meaning the death and reincarnation of the self with an opposite arrow of time. Penrose's gluing procedure would be replaced with a "big" state function reduction in the TGD framework. This proposal is discussed in some detail, as is the possibility that also now concentric low variance circles in the CMB could carry memories about the previous life cycles of the cosmos. This picture applies to all levels in the hierarchy of cosmologies (hierarchy of selves), giving rise to a kind of Russian doll cosmology.

See the chapter TGD based cosmology or the article Conformal cyclic cosmology of Penrose and zero energy ontology based cosmology.
About the physical interpretation of the velocity parameter in the formula for the gravitational Planck constant
Nottale's formula for the gravitational Planck constant ℏ_{gr}= GMm/v_{0} involves a parameter v_{0} with dimensions of velocity. I have worked out the quantum interpretation of the formula, but the physical origin of v_{0}, or equivalently of the dimensionless parameter β_{0}=v_{0}/c (to be used in the sequel), has remained open hitherto. In the following a possible interpretation based on the many-sheeted space-time concept, many-sheeted cosmology, and zero energy ontology (ZEO) is discussed.

A generalization of the Hubble formula β=L/L_{H} for the cosmic recession velocity, where L_{H}= c/H is the Hubble length and L is the radial distance to the object, is suggestive. This interpretation would suggest that some kind of expansion is present. The fact however is that stars, planetary systems, and planets do not seem to participate in cosmic expansion. In the TGD framework this is interpreted in terms of quantal jerk-wise expansion taking place as relatively rapid expansions analogous to atomic transitions or quantum phase transitions. The TGD based variant of the Expanding Earth model assumes that during the Cambrian explosion the radius of Earth expanded by a factor 2.

There are two measures for the size of the system. The M^{4} size L_{M4} is identifiable as the maximum of the radial M^{4} distance from the tip of the CD associated with the center of mass of the system along the light-like geodesic at the boundary of the CD. The system also has a size L_{ind} defined in terms of the induced metric of the space-time surface, which is space-like at the boundary of the CD. One has L_{ind}<L_{M4}. The identification β_{0}= L_{M4}/L_{H}<1 does not allow the identification L_{H}=L_{M4}. L_{H} would however naturally correspond to the size of the magnetic body of the system, in turn identifiable as the size of the CD.
One can deduce an estimate for β_{0} by approximating the space-time surface near the light-cone boundary as a Robertson-Walker cosmology, and expressing the mass density ρ defined as ρ=M/V_{M4}, where V_{M4}=(4π/3) L_{M4}^{3} is the M^{4} volume of the system. ρ can be expressed as a fraction ε^{2} of the critical mass density ρ_{cr}= 3H^{2}/8π G. This leads to the formula β_{0}= [r_{S}/L_{M4}]^{1/2} × (1/ε), where r_{S} is the Schwarzschild radius. This formula is tested for the planetary system and Earth. The dark matter assignable to Earth can be identified as the innermost part of the inner core, with a volume which is 0.01 per cent of the volume of Earth. Also the consistency of the Bohr quantization for dark and ordinary matter is discussed and leads to a number theoretical condition on the ratio of the ordinary and dark masses.

See the chapter About the Nottale's formula for h_{gr} and the possibility that Planck length l_{P} and CP_{2} length R are identical giving G= R^{2}/ℏ_{eff} or the article About the physical interpretation of the velocity parameter in the formula for the gravitational Planck constant.
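As a sanity check of the quoted formula, a short script (with purely illustrative values of M, L and ε, not a fit to any real system) verifies that β_{0} = [r_S/L]^{1/2}/ε is consistent with the defining relations ρ = ε^2 ρ_cr and β_{0} = L/L_H, L_H = c/H:

```python
import math

G = 6.674e-11   # Newton's constant, SI
c = 2.998e8     # m/s

def beta0(M, L, eps):
    """beta_0 = sqrt(r_S / L) / eps with r_S = 2GM/c^2 (formula in the text)."""
    r_S = 2 * G * M / c**2
    return math.sqrt(r_S / L) / eps

# Round-trip check of the derivation: with H = c*beta0/L (from beta0 = L/L_H),
# the density rho = M/((4pi/3)L^3) should equal eps^2 * rho_cr = eps^2 * 3H^2/(8 pi G).
M = 2.0e30      # ~ one solar mass (illustrative)
L = 1.5e11      # ~ 1 AU (illustrative)
eps = 0.5       # hypothetical fraction

b = beta0(M, L, eps)
H = c * b / L
rho = M / ((4 * math.pi / 3) * L**3)
rho_cr = 3 * H**2 / (8 * math.pi * G)
print(rho / rho_cr, eps**2)   # the two values agree
```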
Solution of Hubble constant discrepancy from the length scale dependence of cosmological constant

The discrepancy of the two determinations of the Hubble constant has led to a suggestion that new physics might be involved (see this).

CMB cold spot as a problem of the inflationary cosmology

The existence of a large cold spot in the CMB is a serious problem for the inflationary cosmology. The explanation as an apparent cold spot due to the Sachs-Wolfe effect caused by the gravitational redshift of arriving CMB photons in so-called supervoids along the line of sight has been subjected to severe criticism. The TGD based explanation as a region with genuinely lower temperature and average density relies on the view about primordial cosmology as a cosmic string dominated period, during which it is not possible to speak about space-time in the sense of general relativity, and on the analog of the inflationary period mediating a transition to the radiation dominated cosmology, in which space-time in the sense of general relativity exists. Fluctuations in the time at which this transition period ended would induce genuine fluctuations in CMB temperature and density. This picture would also explain the existence of supervoids. See the chapter TGD inspired cosmology or the article CMB cold spot as a problem of the inflationary cosmology.
Did you think that star formation is understood?

In Cosmos Magazine there is an interesting article about the work of a team of astronomers led by Fatemeh Tabatabaei published in Nature Astronomy. The problem is the following. In the usual scenario for star formation the stars would have formed almost instantaneously, and star formation would not continue significantly anymore. Stars with the age of our Sun however exist, and star formation is still taking place: more than one half of galaxies are forming stars. So-called starburst galaxies do this very actively. The standard story is that since stars explode as supernovae, the debris from supernovae condenses into stars of later generations. Something like this certainly occurs, but it does not seem to be the whole story.

Remark: It seems incredible that astrophysics would still have unsolved problems at this level. Over the years I have learned that the standard reductionistic paradigm is full of holes.

The notion of star formation quenching has been introduced: it would slow down the formation of stars. It is known that quenched galaxies mostly have a supermassive blackhole in their center and that quenching starts at the centers of galaxies. Quenching would preserve star forming material for future generations of stars. To study this process the team led by Tabatabaei turned their attention to NGC 1079, located at a distance of 45 million light years. It is still forming stars in central regions but shows signs of quenching and has a supermassive blackhole in its center. What was found was that large magnetic fields, probably enhanced by the central black hole, affect the gas clouds that would normally collapse into stars, thereby inhibiting their collapse. These forces can even break big clouds into smaller ones, ultimately leading to the formation of smaller stars. This is highly interesting from the TGD point of view. I have already considered a TGD based model for star formation (see this).
In the simplest TGD based model galaxies are formed as knots of long cosmic strings. Stars in turn would be formed as subknots of these galactic knots. There is also an alternative vision in which knots are just closed flux tubes bound to long strings containing galaxies as closed flux tubes like pearls in a necklace. These closed flux tubes could emerge from the long string by reconnection and form elliptic galaxies. The signature would be non-flatness of the velocity spectrum of distant stars. Also in the case of stars a similar reconnection process splitting off the star as a subknot of the galactic string can be imagined.

If stars are subknots of the knots of the galactic string representing galaxies, the formation of a star would correspond to the formation of a knot. This would involve a reconnection process in which some portions of the knot go "through each other". This is how knots are reduced to the trivial knot in the knot cobordism used to construct knot invariants in knot theory (see this). Now it would work in the opposite direction: to build knots. This process is rather violent and would initiate star formation, with dark matter from the cosmic string forming the star. This process would continue forever and would allow avoiding the instantaneous transformation of matter into stars of the standard model. At a deeper level star formation would be induced by a process taking place at the level of dark matter at magnetic flux tubes: a similar vision applies in TGD inspired biology. One could perhaps see these knots as seeds of a phase transition like process leading to the formation of a star.

This reconnection process could take place also in the formation of spiral galaxies. In the Milky Way there are indeed indications of the reconnection process, which could be related to the formation of the Milky Way as a knot. The role of strong magnetic fields, supposed to be amplified by the galactic blackhole, is believed to be essential in quenching.
They would be associated with dark flux tubes, possibly as return fluxes at ordinary space-time sheets carrying visible matter (flux lines must be closed). These magnetic fields would somehow prevent the collapse of gas clouds into stars. They could also induce a splitting of the gas cloud into smaller clouds. The ratio of mass to magnetic flux for the clouds was studied, and the clouds were found to be magnetically critical or stable against collapse to the core regions needed for the formation of stars. The star formation efficiency of the clouds drops with increasing magnetic field strength. Star formation would begin once the magnetic field strength falls below a critical value.

If reconnection plays a role in the process, this would suggest that reconnection is probable for magnetic field strengths below the critical value. Since the thickness of the magnetic flux tube associated with its M^{4} projection increases when the magnetic field strength decreases, one can argue that the reconnection probability increases so that star formation becomes more probable. The development of the galactic blackhole would amplify the magnetic fields. During cosmic evolution the flux tubes would thicken so that also the field strength would be reduced, and eventually star formation would begin if the needed gas clouds are present. In distant regions the thickness of flux tube loops can be argued to be larger, since the p-adic length scale in question is longer and the magnetic field strength is expected to scale like the inverse of the p-adic length scale squared (also a larger value of h_{eff}/h=n would imply this). This would explain star formation in distant regions. This is just what observations tell.

A natural model for the galactic blackhole is as a highly wound portion of cosmic string. The blackhole Schwarzschild radius would be R=2GM, and the mass due to the dark energy of the string (there would be also a dark matter contribution) would be M≈ TL, where T is the string tension, roughly GT≈ 2^{-11}.
This would give the estimate L≈ 2^{10}R. See the chapter TGD and astrophysics or the article Five new strange effects associated with galaxies . 
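The closing estimate can be checked in one line. Assuming, as the quoted numbers suggest, that the dimensionless combination GT is about 2^{-11} (with c=1), the relations R = 2GM and M = TL give L = M/T = R/(2GT) = 2^{10} R:

```python
# Sketch of the estimate L ≈ 2^10 R, assuming G*T ≈ 2^-11 (units c=1).
GT = 2.0 ** -11          # assumed dimensionless combination G*T
R = 1.0                  # measure lengths in units of the Schwarzschild radius
L = R / (2 * GT)         # from R = 2GM and M = T*L
print(L)                 # 1024.0, i.e. L = 2^10 R
```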
Four new strange effects associated with galaxies

Dark matter in the TGD sense corresponds to h_{eff}/h=n phases of ordinary matter associated with magnetic flux tubes carrying monopole flux. These flux tubes are n-sheeted covering spaces, and n corresponds to the dimension of the extension of rationals in which the Galois group acts. The evidence for this interpretation of dark matter is accumulating. Here I discuss the 4 latest galactic anomalies supporting the proposed view.

TGD based explanation for why the rotation periods of galaxies are the same

I learned on FB about a very interesting finding about the angular rotation velocities of stars near the edges of galactic disks (see this). The rotation period is about one gigayear. The discovery was made by a team led by professor Gerhardt Meurer from the UWA node of the International Centre for Radio Astronomy Research (ICRAR). Besides young stars and interstellar gas, also a population of older stars was found at the edges. The expectation was that older stars would not be present. The rotation periods are claimed to be, to a reasonable accuracy, the same for all spiral galaxies irrespective of size.

The constant velocity spectrum for distant stars implies ω ∝ 1/r for r>R. It is important to identify the value of the radius R of the edge of the visible part of the galaxy precisely. I understood that outside the edge stars are not formed. According to Wikipedia, the size R of the Milky Way is in the range (1-1.8)× 10^{5} ly and the velocity of distant stars is v=240 km/s. This gives T∼ R/v∼ .23 Gy, which is by a factor 1/4 smaller than the proposed universal period of T=1 Gy at the edge. It is clear that the value of T is sensitive to the identification of the edge, and one can challenge the identification R_{edge}=4× R.

In the following I will consider two TGD inspired arguments. The first argument is classical, developed by studying the velocity spectrum of stars for the Milky Way, and leads to a rough view about the dynamics of dark matter. The second argument is quantal and introduces the notion of the gravitational Planck constant ℏ_{gr} and the quantization of angular momentum as multiples of ℏ_{gr}. It allows one to predict the value of T and deduce a relationship between the rotation period T and the average surface gravity of the galactic disk.
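The quoted estimate T ∼ R/v ∼ .23 Gy can be reproduced directly from the Wikipedia numbers (standard unit conversions only, no TGD input):

```python
# Back-of-envelope check: T ~ R/v with R = (1-1.8)e5 ly and v = 240 km/s.
ly = 9.461e15      # metres per light year
year = 3.156e7     # seconds per year
v = 240e3          # m/s

periods = []
for R_ly in (1.0e5, 1.8e5):
    T_gy = (R_ly * ly / v) / year / 1e9   # period estimate in Gy
    periods.append(round(T_gy, 2))
print(periods)     # [0.12, 0.22]: ~0.23 Gy at the upper end of the range
```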
In an attempt to understand how T could be universal in the TGD framework, it is best to look at the velocity spectrum of the Milky Way depicted in the Wikipedia article about the Milky Way (see this).
See the chapter TGD and astrophysics or the article Four new strange effects associated with galaxies . 
Strange finding about galactic halo as a possible further support for the TGD based model of galaxies

A team led by Maria Bergemann from the Max Planck Institute for Astronomy in Heidelberg has studied a small population of stars in the halo of the Milky Way (MW) and found its chemical composition to closely match that of the Galactic disk (see this). This similarity provides compelling evidence that these stars originated from within the disk, rather than from merged dwarf galaxies. The reason for this stellar migration is thought to be theoretically proposed oscillations of the MW disk as a whole, induced by the tidal interaction of the MW with a passing massive satellite galaxy.

One can divide the stars in the MW into the stars in the galactic disk and those in the galactic halo. The halo has gigantic structures consisting of clouds and streams of stars rotating around the center of the MW. These structures have been identified as a kind of debris thought to reflect the violent past of the MW involving collisions with smaller galaxies. The scientists investigated 14 stars located in two different structures in the Galactic halo, the Triangulum-Andromeda (TriAnd) and the A13 stellar overdensities, which lie on opposite sides of the Galactic disk plane. Earlier studies of the motion of these two diffuse structures revealed that they are kinematically associated and could be related to the Monoceros Ring, a ring-like structure that twists around the Galaxy. The positions of the two stellar overdensities could be determined as each lying about 5 kiloparsec (14000 ly) above and below the Galactic plane. Chemical analysis of the stars, made possible by their spectral lines, demonstrated that they must originate from the MW itself, which was a complete surprise. The proposed model for the findings is in terms of vertical vibrations of the galactic disk analogous to those of a drum membrane.
In particular, the fact that the structures are above and below the Monoceros Ring supports this idea. The vibrations would be induced by the gravitational interactions of the ordinary and dark matter of the galactic halo with a passing satellite galaxy. The picture of the article (see this) illustrates what the pattern of these vertical vibrations would look like according to simulations.

In the TGD framework this model is modified, since the dark matter halo is replaced with a cosmic string. Due to the absence of the dark matter halo, the motion along the cosmic string is free apart from the gravitational attraction caused by the galactic disk. The cosmic string forces the migrated stars to rotate around the cosmic string in a plane parallel to the galactic plane, and the stars studied indeed belong to ring-like structures: the prediction is that these rings rotate around the axis of the galaxy. One can argue that if stars are very far from the galactic plane, say in a dwarf galaxy, the halo model of dark matter suggests that the orbital plane is arbitrary but goes through the galactic center, since the spherically symmetric dark matter halo dominates in mass density. TGD would predict that the orbital plane is parallel to the galactic plane. Are the oscillations of the galactic plane necessary in the TGD framework?

Dark matter and 21-cm line of hydrogen

Dark matter in the TGD sense corresponds to h_{eff}/h=n phases of ordinary matter associated with magnetic flux tubes. These flux tubes would be n-sheeted covering spaces, and n would correspond to the dimension of the extension of rationals in which the Galois group acts. The evidence for this interpretation of dark matter is accumulating. Here I discuss one of the latest anomalies: the 21-cm anomaly.

Sabine Hossenfelder told about the article discussing the possible interpretation of the so-called 21-cm anomaly associated with the hyperfine transition of the hydrogen atom and observed by the EDGES collaboration. The abstract reads:

"The EDGES Collaboration has recently reported the detection of a stronger-than-expected absorption feature in the global 21-cm spectrum, centered at a frequency corresponding to a redshift of z ≈ 17. This observation has been interpreted as evidence that the gas was cooled during this era as a result of scattering with dark matter. In this study, we explore this possibility, applying constraints from the cosmic microwave background, light element abundances, Supernova 1987A, and a variety of laboratory experiments. After taking these constraints into account, we find that the vast majority of the parameter space capable of generating the observed 21-cm signal is ruled out. The only range of models that remains viable is that in which a small fraction, ≈ 0.3-2 per cent, of the dark matter consists of particles with a mass of ≈ 10-80 MeV and which couple to the photon through a small electric charge, ε ≈ 10^{-6}-10^{-4}. Furthermore, in order to avoid being overproduced in the early universe, such models must be supplemented with an additional depletion mechanism, such as annihilations through a L_{μ}-L_{τ} gauge boson or annihilations to a pair of rapidly decaying hidden sector scalars."

What has been found is an unexpectedly strong absorption feature in the 21-cm spectrum: the redshift is about z ≈ 17, which corresponds to a distance of about 2.27× 10^{11} ly.
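Independently of the interpretation, one can check that z ≈ 17 corresponds to the frequency band of the EDGES measurement (standard redshift formula, no TGD input):

```python
# The 21 cm hyperfine line has rest frequency ~1420.4 MHz; at redshift z
# it is observed at nu_rest/(1+z), which for z = 17 lands near 78 MHz,
# the band where EDGES reported the absorption feature.
nu_rest = 1420.405751    # MHz, hydrogen hyperfine line
z = 17.0
nu_obs = nu_rest / (1 + z)
print(round(nu_obs, 1))  # 78.9 MHz
```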
The dark matter interpretation would be in terms of scattering of the baryons of the gas from dark matter at a lower temperature. The anomalous absorption of the 21 cm line could be explained by the cooling of the gas caused by the flow of energy to a colder medium consisting of dark matter. If I understood correctly, this would generate a temperature difference between the background radiation and the gas and a consequent energy flow to the gas, inducing the anomaly. The article excludes a large part of the parameter space able to generate the observed signal. The idea is that the baryons of the gas interact with dark matter, the interaction being mediated by photons. The small em charge of the new particle is needed to make it "dark enough".

My conviction is that tinkering with the quantization of electromagnetic charge is only a symptom of how desperate the situation concerning the interpretation of dark matter in terms of exotic particles has become. Genuinely new physics is involved, and the old recipes of particle physicists do not work.

In the TGD framework the dark matter at lower temperature would consist of h_{eff}/h=n phases of ordinary matter residing at magnetic flux tubes. This kind of energy transfer between ordinary and dark matter is a general signature of dark matter in the TGD sense, and there are indications from some experiments relating to primordial life forms for this kind of energy flow in lab scale (see this). The ordinary photon line appearing in the Feynman diagram describing the exchange of a photon would be replaced with a photon line containing a vertex in which the photon transforms to a dark photon. The coupling in the vertex (call it m^{2}) would have dimensions of mass squared. This would transform the coupling e^{2} associated with the photon exchange to e^{2} m^{2}/p^{2}, where p^{2} is the photon's virtual mass squared.
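The rescaling e^{2} → e^{2} m^{2}/p^{2} can be illustrated with simple arithmetic. The sketch below is my own back-of-the-envelope illustration, not part of the text: the numerical values of the conversion mass scale m and the momentum transfer |p| are illustrative assumptions, chosen only to show how an effective millicharge ε ∼ m/|p| can land in the 10^{-6}-10^{-4} window quoted in the paper.

```python
import math

# Hedged sketch: a photon -> dark-photon conversion vertex with a
# dimensionful coupling m^2 rescales the photon-exchange coupling
# e^2 by m^2/p^2, mimicking a millicharge epsilon ~ m/|p|.
# All numbers below are illustrative assumptions, not TGD predictions.

def effective_coupling(e_sq: float, m_sq: float, p_sq: float) -> float:
    """Return the rescaled coupling e^2 * m^2 / p^2."""
    return e_sq * m_sq / p_sq

alpha = 1 / 137.036                 # fine-structure constant
e_sq = 4 * math.pi * alpha          # e^2 in natural units

m = 1e-3                            # assumed conversion mass scale, MeV
p = 10.0                            # assumed momentum transfer, MeV

eps = m / p                         # effective millicharge in units of e
suppression = effective_coupling(1.0, m**2, p**2)

print(f"effective epsilon ~ {eps:.1e}")          # ~1e-4, upper edge of the quoted range
print(f"coupling suppression m^2/p^2 ~ {suppression:.1e}")
```

With these illustrative numbers the effective charge comes out at the upper edge of the range ε ≈ 10^{-6}-10^{-4} allowed by the constraints discussed in the article.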
The slow rate for the transformation of the ordinary photon to a dark photon could be seen as an effective reduction of the electromagnetic charge of the dark matter particle from its quantized value.

Remark: In biological systems dark cyclotron photons would transform to ordinary photons and would be interpreted as biophotons with energies in the visible and UV range.

To sum up, the importance of this finding is that it supports the view about dark matter as ordinary particles in a new phase. There are electromagnetic interactions, but the transformation of ordinary photons to dark photons slows down the process and makes these exotic phases effectively dark. See the chapter TGD and astrophysics or the article Four new strange effects associated with galaxies.
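The biophoton remark above can be made quantitative with a hedged sketch: a dark cyclotron photon carries energy E = h_{eff} f = n h f, so even a very low cyclotron frequency f can yield a visible-light energy if n is large enough. The magnetic field value (0.2 Gauss, the "endogenous" field often used in TGD discussions) and the choice of Ca^{2+} as the ion are my illustrative assumptions here.

```python
import math

# Hedged sketch: energy of a dark cyclotron photon, E = n * h * f,
# where n = h_eff/h. Field strength and ion choice are assumptions.

h_planck = 4.135667e-15   # Planck constant, eV*s
e = 1.602176e-19          # elementary charge, C
u = 1.660539e-27          # atomic mass unit, kg

B = 2.0e-5                # 0.2 Gauss in tesla (assumed "endogenous" field)
q, m = 2 * e, 40 * u      # Ca2+ ion: charge 2e, mass ~40 u (illustrative)

f_c = q * B / (2 * math.pi * m)   # classical cyclotron frequency, Hz
E_target = 2.0                     # eV, a visible-light photon energy
n = E_target / (h_planck * f_c)    # required h_eff/h to reach E_target

print(f"cyclotron frequency ~ {f_c:.1f} Hz")
print(f"required n = h_eff/h ~ {n:.1e}")
```

With these inputs the cyclotron frequency is of the order of 15 Hz, and reaching a 2 eV biophoton energy requires n of the order of 10^{13}.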
Low surface brightness galaxies as additional support for pearls-in-necklace model for galaxies

Sabine Hossenfelder had an inspiring post about the problems of the halo dark matter scenario. My attention was caught by the title "Shut up and simulate". It was really to the point. People first stopped thinking, then stopped calculating, and now they just simulate. Perhaps AI will replace them at the next step.

While reading I realized that Sabine mentioned a further strong piece of support for the TGD view about galaxies as knots along cosmic strings, which create a cylindrically symmetric gravitational field orthogonal to the string rather than a spherically symmetric field as in halo models. The string tension determines the rotation velocity of distant stars, predicted to be constant up to arbitrarily long distances (the finite size of the space-time sheet of course brings in a cutoff length).

To express it concisely: Sabine told about galaxies which have low surface brightness. In the halo model the density of both matter and the dark matter halo should be low for these galaxies, so the velocity of distant stars should decrease and lead to a breakdown of the so-called Tully-Fisher relation. It doesn't. This is the message that the observational astrophysicist Stacy McGaugh is trying to convey in his blog, and this is what Sabine's post was mostly about. I am not a specialist in the field of astrophysics, and it was nice to read the post and refresh my views about the problem of galactic dark matter.
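The claim that string tension fixes a radius-independent rotation velocity follows from elementary Newtonian mechanics: a straight string with linear mass density μ gives a transverse acceleration g(r) = 2Gμ/r, so the circular-orbit condition v^2/r = g(r) yields v = sqrt(2Gμ) at every radius. The sketch below is my own illustration; the value of μ is an assumption chosen so that v comes out near a typical galactic rotation velocity, not a TGD-derived number.

```python
import math

G = 6.674e-11   # Newton's constant, m^3 kg^-1 s^-2

def rotation_velocity(mu: float) -> float:
    """Circular velocity around a straight string of linear mass
    density mu (kg/m): v^2 = g(r) * r = 2*G*mu, independent of r."""
    return math.sqrt(2 * G * mu)

# Assumed linear density, chosen so v lands near ~200 km/s (illustration only).
mu = 3.0e20     # kg/m

v = rotation_velocity(mu)
print(f"flat rotation velocity ~ {v/1e3:.0f} km/s, independent of radius")
```

Because r cancels, the predicted curve stays flat out to arbitrary distance, which is exactly the Tully-Fisher-friendly behaviour that low surface brightness galaxies exhibit and halo models struggle with.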
Halo model of dark matter has also other problems.
Maybe it might be a good idea to start to think again. Using brains instead of computers is also a much more cost-effective option: I have been thinking intensely for four decades, and this hasn't cost society a single coin! Recommended! See the chapter TGD and astrophysics. For a TGD based model of galaxies see for instance this.

A further blow against dark matter halo paradigm

The following is a comment to a FB posting by Sabine Hossenfelder giving a link to the most recent finding challenging the dark matter halo paradigm. The article titled "A whirling plane of satellite galaxies around Centaurus A challenges cold dark matter cosmology", published in Science, can also be found on arXiv.

The halo model for dark matter encounters continually lethal problems, as I have repeatedly tried to tell in my blog postings and articles. But still this model continues to add items to the curriculum vitae of the specialists, presumably as long as the funding continues. Bad ideas never die.

The halo model predicts that the dwarf galaxies around massive galaxies like the Milky Way should move randomly. The newest fatal blow comes from the observation that dwarf galaxies move along neat circular orbits in the galactic plane of Centaurus A. Just like the TGD based pearls-in-necklace model of galaxies as knots (pearls) of long cosmic strings predicts! The long cosmic string creates a gravitational field in the transversal direction, and the dwarf galaxies move along nearly circular orbits. The motion along the long cosmic string would be free motion and would give rise to streams. The prediction is that at large distances the rotational velocities approach a constant, just as in the case of distant stars. See the chapter TGD and astrophysics. For a TGD based model of galaxies see for instance this.
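The claim that circular orbits in the transverse 1/r field of a string have the same speed at every radius can be checked numerically. The sketch below is my own illustration: it writes 2Gμ as a single assumed constant and leapfrog-integrates orbits at two very different radii, confirming that the orbital speed stays at sqrt(2Gμ) in both cases.

```python
import math

# Assumed value of 2*G*mu in m^2/s^2; illustrative only, not a TGD number.
G_MU = 2.0e10

def accel(x: float, y: float):
    """Transverse field of a straight string along z: g = -(2*G*mu/r) r_hat."""
    r2 = x * x + y * y
    return (-G_MU * x / r2, -G_MU * y / r2)

def integrate_circular(r0: float, orbits: int = 5, steps_per_orbit: int = 1000):
    """Leapfrog-integrate a circular orbit of radius r0; return (min, max) speed."""
    v0 = math.sqrt(G_MU)                       # circular speed, r-independent
    dt = 2 * math.pi * r0 / v0 / steps_per_orbit
    x, y, vx, vy = r0, 0.0, 0.0, v0
    ax, ay = accel(x, y)
    speeds = []
    for _ in range(orbits * steps_per_orbit):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay  # half kick
        x += dt * vx; y += dt * vy                # drift
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay  # half kick
        speeds.append(math.hypot(vx, vy))
    return min(speeds), max(speeds)

# Radii differing by a factor of 100 give the same orbital speed sqrt(2*G*mu):
for r0 in (1.0e6, 1.0e8):
    lo, hi = integrate_circular(r0)
    print(f"r0 = {r0:.0e} m: speed stays in [{lo:.1f}, {hi:.1f}] m/s")
```

This is the simple dynamical content behind both the flat rotation curves of distant stars and the coherent circular motion of the Centaurus A dwarfs in this picture: the speed depends only on the string tension, not on the orbital radius.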