What's new in


Note: Newest contributions are at the top!

Year 2018

BCS superconductivity at almost room temperature

Towards the end of year 2018 I learned about the discovery of BCS type (ordinary) superconductivity at a temperature warmer than that at the North Pole (see this). The compound in question was lanthanum hydride LaH10. Mihail Eremets and his colleagues found that it became superconducting at temperature -23 C and high pressure 170 GPa, about 1.6 million times the atmospheric pressure (see this).

The popular article proposed an intuitive explanation of BCS superconductivity, which was new to me and deserves to be summarized here. Cooper pairs would surf on sound waves. The position of the pair would correspond to a constant phase of the wave, and its velocity of motion would be the phase velocity of the sound wave. The intensity of the sound wave would be either maximal or minimal, corresponding to a vanishing force on the Cooper pair. One would have an equilibrium position changing adiabatically, which would conform with the absence of dissipation.

This picture would conform with the general TGD based vision inspired by Sheldrake's findings and claims related to morphic resonance (see this), and by the conjectured general properties of preferred extremals of the variational principle implied by the twistor lift of TGD (see this). The experimental discovery is of course in flagrant conflict with the predictions of BCS theory. As the popular article tells, before the work of Eremets et al the maximum critical temperature was thought to be something like 40 K corresponding to -233 C.

The TGD based view is that the members (electrons) of a Cooper pair reside at parallel flux tubes with opposite directions of magnetic flux and spin and have a non-standard value of Planck constant heff = n×h0 = n×h/6 (see this and this), higher than the ordinary value, so that Cooper pairs can be stable at higher temperatures. The flux tubes would have contacts with the atoms of the lattice so that they would experience the sound oscillations and electrons could surf at the flux tubes.

The mechanism binding electrons to a Cooper pair should be a variant of that in the BCS model. The exchange of phonons generates an attractive interaction between electrons leading to the formation of the Cooper pair. The intuitive picture is that one electron of the Cooper pair can be thought of as lying on a mattress and creating a dip towards which the other electron tends to move. The interaction of the flux tubes with the lattice oscillations, inducing magnetic oscillations, should generate this kind of interaction between electrons at flux tubes and induce the formation of a Cooper pair.

The isotope effect is the crucial test: the gap energy and therefore the critical temperature are proportional to the oscillation frequency ωD of the lattice (Debye frequency), which is proportional to 1/M^(1/2), where M is the mass of the molecule in question, and thus decreases with the mass of the molecule. One has lanthanum hydride, and can use an isotope of hydrogen to reduce the Debye frequency. The gap energy was found to change in the expected manner.

Can the TGD inspired model explain the isotope effect and the anomalously high value of the Debye energy? The naive order of magnitude estimate for the gap energy is of the form Egap = x×ℏeffωD, with x a numerical factor. The larger the value of heff = n×h0 = n×h/6, the larger the gap energy. Unless the high pressure increases ωD dramatically, the critical temperature 250 K would require n/6 ∼ Tcr/Tmax(BCS) ∼ 250/40 ∼ 6. For this value the cyclotron energy Ec = heff fc is much below thermal energy for magnetic fields even in the Tesla range, so that the binding energy must be due to the interaction with phonons.
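The two scalings invoked in this estimate - Egap = x×ℏeffωD with ωD ∝ 1/M^(1/2), and the gap growing linearly with n - can be put into a few lines. This is a hedged order-of-magnitude sketch with the numerical factor x left out; only ratios are computed.

```python
def debye_ratio(m_light, m_heavy):
    """Isotope effect: omega_D scales as 1/sqrt(M), so the ratio
    omega_D(heavy)/omega_D(light) is sqrt(m_light/m_heavy)."""
    return (m_light / m_heavy) ** 0.5

def required_n(t_cr, t_max_bcs, n_ref=6):
    """Estimate n from the text's relation n/6 ~ T_cr/T_max(BCS)
    at fixed omega_D."""
    return n_ref * t_cr / t_max_bcs

# Replacing hydrogen (A=1) by deuterium (A=2) in LaH10 lowers omega_D,
# and with it the gap, by a factor ~0.71
print(debye_ratio(1.0, 2.0))

# n needed for T_c = 250 K if the BCS maximum is ~40 K: n/6 ~ 6
print(required_n(250.0, 40.0))
```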

The high pressure is needed to keep the lattice rigid enough at high temperatures so that it indeed oscillates rather than "flows". I do not see how this could prevent the flux tube mechanism from working. Neither do I know whether the high pressure could somehow increase the value of the Debye frequency to give the large value of the critical temperature. Unfortunately, the high pressure (170 GPa) makes this kind of high Tc superconductor impractical.

See the chapter Quantum Criticality and dark matter or the article New findings related to high Tc super-conductivity.

Intelligent blackholes?

Thanks to Nikolina Benedikovic for kindly providing an interesting link and for arousing my curiosity. In the link one learns that Leonard Susskind has admitted that superstrings do not provide a theory of everything. This is actually not a mindblowing surprise, since very few would nowadays claim that the news about the death of superstring theory is premature. Congratulations in any case to Susskind: for a celebrated superstring guru it requires courage to change one's mind publicly. I will not discuss in the following the tragic fate of superstrings. Life must continue despite the death of superstring theory, and there are much more interesting ideas to consider.

Susskind is promoting an idea about growing blackholes increasing their volume as they swallow matter around them (see this). The idea is that the volume of the blackhole measures the complexity of the blackhole, and from this it is not a long way to the idea that information - maybe conscious information (I must admit that I cannot imagine any other kind of information) - is in question.

Some quantum information theorists find this idea attractive. Quantum information theoretic ideas find a natural place also in TGD. Magnetic flux tubes would naturally serve as space-time correlates for entanglement (the p-adic variants of entanglement entropy can be negative and would serve as measures of conscious information), and this leads to the idea about tensor networks formed by the flux tubes (see this). The so-called strong form of holography states that 2-D objects - string world sheets and partonic 2-surfaces as sub-manifolds of space-time surfaces - carry the information about the space-time surface and quantum states. M8-M4×CP2 correspondence would realize quantum information theoretic ideas at an even deeper level and would mean that a discrete finite set of data would code for a given space-time surface as preferred extremal.

In the TGD Universe long cosmic strings thickened to flux tubes would be key players in the formation of galaxies and would contain galaxies as tangles along them. These tangles would contain sub-tangles having an interpretation as stars, and even planets could be such tangles.

I just wrote an article describing a model of quasars (see this) based on this idea. In this model quasars need not be blackholes in the GRT sense but have structure including a magnetic moment (a blackhole has no hair), an empty disk around it created by the magnetic propeller effect caused by the radial Lorentz force, a luminous ring and accretion disk, and a so-called Elvis structure involving an outward flow of matter. One could call them quasi-blackholes - I will later explain why.

  1. Matter would not fall into the blackhole, but magnetic and volume energy in the interior would transform to ordinary matter, meaning a thickening of the flux tubes, which form by looping a configuration analogous to the flow lines of a dipole magnetic field. Think of the formation of a dipole field by going around a flux line - replaced by a flux tube - returning, and continuing along another flux line/tube.
  2. The dipole part of the structure would be a cylindrical volume in which the flux tubes would form a structure analogous to a coil in which one makes n2 ≈ 10^7 (GN = R^2/(n2h0)) windings in the CP2 direction and then continues at a different position in M4 and repeats the same. This is like having a collection of coils in M4, but each wound in the CP2 direction. This collection of coils would fill the dipole cylinder having, in the case of the quasar studied, a radius smaller than the Schwarzschild radius rS ≈ 5×10^9 km but of the same order of magnitude. The wire from a given coil would continue as a field line of the magnetic dipole field, return at the opposite end of the dipole cylinder, and run along it back to the opposite pole. The total number of loops in the collection of n1 dipole coils with n2 windings in the CP2 direction is n1×n2.
  3. Both the Kähler magnetic energy and the volume energy (actually magnetic energy associated with the twistor sphere) are positive, and the expansion of the flux tubes stops when the minimum string tension is achieved. This corresponds roughly to a biological length scale of about 1 mm for the value of the cosmological constant in the length scale of the observed universe (see this).

    Remark: Note that the twistor lift of TGD allows one to consider an entire hierarchy of cosmological constants behaving like 1/L(k)^2, where L(k) is the p-adic length scale corresponding to p ≈ 2^k.

    How to obtain the observed small value of the cosmological constant? This is not possible for the simplest imaginable induced twistor structure, for which the cosmological constant would be huge. A simple solution of the problem would be the p-adic length scale evolution of Λ as Λ ∝ 1/p, p ≈ 2^k. At a certain radius of the flux tube the total energy becomes minimal. A phase transition reducing the value of Λ allows further expansion and transformation of the energy of the flux tube to particles. There is also a simple proposal for the imbedding of the twistor sphere of the space-time surface into the product of the twistor spheres of M4 and CP2 allowing the desired dependence of Λ on the p-adic length scale.

    This in turn leads to a precise definition of what coupling constant evolution means: this has been one of the most longstanding problems of quantum TGD. The evolution would follow from the invariance of the action under small enough changes of Λ induced by simple modifications of the imbedding of the twistor sphere of the space-time surface into the product of the twistor spheres of M4 and CP2. There is a family of imbeddings labelled by rotations of these twistor spheres with respect to each other, and one can consider a one-dimensional sub-family of these imbeddings.

    This would solve the basic problem of cosmology, which is understanding why the cosmological constant manages to be so small at early times. Now time evolution would be replaced with length scale evolution: the cosmological constant would indeed be huge in very short scales, but its recent value would be extremely small.

  4. Cosmological expansion would naturally relate to the thickening of the flux tubes, and one can also consider the possibility that the long cosmic string gets more and more looped (the dipole field gets more and more loops) so that the quasi-blackhole would increase in size by swallowing more and more of the long cosmic string spaghetti into the dipole region and transforming it to the loops of the dipole magnetic field.
  5. The quasar (and also galactic blackhole candidates and active galactic nuclei) would be extremely intelligent fellows with a number theoretical intelligence quotient (the number of sheets of the space-time surface as a covering) of about

    heff/h = n/6 = n1×n2/6 > GMm(CP2)/(v0ℏ) = (rS/R(CP2))×(1/2β0),

    where one has β0 = v0/c, v0 roughly of the order of 10^-3 c is a parameter with dimensions of velocity, rS is the Schwarzschild radius of the quasi-blackhole, of the order of 10^9 km, and R(CP2) is the CP2 radius, of the order of 10^-32 meters. If this blackhole like structure is indeed a cosmic string eater, its complexity and conscious intelligence increase, and it would represent the brain of the galaxy as a living organism. This picture clearly resembles the vision of Susskind about blackholes.

  6. This cosmic spaghetti eater has also a time reversed version for which the magnetic propeller effect is in the opposite spatial direction: mass consisting of ordinary particles flows to the interior. Could this object be the TGD counterpart of a blackhole? Or could one see both these objects as blackholes dual to each other (maybe as analogs of white holes and blackholes)? The quasar like blackhole would eat cosmic string, and its time reversal would swallow from its environment the particle like matter that its time reversed predecessor generated. Could one speak of breathing? Inward breath and outward breath would be time reversals of each other. This brings in mind the TGD inspired living cosmology based on zero energy ontology (ZEO) (see this) as an analog of Penrose's cyclic cosmology, which dies and re-incarnates with an opposite arrow of time again and again.
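Plugging in the order-of-magnitude values quoted above (rS ≈ 5×10^9 km, R(CP2) ≈ 10^-32 m, β0 ≈ 10^-3) gives a feel for how large this number theoretical intelligence quotient would be. A hedged numerical sketch, meaningful only to order of magnitude:

```python
# Lower bound h_eff/h > (r_S/R(CP2)) * (1/(2*beta0)) with the rough
# values quoted in the text; only the order of magnitude matters.

r_S = 5e9 * 1e3   # Schwarzschild radius, ~5*10^9 km converted to meters
R_CP2 = 1e-32     # CP2 radius in meters (order of magnitude from the text)
beta0 = 1e-3      # beta0 = v0/c

n_bound = (r_S / R_CP2) / (2 * beta0)
print(f"{n_bound:.1e}")   # ~2.5e+47 sheets
```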
A natural question is whether also the ordinary blackholes are quasi-blackholes of either kind. In the fractal Universe of TGD this would look extremely natural.
  1. How to understand the fusion of blackholes (or neutron stars; I will however talk only about blackholes in the sequel) to a bigger blackhole observed by LIGO if quasi-blackholes are in question? Suppose that the blackholes indeed represent dipole-like tangles in a cosmic string. If they are associated with the same cosmic string, their collisions would be much more probable than one might expect. One can imagine two extreme cases for the motion of the blackholes.
    1. Tangles plus matter move along the string as along a highway. The collision would be essentially a head-on collision.
    2. Tangles plus matter around them move like almost free particles and the string follows: this would however still help the blackholes to find each other. The observed collisions can be modelled as the formation of a gravitational bound state in which the blackholes rotate around each other first.
    The latter option seems to be more natural.
  2. Do the observed blackhole like entities correspond to quasar like objects or their time reversals (more like ordinary blackholes)? The unexpectedly large masses would suggest that they have not yet lost their mass by thickening as stars usually do, so that they are analogs of quasars. These objects would be cosmic string eaters, and this would also favour the collisions of blackhole like entities associated with the same cosmic string.
  3. This picture would provide a possible explanation for the evidence for gravitational echoes and evidence for magnetic fields in the case of blackholes formed in the fusion of blackholes in LIGO (see this). The echoes would result from the repeated reflection of the radiation from the inner blackhole like region and from the ring bounding the accretion disk.

    Note that I have earlier proposed a model of ordinary blackholes in which there would be a Schwarzschild radius, but at some radius below it the space-time surface would become Euclidean. In the recent case the Euclidean regions would however be associated only with wormhole contacts with Euclidean signature of the metric, bounded by light-like orbits of partonic 2-surfaces, and might have sizes of the order of the Compton length scaled up by the value of heff/h for dark variants of particles, and therefore rather small as compared to the blackhole radius.

See the chapter TGD View about Quasars or the article with the same title.

What does one really mean by gravitational Planck constant?

There are important questions related to the notion of gravitational Planck constant, to the identification of the gravitational constant, and to the general structure of the magnetic body. What is the gravitational Planck constant really? What does the formula for the gravitational constant in terms of the CP2 length defining the Planck length in TGD really mean, and is it realistic? What does space-time surface as a covering space really mean?

What does one mean by space-time as a covering space?

The central idea is that space-time corresponds to an n-fold covering for heff = n×h0. It is not however quite clear what this statement means.

  1. How does the many-sheeted space-time correspond to the space-time of QFT and GRT? The QFT-GRT limit of TGD is defined by identifying the gauge potentials as sums of the induced gauge potentials over the space-time sheets. The magnetic field is a sum over its values for different space-time sheets. For a single sheet the field would be extremely small in the present case, as will be found.
  2. A central notion is the hierarchy of effective Planck constants heff/h0 = n, giving as a special case ℏgr = GMm/v0 assigned to the flux tubes mediating gravitational interactions. The most general view is that the space-time itself can be regarded as an n-sheeted covering space. A more restricted view is that the space-time surface can be regarded as an n-sheeted covering of M4. But why not an n-sheeted covering of CP2? And why not have n = n1×n2 such that one has an n1-sheeted covering of CP2 and an n2-sheeted covering of M4, as I indeed proposed more than a decade ago but gave up later in favour of coverings of M4 only? There is indeed nothing preventing the more general coverings.
  3. The n = n1×n2 covering can be illustrated for an electric engineer by considering a coil in a very thin 3-dimensional slab of thickness L. The small vertical direction would serve as the analog of CP2. The remaining 2 large dimensions would serve as the analog of M4. One could try to construct a coil with n loops in the vertical direction, but for very large n one would encounter problems, since the loops would overlap because the thickness of the wire would be larger than the available room L/n. There would be some maximum value of n, call it nmax.

    One could overcome this limit by using the decomposition n = n1×n2, which exists if n is not prime. In this case one could decompose the coil into n1 parallel coils in the plane, each having n2 ≤ nmax loops in the vertical direction, provided n2 is small enough to avoid problems due to the finite thickness of the wire. For n prime this does not work, but one can also select n2 to be maximal and allow the last coil to have less than n2 loops.

    An interesting possibility is that the preferred extremal property implies the decomposition ngr = n1×n2 with a nearly maximal value of n2, which can vary within some limits. Of course, one of the n2-coverings of M4 could be incomplete in the case that ngr is prime or not divisible by the nearly maximal value of n2. We do not live in an ideal Universe, and one can even imagine that the copies of the M4 covering are not exact copies but that n2 can vary.

  4. In the case of M4×CP2 a space-time sheet would replace a single loop of the coil, and the procedure would be very similar. A highly interesting question is whether the preferred extremal property favours the option in which one has, as the analog of n1 coils, n1 full copies of n2-fold coverings of M4 at different positions in M4, thus defining an n1-fold covering of CP2 in the M4 direction. These positions of copies need not be close to each other, but one could still have quantum coherence, and this would be essential in TGD inspired quantum biology.

    Number theoretic vision suggests that the sheets could be related by discrete isometries of CP2, possibly representing the action of the Galois group of the extension of rationals defining the adele, and since the group is a finite sub-group of the isometry group of CP2, the number of sheets would be finite.

    The finite sub-groups of SU(3) are analogous to the finite sub-groups of SU(2): if their action is genuinely 3-D, they correspond to the symmetries of Platonic solids (tetrahedron, cube, octahedron, icosahedron, dodecahedron). Otherwise one obtains symmetries of polygons, and the order of the group can be arbitrarily large. A similar phenomenon is expected now. In fact, the values of n2 could be quantized in terms of the dimensions of discrete coset spaces associated with discrete sub-groups of SU(3). This would give rise to a large variation of n2 and could perhaps explain the large variation of G identified as G = R^2(CP2)/(n2ℏ0) suggested by the fountain effect of superfluidity.

  5. There are indeed two kinds of values of n: the small values n = hem/h0 = nem assigned with flux tubes mediating the em interaction and appearing already in condensed matter physics, and the large values n = hgr/h0 = ngr associated with gravitational flux tubes. The small values of n would be naturally associated with coverings of CP2. The large values ngr = n1×n2 would correspond to n1-fold coverings of CP2 consisting of complete n2-fold coverings of M4. Note that in this picture one can formally define the constants ℏ(M4) = n1ℏ0 and ℏ(CP2) = n2ℏ0, as proposed more than a decade ago.
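The coil decomposition described above is essentially a small factorization task: given n and the maximal number nmax of loops that fit in the thin direction, pick the largest divisor n2 ≤ nmax, and fall back to a nearly maximal n2 with an incomplete last coil when n has no suitable divisor (e.g. n prime). A hedged sketch of this bookkeeping:

```python
def decompose(n, n_max):
    """Decompose n = n1*n2 loops into n1 coils of n2 <= n_max loops each.
    Returns (n1, n2, remainder); remainder > 0 means the last coil is
    incomplete (the case n prime discussed in the text)."""
    for n2 in range(min(n, n_max), 1, -1):
        if n % n2 == 0:
            return n // n2, n2, 0
    # no divisor fits: nearly maximal n2, last coil left incomplete
    return n // n_max, n_max, n % n_max

print(decompose(12, 5))   # (3, 4, 0): 3 full coils of 4 loops
print(decompose(13, 5))   # (2, 5, 3): 2 full coils of 5 loops + 3 leftover
```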

Planck length as CP2 radius and identification of gravitational constant G

There is also a puzzle related to the identification of the gravitational Planck constant. In the TGD framework the only theoretically reasonable identification of the Planck length is as the CP2 length R(CP2), which is roughly 10^3.5 times longer than the Planck length. Otherwise one must introduce the usual Planck length as a separate fundamental length. The proposal was that the gravitational constant would be defined as G = R^2(CP2)/ℏgr, ℏgr ≈ 10^7 ℏ. G indeed varies within unexpectedly wide limits, and the fountain effect of superfluidity suggests that the variation can be surprisingly large.

There are however problems.

  1. Arbitrarily small values of G = R^2(CP2)/ℏgr are possible for the values of ℏgr appearing in the applications: values of order ngr ∼ 10^13 are encountered in the biological applications. The value range of G is however experimentally rather limited. Something clearly goes wrong with the proposed formula.
  2. The Schwarzschild radius rS = 2GM = 2R^2(CP2)M/ℏgr would decrease with ℏgr. One would expect just the opposite, since fundamental quantal length scales should scale like ℏgr.
  3. What about the Nottale formula ℏgr = GMm/v0? Should one require self-consistency and substitute G = R^2(CP2)/ℏgr into it to obtain ℏgr = (R^2(CP2)Mm/v0)^(1/2)? This formula leads to physically unacceptable predictions, and I have used in all applications G = GN, corresponding to ngr ∼ 10^7 as the ratio of the squares of the CP2 length and the ordinary Planck length.
Could one interpret the almost constancy of G by assuming that it corresponds to ℏ(CP2) = n2ℏ0, n2 ≈ 10^7, nearly maximal except possibly in some special situations? For ngr = n1×n2 the covering corresponding to ℏgr would be an n1-fold covering of CP2 formed from n1 n2-fold coverings of M4. The covering would decompose to n1 disjoint M4 coverings, and this would also guarantee that the definition of rS remains the standard one, since only the number of M4 coverings increases.

If n2 corresponds to the order of a finite subgroup G of SU(3), or to the number of elements in a coset space G/H (H a normal subgroup of G), one would have a very limited number of values of n2, and it might be possible to understand the fountain effect of superfluidity from the symmetries of CP2, which would take a role similar to the symmetries associated with Platonic solids. In fact, the smaller value of G in the fountain effect would suggest that n2 in this case is larger than for GN, so that n2 for GN would not be maximal.
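As a sanity check on the numbers quoted above, one can verify in natural units (ℏ = c = 1, so that GN = lPlanck^2) that G = R^2(CP2)/ℏgr with R(CP2) ≈ 10^3.5 Planck lengths reproduces Newton's constant precisely when n2 ∼ 10^7. A minimal consistency sketch:

```python
# Natural units hbar = c = 1, so G_N = l_Planck^2.
l_P = 1.0               # Planck length
R_CP2 = 10**3.5 * l_P   # CP2 length, ~10^3.5 Planck lengths (from the text)
n2 = 10**7              # hbar_gr = n2 * hbar

G = R_CP2**2 / n2
print(G / l_P**2)       # ~1.0: Newton's constant recovered for n2 ~ 10^7
```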

See the chapter TGD View about Quasars or the article with the same title.

TGD View about Quasars

The work of Rudolph Schild and his colleagues Darryl Leiter and Stanley Robertson (among others) suggests that quasars are not supermassive blackholes but something else - MECOs, magnetospheric eternally collapsing objects having no horizon and possessing a magnetic moment. Schild et al argue that the same applies to galactic blackhole candidates and active galactic nuclei, perhaps even to ordinary blackholes, as Abhas Mitra, the developer of the notion of MECO, proposes.

In the sequel a TGD inspired view about quasars is proposed, relying on the general model for how galaxies are generated as the energy of thickened cosmic strings decays to ordinary matter. Quasars would not be blackhole like objects; rather, their formation would be an analog of the decay of the inflaton field producing the galactic matter. The energy of the string like object would replace galactic dark matter and automatically predict a flat velocity spectrum.

TGD is assumed to have the standard model and GRT as its QFT limit in long length scales. Could MECOs provide this limit? It seems that the answer is negative: MECOs represent still-collapsing objects. In TGD the energy of the inflaton field is replaced with the sum of the magnetic energy of the cosmic string and a negative volume energy, which both decrease as the thickness of the flux tube increases. The liberated energy transforms to ordinary particles and their dark variants in the TGD sense. Time reversal of a blackhole would be a more appropriate interpretation. One can of course ask whether the blackhole candidates in galactic nuclei are time reversals of quasars in the TGD sense.

The writing of the article led also to a considerable understanding of two key aspects of TGD. The understanding of the twistor lift and the p-adic evolution of the cosmological constant improved considerably. Also the understanding of the gravitational Planck constant and the notion of space-time as a covering space became much more detailed, in turn allowing a much more refined view about the anatomy of the magnetic body.

See the chapter TGD View about Quasars or the article with the same title.

Could dark protons and electrons be involved with dielectric breakdown in gases and conduction in electrolytes?

I have long had the intuitive feeling that electrolytes are not really understood in standard chemistry and physics, and I have expressed this feeling in the TGD model of "cold fusion" (see this). This kind of feeling of course induces an immediate horror reaction turning the stomach around. Not a single scientist in the world seems to be challenging the age-old chemical wisdom. Who am I to do this? Perhaps I really am the miserable crackpot that colleagues have for four decades told me to be. Do I realize only at the high age of 68 that my wise colleagues have been right all the time?

The question of my friend related to dielectric breakdown in gases led me to consider this problem more precisely. I will first consider dielectric breakdown and then ionic conduction in electrolytes from the TGD point of view to see whether the hypothesis stating that dark matter consists of phases of ordinary matter with non-standard Planck constant heff = nh0 (see this) could provide concrete insights into these phenomena.

Ionization in dielectric breakdown

One can start from a model for the dielectric breakdown of a gas (see this). The basic idea is that the negatively charged cathode emits electrons by tunnelling in the electric field; these accelerate in the electric field and ionize atoms provided they travel a distance longer than the free path l = 1/nσ before a collision. Here n is the number density of atoms and σ the collision cross section, in the geometric approximation the cross sectional area of the gas atom. This implies an upper bound on the number density n of gas atoms. On the other hand, too low a density makes also ionizations rare.

The positive ions in turn are absorbed by the cathode and more electrons are liberated. In a gas, dielectric breakdown results if the field strength is above a critical value Ecr. For air one has Ecr = 3 kV/mm.

  1. A cathode with a sharp tip liberates electrons. The electric field near the tip is very strong and in a reasonable approximation has strength

    E= V/r ,

    where r is the radius of curvature of the tip and V is the voltage with respect to earth. If r is small enough, electrons are able to tunnel from the metal.

  2. The tunnelling current of electrons can be deduced from a simple model based on the Schrödinger equation in a one-dimensional potential having the form U(x) = -Φw + Vx/r in the non-allowed region. One assumes that one can describe the electron using the analog of a plane wave exp(ikx), with kx replaced with ∫0^x k(x')dx' = i∫0^x p(x')dx'/ℏ, with imaginary momentum p(x) = i(2m|E-U(x)|)^(1/2) in the non-allowed region. The tunnelling current is proportional to the exponential factor

    R= exp(i∫ k(x)dx)

    having an interpretation as tunnelling probability.

  3. The tunnelling rate is highest near the Fermi energy, and at this energy it is

    R = exp(-8π(2mΦw^3)^(1/2)/3hE) .

    Here m is the electron's mass and Φw is the work function of the metal, telling the height of the potential well in which the electron resides. In the model of the photo-electric effect the energy of the photon needed to kick out an electron from the metal must be above Φw. The exponential factor approaches zero extremely rapidly with decreasing field, but for small enough curvature radii it can be sufficiently near to unity.

    Remark: Imaginary momentum does not make sense in classical mechanics. What is interesting is that in classical TGD the classical conserved quantities are in general complex numbers, and the analogs of virtual particles are on mass shell states with complex momenta, as also in the twistor Grassmannian approach, which has an 8-D generalization in the TGD framework. Could tunnelling have a classical space-time description in TGD framework?

  4. The electric field E = V/r in the tip cannot be much smaller than

    Emax ∼ 8π(2mΦw^3)^(1/2)/3h

    to guarantee that the exponential factor R is not too small. If one has h → heff = n×h0 > h (h = 6h0 is a good guess, see this and this), the tunnelling rate increases. This effect might serve as a signature for a large value of heff. Tunnelling would occur to magnetic flux tubes carrying dark electrons.
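The exponent above can be evaluated numerically. This is a hedged sketch: the E in the denominator is interpreted as the force eF on the electron (charge times field strength, with the charge left implicit in the formula above), and the work function and field values are typical illustrative numbers, not taken from the text.

```python
import math

m = 9.109e-31      # electron mass, kg
e = 1.602e-19      # elementary charge, C
h = 6.626e-34      # Planck constant, J*s
Phi_w = 4.5 * e    # work function ~4.5 eV, typical metal (assumption)
F = 5e9            # field strength at the tip, V/m (assumption)

# Exponent of R = exp(-8*pi*(2*m*Phi_w^3)^(1/2)/(3*h*e*F))
exponent = 8 * math.pi * math.sqrt(2 * m * Phi_w**3) / (3 * h * e * F)
R = math.exp(-exponent)
print(exponent, R)   # exponent ~13, R ~ 2e-6

# With h -> h_eff = n*h0 the exponent is divided by n/6 relative to h;
# already a modest scaling makes R of order 0.1, i.e. tunnelling easy
print(math.exp(-exponent / 6))
```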

What is needed next is dielectric breakdown, in the manner already described.
  1. Electrons ionize atoms and the resulting electrons cause more ionizations. Also the positive ions collide with the cathode and generate new electrons. A continual discharge, arc generation, would be the outcome.

    A rough criterion for ionization is that the free path l = 1/nσ of the electron is so large that the energy gained by the electron in the electric field E exceeds the ionization energy. The condition is El ≥ EI. A small density increases l but also decreases the number of collisions, so that there is some optimal density and pressure for the dielectric breakdown to occur. If the electrons are dark, they can travel along flux tubes, which would increase the free path in the electric field and increase the rate of ionization.

  2. The generation of the arc is described by Paschen's law giving the breakdown voltage, discovered empirically by Paschen in 1889 (see this).
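The free-path criterion can be put into numbers. A hedged order-of-magnitude sketch with round illustrative values for air at atmospheric pressure (the cross section and ionization energy are assumptions, not from the text); a single-free-path estimate naturally overshoots the observed 3 kV/mm, since a real avalanche builds up over many free paths:

```python
n = 2.5e25     # molecular number density of air at ~1 atm, 1/m^3
sigma = 1e-19  # geometric collision cross section, m^2 (assumption)
E_I = 15.0     # ionization energy scale of N2/O2, eV (assumption)

l = 1.0 / (n * sigma)   # mean free path, m
E_min = E_I / l         # field needed for e*E*l >= E_I, in V/m
print(l, E_min)         # l ~ 4e-7 m, E_min ~ 3.75e7 V/m (~37 kV/mm)
```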
Do we really understand ionic conduction in electrolytes?

One must now explain why ions can act as charge carriers in relatively weak electric fields. Concerning the production of electrons at the electrode the situation remains the same. In an electrolyte, however, the free path is much shorter than in a gas, since the density n is orders of magnitude higher. Therefore the ionization mechanism in electrolytes must be different - at least in the standard physics framework. One can of course ask whether a large value of heff might help both in the generation of dark electrons at the cathode and in increasing the free path of the electrons, so that they gain a higher energy in the electric field of the electrolyte, typically much lower than in dielectric breakdown.

The mechanism for the dissolution of ions in water involves neither electrodes nor an electric field. The ionization of NaCl in water serves as a good example.

  1. Na and Cl in NaCl are already ionized, since an ionic bond is in question. In dissolution NaCl ionizes into Na+ and Cl- ions in water. The sizes of the ions vary in the range .2-2 Angstrom. The explanation is that the presence of polar water molecules of size about 3 Angstrom, of which some have ionized to OH- and H+, leads to a competition: the presence of OH- and H+ breaks ionic NaCl bonds and dissolves NaCl. Approximating the situation as one-dimensional would suggest that NaCl corresponds to a potential well for the e^2/r potential. From the distance r between Na and Cl one obtains an estimate for the Coulomb potential energy. For r = 2 Angstrom it is about 7 eV and therefore rather high.
  2. The presence of OH- or H+ creates a second potential well. The Coulomb potentials of, say, Cl- and OH- acting on H+ sum up and a double potential well is created. In the original situation Na+ is in the potential well of Cl-. The closer the Cl- and OH- (or H+ and Na+) ions are, the lower the barrier between the two wells is and the higher the tunnelling probability for Na+ from the potential well of Cl- to that of OH- is. This can make possible the tunnelling of Na+/Cl- with a subsequent formation of the ionic bound state NaOH/HCl.

    The tunnelling probability is also now an exponential analogous to that appearing in the previous formula and proportional to 1/h. The ions must however get so close that the potential barrier is low enough. The rate for close encounters must therefore be high enough.

    Is this really the case or could heff come to the rescue? Could the dark protons H+ with heff=n× h at magnetic flux tubes, possibly formed in the ionization of water molecules to OH- and H+, play some role? Could also dark valence electrons assignable to OH play a role? Could one think that the dark H+ and e- of H2O reside at long flux tubes assignable to H2O, so that H2O would look like OH- + H+?

    As a matter of fact, a more realistic model replaces flux tubes with flux tube pairs, since there are reasons to assume that the flux tubes carry monopole flux and must form closed units (see this). Flux tube pairs are also central for the TGD based model of high Tc superconductivity (see this and this).

    The same would apply to HCl and NaOH. This leads to several variants of these molecules in which the proton or the electron or both are dark and reside at a long flux tube. An external electric field could induce the lengthening of these flux tube pairs or at least the motion of the dark proton and electron along them. These molecules would look like having long charged tentacles formed by flux tube pairs parallel or antiparallel to the direction of the electric field. The electric field would force the charged flux tube pair to move so that it points in the direction to which the charged particle moves in the field.

  3. According to standard physics this process generates only different ionic bound states: HCl and NaOH are formed from NaCl and H2O and vice versa. One does not obtain Na+ and Cl- serving as charge carriers. How could the presence of the relatively weak electric field in the electrolyte make possible electric currents if there are no charge carriers?
  4. Are HCl and NaOH in water really what they would be in gas? Could HCl in water be a bound state of H+ and Cl- such that H+ has a large value of heff? Could also Cl- be Cl for which the electron is a dark electron at a flux tube? This would make the size of HCl much larger than in gas and the ions involved would look like free charge carriers in a much longer scale. Could the same apply also to NaOH, NaCl and H2O?

    Could the fundamental current carriers be dark protons and dark electrons at dark flux tube pairs? Consider a long tentacle formed by a long flux tube pair carrying a dark proton or electron, with the direction of the pair determined by the sign of the electric force on the charge. This tentacle could reconnect with a neutral tentacle and the charge would be transferred to the latter. This flux tube pair would be in turn driven by the field, perhaps also inducing an increase of heff (requiring energy provided by the field) and therefore of the flux tube length, so that it points in the same direction as the original long tentacle. The outcome would be conduction based on the hopping of protons and electrons over a distance of the order of the tentacle length. This hopping mechanism could serve as a universal mechanism of conduction in electrolytes and also in living matter.
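The Coulomb energies discussed in item 1 are easy to sanity-check. The sketch below evaluates the bare energy e2/(4πε0 r) between unit point charges in vacuum at the separations mentioned above; an optional relative permittivity shows how drastically screening by bulk water (εr ∼ 80, an assumed textbook value) lowers the barrier toward the thermal scale.

```python
# Sketch: Coulomb energy e^2/(4*pi*eps0*r) between unit point charges, evaluated
# at the ionic separations (0.2 - 2 Angstrom) quoted in the text. Vacuum by
# default; pass eps_rel to include dielectric screening (illustrative only).
E2_EV_ANGSTROM = 14.3996  # e^2/(4*pi*eps0) in eV*Angstrom (standard constant)

def coulomb_energy_eV(r_angstrom, eps_rel=1.0):
    return E2_EV_ANGSTROM / (eps_rel * r_angstrom)

print(coulomb_energy_eV(2.0))        # ~7.2 eV in vacuum at 2 Angstrom
print(coulomb_energy_eV(0.2))        # ~72 eV at the shortest separation
print(coulomb_energy_eV(2.0, 80.0))  # ~0.09 eV with bulk-water screening: a few kT at 300 K
```

The last line illustrates why dissolution is possible at all: with dielectric screening the ionic binding drops to within reach of thermal fluctuations, even though the bare well is deep.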

For TGD view about "cold fusion" see the chapter Cold fusion again, the article Cold fusion, low energy nuclear reactions, or dark nuclear synthesis?, or the shorter article Could dark protons and electrons be involved with di-electric breakdown in gases and conduction in electrolytes?.

The analogs of CKM mixing and neutrino oscillations for particle and its dark variants

The so-called 21-cm anomaly, meaning that there is unexpected absorption of this line, could be due to a transfer of energy from gas to dark matter leading to a cooling of the gas. This requires em interaction of the ordinary matter with dark matter, but the allowed value of electric charge must be much smaller than elementary particle charges. In the TGD Universe the interaction would be mediated by an ordinary photon transforming to a dark photon having effective value heff/h0=n larger than the standard value, implying that the em charge of the dark matter particle is effectively reduced. Interaction vertices would involve only particles with the same value of heff/h0=n.

In this article a simple model for the mixing of the ordinary photon and its dark variants is proposed. Due to the transformations between different values of heff/h0=n during propagation, mass squared eigenstates are mixtures of photons with various values of n. An analog of the CKM matrix describing the mixing is proposed. Also the model for neutrino oscillations is generalized so that it applies - not only to photons - but to all elementary particles. The condition that the "ordinary" photon is essentially massless during propagation forces one to assume that during propagation the photon is a mixture of ordinary and dark photons, which would both be massive in the absence of mixing. A reduction to the ordinary photon would take place in the interaction vertices and therefore also in absorption. The mixing provides a new contribution to particle mass besides that coming from p-adic thermodynamics and from the Kähler magnetic fields assignable to the string like object associated with the particle.
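As a reminder of the standard two-state mixing that the article generalizes, here is a minimal sketch of the textbook two-flavor neutrino oscillation probability. All parameter values are illustrative assumptions for demonstration, not numbers from the article.

```python
import math

def transition_probability(theta, dm2_eV2, L_km, E_GeV):
    """Two-state oscillation probability P = sin^2(2*theta) * sin^2(1.27*dm2*L/E).
    This is the textbook two-flavor neutrino formula (units: eV^2, km, GeV);
    the article generalizes this kind of mixing to photons and their dark variants."""
    return math.sin(2 * theta) ** 2 * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Illustrative parameters only (assumed for demonstration):
print(transition_probability(math.pi / 4, 2.5e-3, 500.0, 1.0))  # near 1 at maximal mixing
print(transition_probability(0.0, 2.5e-3, 500.0, 1.0))          # 0.0: no mixing, no oscillation
```

The second call makes the key structural point: with vanishing mixing angle the mass eigenstates coincide with the interaction eigenstates and no oscillation occurs, which is why the ordinary-dark mixing angle controls the size of the proposed effect.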

See the chapter Quantum criticality and dark matter or the article The analogs of CKM mixing and neutrino oscillations for particle and its dark variants.

Is dark DNA dark also in TGD sense?

I encountered last year a highly interesting article about "dark DNA" hitherto found in the genomes of gerbils and birds, for instance in the genome of the sand rat living in deserts. I have written about this in another book but thought that it might be a good idea to add it also here.

The gene called Pdx1 related to the production of insulin seems to be missing, as do the 87 other genes surrounding it! What makes this so strange is that the animal cannot survive without these genes! The products that the instructions from the missing genes would create are however detected!

According to ordinary genetics, these genes cannot be missing but should be hidden, hence the attribute "dark" in analogy with dark matter. The dark genes contain a lot of G and C molecules and such genes are not easy to detect: this might explain why the genes have remained undetected.

A further interesting observation is that one part of the sand rat genome has many more mutations than found in other rodent genomes and is also GC rich. Could the mutated genes do the job of the original genes? Missing DNA is found in birds too. For instance, the gene for leptin - a hormone regulating energy balance - seems to be missing.

The finding is extremely interesting from the TGD viewpoint, where dark DNA has a very concrete meaning. Dark matter at magnetic flux tubes is what makes matter living in the TGD Universe. Dark variants of particles have a non-standard value heff=n× h0 (h= 6h0 is the most plausible option) of Planck constant making possible macroscopic quantum coherence among other things. Dark matter would serve as a template for ordinary matter in living systems and biochemistry could be a kind of shadow of the dynamics of dark matter. What I call dark DNA would correspond to dark analogs of atomic nuclei realized as dark proton sequences with an entangled proton triplet representing a DNA codon. The model predicts correctly the numbers of DNA codons coding for a given amino-acid in the case of the vertebrate genetic code and therefore I am forced to take it very seriously (see this and this).

The chemical DNA strands would be attached to parallel dark DNA strands and the chemical representation would not always be perfect: this could explain variations of DNA. This picture inspires also the proposal that evolution is not a passive process occurring via random mutations with survivors selected by the evolutionary pressures. Rather, a living system would have an R&D lab as one particular department. Various variants of DNA would be tested by transcribing dark DNA to ordinary mRNA, in turn translated to amino-acids, to see whether the outcome survives. This experimentation might be possible in a much shorter time scale than that based on random mutations. Also the immune system, which is rapidly changing, could involve this kind of R&D lab.

Also dark mRNA and amino-acids could be present, but dark DNA is the fundamental information carrying unit and it would be natural to transcribe it to ordinary mRNA. Of course, also dark mRNA could be produced and translated to amino-acids and even dark amino-acids could be transformed to ordinary ones. This would however require additional machinery.

What is remarkable is that the missing DNA is indeed associated with DNA sequences with an exceptionally high mutation rate. Maybe the R&D lab is there! If so, the dark DNA would be dark also in the TGD sense! Why GC richness should relate to this is an interesting question.

See the chapter Quantum criticality and dark matter.

Is it possible to determine experimentally whether gravitation is quantal interaction?

Marletto and Vedral have proposed (thanks to Ulla for the link) an interesting method for measuring whether gravitation is a quantal interaction (see this). I tried to understand what the proposal suggests and how it translates to TGD language.

  1. If the gravitational field is quantum, it makes possible entanglement between two states. This is the intuitive idea, but what does it mean in the TGD picture? Feynman interpreted this as entanglement of the gravitational field of an object with the state of the object. If the object is in a state which is a superposition of states localized at two different points xi, the classical gravitational fields φgr are different and one has a superposition of states with different locations

    | I> = ∑i=1,2 | mi at xi> | φgr,xi> ≡ | L> + | R> .

  2. Put two such de-localized states with masses mi at some distance d to get the state | I1>| I2>, | Ii> = | L>i + | R>i. The 4 component pairs of the states interact gravitationally, and since different gravitational fields act between different components, the components get different phases and one can obtain an entangled state.

    The gravitational field would entangle the masses. If one integrates over the degrees of freedom associated with the gravitational field, one obtains a density matrix, which is not pure if the gravitational field is quantum in the sense that it entangles with the particle position.

    That gravitation is able to entangle the masses would be a proof for the quantum nature of the gravitational field. It is not however easy to detect this. If gravitation only serves as a parameter in the interaction Hamiltonian of the two masses, entanglement can be generated but does not prove that the gravitational interaction is quantal. It is required that the only interaction between the systems is gravitational so that other interactions do not generate entanglement. Certainly, one should use masses having no em charge.

  3. In the TGD framework the view of Feynman is natural. One has a superposition of space-time surfaces representing this situation. The gravitational field of a particle is associated with the magnetic body of the particle represented as a 4-surface, and the superposition corresponds to a de-localized quantum state in the "world of classical worlds" (WCW) with xi representing particular WCW coordinates.
I am not a specialist in quantum information theory nor a quantum gravity experimentalist, and hereafter I must proceed keeping my fingers crossed and can only hope that I have understood correctly. To my best understanding, the general idea of the experiment would be to use an interferometer to detect the phase differences generated by the gravitational interaction and inducing the entanglement - not for photons, but for gravitationally interacting masses m1 and m2 assumed to be in a quantum coherent state and describable by a wave function analogous to an em field. It is assumed that the gravitational interaction can be described classically, and this is also the case in TGD by quantum-classical correspondence.
  1. The authors think quantum information theoretically and reduce everything to qubits. The de-localization of masses to a superposition of two positions corresponds to a qubit analogous to spin or the polarization of a photon.
  2. One must use an analog of an interferometer to measure the phase difference between different values of this "polarization".

    A normal interferometer is a flattened square-like arrangement. Photons in superpositions of different polarization states enter a beam splitter at the lower-left corner of the interferometer, dividing the beam into two beams with different polarizations: horizontal (H) and vertical (V). The vertical (horizontal) beam enters a mirror which reflects it to a horizontal (vertical) beam. One obtains paths V-H and H-V meeting at a transparent mirror located at the upper-right corner of the interferometer, where they interfere.

    There is a detector D0 resp. D1 detecting the component of light gone through the fourth mirror in the vertical resp. horizontal direction. The firing of D1 would select the H-V path and the firing of D0 the V-H path. This would thus tell along which path (V-H or H-V) the photon arrived. The interference and thus also the detection probabilities depend on the phases of the beams generated during the travel: this is important.

  3. If I have understood correctly, this picture about the interferometer must be generalized. The photon is replaced by a mass m in a quantum state which is a superposition of two states with polarizations corresponding to the two different positions. Beam splitting would mean that the components of the state of mass m localized at positions x1 and x2 travel along different routes. The wave functions must be reflected in the first mirrors on both paths and transmitted through the mirror at the upper-right corner. The detectors Di measure which path the mass state arrived along and localize the mass state at either position. The probabilities for the positions depend on the phase difference generated during the path. I can only hope that I have understood correctly: in any case the notions of mirror and transparent mirror in principle make sense also for solutions of the Schrödinger equation.
  4. One must however have two interferometers, one for each mass. Masses m1 and m2 interact quantum gravitationally and the phases generated for different polarization states differ. The phase is generated by the gravitational interaction. The authors estimate that the phases generated along the paths are of the form

    Φi = [m1m2G/ℏ di] Δ t .

    Δ t = L/v is the time taken to pass through the path of length L with velocity v. d1 is the smaller distance between the upper path for the lower mass m2 and the lower path for the upper mass m1. d2 is the distance between the upper path for the upper mass m1 and the lower path for the lower mass m2. See Figure 1 of the article.

What one needs for the experiment?
  1. One should have de-localization of massive objects. In atomic scales this is possible. If one has heff>h one could also have a zoomed up scale of de-localization and this might be very relevant. The fountain effect of superfluidity comes to mind.
  2. The gravitational fields created by atomic objects are extremely weak and this is an obvious problem. Gm1m2 for atomic mass scales is extremely small, since Planck mass mP is something like 1019 proton masses and atomic masses are of order 10-100 proton masses.

    One should have objects with masses not too far from Planck mass to make Gm1m2 large enough. The authors suggest using condensed matter objects having masses of order m∼ 10-12 kg, which is about 1015 proton masses and 10-4 Planck masses. The authors claim that recent technology allows the de-localization of masses of this scale at two points. The distance d between the objects would be of order micron.

  3. For masses larger than Planck mass one could have difficulties, since the quantum gravitational perturbation series need not converge for Gm1m2> 1 (say). For the proposed mass scales this would not be a problem.
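Plugging the proposed parameters into the phase formula Φi = [Gm1m2/ℏ di] Δt gives an order-of-magnitude feel for the experiment. The masses and distance below are those suggested by the authors; the interaction time Δt = 10-6 s is the timescale mentioned in the text.

```python
G = 6.674e-11      # Newton's constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34  # reduced Planck constant, J s

def gravitational_phase(m1, m2, d, dt):
    """Phase Phi = G*m1*m2*dt/(hbar*d) accumulated during interaction time dt."""
    return G * m1 * m2 * dt / (hbar * d)

m = 1.0e-12   # kg, mass scale suggested in the proposal (~1e15 proton masses)
d = 1.0e-6    # m, separation of order one micron
dt = 1.0e-6   # s, interaction time of the order quoted in the text
print(gravitational_phase(m, m, d, dt))  # ~0.63 rad: an order-unity, measurable phase
```

The result is of order one radian, which is just what an interferometric scheme needs: much smaller masses or shorter times would bury the phase in noise, which is why the authors push toward the 10-12 kg scale.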
What can one say about the situation in TGD framework?
  1. In the TGD framework the gravitational Planck constant hgr= Gm1m2/v0, assignable to the flux tubes mediating the interaction between m1 and m2 as macroscopic quantum systems, could enter the game and could in the extreme case reduce the value of the gravitational fine structure constant from Gm1m2/4π ℏ to Gm1m2/4π ℏeff = β0/4π, β0= v0/c<1. This would make the perturbation series convergent even for macroscopic masses behaving like quantal objects. The physically motivated proposal is β0∼ 2-11. This would zoom up the quantum coherence length scales by hgr/h.
  2. What can one say in TGD framework about the values of phases Φ?
    1. For ℏ → ℏeff one would have

      Φi = [Gm1m2/ℏeff di] Δ t .

      For ℏ → ℏeff the phase differences would be reduced for a given Δ t. On the other hand, the quantum gravitational coherence time is expected to increase like heff so that the values of the phase differences would not change if Δ t is increased correspondingly. The time of 10-6 seconds could be scaled up, but this would require an increase of the total length L of the interferometer arms and/or a slowing down of the velocity v.

    2. For ℏeff=ℏgr this would give a universal prediction having no dependence on G or masses mi

      Φi = [v0Δ t/di] = [v0/v] [L/di] .

      If Planck length is actually equal to the CP2 length R∼ 103.5(GNℏ)1/2, one would have GN = R2/ℏeff with ℏeff∼ 107ℏ. One can consider both smaller and larger values of G, and for larger values the phase difference would be larger. For this option one would obtain 1/ℏeff2 scaling for Φ. Also for this option the prediction for the phase difference is universal for heff=hgr.

    3. What is important is that the universality could be tested by varying the masses mi. This would however require that the mi behave as gravitationally coherent quantum systems. It is however possible that the largest systems behaving quantum coherently correspond to much smaller masses.
See the chapter About the Nottale's formula for hgr and the possibility that Planck length lP and CP2 length R are identical giving G= R2/ℏeff or the article Is the hierarchy of Planck constants behind the reported variation of Newton's constant?.

Did LIGO observe non-standard value of G and are galactic blackholes really supermassive?

I have talked (see this) about the possibility that the Planck length lP is actually the CP2 length R, which is scaled up by a factor of order 103.5 from the standard Planck length. The basic formula for Newton's constant G would be a generalization of the standard formula to give G= R2/ℏeff. There would be only one fundamental scale in TGD, as the original idea indeed was. ℏeff at "standard" flux tubes mediating the gravitational interaction (gravitons) would be by a factor of about n∼ 106-107 larger than h.

Also other values of heff are possible. The mysterious small variations of G known for a long time could be understood as variations of some factors of n. The fountain effect in super-fluidity could correspond to a value of heff/h0=n at gravitational flux tubes larger than the standard value, increased by some integer factor. The value of G would be reduced and would allow particles to get to greater heights already classically. In the Podkletnov effect some factor of n would increase and g would be reduced by a few per cent. A larger value of heff would also induce a larger delocalization height.

Also smaller values are possible and in fact, in condensed matter scales it is quite possible that n is rather small. Gravitation would be stronger but very difficult to detect in these scales. A neutron in the gravitational field of Earth might provide a possible test. The general rule would be that the smaller the scale of dark matter dynamics, the larger the value of G; the maximum value would be Gmax= R2/h0, h=6h0.
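A one-line consistency check of the numbers above: if R = k·lP with k ∼ 103.5, and G = lP2/ℏ = R2/ℏeff, then ℏeff/ℏ = k2 ∼ 107, matching the quoted range n ∼ 106-107.

```python
# Consistency check of the scales quoted in the text, not new physics:
# the assumed ratio R/l_P fixes hbar_eff/hbar.
k = 10 ** 3.5            # assumed ratio R/l_P from the text
hbar_eff_ratio = k ** 2  # follows from G = l_P^2/hbar = R^2/hbar_eff
print(hbar_eff_ratio)    # ~1e7
```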

Are the blackholes detected by LIGO really so massive?

LIGO (see this) has hitherto observed 3 fusions of black holes giving rise to gravitational waves. For the TGD view about the findings of LIGO see this and this. The colliding blackholes were deduced to have unexpectedly large masses: something like 10-40 solar masses, which is regarded as rather strange.

Could it be that the masses were actually of the order of a solar mass and G was actually larger by this factor and heff smaller by this factor?! The mass of the colliding blackholes could be of order solar mass and G would be larger than its normal value - say by a factor in the range [10,50]. If so, LIGO observations would represent the first evidence for the TGD view about quantum gravitation, which is very different from the superstring based view. The fourth fusion was for neutron stars rather than black holes and the stars had a mass of order solar mass.

This idea works if the physics of a gravitating system depends only on G(M+m). That the classical dynamics depends on G(M+m) only follows from the Equivalence Principle. But is this true also for gravitational radiation?

  1. If the power of gravitational radiation distinguishes between different values of M+m when G(M+m) is kept constant, the idea is dead. This seems to be the case. The dependence on G(M+m) only leads to a contradiction at the limit when M+m approaches zero and G(M+m) is fixed: the energy emitted per single period of rotation would be larger than M+m. The natural expectation is that the radiated power per cycle and per mass M+m depends on G(M+m) only as a dimensionless quantity.
  2. From arXiv one can find an article (see this), in which the energy per unit solid angle and frequency radiated in a collision of blackholes is estimated and the outcome is proportional to E2G(M+m)2, where E is the energy of the colliding blackhole.

    The result is proportional to mass squared measured in units of Planck mass squared, as one might indeed naively expect since GM2 is analogous to the total gravitational charge squared measured using Planck mass.

    The proportionality to E2 comes from the condition that dimensions come out correctly. Therefore scaling G upwards would reduce the mass, and the power of gravitational radiation would be reduced like M+m. The power per unit mass depends on G(M+m) only. Gravitational radiation thus allows one to distinguish between two systems with the same Schwarzschild radius, although the classical dynamics does not allow this.

  3. One can express the classical gravitational energy E as a gravitational potential energy proportional to GM/R. This gives dependence on GM only, as the Equivalence Principle for classical dynamics requires, and for collisions of blackholes R is measured using GM as a natural unit.
Remark: The calculation uses the notion of energy, which in general relativity is precisely defined only for stationary solutions. Radiation spoils the stationarity. The calculation of the radiation power in GRT is to some degree artwork, feeding in the classical conservation laws in the post-Newtonian approximation although they are lost in GRT. In the TGD framework the conservation laws are not lost and hold true at the level of M4×CP2.

What about supermassive galactic blackholes?

What about the supermassive black holes in the centers of galaxies: are they really super-massive, or is G super-large? The mass of the Milky Way super-massive blackhole is in the range 105-109 solar masses. The geometric mean is 107 solar masses, of the same order as the standard value n ∼ 107 of R2/GNℏ. Could one think that this blackhole actually has a mass in the range 1-100 solar masses, assignable to an intersection of a galactic cosmic string with itself? How galactic blackholes are formed is not well understood. Now this problem would disappear: galactic blackholes would be there from the beginning!

The general conclusion is that only gravitational radiation allows one to distinguish between different masses M+m for a given G(M+m) in a system consisting of two masses, so that classically the opposite scalings of G and M+m are a symmetry.

See the chapter About the Nottale's formula for hgr and the possibility that Planck length lP and CP2 length R are identical giving G= R2/ℏeff or the article Is the hierarchy of Planck constants behind the reported variation of Newton's constant?.

Deviation from the prediction of standard quantum theory for radiative energy transfer in faraway region

I encountered in FB a highly interesting finding discussed in two popular articles (see this and this). The original article (see this) is behind a paywall but one can find the crucial figure 5 online (see this). It seems that experimental physics is in the middle of the revolution of the century while theoretical physicists straying in the superstring landscape do not have the slightest idea about what is happening.

The size scale of the objects studied - for instance membranes at a temperature of order room temperature T=300 K - is about 1/2 micrometer: the cell length scale range is in question. They produce radiation, and another similar object is heated if there is a temperature difference between the objects. The heat flow is proportional to the temperature difference, and a radiative conductance called Grad characterizes the situation. Planck's black body radiation law, which initiated the development of quantum theory more than a century ago, predicts Grad at large enough distances.

  1. The radiative transfer is larger than predicted by Planck's radiation law at small distances (nearby region) of the order of the average wavelength of the thermal radiation deducible from its temperature. This is not news.
  2. The surprise was that the radiative conductance is 100 times larger than expected from Planck's law at large distances (faraway region) for small objects with size of order .5 micron. This is really big news.
The obvious explanation in the TGD framework is provided by the hierarchy of Planck constants. Part of the radiation has Planck constant heff=n×h0, which is larger than the standard value h=6h0 (a good guess for atoms). This scales up the wavelengths and the size of the nearby region by n. The faraway region can become effectively a nearby region and the conductance increases.
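As a rough check of the scales involved, Wien's displacement law gives the peak thermal wavelength at T = 300 K, which sets the conventional boundary between the nearby and faraway regions; the scaling by n below is the hypothetical heff effect described above, with n = 100 an illustrative value.

```python
WIEN_B = 2.898e-3  # Wien displacement constant, m*K

def thermal_wavelength_um(T_kelvin, n=1):
    """Peak thermal wavelength in microns, scaled by a hypothetical factor n = heff/h."""
    return WIEN_B / T_kelvin * n * 1e6

print(thermal_wavelength_um(300))         # ~9.7 um: conventional near-field scale at 300 K
print(thermal_wavelength_um(300, n=100))  # ~1 mm: such a scaling would turn the 'faraway'
                                          # region into an effective near-field region
```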

My guess is that this unavoidably means the beginning of the second quantum revolution brought by the hierarchy of Planck constants. These experimental findings cannot be swept under the rug anymore.

See the chapter Quantum criticality and dark matter or the article Is the hierarchy of Planck constants behind the reported variation of Newton's constant?.

Galois groups and genes

The question about possible variations of Geff (see this) led again to the old observation that sub-groups of the Galois group could be analogous to conserved genes in that they could be conserved in number theoretic evolution. In small variations the Galois subgroups, as analogs of genes, would change G only a little bit; for instance, the dimension of a Galois subgroup would change slightly. There are also big variations of G in which a new sub-group can emerge.

The analogy between subgroups of Galois groups and genes goes also in the other direction. I proposed a long time ago that genes (or maybe even DNA codons) could be labelled by heff/h=n. This would mean that genes (or even codons) are labelled by a Galois group of a Galois extension (see this) of rationals with dimension n defining the number of sheets of the space-time surface as a covering space. This could give a concrete dynamical and geometric meaning for the notion of gene and it might someday be possible to understand why a given gene correlates with a particular function. This is of course one of the big problems of biology.

One should have some kind of procedure giving rise to hierarchies of Galois groups assignable to genes. One would also like to assign to a letter, codon, and gene an extension of rationals and its Galois group. The natural starting point would be a sequence of so-called intermediate Galois extensions EH leading from rationals or some extension K of rationals to the final extension E. A Galois extension has the property that if a polynomial with coefficients in K has a single root in E, also the other roots are in E, meaning that the polynomial with coefficients in K factorizes into a product of linear polynomials in E. For Galois extensions the defining polynomials are irreducible, so that they do not reduce to a product of polynomials.

Any sub-group H⊂ Gal(E/K) leaves the intermediate extension EH invariant in an element-wise manner as a sub-field of E (see this). Any subgroup H⊂ Gal(E/K) defines an intermediate extension EH, and subgroups H1⊂ H2⊂... define a hierarchy of extensions EH1>EH2>EH3... with decreasing dimension. When the subgroup H is normal - in other words, Gal(E/K) leaves it invariant under conjugation - the quotient Gal(E/K)/H is a group and EH is itself a Galois extension of K. The order |H| is the dimension of E as an extension of EH. This is a highly non-trivial piece of information. The dimension of E factorizes into a product ∏i |Hi| of dimensions for a sequence of groups Hi.

Could a sequence of DNA letters/codons somehow define a sequence of extensions? Could one assign to a given letter/codon a definite group Hi so that a sequence of letters/codons would correspond to a product of some kind of these groups, or should one be satisfied only with the assignment of a standard kind of extension to a letter/codon?

Irreducible polynomials define Galois extensions and one should understand what happens to an irreducible polynomial of an extension EH in a further extension to E. The degree of the extension increases by a factor, which is the dimension of E/EH and also the order of H. Is there a standard manner to construct irreducible extensions of this kind?

  1. What comes into the mathematically uneducated mind of a physicist is the functional composition Pm∘ Pn(x)= Pm(Pn(x)) of polynomials assignable to sub-units (letters/codons/genes) with coefficients in K as an algebraic counterpart for the product of sub-units. Pm(Pn(x)) would be a polynomial of degree m× n in K and a polynomial of degree m in EH, and one could assign to a given gene a fixed polynomial obtained as an iterated function composition. Intuitively it seems clear that in the generic case Pm(Pn(x)) does not decompose into a product of lower order polynomials. One could use also polynomials assignable to codons or letters as basic units. Also polynomials of genes could be fused in the same manner.
  2. If this indeed gives a Galois extension, the dimension m of the intermediate extension should be the same as the order of its Galois group. Composition would be non-commutative but associative, as the physical picture demands. The longer the gene, the higher the algebraic complexity would be. Could functional composition define the rule for how extensions and Galois groups correspond to genes? Very naively, functional composition in the mathematical sense would correspond to composition of functions in the biological sense.
  3. This picture would conform with M8-M4× CP2 correspondence (see this) in which the construction of space-time surface at level of M8 reduces to the construction of zero loci of polynomials of octonions, with rational coefficients. DNA letters, codons, and genes would correspond to polynomials of this kind.
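The degree bookkeeping in item 1 can be checked with a toy computation: under functional composition the degrees of polynomials multiply, so iterated composition makes the total degree factor into a product, in line with the factorization dim E = ∏i |Hi| above. A minimal pure-Python sketch with exact rational coefficients:

```python
# Toy check: composing polynomials of degrees m and n gives degree m*n.
# Polynomials are coefficient lists [a0, a1, ...] with rational coefficients.
from fractions import Fraction

def poly_mul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_add(p, q):
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_compose(p, q):
    """Return p(q(x)) as a coefficient list, via Horner-free accumulation of q^k."""
    result = [Fraction(0)]
    power = [Fraction(1)]           # holds q(x)^k, starting with k = 0
    for a in p:
        result = poly_add(result, [a * c for c in power])
        power = poly_mul(power, q)
    return result

def degree(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return len(p) - 1

P2 = [Fraction(c) for c in (1, 0, 1)]     # x^2 + 1, degree 2
P3 = [Fraction(c) for c in (2, 1, 0, 1)]  # x^3 + x + 2, degree 3
print(degree(poly_compose(P2, P3)))       # 6 = 2*3: degrees multiply under composition
```

Composition is associative but not commutative (P2∘P3 and P3∘P2 are different degree-6 polynomials), matching the requirement stated in item 2.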
Could one say anything about the Galois groups of DNA letters?
  1. Since n=heff/h serves as a kind of quantum IQ, and since molecular structures consisting of a large number of particles are very complex, one could argue that n for DNA or its dark variant realized as dark proton sequences can be rather large and depend on the evolutionary level of the organism and even on the type of cell (neuron vs. soma cell). On the other hand, one could argue that in some sense DNA, which is often thought of as an information processor, could be analogous to an integrable quantum field theory and be solvable in some sense. Notice also that one can start from a background defined by a given extension K of rationals and consider polynomials with coefficients in K. Under some conditions the situation could be like that for rationals.
  2. The simplest guess would be that the 4 DNA letters correspond to the 4 non-trivial finite groups with the smallest possible orders: the cyclic groups Z2, Z3 with orders 2 and 3 plus the 2 finite groups of order 4 (see the table of finite groups in this). The groups of order 4 are the cyclic group Z4 and the Klein group Z2⊕ Z2 acting as a symmetry group of a rectangle that is not a square - its elements have square equal to the unit element. All these 4 groups are Abelian.
  3. On the other hand, polynomial equations of degree not larger than 4 can be solved exactly in the sense that one can write their roots in terms of radicals. Could there exist some kind of connection between the number 4 of DNA letters and the 4 degrees smaller than 5, for which one can write closed expressions for the roots in terms of radicals, as Galois found? Could the polynomials obtained by a repeated functional composition of the polynomials of DNA letters also have this solvability property?

    This could be the case! Galois theory states that the roots of a polynomial are solvable in terms of radicals if and only if its Galois group is solvable, meaning that it can be built up from Abelian groups by a sequence of group extensions (see this).

    Solvability translates to the statement that the group has a so-called subnormal series 1=G0<G1 ...<Gk=G such that Gj-1 is a normal subgroup of Gj and Gj/Gj-1 is an Abelian group: it is essential that the series extends all the way up to G. An equivalent condition is that the derived series G→ G(1) → G(2) → ...→ 1, in which the (j+1):th group is the commutator subgroup of the j:th one, ends in the trivial group.

    If one constructs the iterated polynomials using only the 4 polynomials with Abelian Galois groups, a physicist's intuition suggests that the solvability condition is guaranteed!

  4. The Wikipedia article also states that a finite solvable group is a group whose composition series has only factors that are cyclic groups of prime order. Abelian groups are trivially solvable, nilpotent groups are solvable, and p-groups (whose order is a power of a prime) are solvable; indeed, all finite p-groups are nilpotent. This might relate to the importance of primes and their powers in TGD.

    Every group with fewer than 60 elements is solvable. Fourth-order polynomials can have at most S4 with 24 elements as Galois group and are thus solvable. A fifth-order polynomial can have as Galois group the smallest non-solvable group, the alternating group A5 with 60 elements, and in this case the polynomial is not solvable. Sn is not solvable for n>4, and since Sn as Galois group is favored by its special properties (see this), it would seem that solvable polynomials are exceptions.

    A5 acts as the group of orientation preserving isometries (rotations) of the icosahedron. The icosahedron and a tetrahedron glued to it along one triangular face play a key role in the TGD inspired model of bio-harmony and of the genetic code (see this and this). The gluing of the tetrahedron increases the number of codons from 60 to 64. The gluing also reduces the isometry group to the rotations leaving the common face fixed and makes it solvable: could this explain why the ugly looking gluing of a tetrahedron to the icosahedron is needed? Could the smallest solvable groups and the smallest non-solvable group be crucial for understanding the number theory of the genetic code?
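The solvability criterion above can be checked mechanically for small permutation groups. The following sketch (plain Python; groups encoded as sets of permutation tuples, a brute-force helper written for illustration rather than any standard library) computes the derived series and confirms that S4 is solvable while A5 is not:

```python
# Brute-force solvability test: compute the derived series G > G' > G'' > ...
# and see whether it terminates in the trivial group {e}.
from itertools import product

def compose(p, q):
    # (p*q)(i) = p(q(i)); a permutation is a tuple with p[i] the image of i
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def closure(gens):
    # generate the full group from a set of generators
    n = len(next(iter(gens)))
    group = {tuple(range(n))} | set(gens)
    frontier = set(group)
    while frontier:
        new = set()
        for a, b in product(frontier, group):
            for c in (compose(a, b), compose(b, a)):
                if c not in group:
                    new.add(c)
        group |= new
        frontier = new
    return group

def derived_subgroup(group):
    # commutator subgroup, generated by all a*b*a^-1*b^-1
    comms = {compose(compose(a, b), compose(inverse(a), inverse(b)))
             for a in group for b in group}
    return closure(comms)

def is_solvable(group):
    while len(group) > 1:
        d = derived_subgroup(group)
        if len(d) == len(group):   # derived series stalls: not solvable
            return False
        group = d
    return True

s4 = closure({(1, 0, 2, 3), (1, 2, 3, 0)})        # transposition + 4-cycle
a5 = closure({(1, 2, 0, 3, 4), (1, 2, 3, 4, 0)})  # 3-cycle + 5-cycle
print(len(s4), is_solvable(s4))   # 24 True:  degree-4 polynomials solvable
print(len(a5), is_solvable(a5))   # 60 False: the generic quintic is not
```

The derived series found for S4 is S4 → A4 → V4 → {e}, with cyclic prime-order factors as stated in the Wikipedia characterization above, whereas for A5 the series stalls at A5 itself.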

An interesting question inspired by M8-H duality (see this) is whether solvability could be posed on the octonionic polynomials as a condition guaranteeing that TGD is an integrable theory in the number theoretical sense - or whether it perhaps follows from the conditions posed on the octonionic polynomials. Space-time surfaces in M8 would correspond to zero loci of the real/imaginary parts (in the quaternionic sense) of octonionic polynomials obtained from rational polynomials by analytic continuation. Could solvability relate to the condition guaranteeing M8-H duality, which boils down to the condition that the tangent spaces of the space-time surface are labelled by points of CP2? This requires that the tangent or normal space is associative (quaternionic) and contains a fixed complex sub-space of octonions or, perhaps more generally, that there exists an integrable distribution of complex subspaces of octonions defining an analog of a string world sheet.

See the chapter About the Nottale's formula for hgr and the possibility that Planck length lP and CP2 length R are identical giving G= R2/ℏeff or the article Is the hierarchy of Planck constants behind the reported variation of Newton's constant?.

Is the hierarchy of Planck constants behind the reported variation of Newton's constant?

It has been known for a long time that measurements of G give differing results, with differences between measurements larger than the measurement accuracy (see this and this). This suggests that some new physics might be involved. In the TGD framework the hierarchy of Planck constants heff=nh0, h=6h0, together with the condition that the theory contains the CP2 size scale R as its only fundamental length scale, suggests the possibility that Newton's constant is given by G= R2/ℏeff, where R replaces the Planck length (lP= (ℏ G)1/2 → lP=R) and ℏeff/ℏ is in the range 106-107.

The spectrum of Newton's constant is consistent with Newton's equations if the scaling of ℏeff inducing a scaling of G is accompanied by an opposite scaling of the M4 coordinates in M4× CP2: the dark matter hierarchy would correspond to a discrete hierarchy of scales defined by the breaking of scale invariance. In the special case heff=hgr=GMm/v0 the quantum critical dynamics has the gravitational fine structure constant (v0/c)/4π as coupling constant, and it has no dependence on the value of G or on the masses M and m.

In this article I consider a possible interpretation of the finding of a Chinese research group, which measured two different values of G differing by 47 ppm, in terms of varying heff. Also a model for the fountain effect of superfluidity is discussed: the wave function de-localizes, and the maximal height of the vertical orbit increases because the gravitational acceleration g at the surface of Earth changes when heff changes due to superfluidity. Also the Podkletnov effect is considered. TGD inspired theory of consciousness allows one to speculate about levitation experiences, possibly induced by a modification of Geff at the flux tubes of some part of the magnetic body accompanying the biological body in TGD based quantum biology.

See the chapter About the Nottale's formula for hgr and the possibility that Planck length lP and CP2 length R are identical giving G= R2/ℏeff or the article Is the hierarchy of Planck constants behind the reported variation of Newton's constant?.

How could Planck length be actually equal to much larger CP2 radius?!

The following argument states that the Planck length lP equals the CP2 radius R: lP=R, and that Newton's constant can be identified as G= R2/ℏeff. This idea, looking nonsensical at first glance, was inspired by an FB discussion with Stephen Paul King.

First some background.

  1. I believed for a long time that the Planck length lP would be the CP2 length scale R multiplied by a numerical constant of order 10-3.5. Quantum criticality would have fixed the value of lP and therefore G=lP2/ℏ.
  2. Twistor lift of TGD led to the conclusion that the Planck length lP is essentially the radius of the twistor sphere of M4, so that in TGD the situation seemed to be settled: lP would be a purely geometric parameter rather than a genuine coupling constant. But it is not! One should be able to understand the value of the ratio lP/R, but here quantum criticality, which should determine only the values of genuine coupling parameters, does not seem to help.

    Remark: M4 has a twistor space in the usual conformal sense, with metric determined only apart from a conformal factor, and in the geometric sense as M4× S2: these two twistor spaces are parts of a double fibering.

Could CP2 radius R be the radius of M4 twistor sphere, and could one say that Planck length lP is actually equal to R: lP=R? One might get G= lP2/ℏ from G= R2/ℏeff!
  1. It is indeed important to notice that one has G=lP2/ℏ. ℏ is in TGD replaced with a spectrum of ℏeff=nℏ0, where ℏ= 6ℏ0 is a good guess. At flux tubes mediating gravitational interactions one has

    ℏeff=ℏgr= GMm/v0 ,

    where v0 is a parameter with dimensions of velocity. I recently proposed a concrete physical interpretation for v0 (see this). The value v0=2-12 is suggestive on basis of the proposed applications but the parameter can in principle depend on the system considered.

  2. Could one consider the possibility that the twistor sphere radius for M4 equals the CP2 radius R, so that one would have lP= R after all? This would allow one to circumvent the introduction of Planck length as a new fundamental length and would mean a partial return to the original picture. One would have lP= R and G= R2/ℏeff. ℏeff/ℏ would be of order 107-108!
The problem is that ℏeff varies in large limits so that also G would vary. This does not seem to make sense at all. Or does it?!
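A trivial numerical sanity check ties the two estimates together (the ratio R/lP ~ 10^3.5 is the order of magnitude assumed earlier in this entry):

```python
# If G = lP^2/hbar = R^2/hbar_eff, then hbar_eff/hbar = (R/lP)^2.
# R/lP ~ 10^3.5 is the order of magnitude assumed in the text.
ratio = 10**3.5
hbar_eff_over_hbar = ratio**2
print(f"hbar_eff/hbar ≈ {hbar_eff_over_hbar:.1e}")  # ≈ 1.0e+07
```

This lands in the quoted 10^7-10^8 range for ℏeff/ℏ.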

To get some perspective, consider first the phase transition replacing hbar and more generally hbareff,i with hbareff,f=hgr .

  1. Fine structure constant is what matters in electrodynamics. For a pair of interacting systems with charges Z1 and Z2 one has coupling strength Z1Z2e2/4πℏ= Z1Z2α, α≈ 1/137.
  2. One can also define a gravitational fine structure constant αgr. Only αgr should matter in quantum gravitational scattering amplitudes. αgr would be given by

    αgr= GMm/4πℏgr= v0/4π .

    v0/4π would appear as a small expansion parameter in the scattering amplitudes. This in fact suggests that v0 is analogous to α and a universal coupling constant which could however be subject to discrete number theoretic coupling constant evolution.

  3. The proposed physical interpretation is that a phase transition ℏeff,i→ ℏeff,f=ℏgr at the flux tubes mediating the gravitational interaction between M and m occurs if the perturbation series in αgr=GMm/4πℏ fails to converge (Mm∼ mPl2 is the naive first guess for the critical value). Nature would be theoretician friendly and increase heff, reducing αgr so that the perturbation series converges again.

    Number theoretically this means an increase of algebraic complexity: the dimension n=heff/h0 of the extension of rationals involved increases from ni to nf, and the number n of sheets in the covering defined by the space-time surface increases correspondingly. Also the scale of the sheets would increase by the ratio nf/ni.

    This phase transition can occur also for gauge interactions. For electromagnetism the criterion is that Z1Z2α is so large that perturbation theory fails. The replacement ℏ→ Z1Z2e2/v0 makes v0/4π the coupling strength. The phase transition could occur for atoms having Z≥ 137, which are indeed problematic for the Dirac equation. For color interactions the criterion would mean that v0/4π becomes the coupling strength of color interactions when αs exceeds some critical value. Hadronization would naturally correspond to the emergence of this phase.

    One can raise interesting questions. Is v0 (presumably depending on the extension of rationals) a completely universal coupling strength characterizing any quantum critical system, independent of the interaction making it critical? Could for instance gravitation and electromagnetism be mediated by the same flux tubes? I have assumed that this is not the case. If it were the case, one could have for GMm<mPl2 a situation in which the effective coupling strength is of the form (GMm/Z1Z2e2)(v0/4π).
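The orders of magnitude in the argument above can be sketched numerically (v0=2-12 is the value suggested earlier in the text; the threshold function is a naive reading of the criterion Mm∼ mPl2, with masses in Planck units for illustration):

```python
import math

# alpha_gr = v0/(4*pi) with the suggested v0 = 2^-12
v0 = 2.0**-12
alpha_gr = v0 / (4 * math.pi)
print(f"alpha_gr = {alpha_gr:.2e}")  # ~1.9e-05, well below alpha ~ 7.3e-3

def needs_hgr(M, m, mPl=1.0):
    # naive sketch: perturbation series in G*M*m/(4*pi*hbar) fails
    # once M*m exceeds ~4*pi*mPl^2 (Planck units, illustrative only)
    return M * m / (4 * math.pi * mPl**2) > 1.0

print(needs_hgr(100.0, 1.0), needs_hgr(1.0, 1.0))  # True False
```

So the dark-phase description would kick in only for sufficiently large mass pairs, while the resulting coupling v0/4π is even smaller than the electromagnetic fine structure constant.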

The possibility of the proposed phase transition has rather dramatic implications for both quantum and classical gravitation.
  1. Consider first quantum gravitation. v0 does not depend on the value of G at all! The dependence of G on ℏeff could therefore be allowed, and one could have lP= R. At the quantum level, scattering amplitudes would not depend on G but on v0. I was happy to have found a small expansion parameter v0 but did not realize the enormous importance of the independence on G!

    Quantum gravitation would be like any gauge interaction with dimensionless coupling, which is even small! This might relate closely to the speculated TGD counterpart of AdS/CFT duality between gauge theories and gravitational theories.

  2. But what about classical gravitation? Here G should appear. What could the proportionality of the classical gravitational force to 1/ℏeff mean? The invariance of Newton's equation

    dv/dt =-GM r/r3

    under heff→ xheff would be achieved by the compensating scalings r→ xr and t→ x2t, and hence v→ v/x. Note that these transformations have a general coordinate invariant meaning as transformations of the coordinates of M4 in M4×CP2. The scaling of r means a zooming up of the size of the space-time sheet by x, which is indeed expected to happen in heff→ xheff!
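The claimed invariance can be checked numerically. The sketch below (arbitrary units, a simple leapfrog integrator; all parameter values are illustrative) evolves a Kepler orbit once with (G, r, t) and once with (G/x, x·r, x2·t), and verifies that the second trajectory is the first one zoomed up by x:

```python
# Check that dv/dt = -G*M*r/r^3 is invariant under G -> G/x combined with
# r -> x*r and t -> x^2*t (hence v -> v/x). Arbitrary units.
def integrate(G, M, r0, v0, dt, steps):
    """Leapfrog (velocity Verlet) for a planar Kepler orbit."""
    rx, ry, vx, vy = r0, 0.0, 0.0, v0
    d3 = (rx * rx + ry * ry) ** 1.5
    ax, ay = -G * M * rx / d3, -G * M * ry / d3
    for _ in range(steps):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
        rx += dt * vx;       ry += dt * vy
        d3 = (rx * rx + ry * ry) ** 1.5
        ax, ay = -G * M * rx / d3, -G * M * ry / d3
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    return rx, ry

G, M, x = 1.0, 1.0, 3.0
r1 = integrate(G, M, r0=1.0, v0=1.0, dt=1e-4, steps=20000)
# scaled system: G/x, initial radius x*r0, speed v0/x, time step x^2*dt
r2 = integrate(G / x, M, r0=3.0, v0=1.0 / x, dt=x * x * 1e-4, steps=20000)
print(abs(r2[0] - x * r1[0]) < 1e-6, abs(r2[1] - x * r1[1]) < 1e-6)
```

Both comparisons come out true: the scaled system traces the same orbit magnified by x, in accord with the zooming interpretation.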

What is so intriguing is that this connects to an old problem that I pondered a lot during the period 1980-1990, when I attempted to construct approximate spherically symmetric stationary solutions to the field equations for Kähler action. Naive arguments based on the asymptotic behavior of the solution ansatz suggested that one should have G= R2/ℏ. For a long time I indeed assumed R=lP, but p-adic mass calculations and work with cosmic strings forced me to conclude that this cannot be the case. The mystery was how G= R2/ℏ could be reconciled with G=lP2/ℏ: the solution of the mystery is ℏ→ ℏeff, as I have now - decades later - realized!

See the chapter About the Nottale's formula for hgr and the possibility that Planck length lP and CP2 length R are identical giving G= R2/ℏeff or the article About the physical interpretation of the velocity parameter in the formula for the gravitational Planck constant.

Unexpected support for nuclear string model

Nuclear string model (see this) replaces the shell model in the TGD framework. Completely unexpected support for the nuclear string model emerged from research published by the CLAS Collaboration in Nature (see this). The popular article "Protons May Have Outsize Influence on Properties of Neutron Stars" refers to possible implications for the understanding of neutron stars, but my view is that the implications might dramatically modify the prevailing view about nuclei themselves. The abstract of the popular article reads as follows (see this).

"A study conducted by an international consortium called the CLAS Collaboration, made up of 182 members from 42 institutions in 9 countries, has confirmed that increasing the number of neutrons as compared to protons in the atom’s nucleus also increases the average momentum of its protons. The result, reported in the journal Nature, has implications for the dynamics of neutron stars."

The finding is that protons tend to pair with neutrons. If the number of neutrons increases, the probability of pairing increases too. The binding energy of the pair is liberated as kinetic energy of the pair - rather than becoming kinetic energy of the proton, as the popular text inaccurately states.

Pairing does not fit with the shell model, in which proton and neutron shells correlate very weakly. The weakness of proton-neutron correlations in the nuclear shell model looks somewhat paradoxical in this sense since - as textbooks tell us - it is just the attractive strong interaction between neutron and proton which gives rise to the nuclear binding.

In the TGD based view about the nucleus, protons and neutrons are connected by short color flux tubes so that one obtains what I call a nuclear string (see this). These color flux tubes would bind the nucleons rather than the nuclear force in the conventional sense.

What can one say about correlations between nucleons in the nuclear string model? If the nuclear string has low string tension, one expects that nucleons far away from each other are weakly correlated, but neighboring nucleons correlate strongly due to the presence of the color flux tube connecting them.

Minimization of the repulsive Coulomb energy would favor protons with neutrons as nearest neighbors, so that pairing would be favored. For instance, one could have n-n-n... near the ends of the nuclear string and -p-n-p-n-... in the middle region, with strong correlations and higher kinetic energy. Even more neutrons could lie between the protons if the nucleus is neutron rich. This could also relate to the neutron halo and to the fact that the number of neutrons tends to be larger than that of protons. An optimist could see the experimental finding as support for the nuclear string model.

Color flux tubes can certainly have charge 0, but also charges +1 and -1 are possible, since the string has a quark and an antiquark at its ends, giving uubar, ddbar, udbar, dubar with charges 0, 0, +1, -1. A proton plus a color flux tube with charge -1 would effectively behave as a neutron. Could this kind of pseudo neutrons exist in the nucleus? Or even more radically: could all neutrons in the nucleus be pseudo neutrons of this kind?
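The bond charges follow directly from the standard quark charges (u: +2/3, d: -1/3, an antiquark carrying the opposite charge), as a one-liner tabulation shows:

```python
# Charges of the four quark-antiquark color bonds from the standard
# quark charges u = +2/3, d = -1/3 (antiquark has the opposite sign).
q = {"u": 2 / 3, "d": -1 / 3}
bonds = {f"{a}{b}bar": q[a] - q[b] for a in "ud" for b in "ud"}
for name, charge in bonds.items():
    print(name, round(charge))
# a charge -1 bond (dubar) attached to a proton makes it look like a neutron
```

This prints charges 0, +1, -1, 0 for uubar, udbar, dubar, ddbar respectively.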

The radical view conforms with the model of dark nuclei as dark proton sequences - formed for instance in the Pollack effect (see this) - in which some color bonds can also become negatively charged to reduce the Coulomb repulsion. Dark nuclei have scaled-down binding energy and scaled-up size. They can decay to ordinary nuclei, liberating almost all of the ordinary nuclear binding energy: this could explain "cold fusion" (see this).

See the chapter Nuclear string model.

About the physical interpretation of the velocity parameter in the formula for the gravitational Planck constant

Nottale's formula for the gravitational Planck constant ℏgr= GMm/v0 involves a parameter v0 with dimensions of velocity. I have worked on the quantum interpretation of the formula, but the physical origin of v0 - or equivalently of the dimensionless parameter β0=v0/c (to be used in the sequel) - has remained open hitherto. In the following a possible interpretation based on the many-sheeted space-time concept, many-sheeted cosmology, and zero energy ontology (ZEO) is discussed.

A generalization of the Hubble formula β=L/LH for the cosmic recession velocity, where LH= c/H is the Hubble length and L is the radial distance to the object, is suggestive. This interpretation would suggest that some kind of expansion is present. The fact however is that stars, planetary systems, and planets do not seem to participate in the cosmic expansion. In the TGD framework this is interpreted in terms of quantal jerk-wise expansion taking place as relatively rapid expansions analogous to atomic transitions or quantum phase transitions. The TGD based variant of the Expanding Earth model assumes that during the Cambrian explosion the radius of Earth expanded by a factor of 2.

There are two measures for the size of the system. The M4 size LM4 is identifiable as the maximum of the radial M4 distance from the tip of the CD associated with the center of mass of the system along the light-like geodesic at the boundary of the CD. The system also has a size Lind defined in terms of the induced metric of the space-time surface, which is space-like at the boundary of the CD. One has Lind<LM4. The identification β0= LM4/LH<1 does not allow the identification LH=LM4. LH would however naturally correspond to the size of the magnetic body of the system, in turn identifiable as the size of the CD.

One can deduce an estimate for β0 by approximating the space-time surface near the light-cone boundary as a Robertson-Walker cosmology, and expressing the mass density ρ defined as ρ=M/VM4, where VM4=(4π/3) LM43 is the M4 volume of the system. ρ can be expressed as a fraction ε2 of the critical mass density ρcr= 3H2/8π G. This leads to the formula β0= [rS/LM4]1/2 × (1/ε), where rS is the Schwarzschild radius.

This formula is tested for the planetary system and Earth. The dark matter assignable to Earth can be identified as the innermost part of the inner core, with a volume which is .01 per cent of the volume of Earth. Also the consistency of the Bohr quantization for dark and ordinary matter is discussed and leads to a number theoretical condition on the ratio of the ordinary and dark masses.
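A back-of-envelope version of the estimate, with Earth as the test case, can be sketched as follows (the choices LM4 ≈ Earth radius and ε = 1 are illustrative assumptions, not values fixed by the text):

```python
import math

# beta0 = sqrt(rS/L_M4)/eps from rho = eps^2 * rho_cr; Earth numbers.
G, c = 6.674e-11, 3.0e8   # SI units
ME = 5.97e24              # Earth mass, kg
rS = 2 * G * ME / c**2    # Schwarzschild radius of Earth, ~9 mm
L_M4 = 6.37e6             # assumed L_M4 ~ Earth radius, m
eps = 1.0                 # assumed fraction of critical density
beta0 = math.sqrt(rS / L_M4) / eps
print(f"rS = {rS*1e3:.1f} mm, beta0 = {beta0:.1e}")
```

With these crude choices β0 comes out at a few times 10-5, within an order of magnitude of the favored v0/c = 2-12 ≈ 2.4× 10-4; a value ε<1 would raise the estimate.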

See the chapter Quantum Criticality and dark matter or the article About the physical interpretation of the velocity parameter in the formula for the gravitational Planck constant.

An island at which body size shrinks

I encountered on Facebook an article claiming that the bodies of animals shrink on the island of Flores in Indonesia. This news is not Dog's days news (Dog's days news is a direct translation of the Finnish synonym for fake news).

Both animals and humans really are claimed to have shrunk in size. The bodies of hominins (predecessors of humans), humans, and even elephants have shrunk at Flores.

  1. In 2003, researchers discovered in a mountain cave on the island of Flores fossils of a tiny, humanlike individual. It had a chimp-sized brain and was 90 cm tall. Several villages in the area are inhabited by people with an average body height of about 1.45 meters.
  2. Could the small size of the recent humans at Flores be due to interbreeding of modern humans with Homo floresiensis (HF) that occurred a long time ago? The hypothesis could be tested by studying the DNA of HF. Since the estimated age of the HF fossils was 10,000 years, researchers hoped that they could find some DNA of HF. DNA was not found, but the researchers realized that if HF had interbred with humans, this DNA could show itself in the DNA of modern humans at Flores. It was found that this DNA can be identified but differs insignificantly from that of modern humans. It was also found that the age of the fossils was about 60,000 years.
  3. Therefore it seems that the interbreeding did not cause the reduction in size. The study also showed that at least twice in the ancient history of humans and their relatives a species arrived at Flores and then grew shorter. This happened also to elephants, which arrived at Flores twice.
This looks really weird! Weirdness in this proportion allows some totally irresponsible speculation.
  1. The hierarchy of Planck constants heff=nh0 (h=6h0 is a good guess), assigned with dark matter as phases of ordinary matter and responsible for macroscopic quantum coherence, is central in TGD inspired biology. Quantum scales are proportional to heff or its powers (heff2 for atoms, heff for Compton length, and heff1/2 for cyclotron states).
  2. The value of gravitational Planck constant hgr (=heff) at the flux tubes mediating gravitational interaction could determine the size scale of the animals. Could one consider a local anomaly in which the value of hgr is reduced and leads to a shrinkage of also body size?
  3. hgr is of the form hgr=GMDm/v0, where v0 is a velocity parameter (see this, this, and this). MD is a large dark mass of order 10-4 times the mass of Earth. The gravitational Compton length Λgr= hgr/m=GMD/v0 for a particle with mass m does not depend on the mass of the particle - this conforms with the Equivalence Principle.

    The estimate of this article gives Λgr= 2π GMD/v0= 2.9× rS(E), where the Schwarzschild radius of Earth is rS(E)=2GME=.9 mm. This gives Λgr= 2.6 mm, which corresponds to the p-adic length scale L(k=187). Brain contains neuron blobs with this size scale. The size scale of an organism is expected to be some not too large multiple of this scale.

    Could one think that v0 at Flores is larger than normal and reduces the value of Λgr, so that the size of the gravitational part of the magnetic body of any organism shrinks, and that this gradually leads to a reduction of the size of the biological body? A second possibility is that the value of the dark mass MD is smaller at Flores than elsewhere: one would have a dark analog of an ordinary local gravitational anomaly. The required reduction of hgr should be rather large, so that the first option looks more plausible.
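The mass-independence of Λgr is easy to see numerically. The sketch below uses SI units with Λgr = ℏgr/(m c); MD = 10-4 ME follows the text, while v0 = 2-12 c is an assumed value, so the exact millimeter-scale prefactor is illustrative only:

```python
# Lambda_gr = hbar_gr/(m*c) = G*M_D/(v0*c): independent of particle mass m,
# as the Equivalence Principle requires. Illustrative SI numbers.
G, c = 6.674e-11, 3.0e8
ME = 5.97e24
MD = 1e-4 * ME            # dark mass assumed in the text
v0 = 2.0**-12 * c         # assumed velocity parameter
values = []
for m in (9.1e-31, 1.67e-27):          # electron and proton masses, kg
    hbar_gr = G * MD * m / v0          # gravitational Planck constant
    values.append(hbar_gr / (m * c))   # gravitational Compton length
print([f"{v*1e3:.2f} mm" for v in values])  # same ~mm value for both masses
```

Both particles give the same Λgr of a couple of millimeters, matching the order of magnitude quoted above (the precise 2.6 mm figure depends on the 2π conventions and the exact v0).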

See the chapter Quantum Criticality and dark matter or the article An island at which body size shrinks.

Badly behaving photons again

I wrote about two years ago about a strange halving of the unit of angular momentum for photons. The article had the title Badly behaving photons and space-time as 4-surface.

Now, two years after writing the above comments, I encountered a popular article (see this) telling about this strange halving of the photon angular momentum unit. I found nothing new, but my immediate reaction was that the finding could be seen as a direct proof of the heff=nh0 hierarchy, where h0 is the minimal value of Planck constant, which need not be the ordinary Planck constant h as I have often assumed in previous writings.

Various arguments indeed support h=6h0. This hypothesis would explain the strange findings about the hydrogen atom having what Mills calls hydrino states with larger binding energy than the normal hydrogen atom (see this): the increase of the binding energy would follow from the proportionality of the binding energy to 1/heff2. For n0=6→ n<6 the binding energy is scaled up as (6/n)2. The values n=1,2,3 dividing 6 are preferred. A second argument supporting h=6h0 comes from the model for color vision (see this).

What is the interpretation of the ordinary photon angular momentum for n=n0= 6? Quantization of angular momentum as multiples of ℏ0 reads as l= l0ℏ0= (l0/6)ℏ, l0=1,2,..., so that fractional angular momenta are possible. l0=6 gives the ordinary quantization, for which the wave function has the same value at all 6 sheets of the covering. l0=3 gives the claimed half-quantization.
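The two scalings implied by h = 6h0 can be tabulated in a few lines (13.6 eV is the ordinary hydrogen ground state; the rest follows directly from the proportionalities stated above):

```python
# Scalings implied by h = 6*h0: for heff = n*h0 the hydrogen binding
# energy scales as E_B(n) = (6/n)^2 * 13.6 eV, and the angular momentum
# unit is l = (l0/6)*hbar.
E0 = 13.6  # eV, ordinary hydrogen ground state binding energy
for n in (1, 2, 3, 6):
    print(n, round(E0 * (6 / n)**2, 1))   # 489.6, 122.4, 54.4, 13.6 eV
for l0 in (3, 6):
    print(l0, l0 / 6)                     # 0.5 (claimed halving), 1.0
```

n = 6 reproduces the ordinary atom and l0 = 3 reproduces the reported half-unit of photon angular momentum.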

See the chapter Quantum Criticality and dark matter or the article Badly behaving photons and space-time as 4-surface.

Two new findings related to high Tc super-conductivity

I learned simultaneously about two findings related to high Tc super-conductivity leading to a proposal of a general mechanism of bio-control in which small signal can serve as a control knob inducing phase transition producing macroscopically quantum coherent large heff phases in living matter.

1. High Tc superconductivity at room temperature and pressure

Indian physicists Kumar Thapa and Anshu Pandey have found evidence for superconductivity at ambient (room) temperature and pressure in nanostructures (see this). There are also earlier claims about room temperature superconductivity that I have discussed in my writings.

1.1 The effect

Here is part of the abstract of the article of Kumar Thapa and Anshu Pandey.

We report the observation of superconductivity at ambient temperature and pressure conditions in films and pellets of a nanostructured material that is composed of silver particles embedded into a gold matrix. Specifically, we observe that upon cooling below 236 K at ambient pressures, the resistance of sample films drops below 10-4 Ohm, being limited by instrument sensitivity. Further, below the transition temperature, samples become strongly diamagnetic, with volume susceptibilities as low as -0.056. We further describe methods to tune the transition to temperatures higher than room temperature.

Over the years I have developed a TGD based model of high Tc superconductivity and of bio-superconductivity (see this and this).

Dark matter is identified as phases of ordinary matter with a non-standard value heff/h=n of Planck constant (see this) (h=6h0 is the most plausible option). Charge carriers are dark macroscopically quantum coherent heff/h0=n phases of the ordinary charge carriers at magnetic flux tubes, along which the supra current can flow. The only source of dissipation relates to the transfer of ordinary particles to the flux tubes, which involves a phase transition changing the value of heff.

This superconductivity is essential also for microtubules, which exhibit signatures of the generation of this kind of phase at critical frequencies of AC voltages serving as a metabolic energy feed, providing for the charged particles the energy that they need in the heff/h0=n phase.

Large heff phases with the same parameters as the ordinary phase typically have larger energies than the ordinary phase. For instance, atomic binding energies scale like 1/heff2, and cyclotron energies and harmonic oscillator energies quite generally like heff. A free particle in a box is however quantum critical in the sense that the energy scale E= ℏeff2/2mL2 does not depend on heff if one has L∝ heff. At the space-time level this is true quite generally for external (free) particles identified as minimal 4-surfaces. Quantum criticality means independence of various coupling parameters.
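The particle-in-a-box criticality is a one-line computation (arbitrary units; a toy check of the statement, not a physical model):

```python
# Quantum criticality of a particle in a box: with the box size scaling
# as L ∝ heff, the energy scale E = heff^2/(2*m*L^2) is heff-independent.
m, L0 = 1.0, 1.0
for n in (1, 6, 12):              # heff = n*h0 in units with h0 = 1
    heff, L = float(n), n * L0    # L scales like heff
    E = heff**2 / (2 * m * L**2)
    print(n, E)                   # E = 0.5 for every n
```

The heff-dependence cancels identically, unlike for atomic or cyclotron energies.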

What is interesting is that Ag and Au have a single valence electron. The obvious guess would be that the valence electrons become dark and form Cooper pairs in the transition to superconductivity. What is also interesting is that the basic claim of the layman researcher David Hudson is that ORMEs, or mono-atomic elements as he calls them, include also gold. These claims are of course not taken seriously by academic researchers. In the language of quantum physics the claim is that ORMEs behave like macroscopic quantum systems. I decided to play with the thought that the claims are correct, and this hypothesis later served as one of the motivations for the hypothesis about dark matter as large heff phases: this hypothesis follows from adelic physics (see this), which is a number theoretical generalization of ordinary real number based physics.

The TGD explanation of high Tc superconductivity and its biological applications strongly suggests that a feed of "metabolic" energy is quite generally a prerequisite of high Tc superconductivity. The natural question is whether experimenters might have found something suggesting that an external energy feed - usually seen as a prerequisite for self-organization - is involved with high Tc superconductivity. During the same day I got an FB link to another interesting finding related to high Tc superconductivity in cuprates, suggesting a positive answer to this question!

1.2 The strange observation of Brian Skinner about the effect

After writing the above comments I learned from a popular article (see this) about an objection (see this) challenging the claimed discovery (see this). The claimed finding received a lot of attention, and physicist Brian Skinner at MIT decided to test the claims. At first the findings looked quite convincing to him. He however decided to look at the noise in the measured value of the volume susceptibility χV. χV relates the magnetic field B in the superconductor to the external magnetic field Bext via the formula B= (1+χV)Bext (in units with μ0=1 one has Bext=H, where H is usually used).

For diamagnetic materials χV is negative since they tend to repel external magnetic fields. For superconductors one has χV=-1 in the ideal situation. The situation is not however ideal, and a stepwise change of χV from χV=0 to some negative value satisfying |χV| <1 serves as a signature of high Tc superconductivity. Both the superconducting and the ordinary phase would be present in the sample.
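As a quick numerical illustration of this signature, using the χV ≈ -0.056 value from the abstract quoted above:

```python
# B = (1 + chi_V) * B_ext: the reported chi_V ≈ -0.056 expels only ~5.6%
# of the external field, far from the ideal superconductor value chi_V = -1.
def b_inside(b_ext, chi_v):
    return (1 + chi_v) * b_ext

print(b_inside(1.0, -0.056))  # 0.944: mixed phase, partial expulsion
print(b_inside(1.0, -1.0))    # 0.0:   complete Meissner expulsion
```

The small magnitude of χV is consistent with only a fraction of the sample being superconducting.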

Figure 3a of the authors' article gives χV as a function of temperature for some values of Bext, with the color of the curve indicating the value of Bext. Note that χV depends on Bext, whereas in a strictly linear situation it would not do so. There is indeed a transition at the critical temperature Tc= 225 K reducing χV=0 to a negative value in the range χV ∈ [-0.05, -.06] having no visible temperature dependence but decreasing somewhat with Bext.

The problem is that the fluctuations of χV for the green curve (Bext=1 Tesla) and the blue curve (Bext=0.1 Tesla) have the same shape, with the blue curve only shifted downward relative to the green one (the shift corresponds to somewhat larger dia-magnetism for the lower value of Bext). If I have understood correctly, the finding applies only to these two curves and to one sample, corresponding to Tc= 256 K. The article reports superconductivity with Tc varying in the range [145,400] K.

The pessimistic interpretation is that this part of the data is fabricated. A second possibility is that human error is involved. The third interpretation would be that the random looking variation with temperature is not a fluctuation but represents a genuine temperature dependence: this possibility looks infeasible but can be tested by repeating the measurements or simply by looking at whether it is present in the other measurements.

1.3 TGD explanation of the effect found by Skinner

One should understand why the effect found by Skinner occurs only for certain pairs of magnetic field strengths Bext, and why the shape of the pseudo fluctuations is the same in these situations.

Suppose that Bext is realized as flux tubes of fixed radius. The magnetization is due to the penetration of the magnetic field into the ordinary fraction of the sample as flux tubes. Suppose that the superconducting flux tubes are assignable to 2-D surfaces as in high Tc superconductivity. Could the fraction of superconducting flux tubes with non-standard value of heff depend on the magnetic field and temperature in a predictable manner?

The pseudo fluctuation should have the same shape as a function of temperature for the two values of the magnetic field involved but not for other pairs of magnetic field strengths.

  1. Concerning the selection of only preferred pairs of magnetic fields, the de Haas-van Alphen effect (see this) gives a clue. As the intensity of the magnetic field is varied, magnetization and some other observables vary periodically as a function of 1/B; this is used to deduce the shape of the Fermi surface. In particular, this is true for χV.

    The period P is

    PH-A == 1/BH-A = 2π e/(ℏ Se) ,

    where Se is the extremal cross-sectional area of the Fermi surface in the plane perpendicular to the magnetic field and can be interpreted as the area of the electron orbit in momentum space (for an illustration see this).

    The de Haas-van Alphen effect can be understood in the following manner. As B increases, cyclotron orbits contract. For certain increments of 1/B the (n+1):th orbit contracts to the n:th orbit, so that the set of orbits is identical for values of 1/B appearing periodically. This causes the periodic oscillation of, say, magnetization. From this one learns that the electrons rotating at the magnetic flux tubes of Bext are responsible for the magnetization.

  2. One can get a more detailed theoretical view of the de Haas-van Alphen effect from the article of Lifschitz and Mosevich (see this). In a reasonable approximation one can write

    P= e ℏ/(me EF) = [4α/(32/3π1/3)]× [1/Be] , Be == e/ae2 = x-2× 16 Tesla ,

    ae= (V/N)1/3 = x a , a=10-10 m .

    Here N/V corresponds to the valence electron density, assumed to form a free Fermi gas with Fermi energy EF= ℏ2(3π2N/V)2/3/2me. a=10-10 m corresponds to the atomic length scale. α≈ 1/137 is the fine structure constant. For P one obtains the approximate expression

    P≈ .15 x2 Tesla-1 .

    If the difference Δ(1/Bext) for Bext=1 Tesla and Bext=.1 Tesla corresponds to a k-multiple of P, one obtains the condition

    kx2 ≈ 60 .

  3. Suppose that Bext,1=1 Tesla and Bext,2=.1 Tesla differ by a period P of the de Haas-van Alphen effect. This would predict the same value of χV for the two field strengths, which is not true. The formula used for χV however holds true only inside a given flux tube: call this value χV,H-A.

    The fraction f of flux tubes penetrating into the superconductor can depend on the value of Bext and this could explain the deviation. f can depend also on temperature. The simplest guess is that the two effects separate:

    χV= χV,H-A(BH-A/Bext)× f(Bext,T) .

    Here χV,H-A has the period PH-A as a function of 1/Bext and f characterizes the fraction of penetrated flux tubes.

  4. What could one say about the function f(Bext,T)? BH-A=1/PH-A has dimensions of magnetic field and depends on 1/Bext periodically. The dimensionless ratio Ec,H-A/T of the cyclotron energy Ec,H-A= ℏ eBH-A/me and the thermal energy T, together with Bext, could serve as arguments of f(Bext,T) so that one would have

    f(Bext,T)=f1(Bext)f2(x) ,

    x=T/Ec,H-A(Bext) .

    One can consider also the possibility that Ec,H-A is the cyclotron energy with ℏeff=nℏ0 and therefore larger than otherwise. For heff=h and Bext= 1 Tesla one would have Ec= .8 K, which is of the same order of magnitude as the variation scale of the pseudo fluctuation. For instance, periodicity as a function of x might be considered.

    If Bext,1=1 Tesla and Bext,2=.1 Tesla differ by a period P one would have

    χV(Bext,1,T)/χV(Bext,2,T) =f1(Bext,1)/f1(Bext,2)

    independently of T. For arbitrary pairs of magnetic fields this does not hold true. This property and also the predicted periodicity are testable.
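The arithmetic behind the pairing condition above is easy to verify with the text's approximate period P ≈ .15 x2 Tesla-1:

```python
# Check of the de Haas-van Alphen pairing condition: for B_ext = 1 T and 0.1 T
# the difference Delta(1/B) is 9 Tesla^-1, and demanding that it equal k*P
# with P = 0.15*x**2 Tesla^-1 (the text's approximation) gives k*x**2 ~ 60.
def dhva_period(x):
    """Approximate dHvA period in 1/B (Tesla^-1); x = a_e in units of 1e-10 m."""
    return 0.15 * x**2

B1, B2 = 1.0, 0.1                     # Tesla
delta_inv_B = 1 / B2 - 1 / B1         # 9 Tesla^-1
kx2 = delta_inv_B / dhva_period(1.0)  # k*x**2 from Delta(1/B) = k*P
print(round(kx2))                     # -> 60, the condition quoted in the text
```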

2. Transition to high Tc superconductivity involves positive feedback

The discovery of positive feedback in the transition to high Tc superconductivity is described in the popular article "Physicists find clues to the origins of high-temperature superconductivity" (see this). Haoxian Li et al at the University of Colorado at Boulder and the Ecole Polytechnique Federale de Lausanne have published a paper on their experimental results obtained by using ARPES (Angle Resolved Photoemission Spectroscopy) in Nature Communications (see this).

The article reports the discovery of a positive feedback loop that greatly enhances the superconductivity of cuprate superconductors. The abstract of the article is here.

Strong diffusive or incoherent electronic correlations are the signature of the strange-metal normal state of the cuprate superconductors, with these correlations considered to be undressed or removed in the superconducting state. A critical question is if these correlations are responsible for the high-temperature superconductivity. Here, utilizing a development in the analysis of angle-resolved photoemission data, we show that the strange-metal correlations don’t simply disappear in the superconducting state, but are instead converted into a strongly renormalized coherent state, with stronger normal state correlations leading to stronger superconducting state renormalization. This conversion begins well above Tc at the onset of superconducting fluctuations and it greatly increases the number of states that can pair. Therefore, there is positive feedback––the superconductive pairing creates the conversion that in turn strengthens the pairing. Although such positive feedback should enhance a conventional pairing mechanism, it could potentially also sustain an electronic pairing mechanism.

The TGD explanation of the positive feedback could be the following. The formation of dark electrons requires "metabolic" energy. The combination of dark electrons into Cooper pairs however liberates energy. If the liberated energy is larger than the energy needed to transform an electron to its dark variant, the pairing can transform more electrons to the dark state so that one obtains a spontaneous transition to high Tc superconductivity. The condition for positive feedback could serve as a criterion in the search for materials allowing high Tc superconductivity.
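A toy energy-bookkeeping sketch of the proposed feedback (the energy units and parameter values are illustrative assumptions, not numbers from the article): the cascade runs away only when pairing liberates more energy than the dark transformations cost.

```python
# Toy model of the positive feedback: promoting an electron to the dark state
# costs e_cost; every completed Cooper pair liberates e_pair back to the budget.
# If e_pair > 2*e_cost the budget grows and the transition becomes spontaneous.
def cascade(n_electrons, e_cost, e_pair, seed_energy):
    """Number of dark electrons produced before the energy budget is exhausted."""
    budget = seed_energy
    dark = 0
    while dark < n_electrons and budget >= e_cost:
        budget -= e_cost            # transform one electron to its dark variant
        dark += 1
        if dark % 2 == 0:           # two dark electrons form a Cooper pair
            budget += e_pair
    return dark

print(cascade(1000, e_cost=1.0, e_pair=3.0, seed_energy=2.0))  # -> 1000 (runaway)
print(cascade(1000, e_cost=1.0, e_pair=1.5, seed_energy=2.0))  # -> 3 (stalls)
```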

The mechanism could be fundamental in TGD inspired quantum biology. The spontaneous occurrence of the transition would make it possible to induce large scale phase transitions by using a very small signal acting therefore as a kind of control knob. For instance, this could apply to bio-superconductivity in the TGD sense, and also to the transition of protons to dark proton sequences giving rise to dark analogs of nuclei with a scaled down nuclear binding energy at magnetic flux tubes explaining the Pollack effect. This transition could also be essential in the TGD based model of "cold fusion" likewise based on the analog of the Pollack effect. It could also be involved with the TGD based model for the finding that AC voltage at critical frequencies induces a macroscopic quantum phase in microtubules (see this).

See the chapter Quantum criticality and dark matter or the article Two new findings related to high Tc super-conductivity.

Two different values for the metallicity of Sun and heating of solar corona: two puzzles with a common solution?

Solar corona could also be a seat of dark nucleosynthesis and there are indications that this is the case (see this). The metallicity of a stellar object gives important information about its size, age, temperature, brightness, etc. The problem is that measurements give two widely different values for the metallicity of the Sun depending on how one measures it. One obtains 1.3 per cent from the absorption lines of the radiation from the Sun and 1.8 per cent from solar seismic data. Solar neutrinos also give the latter value. What could cause the discrepancy?

Problems do not in general appear alone. There is also a second old problem: what is the origin of the heating of the solar corona? Where does the energy needed for the heating come from?

The TGD proposal is based on a model, which emerged initially as a model for "cold fusion" (not really fusion) in terms of dark nucleosynthesis, which produces dark scaled up variants of ordinary nuclei as dark proton sequences with a much smaller binding energy. This can happen even in living matter: the Pollack effect, involving irradiation by IR light of water bounded by a gel phase, creates negatively charged regions from which part of the protons go somewhere. They could go to magnetic flux tubes and form dark nuclei. This could explain the reported transmutations in living matter not taken seriously by academic nuclear physicists.

The TGD proposal is that the protons transform to dark proton sequences at magnetic flux tubes with a nonstandard value of Planck constant heff/h0=n: these are dark nuclei with scaled up size. Dark nuclei can transform to ordinary nuclei by heff→ h (h= 6h0 is the most plausible option) and liberate almost all of the nuclear binding energy in the process. The outcome would be "cold fusion".
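As a rough numerical illustration (the 7 MeV per nucleon figure is a generic order of magnitude for light nuclei, not a number from the text), the 2-11 scaling implies that the heff→ h transition liberates practically the whole ordinary binding energy:

```python
# Binding energy scaling for dark nuclei: scaled down by 2**-11 relative to
# ordinary nuclei, so the transition heff -> h liberates almost all of it.
E_ordinary = 7.0e6              # eV per nucleon, typical for light nuclei
scale = 2.0 ** -11              # size/binding-energy scaling used in the text
E_dark = E_ordinary * scale
liberated_fraction = 1 - scale
print(round(E_dark))                 # -> 3418 eV, a few keV per nucleon
print(round(liberated_fraction, 4))  # -> 0.9995
```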

This leads to a vision about pre-stellar evolution. First came dark nucleosynthesis, which heated the system and eventually led to a temperature at which ordinary nuclear fusion started. This process could occur also outside stellar cores - say in planetary interiors - and a considerable part of nuclei could be created outside stars.

A good candidate for the site of dark nucleosynthesis would be the solar corona. Dark nucleosynthesis could heat the corona and create metals also there. They would absorb the radiation coming from the solar core and reduce the measured effective metallicity to 1.3 per cent.

See the chapter Cold fusion again or the article Morphogenesis in TGD Universe.

About Comorosan effect in the clustering of RNA II polymerase proteins

The time scales τ equal to 5, 10, and 20 seconds appear in the clustering of RNA II polymerase proteins and Mediator proteins (see this and the previous posting). What is intriguing is that the so called Comorosan effect involves the time scale of 5 seconds and its multiples, claimed by Comorosan a long time ago to be universal time scales in biology. The origin of these time scales has remained more or less a mystery, although I have considered several TGD inspired explanations; the explanation discussed below is based on the notion of gravitational Planck constant (see this).

One can consider several starting point ideas, which need not be mutually exclusive.

  1. The time scales τ associated with RNA II polymerase and perhaps more general bio-catalytic systems could correspond to the durations of processes ending with a "big" state function reduction. In zero energy ontology (ZEO) there are two kinds of state function reductions. "Small" reductions - analogs of weak measurements - leave the passive boundary of the causal diamond (CD) unaffected and thus give rise to self as a generalized Zeno effect. The states at the active boundary change by a sequence of unitary time evolutions followed by measurements inducing also time localization of the active boundary of CD. The size of CD increases and gives rise to a flow of time defined as the temporal distance between the tips of CD. "Big" reductions change the roles of the passive and active boundaries and mean the death of self. The process with duration of τ could correspond to a life-time of self assignable to CD.

    Remark: It is not quite clear whether CD can disappear and be generated from vacuum. In principle this is possible, and the generation of mental images as sub-selves and sub-CDs could correspond to this kind of process.

  2. I have proposed (see this) that Josephson junctions are formed between reacting molecules in bio-catalysis. These could correspond to the shortened flux tubes. The difference EJ=ZeV of the Coulomb energy of the Cooper pair over the flux tube defining the Josephson junction between molecules would correspond to the Josephson frequency fJ= 2eV/heff. If this frequency corresponds to τJ= 5 seconds, heff should be rather large since EJ is expected to be above thermal energy at physiological temperatures.

    Could Josephson radiation serve as a kind of synchronizing clock for the state function reductions so that its role would be analogous to that of EEG in the case of the brain? A more plausible option is that Josephson radiation is a reaction to the presence of cyclotron radiation generated at the magnetic body (MB) and performing control actions on the biological body (BB) defined in a very general sense. In the case of the brain, dark cyclotron radiation would generate EEG rhythms responsible for control via the genome, and dark generalized Josephson radiation modulated by nerve pulse patterns would mediate sensory input to the MB at EEG frequencies.

    A good guess is that the energy in question corresponds to the Josephson energy for a protein through the cell membrane acting as a Josephson junction and giving rise to an ionic channel or pump. This energy could be universal and therefore the same also in molecular reactions. The flux tubes themselves have universal properties.

  3. The hypothesis ℏeff= ℏgr= GMm/β0c of Nottale for the value of the gravitational Planck constant gives large ℏ. Here v0 = β0c has dimensions of velocity. For dark cyclotron photons this gives a large energy Ec∝ ℏgr and for dark Josephson photons a small frequency fJ∝ 1/hgr. The Josephson time scale τJ would be proportional to the mass m of the charged particle and therefore to the mass number of the ion involved. The cyclotron time scale does not depend on the mass of the charged particle at all, and now sub-harmonics of τc are natural.
The time scales assignable to CD or the life-time of self in question could correspond to either the cyclotron or the Josephson time scale τ.
  1. If one requires that multiples of the time scale 5 seconds are possible, Josephson radiation is favoured since the Josephson time scale is proportional to hgr ∝ m ∝ A, A the mass number of the ion.

    The problem is that the values A= 2,3,4,5 are not plausible for ordinary nuclei in living matter. Dark nuclei at magnetic flux tubes consisting of dark proton sequences could however have an arbitrary number of dark protons, and if dark nuclei appear at flux tubes defining Josephson junctions, one would have the desired hierarchy.

  2. Although cyclotron frequencies do not naturally have sub-harmonics, MB could adapt to the situation by changing the thickness of its flux tubes and, by flux conservation, the magnetic field strength to which fc is proportional. This would allow MB to produce cyclotron radiation with the same frequency as Josephson radiation so that MB and BB would be in resonant coupling.
Consider now the model quantitatively.
  1. For ℏeff= ℏgr one has

    r= ℏgr/ℏ= GMDm/(ℏβ0c) = 4.5 × 1014× (m/mp) (y/β0) .

    Here y=MD/ME gives the ratio of the dark mass MD to the Earth mass ME. One can consider 2 favoured values for m corresponding to the proton mass mp and the electron mass me.

  2. E= heff f gives the concrete relationship f =(E/eV) × 2.4 × 1014× (h/heff) Hz between frequencies and energies. This gives

    x=E/eV = 0.4× r × (f/1014 Hz) .

  3. If the cyclotron frequency fc=300 Hz of the proton for Bend=.2 Gauss corresponds to a bio-photon energy of x eV, one obtains the condition

    r=GMDmp/(ℏβ0c) ≈ .83 × 1012x .

    Note that the cyclotron energy does not depend on the mass of the charged particle. One obtains for the relation between Josephson energy and Josephson frequency the condition

    EJ/eV = 0.4× .83 × 10-2× (m/mp)× x× (fJ/Hz) , EJ= ZeV .

    One should not confuse eV in ZeV with the unit of energy. Note also that the value of the Josephson energy does not depend on heff so that there is no actual mass dependence involved.
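The calibration chain above can be checked with standard constants; Bend=.2 Gauss and the bio-photon energy E= x eV are the text's inputs, and small differences from the quoted numbers come from the text's rounding of fc to 300 Hz.

```python
# Numerical check of the calibration: proton cyclotron frequency in 0.2 Gauss
# and the relation r = heff/h = E/(h*f_c) for a bio-photon energy E = x eV.
import math

e = 1.602e-19    # C, elementary charge
m_p = 1.673e-27  # kg, proton mass
h = 6.626e-34    # J s, Planck constant
eV = 1.602e-19   # J

B_end = 0.2e-4                            # 0.2 Gauss in Tesla
f_c = e * B_end / (2 * math.pi * m_p)     # proton cyclotron frequency
print(round(f_c))                         # -> 305 Hz, the text's ~300 Hz

r_per_x = eV / (h * f_c)                  # r for x = 1, i.e. E = 1 eV
print(f"{r_per_x:.2e}")                   # ~7.9e11, close to the text's .83e12
```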

The proton option would give a hierarchy of time scales as A-multiples of τ(p) and is therefore the more natural one, so it makes sense to consider this case first.
  1. For fJ=.2 Hz corresponding to the Comorosan time scale of τ= 5 seconds this would give ZeV= .66x meV. This is above the thermal energy Eth= T=27.5 meV at T=25 Celsius for x> 42. For ordinary photons (heff= h) the proton cyclotron frequency fc(p) would correspond for x>42 to EUV energy E>42 eV and to wavelength λ<31 nm.

    The energy scale of Josephson junctions formed by proteins through the cell membrane of thickness L(151)=10 nm is slightly above thermal energy, which suggests x≈ 120, allowing one to identify L(151)=10 nm as the length scale of the flux tube portion connecting the reactants. This would give E≈ 120 eV - the upper bound of the EUV range. For x=120 one would have GMEmp y/v0≈ 1014 requiring β0/y≈ 2.2. The earlier estimates (see this) give for the mass MD the estimate y∼ 2× 10-4 giving β0∼ 4.4× 10-4. This is rather near to β0= 2-11∼ me/mp obtained also in the model for the orbits of the inner planets as Bohr orbits.

  2. For an ion with mass number A this would predict τA= A× τp= A× 5 seconds so that also multiples of the 5 second time scale would appear. These multiples were indeed found by Comorosan and appear also in the case of RNA II polymerase.
  3. For the proton one would thus have 2 biological extremes - the EUV energy scale associated with cyclotron radiation and the thermal energy scale assignable to Josephson radiation. Both would be assignable to dark photons with heff=hgr with very long wavelengths. Dark and ordinary photons of both kinds would be able to transform to each other, meaning a coupling between the very long length scales assignable to MB and the short wavelengths/time scales assignable to BB.

    The energy scale of dark Josephson photons would be that assignable to junctions of length 10 nm, with long wavelengths and energies slightly above Eth at physiological temperature. The EUV energy scale would be 120 eV for the dark cyclotron photons of highest energy.

    For lower cyclotron energies, suggested by the presence of bio-photons in the range containing visible and UV light and obtained for Bend below .2 Gauss, the Josephson photons would have energies ≤ Eth. That the possible values of Bend are below the nominal value Bend=.2 Gauss deduced from the experiments of Blackman does not conform with the earlier ad hoc assumption that Bend represents a lower bound. This does not change the earlier conclusions.

    Could the 120 eV energy scale have some physical meaning in the TGD framework? The corresponding wavelength for ordinary photons corresponds to the scale L(151)=10 nm, which corresponds to the thickness of the DNA double strand. Dark DNA having dark proton triplets as codons could correspond to either k=149 or k=151. The energetics of the Pollack effect suggests that k=149 is realized in water even during the prebiotic period (see this). In the effect discovered by Blackman the ELF photons would transform to dark cyclotron photons having heff=hgr and energy about .12 keV. They would induce cyclotron transitions at flux tubes of Bend with thickness of the order of cell size. These states would decay back to the previous states, and the dark photons would transform to ordinary photons absorbed by ordinary DNA with a coil structure with thickness of 10 nm. A kind of standing waves would be formed. These waves could transform to acoustic waves and induce the observed effects. Quite generally, dark cyclotron photons would control the dynamics of ordinary DNA by this mechanism.

    It is indeed natural to assume that Bend corresponds to an upper bound since the values of the magnetic field are expected to weaken farther from the Earth's surface: the weakening could correspond to a thickening of flux tubes reducing the field intensity by flux conservation. The model for hearing (see this) requires cyclotron frequencies considerably above the proton's cyclotron frequency in Bend=.2 Gauss. This requires that audible frequencies are mapped to the electron's cyclotron frequency having the upper bound fc(e) = (mp/me) fc(p)≈ 6× 105 Hz. This frequency is indeed above the range of audible frequencies even for bats.

For the electron one has hgr(e)= (me/mp)hgr(p) ≈ 5.3 × 10-4 hgr(p), with ℏgr(p)/ℏ= 4.5× 1014× (y/β0). Since the Josephson energy remains invariant, the Josephson time scales up from τ(p)=5 seconds to τ(e)=(me/mp) τ(p)≈ 2.7 milliseconds, which is the time scale assignable to nerve pulses (see this).
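The two mass scalings can be checked in a couple of lines (the 5 second proton Josephson time is the text's input; the exact figures differ slightly from the rounded values quoted above):

```python
# Mass scalings: the Josephson time scales like hgr ~ m, the cyclotron
# frequency like 1/m.
m_e_over_m_p = 1 / 1836.15          # electron/proton mass ratio

tau_p = 5.0                          # s, Comorosan time scale for the proton
tau_e = m_e_over_m_p * tau_p         # electron Josephson time
print(f"{tau_e * 1e3:.1f} ms")       # -> 2.7 ms, the nerve pulse time scale

f_c_p = 300.0                        # Hz, proton cyclotron frequency in 0.2 Gauss
f_c_e = f_c_p / m_e_over_m_p         # electron cyclotron frequency
print(f"{f_c_e:.1e} Hz")             # -> 5.5e+05 Hz, the text's ~6e5 Hz
```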

To sum up, the model suggests the idealization of flux tubes as a kind of universal Josephson junction. The model is consistent with the bio-photon hypothesis. The constraints on hgr= GMDm/v0 are consistent with the earlier views and allow one to assign the Comorosan time scale of 5 seconds to the proton and the nerve pulse time scale to the electron as Josephson time scales. This inspires the question whether the dynamics of bio-catalysis and nerve pulse generation could be seen as scaled variants of each other at the quantum level. This would not be surprising if MB controls the dynamics. The earlier assumption that Bend=0.2 Gauss is the minimal value of Bend must be replaced with the assumption that it is the maximal value of Bend.

See the chapter Quantum Criticality and dark matter or the article Clustering of RNA polymerase molecules and Comorosan effect.

Why do RNA polymerase molecules cluster?

I received a link to a highly interesting popular article telling about the work of Ibrahim Cisse at MIT and colleagues (see this), this time about the clustering of proteins in the transcription of RNA. Similar clustering has been observed already earlier and interpreted as a phase separation (see this). This interpretation is not proved by the experiments, but the experimenters say that it is quite possible although they cannot prove it.

I have already earlier discussed the coalescence of proteins into droplets as this kind of process in the TGD framework. The basic TGD based idea is that proteins - and biomolecules in general - are connected by flux tubes characterized by the value of Planck constant heff=n× h0 for the dark particles at the flux tube. The higher the value of n is, the larger the energy of a given state. For instance, the binding energies of atoms decrease like 1/n2. Therefore the formation of the molecular cluster liberates energy usable as metabolic energy.
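A minimal illustration of this energy bookkeeping, assuming hydrogen-like 1/n2 scaling of binding energies with the ordinary hydrogen ground state as a reference:

```python
# Binding energies scaling like 1/n**2 for heff = n*h0: a larger n means weaker
# binding, so a transition back to a smaller n liberates the difference.
def binding_energy(E1, n):
    """Binding energy for heff = n*h0, scaling as 1/n**2."""
    return E1 / n**2

E1 = 13.6                    # eV, hydrogen ground state as a reference scale
E_dark = binding_energy(E1, n=6)
liberated = E1 - E_dark      # freed if n drops from 6 back to 1
print(round(E_dark, 2))      # -> 0.38 eV
print(round(liberated, 2))   # -> 13.22 eV, available as metabolic energy
```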

Remark: h0 is the minimal value of heff. The best guess is that ordinary Planck constant equals to h=6h0 (see this and this).

TGD view about the findings

Gene control switches - such as RNA II polymerases in the DNA transcription to RNA - are found to form clusters called super-enhancers. Also so called Mediator proteins form clusters. In both cases the number of members is in the range 200-400. The clusters are stable although individual molecules spend a very brief time in them. Clusters have an average lifetime of 5.1±.4 seconds.

Why should the clustering take place? Why are a large number of these proteins present although a single one would be enough in the standard picture? In the TGD framework one can imagine at least the following reasons.

  1. One explanation could relate to the non-determinism of state function reduction. The transcription and its initiation should be a deterministic process at the level of a single gene. Suppose that the initiation of transcription is one particular outcome of state function reduction. If there is only a single RNA II polymerase, which can make only a single trial, the chances to initiate the transcription are low. This would be the case if the molecule provides the metabolic energy to initiate the process and becomes too "tired" to try again. In nerve pulse transmission there is an analogous situation: after the generation of a nerve pulse the neuron has a dead time period. As a matter of fact, it turns out that the analogy could be much deeper.

    How to achieve the initiation with certainty in this kind of situation? Suppose that the other outcomes do not affect the situation appreciably. If one particular RNA polymerase fails to initiate the transcription, the others can try. If the number of RNA polymerase molecules is large enough, the transcription is bound to begin eventually! This is much like in fairy tales about a princess and suitors trying to kill the dragon to get the hand of the princess. Eventually the penniless swineherd enters the stage.

  2. If the initiation of transcription requires a large amount of metabolic energy, then only some minimal number N of RNA II polymerase molecules might be able to provide it collectively. The collective formed by N molecules could correspond to the formation of a magnetic body with a large value of heff=n×h0. The molecules would be connected by magnetic flux tubes.
  3. If the rate for the occurrence is determined by an amplitude which is a superposition of amplitudes assignable to the individual proteins, then the rate is proportional to N2, N the number of RNA polymerase molecules.

    The process in the case of a cluster is indeed reported to be surprisingly fast as compared to the expectations - something like 20 seconds. The earlier studies have suggested that a single RNA polymerase stays at the DNA for minutes to hours. This would be a possible mechanism for speeding up bio-catalysis besides the mechanism in which the reactants find each other via a reduction of heff/h= n for the bonds connecting them, with the associated liberation of metabolic energy allowing to kick the reactants over the potential wall hindering the reaction.
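The many-trials argument and the N2 rate can be illustrated with a toy calculation (the per-attempt success probability p is an arbitrary illustrative value):

```python
# Toy illustration: (a) with N polymerases each succeeding with probability p,
# initiation happens with probability 1 - (1-p)**N; (b) if single-molecule
# amplitudes add coherently, the rate grows like N**2.
def p_at_least_one(p, N):
    """Probability that at least one of N independent trials succeeds."""
    return 1 - (1 - p) ** N

p = 0.01                                   # illustrative per-attempt probability
for N in (1, 100, 300):
    print(N, round(p_at_least_one(p, N), 3))
# 1 -> 0.01, 100 -> 0.634, 300 -> 0.951: a cluster makes initiation near certain

def coherent_rate(rate_1, N):
    """Rate when N single-molecule amplitudes are superposed coherently."""
    return rate_1 * N ** 2

print(coherent_rate(1.0, 300) / coherent_rate(1.0, 1))  # -> 90000.0 speed-up
```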

Concerning the situation before the clustering there are two alternative options, both relying on the model of the liquid phase explaining Maxwell's rule by the presence of flux tube bonds in liquid, and on the model of water explaining its numerous anomalies in terms of flux tubes which can be also dark (see this).
  1. Option I: The molecules could be in a phase analogous to a vapour phase with very few flux tube bonds between them. The phase transition would create a liquid phase as the flux tube loops assignable to the molecules reconnect to form flux tube pairs connecting the molecules to a tensor network giving rise to a quantum liquid phase. The larger the value of n, the longer the bonds between molecules would be.
  2. Option II: The molecules are in the initial state connected by flux tubes and form a kind of liquid phase, and the clustering reduces the value of n and therefore the lengths of the flux tubes. This would liberate dark energy as metabolic energy going to the initiation of the transcription. One could indeed argue that connectedness in the initial state with a large enough value of n is necessary since the protein cluster must have a high enough "IQ" to perform intelligent intentional actions.
Protein blobs are said to be drawn together by the "floppy" bits (pieces) of intrinsically disordered proteins. What could this mean in the proposed picture? Disorder suggests the absence of correlations between the building bricks of the floppy parts of the proteins.
  1. Could floppiness correspond to a low string tension assignable to long flux loops with large heff/h=n assignable to the building bricks of the "floppy" pieces? Could reconnection of these loops give rise to pairs of flux tubes connecting the proteins in the transition to the liquid phase? Floppiness could also make it possible to scan the environment by flux loops for the flux loops of other molecules and in the case of a hit (cyclotron resonance) induce reconnection.
  2. In spite of floppiness in this sense, one could have quantum correlations between the internal quantum numbers of the building bricks of the floppy pieces. This would also increase the value of n serving as molecular IQ and provide the molecule with a higher metabolic energy liberated in the catalysis.
What about the interpretation of the time scales 5, 10, and 20 seconds? What is intriguing is that the so called Comorosan effect involves the time scale of 5 seconds and its multiples, claimed by Comorosan a long time ago to be universal time scales in bio-catalysis.

See the chapter Quantum Criticality and dark matter or the article Clustering of RNA polymerase molecules and Comorosan effect.

The discovery of "invisible visible matter" and more detailed view about dark pre-nuclear physics

That 30 per cent of visible matter has remained invisible is a not so well-known problem related to dark matter. This matter is now identified and assigned to the network of filaments in intergalactic space. The reader can consult the popular article "Researchers find last of universe's missing ordinary matter" (see this). The article "Observations of the missing baryons in the warm-hot intergalactic medium" by Nicastro et al (see this) describes the finding at a technical level. Note that warm-hot refers to the temperature range 105-106 K.

In the TGD framework one can interpret the filament network as a signature of the flux tube/cosmic string network to which one can assign dark matter and dark energy. The interpretation could be that the "invisible visible" matter emerges from the network of cosmic strings as part of the dark energy is transformed to ordinary matter. This is the TGD variant of the inflationary scenario with the inflaton vacuum energy replaced with cosmic strings/flux tubes carrying dark energy and matter.

This inspires more detailed speculations about pre-stellar physics according to TGD. The questions are the following. What preceded the formation of stellar cores? What heated the matter to the needed temperatures? The TGD inspired proposal is that it was dark nuclear physics (see the article Cold fusion, low energy nuclear reactions, or dark nuclear synthesis?). Dark nuclei with heff=n× h0 were formed first, and these decayed to ordinary nuclei or to dark nuclei with a smaller value of heff=n× h0 and heated the matter so that ordinary nuclear fusion became possible.

Remark: h0 is the minimal value of heff. The best guess is that ordinary Planck constant equals to h=6h0 (see this and this).

  1. The temperature of the recently detected missing baryonic matter is around 106 K and roughly 1/10:th of the temperature 107 K at the solar core. This serves as a valuable guideline.

    I already earlier realized that the temperature at the solar core, where fusion occurs, happens to be the same as the estimated temperature for the binding energy of dark nuclei identified as dark proton sequences with dark nucleon size equal to electron size. The estimate is obtained by scaling down the typical nuclear binding energy for low mass nuclei by the ratio 2-11 of the sizes of ordinary and dark nuclei (the electron/proton mass ratio; a dark proton has the same size as an ordinary electron). This led to the idea that nuclear fusion in the solar core first creates dark nuclei, which then decay to ordinary nuclei and liberate essentially all of the nuclear binding energy. After that ordinary nuclear fusion at the resulting high enough temperature would take the lead.

  2. Dark nuclear strings can correspond to several values of heff=n× h0 with size scale scaled up by n. p-Adic length scales L(k)= 2(k-151)/2L(151), L(151)≈ 10 nm, define favoured values of n as integers in good approximation proportional to 2k/2. The binding energy scale for dark nuclei is proportional to 1/n (to the inverse of the p-adic length scale). Could 106 K correspond to the p-adic length scale k=137 - the atomic length scale of 1 Angstrom?

    Could dark cold fusion start at this temperature and first give rise to "pre-nuclear" physics generating dark nuclei as dark proton sequences with dark nuclear binding energy about .1 keV, with these dark nuclei decaying to k=127 dark nuclei with binding energy about 1 keV, leading to the heating of the matter and eventually to cold fusion at k=127 and after that to ordinary fusion? Also the values intermediate in the range [137,127] can be considered as intermediate steps. Note that also k=131 is prime.

  3. Interestingly, the temperature at the solar corona is about 1 million degrees and by a factor 140-150 hotter than the solar surface. The heating of the solar corona has remained a mystery, and the obvious question is whether dark nuclear fusion giving rise to "pre-nuclear" fusion for k=137 generates the energy needed.
  4. If this picture makes sense, the standard view about the nuclear history of astrophysical objects, stating that the nuclei in stars come from supernovas, would change radically. Even planetary cores might be formed by a sequence of dark nuclear fusions ending with ordinary fusion, and the iron in the Earth's core could be an outcome of dark nuclear fusion. The temperature at the Earth's core is about 6× 10^3 K. This corresponds to k=151 in reasonable approximation.

    Remark: What is amusing is that the earlier fractal analogy of Earth as a cell would make sense in that k=151 corresponds to the p-adic length scale of the cell membrane.

    I have also considered the possibility that dark nuclear fusion could have provided metabolic energy for prebiotic lifeforms in the underground oceans of Earth and that life came to the surface in the Cambrian explosion (see this). The proposal would solve the hen-or-egg question of which came first, metabolism or genetic code, since dark proton sequences provide a realization of the genetic code (see this).

  5. One can also imagine a longer sequence of p-adic length scales starting at lower temperatures and longer p-adic length scales characterized by integer k, for which prime values are the primary candidates. k=139 corresponding to T= .5× 10^6 K is one possibility. For k=149 and k=151 (thicknesses of the lipid layer of the cell membrane and of the cell membrane) one would have T ≈ 2× 10^4 K and T ≈ 10^4 K - roughly the temperature at the surface of the Sun and the biologically important energies E= 2 eV of red light and E=1 eV of infrared light (quite recently it was found that also IR light can serve as metabolic energy in photosynthesis).

    Could the dark nuclear fusion process occur at the surface of the Sun? Could one imagine that the sequence of dark phase transitions proceeds in opposite directions as k=137 ← 139 ← 149 ← 151 → 149 → 139 → 137 → 131 → 127 between dark nuclear physics corresponding to p-adic length scales L(k) as one proceeds from the surface of the Sun upwards to the solar corona and downwards to the core? Of course, also other values of k can be considered: the k:s in this sequence are primes. The ends of the warm-hot temperature range 10^5-10^6 K correspond roughly to k=143 = 13× 11 and k=137.
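
The p-adic scalings used above are simple enough to check numerically. The sketch below is a toy calculation: the normalization T(137) = 10^6 K and the assumption that the temperature scale goes like 1/L(k) are taken from the text, not derived.

```python
# Toy check of the p-adic length scales L(k) = 2**((k-151)/2) * L(151),
# L(151) ~ 10 nm, with the dark nuclear temperature scale assumed to go
# like 1/L(k), normalized so that k=137 corresponds to T = 1e6 K.

def L(k, L151=10.0):
    """p-adic length scale in nm."""
    return 2 ** ((k - 151) / 2) * L151

def T(k, T137=1e6):
    """Temperature scale in K under the 1/L(k) assumption."""
    return T137 * L(137) / L(k)

for k in (137, 139, 149, 151):
    print(f"k={k}: L = {L(k):7.3f} nm, T = {T(k):9.3g} K")
# k=137 gives L ~ 0.08 nm (atomic scale), k=139 gives T = 5e5 K,
# k=149 gives T ~ 1.6e4 K, and k=151 gives T ~ 7.8e3 K - of the same
# order as the 0.5e6 K, 2e4 K and 1e4 K values quoted above.
```

Note that T(151) also lands near the 6× 10^3 K Earth core temperature mentioned in item 4.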

For the TGD view of "cold fusion" and comments about its possible role in star formation, see the chapter Cold fusion again or the article Cold fusion, low energy nuclear reactions, or dark nuclear synthesis?.

Did animal mitochondrial evolution have a long period of stagnation?

I encountered an interesting popular article telling about findings challenging Darwin's evolutionary theory. The original article of Stoeckle and Thaler is here.

The conclusion of the article is that almost all animals, 9 out of 10 animal species on Earth today, including humans, would have emerged about 100,000-200,000 years ago. According to Wikipedia, all animals are assumed to have emerged about 650 million years ago from a common ancestor. The Cambrian explosion began around 542 million years ago. According to Wikipedia, Homo Sapiens would have emerged 300,000-800,000 years ago.

On the basis of Darwin's theory, based on survival of the fittest and adaptation to a new environment, one would expect that species such as ants and humans with large populations distributed around the globe become genetically more diverse over time than species living in the same environment. The study of so called neutral mutations, not relevant for survival and assumed to occur at some constant rate, however finds that this is not the case. The study of so called mitochondrial DNA barcodes across 100,000 species showed that the variation of neutral mutations became very small about 100,000-200,000 years ago. One could say that the evolution differentiating between the species began (or effectively began) after this time - as if the mitochondrial clocks of these species had been reset to zero at that time, as the article states it. This is taken as support for the conclusion that all animals emerged at about the same time as humans.

The proposal of (at least) the writer of the popular article is that life was almost wiped out by a great catastrophe and extraterrestrials could have helped to make a new beginning. This brings to mind a Noah's Ark scenario. But can one argue that humans and the other animals emerged at that time: were they only the survivors of a catastrophe? One can also argue that the rate of mitochondrial mutations increased dramatically for some reason at that time.

Could one think that a great evolutionary leap initiated the differentiation of mitochondrial genomes at that time, and that before it the differentiation was very slow for some reason? Why would this change have occurred simultaneously in almost all animals? Something should have happened to the mitochondria - and what kind of external evolutionary pressure could have caused it?

  1. To me the idea about ETs performing large scale genetic engineering does not sound very convincing. That only a small fraction of animals survived a catastrophe sounds like a more plausible idea. Was it a great flood? One can argue that animals living in water would have survived in this case. Could some cosmic event such as a nearby supernova have produced radiation killing most animals? But is a mass extinction really necessary? Could some evolutionary pressure without extinction have caused the apparent resetting of the mitochondrial clock?
  2. In TGD based quantum biology the great leaps could be caused by quantum criticality, perhaps induced by an evolutionary pressure due to some kind of catastrophe. The value of heff=nh0 (h0 is the minimal value of Planck constant) - a kind of IQ in a very general sense - in some part of the mitochondria could have increased, and its value would also have fluctuated. Did a new, longer length scale relevant to the functioning of mitochondria emerge? Did the mitochondrial size increase? Here I meet the boundaries of my knowledge about evolutionary biology!
  3. Forget for a moment the possibility of mass extinction. Could the rate of mutations, in particular the rate of neutral mutations, have increased as a response to evolutionary pressure? Just the increased ability to change helps to survive. This rate would become high at quantum criticality due to the presence of large quantum fluctuations (variations of heff). If the mitochondria were far from quantum criticality before the catastrophe, the rate of mutations would have been very slow. The animal kingdom would have lived through a period of stagnation. The emerging quantum criticality - forced by a catastrophe but not involving an extinction - could have increased the rate dramatically.
See the chapter Quantum Criticality and dark matter.

The experiments of Masaru Emoto with emotional imprinting of water

Sini Kunnas sent a link to a video telling about experiments of Masaru Emoto (see this) with water, which is at criticality with respect to freezing and is then frozen. Emoto reports that words expressing emotions are transmitted to water: positive emotions tend to generate beautiful crystal structures and negative emotions ugly ones. Also music and even pictures are claimed to have similar effects. Emoto has also carried out similar experiments with rice in water. Rice subjected to words with positive content began to ferment, while rice subjected to words expressing negative emotions began to rot.

Remark: Fermentation is a metabolic process consuming sugar in the absence of oxygen. Metabolism is a basic signature of life, so that at least in this aspect the water+rice system would become alive. The words expressing positive emotions, or even music, would serve as a signal "waking up" the system.

One could define a genuine skeptic as a person who challenges existing beliefs, and a pseudo-skeptic (PS in the sequel) as a person challenging - usually denying - everything that challenges the mainstream beliefs. The reception of the claims of Emoto is a representative example of the extremely hostile reactions of PSs, as aggressive watchdogs of materialistic science, towards anything that challenges their belief system. The psychology behind this attitude is the same as that behind religious and political fanaticism.

I must emphasize that I see myself as a thinker and regard myself as a skeptic in the old-fashioned sense of the word: one challenging the prevailing world view rather than the phenomena that challenge it. I do not want to be classified as believer or non-believer. The fact is that if TGD inspired theory of consciousness and quantum biology describes reality, a revolution in the world view is unavoidable. Therefore it is natural to consider the working hypothesis that the effects are real and see what the TGD based explanation for them could be.

The Wikipedia article about Masaru Emoto (see this) gives a good summary of the experiments of Emoto and a lot of links, so I will give here only a brief sketch. According to the article, Emoto believed that water was a "blueprint for our reality" and that emotional "energies" and "vibrations" could change the physical structure of water. The water crystallization experiments of Emoto consisted of exposing water in glasses to different words, pictures or music, then freezing it and examining the aesthetic properties of the resulting crystals with microscopic photography. Emoto made the claim that water exposed to positive speech and thoughts would result in visually "pleasing" crystals being formed when that water was frozen, and that negative intention would yield "ugly" crystal formations.

In 2008, Emoto and collaborators published an article titled "Double-Blind Test of the Effects of Distant Intention on Water Crystal Formation" about his experiments with water in the Journal of Scientific Exploration, a peer reviewed scientific journal of the Society for Scientific Exploration (see this). The work was performed by Masaru Emoto and Takashige Kizu of Emoto's own IHM General Institute, along with Dean Radin and Nancy Lund of the Institute of Noetic Sciences, which is on Stephen Barrett's Quackwatch (see this) blacklist of questionable organizations. PSs are the modern jesuits, and for jesuits the end justifies the means.

Emoto has also carried out experiments with rice samples in water. There are 3 samples. The first sample "hears" words with positive emotional meaning, the second sample words with negative emotional meaning, and the third sample serves as a control sample. Emoto reports (see this) that the rice subjected to words with positive emotional content began to ferment, whereas the rice subjected to words expressing negative emotions began to rot. The control sample also began to rot, but not as fast.

In the article The experiments of Masaru Emoto with emotional imprinting of water I consider the working hypothesis that the effects are real, and develop an explanation based on TGD inspired quantum biology. The basic ingredients of the model are the following: the magnetic body (MB) carrying dark matter as heff/h=n phases of ordinary matter; communications between MB and biological body (BB) using dark photons able to transform to ordinary photons identifiable as bio-photons; the special properties of water, explained in the TGD framework by assuming a dark component of water implying that criticality for freezing involves also quantum criticality; and the realization of the genetic code and the counterparts of the basic bio-molecules as dark proton sequences and as 3-chords consisting of light or sound, providing a universal language for expressing emotions in terms of bio-harmony realized as music of light or sound. The entanglement of the water sample and the subject person (with MBs included), realized as flux tube connections, would give rise to a larger conscious entity expressing emotions via a language realized in terms of the basic biomolecules in a universal manner, utilizing the genetic code realized both as dark proton sequences and as music of light and sound.

See the chapter Dark Nuclear Physics and Condensed Matter or the article The experiments of Masaru Emoto with emotional imprinting of water.

How molecules in cells "find" one another and organize into structures?

The title of the popular article How molecules in cells 'find' one another and organize into structures expresses an old problem of biology. Now the group led by Amy S. Gladfelter has made experimental progress on this problem. The work has been published in Science (see this).

It is reported that RNA molecules recognize each other and condense into the same droplet due to the specific 3D shapes that the molecules assume. Molecules with complementary base pairing can find each other, and only similar RNAs condense into the same droplet. This brings to mind DNA replication, transcription and translation. Furthermore, the same proteins that form liquid droplets in healthy cells solidify in diseases like neurodegenerative disorders.

Some kind of phase transition is involved in the process, but what brings the molecules together still remains a mystery. The TGD based solution of this mystery is one of the first applications of the notion of many-sheeted space-time in biology, and relies on the notion of magnetic flux tubes connecting molecules to form networks.

Consider first the TGD based model of condensed and living matter. As a matter of fact, the core of this model applies in all scales. What is new is that there are not only particles but also bonds connecting them. In TGD they are flux tubes, which can carry dark particles with a nonstandard value heff/h=n of Planck constant. In the currently fashionable ER-EPR approach they would be wormholes connecting distant space-time regions. In that case the problem is instability: wormholes pinch and split. In TGD the monopole magnetic flux takes care of the stability topologically.

The flux tube networks occur in all scales but especially important are biological length scales.

  1. In chemistry the flux tubes are associated with valence bonds and hydrogen bonds (see this). In biology the genetic code would be realized as dark nuclei formed by sequences of dark protons at magnetic flux tubes. Also RNA, amino-acids, and even tRNA could have dark counterparts of this kind (see this). Dark variants of biomolecules would serve as templates for their ordinary variants, also at the level of dynamics. Biochemistry would be a shadow dynamics dictated to a high degree by the dark matter at flux tubes.
  2. Dark valence bonds can be quite long, and the outcome is an entangled tensor net (see this). These nets serve as correlates for cognitive mental images in the brain (see this) and for emotional mental images in the body (see this). Dark photons propagating along the flux tubes (more precisely, along topological light rays parallel to them) would provide the fundamental communication mechanism (see this). Transmitters and nerve pulses would only change the connectedness properties of these nets.
The topological dynamics of flux tubes has two basic mechanisms (I have discussed this dynamics from the point of view of AI here).
  1. Reconnection of flux tubes is the first basic mechanism in the dynamics of flux tube networks and would, among other things, give rise to neural nets. The connection between neurons would correspond basically to a flux tube pair, which can split by reconnection. Also two flux tube pairs can reconnect, forming Y shaped structures. Flux tube pairs could be quite generally associated with long dark hydrogen bonds scaled up by heff/h=n from their ordinary lengths. Flux tube pairs would carry, besides dark protons, also supra phases formed by the lone electron pairs associated quite generally with hydrogen bonding atoms. Also dark ions could appear at flux tubes.

    Biomolecules would have flux loops continually scanning the environment and reconnecting when they meet another flux loop. This however requires that the magnetic field strengths are the same at the two loops so that a resonance is achieved at the level of dark photon communications. This makes possible recognition by the cyclotron frequency spectrum, serving as a signature of the magnetic body of the molecule.

    Water memory (see this) would rely on this recognition mechanism based on cyclotron frequencies, and also the immune system would use it at the basic level (here one cannot avoid saying something about homeopathy, although I know that this spoils the day of the skeptic: the same mechanism would be involved also with it). For instance, the dark DNA strand accompanying ordinary DNA and dark RNA molecules find each other by this mechanism (see this). The same applies to other reactions such as replication and translation.

  2. Shortening of the flux tubes by an heff/h reducing phase transition is the second basic mechanism, explaining how biomolecules can find each other in the dense molecular soup. It is essential that the magnetic fields at the flux tubes are nearly the same for the reconnection to occur. A more refined model for the shortening involves two steps: reconnection of flux tubes leading to the formation of a flux tube pair between the molecules, followed by shortening via the heff/h reducing phase transition.
Also ordinary condensed matter phase transitions involve a change of the topology of flux tube networks, and the model for them allows one to put the findings described in the article in TGD perspective.
  1. I just wrote an article (see this) about a solution of two old problems of hydrothermodynamics: the behavior of the liquid-gas system in the critical region, which is not consistent with the predictions of statistical mechanics (known already in the times of Maxwell!), and the behavior of water above the freezing point and in freezing. The proposed solution involves dark flux tubes carrying dark protons and possibly electronic Cooper pairs made from the so called lone electron pairs characterizing the atoms forming hydrogen bonds.
  2. The phase transition from gas to liquid occurs when the number of flux tubes per molecule is high enough. At criticality both phases are in mechanical equilibrium - the same pressure. Most interestingly, in solidification the large heff flux tubes transform to ordinary ones and liberate energy: this explains the anomalously high latent heats of water and ammonia. The loss of large heff flux tubes however reduces the "IQ" of the system.
The phase transitions changing the connectedness of the flux tube networks are fundamental in TGD inspired quantum biology.
  1. The sol-gel transition would correspond to this kind of biological phase transition. Protein folding (see this) - a kind of freezing of the protein making it biologically inactive - and unfolding would be a second basic example of this transition. The freezing would involve the formation of flux tube bonds between points of the linear protein, assignable to hydrogen bonds. External perturbations induce melting of the proteins, and they become biologically active as the value of heff/h=n characterizing their maximal possible entanglement negentropy content (molecular IQ) increases. The external perturbation feeds in energy acting as metabolic energy. I have called this period molecular summer.
  2. Solidification of proteins is reported to be associated with diseases such as neurodegenerative disorders. In the TGD picture this would reduce the molecular IQ, since the ability of the system to generate negentropy would be reduced when heff for the flux tubes decreases to its ordinary value. What brings the molecules together is not understood, and TGD provides the explanation as an heff reducing phase transition for flux tube pairs.

See the chapter Quantum Criticality and Dark Matter.

Maxwell's lever rule and expansion of water in freezing: two poorly understood phenomena

The view of condensed matter as a network with nodes identifiable as molecules and bonds as flux tubes is one of the basic predictions of TGD and obviously means a radical modification of the existing picture. In the sequel two old anomalies of standard physics are explained in this conceptual framework. The first anomaly was known already at the time of Maxwell. In the critical region of the gas-liquid phase transition the van der Waals equation of state fails. Empirically the pressure in the critical region depends only on temperature and is independent of the molecular volume, whereas the van der Waals equation, predicting cusp catastrophe type behavior, predicts such a dependence. This problem is quite general and plagues all analytical models based on statistical mechanics.

Maxwell's area rule and the lever rule are the proposed modifications of van der Waals in the critical region. There are two phases, corresponding to liquid and gas at the same pressure, and the proportions of the phases vary so that the volume varies.

The lever rule used for metal alloys allows one to describe the mixture but requires that there are two "elements" involved. What the second "element" is in the case of the liquid-gas system is poorly understood. TGD suggests the identification of the second "element" as magnetic flux tubes connecting the molecules. Their number per molecule varies, and above a critical number a phase transition to the liquid phase would take place.
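
The lever rule itself is easy to state quantitatively: at the coexistence pressure the overall specific volume v interpolates between the liquid and gas values, v = x·v_gas + (1-x)·v_liq, and only the gas fraction x varies. A minimal sketch (the volume values used are purely illustrative):

```python
def lever_rule(v, v_liq, v_gas):
    """Gas fraction x solving v = x*v_gas + (1-x)*v_liq.

    In the two-phase region the pressure stays constant and only the
    proportion x of the gas phase changes as the total volume varies."""
    if not (v_liq <= v <= v_gas):
        raise ValueError("v must lie between v_liq and v_gas")
    return (v - v_liq) / (v_gas - v_liq)

# Illustrative: total volume halfway between the two phase volumes
# gives equal fractions of liquid and gas.
x = lever_rule(v=0.5, v_liq=0.0, v_gas=1.0)   # x = 0.5
```

In the TGD picture the analog of the second alloy "element" would be the flux tubes, with the number of flux tubes per molecule playing the role of the composition variable.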

The second old problem relates to the numerous anomalies of water (see the web pages of Martin Chaplin). I have discussed these anomalies from the TGD viewpoint in (see this). The most well-known anomalies relate to the behavior near the freezing point. Below 4 degrees Celsius water expands rather than contracts as the temperature is lowered. Also in freezing an expansion takes place.

A general TGD based explanation for the anomalies of water would be the presence of dark phases with a non-standard value of Planck constant heff/h=n (see this). Combining this idea with the above proposal, the flux tubes associated with hydrogen bonds could also have a non-standard value of Planck constant, in which case the flux tube length scales like n. The reduction of n would shorten long flexible flux tubes to short and rigid ones. This would reduce the motility of the molecules and also force them nearer to each other. This would create empty volume and lead to an increase of the volume per molecule as the temperature is lowered.

Quite generally, the energy of particles with a non-standard value of Planck constant is higher than for ordinary ones (see this). In freezing all dark flux tubes would transform to ordinary ones and the surplus energy would be liberated, so that the latent heat should be anomalously high for all molecules forming hydrogen bonds. Indeed, for both water and NH3, which have hydrogen bonds, the latent heat is anomalously high. Hydrogen bonding is possible if the molecules have atoms with lone electron pairs (electrons not assignable to valence bonds). Lone electron pairs could form Cooper pairs at the flux tube pairs assignable to hydrogen bonds and carrying the dark protons. Therefore also high Tc superconductivity could be possible.
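
As a toy illustration of this mechanism: the flux tube length scales like n, and the dark-to-ordinary transition in freezing both shortens the bonds and releases their surplus energy. The numerical values below (ordinary bond length L0 and the bond energies) are illustrative assumptions, not values from the text.

```python
def flux_tube_length(n, L0=0.2):
    """Dark flux tube (hydrogen bond) length: scales like n.
    L0 = 0.2 nm is an illustrative ordinary bond length."""
    return n * L0

def liberated_energy(E_dark, E_ordinary):
    """Energy released per bond when a dark flux tube turns ordinary;
    this surplus would add to the latent heat of freezing."""
    return E_dark - E_ordinary

# Illustrative: an n=4 dark bond is 4 times longer than an ordinary one;
# in freezing it transforms to an ordinary bond, shortens, and releases
# its surplus energy.
shortening = flux_tube_length(4) - flux_tube_length(1)     # 0.6 nm
surplus = liberated_energy(E_dark=0.5, E_ordinary=0.2)     # 0.3 (eV, say)
```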

See the chapter Quantum Criticality and Dark Matter of "Hyper-finite factors, p-adic length scale hypothesis, and dark matter hierarchy" or the article Maxwell's lever rule and expansion of water in freezing: two poorly understood phenomena.

Superfluids dissipate!

People at Aalto University - located in Finland, by the way - are doing excellent work: there is full reason to be proud! I learned about the most recent experimental discovery by people working at Aalto University from Karl Stonjek. The title of the popular article is Friction found where there should be none—in superfluids near absolute zero.

In a rotating superfluid one has vortices, and they should not dissipate. The researchers at Aalto University however observed dissipation: the finding by J. Mäkinen et al is published in Phys Rev B. Dissipation means that the vortices lose energy to the environment. How could one explain this?

What comes to mind for an inhabitant of the TGD Universe is the hierarchy of Planck constants heff =n×h labelling a hierarchy of dark matters as phases of ordinary matter. The reduction of the Planck constant heff liberates energy in a phase transition like manner, giving rise to dissipation. This kind of burst-like liberation of energy is mentioned in the popular article ("glitches" in neutron stars). I have already earlier proposed an explanation of the fountain effect of superfluidity, in which superfluid flow seems to defy gravity. The explanation is in terms of a large value of heff implying delocalization of the superfluid particles in a long length scale (see this).

Remark: Quite generally, binding energies are reduced as a function of heff/h= n. One has 1/n^2 proportionality for atomic binding energies, so that atomic energies defined as rest energy minus binding energy indeed increase with n. Interestingly, dimension 3 of space is unique in this respect. Harmonic oscillator energies and cyclotron energies are in turn proportional to n. The value of n for a molecular valence bond depends on the atoms involved, and the binding energies of valence bonds decrease as the valence of the atom increases. One can say that the valence bonds involving atoms at the right end of a row of the periodic table carry metabolic energy. This is indeed the case, as one finds by looking at the chemistry of nutrient molecules.
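
The n-dependencies listed in the remark can be collected into a small sketch. The reference energies are illustrative (the hydrogen ground state for the atomic case), not values from the text.

```python
def atomic_binding(n_eff, E0=13.6):
    """Atomic binding energy in eV: scales like 1/n^2 in heff/h = n.
    E0 = 13.6 eV (hydrogen ground state) is an illustrative reference."""
    return E0 / n_eff**2

def oscillator_energy(n_eff, E0=1.0):
    """Harmonic oscillator and cyclotron energies scale like n."""
    return E0 * n_eff

# Rest energy minus binding energy increases with n, so a reduction of
# n liberates the difference as a burst - the proposed dissipation.
liberated = atomic_binding(1) - atomic_binding(6)   # eV released if n: 6 -> 1
```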

The burst of energy would correspond to a reduction of n at the flux tubes associated with the superfluid. Could the vortices decompose into smaller vortices with a smaller radius, maybe proportional to n? I proposed a similar mechanism of dissipation in ordinary fluids more than two decades ago. Could also ordinary fluids involve a hierarchy of Planck constants, and could they dissipate in the same manner?

In biology the liberation of metabolic energy - say in motor action - would take place in this kind of "glitch". It would reduce the heff resources and thus the ability to generate negentropy: this leads to smaller negentropy resources, and one gets tired and thinking becomes fuzzy.

See the chapter Quantum criticality and dark matter.

Condensed matter simulation of 4-D quantum Hall effect from TGD point of view

There is an interesting experimental work related to the condensed matter simulation of physics in space-times with D=4 spatial dimensions, meaning that one would have a D=1+4=5-dimensional space-time (see this and this). What is simulated is the 4-D quantum Hall effect (QHE). In M-theory D=1+4-dimensional branes would have 4 spatial dimensions and 4-D QHE would also be possible, so that the simulation allows one to study this speculative higher-D physics but of course does not prove that 4 spatial dimensions are there.

In this article I try to understand the simulation, discuss the question whether 4 spatial dimensions and even 4+1 dimensions are possible in the TGD framework in some sense, and also consider the general idea of simulating higher-D physics using 4-D physics. This possibility is suggested by the fact that it is possible to imagine higher-dimensional spaces and physics: maybe this ability requires the simulation of higher-D physics using 4-D physics.

See the chapter Quantum Hall effect and Hierarchy of Planck Constants or the article Condensed matter simulation of 4-D quantum Hall effect from TGD point of view.

Exciton-polariton Bose-Einstein condensate at room temperature and heff hierarchy

Ulla gave in my blog a link to a very interesting work about Bose-Einstein condensation of quasi-particles known as exciton-polaritons. The popular article tells about a research article published in Nature by IBM scientists.

Bose-Einstein condensation happens for exciton-polaritons at room temperature; this temperature is four orders of magnitude higher than the corresponding temperature for crystals. This sets bells ringing. Could heff/h=n be involved?

One learns from Wikipedia that exciton-polaritons involve electron-hole pairs: photons kick an electron to a higher energy state and an exciton is created. These quasiparticles would form a Bose-Einstein condensate with a large number of particles in the ground state. The critical temperature corresponds to the divergence of the occupation number given by Bose-Einstein statistics.

  1. The energy of the excitons must be of the order of the thermal energy at room temperature: IR photons are in question. The membrane potential happens to correspond to this energy. That the material is organic might be of relevance. Living matter involves various Bose-Einstein condensates, and one can consider also excitons.

    As noticed, the critical temperature is surprisingly high. For crystal BECs it is of order .01 K; now it is about 30,000 times higher!

  2. Does a large value of heff =n×h make the critical temperature so high?

    Here I must look at the Wikipedia article on BECs of quasiparticles. Unfortunately the formula for n^(1/3) is copied from a source and contains several errors: the dimensions are completely wrong.

    It should read n^(1/3)= ℏ^(-1) (meffkTcr)^x, x= 1/2.

    [not x=-1/2, and 1/ℏ rather than ℏ as in the Wikipedia formula. This is usual: it would be important to have Wikipedia contributors who understand at least something about what they are copying from various sources.]

  3. The correct formula for critical temperature Tcr reads as

    Tcr= (dn/dV)^y ℏ^2/meff , y=2/3.

    [Tcr replaces Tc and y=2/3 replaces y=2 of the Wikipedia formula. Note that in the Wikipedia formula dn/dV is denoted by n, which is now reserved for heff=n×h.]

  4. In TGD one can generalize by replacing ℏ with ℏeff=n ×ℏ so that one has

    Tcr→ n^2 Tcr.

    The critical temperature would behave like n^2, and the high critical temperature (room temperature) could be understood. In crystals the critical temperature is very low, but in organic matter a large value n≈ 100 could change the situation. n≈ 100 would scale up the atomic scale of 1 Angstrom, as the coherence length of valence electron orbitals, to the cell membrane thickness of about 10 nm. There would be one dark electron-hole pair per volume taken by a dark valence electron: this looks reasonable.
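
Under the n^2 scaling one can estimate the n needed to lift the crystal value to room temperature. This is a toy estimate: the 0.01 K reference value for ordinary crystals is taken from the text, and the result is of the same order as the n≈ 100 considered above.

```python
import math

def Tcr_dark(n, Tcr_ordinary=0.01):
    """Critical temperature with heff = n*hbar: Tcr scales like n^2.
    Tcr_ordinary = 0.01 K is the crystal value quoted in the text."""
    return n**2 * Tcr_ordinary

def n_needed(T_target, Tcr_ordinary=0.01):
    """Smallest integer n reaching T_target under the n^2 scaling."""
    return math.ceil(math.sqrt(T_target / Tcr_ordinary))

n_required = n_needed(300.0)      # ~174: of the same order as n ~ 100
T_room_order = Tcr_dark(100)      # n = 100 already gives 100 K
```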

One must also consider the conservative option n=1. Tcr is also proportional to (dn/dV)^2, where dn/dV is the density of excitons, and to the inverse of the effective mass meff. meff must be of the order of the electron mass, so that the density dn/dV is the critical parameter. In standard physics so high a critical temperature would require a large density dn/dV, about a factor 10^6 higher than in crystals.

Is this possible?

  1. The Fermi energy EF is given by an almost identical formula but with a factor 1/2 appearing on the right hand side. Using the density dne/dV of electrons instead of dn/dV gives an upper bound Tcr ≤ 2EF. EF varies in the range 2-10 eV. The actual value of Tcr in crystals is of order 10^(-6) eV, so that the density of quasiparticles must be very small in crystals: dncryst/dV ≈ 10^(-9) dne/dV.
  2. For a crystal the size scale Lcryst of the volume taken by a quasiparticle would be 10^3 times larger than that taken by an electron, which varies in the range 10^(1/3)-10^(2/3) Angstroms, giving the range 220-460 nm for Lcryst.
  3. On the other hand, the thickness of the plastic layer is Llayer= 35 nm, roughly 10 times smaller than Lcryst. One can argue that Lplast ≈ Llayer is a natural order of magnitude for the counterpart of Lcryst for a quasiparticle in the plastic layer. If so, the density of quasiparticles is roughly 10^3 times higher than for crystals. The (dn/dV)^2-proportionality of Tcr would give Tcr,plast ≈ 10^6 Tcr,cryst, so that there would be no need for a non-standard value of heff!

    But is the assumption Lplast ≈ Llayer really justified in the standard physics framework? Why would this be the case? What would make the dirty plastic different from a super pure crystal?
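
The order-of-magnitude arithmetic of the three steps above can be spelled out explicitly. The (dn/dV)^2 proportionality and the quoted ratios are taken from the text as assumptions, not derived.

```python
# Step 1-2: quasiparticle density in crystals ~1e-9 of the electron
# density, so the volume per quasiparticle is 1e9 times that per
# electron, and the linear size ratio is the cube root of that.
volume_ratio = 1e9
L_ratio = volume_ratio ** (1 / 3)         # linear size ratio ~ 1e3

# Step 3: plastic layer ~10 times smaller linear scale than L_cryst
# -> quasiparticle density 10**3 times higher than in crystals,
density_factor = 10 ** 3
# and with the (dn/dV)**2 proportionality assumed in the text:
Tcr_factor = density_factor ** 2          # = 1e6
```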

The question of which option is correct remains open: a conservative would of course argue that the no-new-physics option is correct, and might be right.

For background see the chapter Criticality and dark matter.

To the index page