Was von Neumann Right After All?

The work with a TGD-inspired model for topological quantum computation led to the realization that von Neumann algebras, in particular the so-called hyper-finite factors of type II1, seem to provide the mathematics needed to develop a more explicit view about the construction of S-matrix. The original discussion has transformed during the years from free speculation, reflecting in many aspects my ignorance about the mathematics involved, to a more realistic view about the role of these algebras in quantum TGD. The discussion of this chapter is restricted to the basic notions; only a short mention is made of the TGD applications, which are discussed in a second chapter.

Von Neumann's goal was to generalize the algebra of quantum mechanical observables. The basic ideas behind a von Neumann algebra are dictated by physics. The algebra elements allow Hermitian conjugation * and observables correspond to Hermitian operators. Any measurable function f(A) of an operator A belongs to the algebra, so one can say that non-commutative measure theory is in question.

The predictions of quantum theory are expressible in terms of traces of observables. The density matrix defining the expectations of observables in an ensemble is the basic example. The highly non-trivial requirement of von Neumann was that identical a priori probabilities for the detection of the states of an infinite-state system must make sense. Since quantum mechanical expectation values are expressible in terms of operator traces, this requires that the unit operator has unit trace: tr(Id)=1.

In the finite-dimensional case it is easy to build observables out of minimal projections to 1-dimensional eigenspaces of observables. In the infinite-dimensional case the probability of a projection to a 1-dimensional sub-space vanishes if each state is equally probable. The notion of observable must thus be modified by excluding 1-dimensional minimal projections and allowing only projections for which the trace would be infinite if one used the straightforward generalization of the matrix algebra trace as the dimension of the projection.

The non-trivial implication of the fact that traces of projections are never larger than one is that the eigenspaces of the density matrix must be infinite-dimensional for non-vanishing projection probabilities. Quantum measurements can lead with a finite probability only to mixed states with a density matrix which is a projection operator to an infinite-dimensional subspace. The simple von Neumann algebras for which the unit operator has unit trace are known as factors of type II1.
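The normalization tr(Id)=1 can be illustrated with the normalized trace of finite matrix algebras, whose n→∞ limit motivates the type II1 factor. The snippet below is only an illustrative finite-dimensional sketch, not TGD-specific code:

```python
from fractions import Fraction

def projection_probability(d, n):
    """Normalized trace tr(P)/n of a projection P to a d-dimensional
    subspace of an n-dimensional space, so that tr(Id)/n = 1.
    Under the maximally mixed density matrix this is the detection
    probability of the corresponding subspace."""
    return Fraction(d, n)

# A projection to a fixed 1-dimensional subspace becomes improbable as n grows,
print([projection_probability(1, n) for n in (2, 8, 1024)])
# while a projection to "half" of the space keeps normalized trace 1/2:
print([projection_probability(n // 2, n) for n in (2, 8, 1024)])
```

In the n→∞ limit only projections with a finite normalized trace, i.e. those whose ordinary matrix trace would be infinite, survive as observables, in accord with the discussion above.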

The definitions adopted by von Neumann however allow more general algebras. Type In algebras correspond to finite-dimensional matrix algebras with finite traces whereas type I∞, associated with a separable infinite-dimensional Hilbert space, does not allow a bounded trace. For algebras of type III non-trivial traces are always infinite and the notion of trace becomes useless, being replaced by the notion of state, which is a generalization of the notion of thermodynamical state. The fascinating feature of this notion of state is that it defines a unique modular automorphism of the factor (fixed apart from a unitary inner automorphism), and the question is whether this notion or its generalization might be relevant for the construction of M-matrix in TGD. It however seems that in the TGD framework, based on Zero Energy Ontology identifiable as a "square root" of thermodynamics, a square root of the thermodynamical state is needed.

The inclusions of hyper-finite factors define an excellent candidate for the description of finite measurement resolution, with the included factor representing the degrees of freedom below measurement resolution. This would also give a connection to the notion of quantum group, whose physical interpretation has remained unclear. This idea is central to the proposed applications to quantum TGD discussed in a separate chapter.


Evolution of Ideas about Hyper-finite Factors in TGD

The work with a TGD-inspired model for quantum computation led to the realization that von Neumann algebras, in particular hyper-finite factors, could provide the mathematics needed to develop a more explicit view about the construction of M-matrix generalizing the notion of S-matrix in zero energy ontology (ZEO). In this chapter I will discuss various aspects of hyper-finite factors and their possible physical interpretation in the TGD framework.

1. Hyper-finite factors in quantum TGD

The following argument suggests that von Neumann algebras known as hyper-finite factors (HFFs) of type III1 appearing in relativistic quantum field theories provide also the proper mathematical framework for quantum TGD.

  1. The Clifford algebra of an infinite-dimensional Hilbert space is a von Neumann algebra known as HFF of type II1. Therefore also the Clifford algebra at a given point (light-like 3-surface) of the world of classical worlds (WCW) is an HFF of type II1. If the fermionic Fock algebra defined by the fermionic oscillator operators assignable to the induced spinor fields (this is actually not obvious!) is infinite-dimensional, it defines a representation for an HFF of type II1. Super-conformal symmetry suggests that the extension of the Clifford algebra defining the fermionic part of a super-conformal algebra by adding bosonic super-generators representing symmetries of WCW respects the HFF property. It could however occur that an HFF of type II results.
  2. WCW is a union of sub-WCWs associated with causal diamonds (CD) defined as intersections of future and past directed light-cones. One can allow also unions of CDs and the proposal is that CDs within CDs are possible. Whether CDs can intersect is not clear.
  3. The assumption that the M4 proper distance a between the tips of the CD is quantized in powers of 2 reproduces the p-adic length scale hypothesis but one must also consider the possibility that a can have all possible values. Since SO(3) is the isotropy group of the CD, the CDs associated with a given value of a and with a fixed lower tip are parameterized by the Lobachevski space L(a)=SO(3,1)/SO(3). Therefore the CDs with a free position of the lower tip are parameterized by M4× L(a). A possible interpretation is in terms of quantum cosmology with a identified as cosmic time. Since Lorentz boosts define a non-compact group, the generalization of the so-called crossed product construction strongly suggests that the local Clifford algebra of WCW is an HFF of type III1. If one allows all values of a, one ends up with M4× M4+ as the space of moduli for WCW.
  4. An interesting special aspect of the 8-dimensional Clifford algebra with Minkowski signature is that it allows an octonionic representation of gamma matrices obtained as tensor products of the unit matrix 1 and 7-D gamma matrices γk and Pauli sigma matrices by replacing 1 and γk by octonions. This inspires the idea that it might be possible to end up with quantum TGD from purely number theoretical arguments. One can start from a local octonionic Clifford algebra in M8. The associativity (co-associativity) condition is satisfied if one restricts the octonionic algebra to a subalgebra associated with any hyper-quaternionic and thus 4-D sub-manifold of M8. This means that the induced gamma matrices associated with the Kähler action span a complex quaternionic (complex co-quaternionic) sub-space at each point of the sub-manifold. This associative (co-associative) sub-algebra can be mapped to a matrix algebra. Together with M8-H duality this leads automatically to quantum TGD and therefore also to the notion of WCW and its Clifford algebra, which is however only mappable to an associative (co-associative) algebra and thus to an HFF of type II1.

2. Hyper-finite factors and M-matrix

HFFs of type III1 provide a general vision about M-matrix.

  1. The factors of type III allow a unique modular automorphism Δit (fixed apart from a unitary inner automorphism). This raises the question whether the modular automorphism could be used to define the M-matrix of quantum TGD. This is not the case, as is obvious already from the fact that unitary time evolution is not a sensible concept in zero energy ontology.
  2. Concerning the identification of M-matrix, the notion of state as it is used in the theory of factors is a more appropriate starting point than the notion of modular automorphism, but as a generalization of the thermodynamical state it is certainly not enough for the purposes of quantum TGD and quantum field theories (algebraic quantum field theorists might disagree!). Zero energy ontology requires that the notion of thermodynamical state should be replaced with its "complex square root", abstracting the idea about M-matrix as a product of a positive square root of a diagonal density matrix and a unitary S-matrix. This generalization of the thermodynamical state - if it exists - would provide a firm mathematical basis for the notion of M-matrix and for the fuzzy notion of path integral.
  3. The existence of the modular automorphisms relies on the Tomita-Takesaki theorem, which assumes that the Hilbert space in which the HFF acts allows a cyclic and separating vector serving as a ground state for both the HFF and its commutant. The translation to the language of physicists states that the vacuum is a tensor product of two vacua annihilated by annihilation-operator-type algebra elements of the HFF and by creation-operator-type algebra elements of its commutant isomorphic to it. Note however that these algebras commute, so that the two algebras are not hermitian conjugates of each other. This kind of situation is exactly what emerges in zero energy ontology (ZEO): the two vacua can be assigned with the positive and negative energy parts of the zero energy states entangled by the M-matrix.
  4. There exists an infinite number of thermodynamical states related by modular automorphisms. This must be true also for their possibly existing "complex square roots". Physically they would correspond to different measurement interactions, meaning the analog of a state function collapse in zero modes fixing the classical conserved charges equal to their quantal counterparts. Classical charges would be parameters characterizing zero modes.
A concrete construction of the M-matrix motivated by the recent rather precise view about the basic variational principles is proposed. Fundamental fermions localized to string world sheets can be said to propagate as massless particles along their boundaries. The fundamental interaction vertices correspond to two-fermion scattering for fermions at opposite throats of a wormhole contact, and the inverse of the conformal scaling generator L0 would define the stringy propagator characterizing this interaction. Fundamental bosons correspond to pairs of a fermion and an antifermion at opposite throats of a wormhole contact. Physical particles correspond to pairs of wormhole contacts with monopole Kähler magnetic flux flowing around a loop going through the wormhole contacts.
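The algebra behind the phrase "complex square root of a density matrix" can be checked in a 2×2 toy model (all numerical values below are illustrative choices, not TGD input): writing M = √ρ·S with S unitary reproduces ρ = M M†, i.e. the M-matrix encodes both the probabilities and the unitary scattering.

```python
import cmath

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

# Diagonal density matrix with probabilities p1 + p2 = 1.
p1, p2 = 0.7, 0.3
sqrt_rho = [[cmath.sqrt(p1), 0], [0, cmath.sqrt(p2)]]

# A unitary "S-matrix": a rotation mixed with a phase (illustrative choice).
theta, phi = 0.4, 1.1
S = [[cmath.cos(theta), -cmath.sin(theta) * cmath.exp(1j * phi)],
     [cmath.sin(theta) * cmath.exp(-1j * phi), cmath.cos(theta)]]

M = matmul(sqrt_rho, S)        # M-matrix = sqrt(rho) * S
rho = matmul(M, dagger(M))     # M M† recovers the density matrix

print(rho[0][0].real, rho[1][1].real)   # recovers 0.7 and 0.3
```

The factorization is exactly the "square root of thermodynamics" idea in miniature: the thermodynamical content sits in √ρ and the dynamical content in S.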

3. Connes tensor product as a realization of finite measurement resolution

The inclusions N ⊂ M of factors allow an attractive mathematical description of finite measurement resolution in terms of the Connes tensor product but do not fix the M-matrix, as was the original optimistic belief.

  1. In ZEO N would create states experimentally indistinguishable from the original one. Therefore N takes the role of complex numbers in non-commutative quantum theory. The space M/N would correspond to the operators creating physical states modulo measurement resolution and has typically a fractal dimension given as the index of the inclusion. The corresponding spinor spaces have an identification as quantum spaces with non-commutative N-valued coordinates.
  2. This leads to an elegant description of finite measurement resolution. Suppose that a universal M-matrix describing the situation for an ideal measurement resolution exists, as the idea about the square root of state encourages one to think. Finite measurement resolution forces one to replace the probabilities defined by the M-matrix with their N-"averaged" counterparts. The "averaging" would be in terms of the complex square root of the N-state and a direct analog of a functional or path integral over the degrees of freedom below the measurement resolution defined by (say) a length scale cutoff.
  3. One can also construct directly M-matrices satisfying the measurement resolution constraint. The condition that N acts like complex numbers on M-matrix elements as far as N-"averaged" probabilities are considered is satisfied if the M-matrix is a tensor product of an M-matrix in M/N, interpreted as a finite-dimensional space, with a projection operator to N. The condition that N-averaging in terms of a complex square root of the N-state produces this kind of M-matrix poses a very strong constraint on the M-matrix if it is assumed to be universal (apart from variants corresponding to different measurement interactions).

4. Analogs of quantum matrix groups from finite measurement resolution?

The notion of quantum group replaces ordinary matrices with matrices with non-commutative elements. In the TGD framework I have proposed that the notion should relate to the inclusions of von Neumann algebras, which allow one to describe mathematically the notion of finite measurement resolution.

In this chapter I will consider the notion of quantum matrix inspired by the recent view about quantum TGD; it provides a concrete representation and physical interpretation of quantum groups in terms of finite measurement resolution. The basic idea is to replace complex matrix elements with operators expressible as products of non-negative hermitian operators and unitary operators, analogous to the products of modulus and phase as a representation for complex numbers.

The condition that the determinant and sub-determinants exist is crucial for the well-definedness of the eigenvalue problem in the generalized sense. The weak definition of the determinant, meaning its development with respect to a fixed row or column, does not pose additional conditions. The strong definition of the determinant requires its invariance under permutations of rows and columns. The permutation of rows/columns turns out to have an interpretation as braiding for the hermitian operators defined by the moduli of the operator-valued matrix elements. The commutativity of all sub-determinants is essential for the replacement of eigenvalues with eigenvalue spectra of hermitian operators, and the sub-determinants define a mutually commuting set of operators.

The resulting quantum matrices define a more general structure than a quantum group but provide a concrete representation and interpretation for a quantum group in terms of finite measurement resolution if q is a root of unity. For q=+/- 1 (Bose-Einstein or Fermi-Dirac statistics) one obtains quantum matrices for which the determinant is, apart from a possible sign change, invariant under the permutations of both rows and columns. One could also understand the fractal structure of inclusion sequences of hyper-finite factors as resulting from recursively replacing the operators appearing as matrix elements with quantum matrices.

5. Quantum spinors and fuzzy quantum mechanics

The notion of quantum spinor leads to a quantum mechanical description of fuzzy probabilities. For quantum spinors state function reduction cannot be performed unless the quantum deformation parameter equals q=1. The reason is that the components of a quantum spinor do not commute: it is however possible to measure the commuting operators representing the moduli squared of the components, giving the probabilities associated with "true" and "false". The universal eigenvalue spectrum for the probabilities does not in general contain (1,0), so that quantum qubits are inherently fuzzy. State function reduction would occur only after a transition to the q=1 phase, and decoherence is not a problem as long as it does not induce this transition.


Does TGD Predict the Spectrum of Planck Constants?

The quantization of Planck constant has been the basic theme of TGD since 2005. The basic idea was stimulated by the finding of Nottale that planetary orbits could be seen as Bohr orbits with an enormous value of Planck constant given by hbargr= GM1M2/v0, where the velocity parameter v0 has the approximate value v0≈ 2-11 for the inner planets. This inspired the ideas that the quantization is due to a condensation of ordinary matter around dark matter concentrated near Bohr orbits and that dark matter is in a macroscopic quantum phase in astrophysical scales. The second crucial empirical input were the anomalies associated with living matter. The recent version of the chapter represents the evolution of the ideas about the quantization of Planck constants from the perspective given by seven years' work with the idea. A very concise summary of the situation is as follows.
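A back-of-the-envelope sketch of the magnitudes involved, using standard SI values and the Sun-Earth pair (the choice of masses is illustrative, and v0 is taken as 2^-11 in units c=1 as quoted above):

```python
# Order-of-magnitude check of Nottale's hbar_gr = G*M*m/v0.
G    = 6.674e-11      # m^3 kg^-1 s^-2
c    = 2.998e8        # m/s
hbar = 1.055e-34      # J s
M_sun, M_earth = 1.989e30, 5.972e24   # kg

v0 = 2**-11 * c                        # velocity parameter in SI units
hbar_gr = G * M_sun * M_earth / v0     # has units of action (J s)
print(hbar_gr / hbar)                  # ~5e73: an enormous effective Planck constant
```

The ratio hbar_gr/hbar is what the text below interprets as the gigantic number of sheets of the covering.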

Basic physical ideas

The basic phenomenological rules are simple and there is no need to modify them.

  1. The phases with non-standard values of the effective Planck constant are identified as dark matter. The motivation comes from the natural assumption that only particles with the same value of the effective Planck constant can appear in the same vertex. One can illustrate the situation in terms of the book metaphor. Imbedding spaces with different values of Planck constant form a book-like structure, and matter can be transferred between different pages only through the back of the book where the pages are glued together. One important implication is that exotic charged particles lighter than the weak bosons are possible if they have a non-standard value of Planck constant. The standard argument excluding them is based on the decay widths of the weak bosons and has led to the neglect of a large number of particle physics anomalies.
  2. A large effective or real value of Planck constant scales up the Compton length - or at least the de Broglie wave length - and its geometric correlate at the space-time level identified as the size scale of the space-time sheet assignable to the particle. This could correspond to the Kähler magnetic flux tube structure associated with the particle, consisting of two flux tubes at parallel space-time sheets and short flux tubes at the ends with length of the order of the CP2 size.

    This rule has far-reaching implications in quantum biology and neuroscience since macroscopic quantum phases become possible: the basic criterion states that a macroscopic quantum phase becomes possible if the density of particles is so high that the particles regarded as Compton length sized objects overlap. Dark matter therefore forms macroscopic quantum phases. One implication is the explanation of the mysterious looking quantal effects of ELF radiation in the EEG frequency range on the vertebrate brain: E=hf implies that the energies for the ordinary value of Planck constant are much below the thermal threshold but a large value of Planck constant changes the situation. Also the phase transitions modifying the value of Planck constant and changing the lengths of flux tubes (by quantum classical correspondence) are crucial, as are also reconnections of the flux tubes.

    The hierarchy of Planck constants suggests also a new interpretation for FQHE (fractional quantum Hall effect) in terms of anyonic phases with non-standard value of effective Planck constant realized in terms of the effective multi-sheeted covering of imbedding space: multi-sheeted space-time is to be distinguished from many-sheeted space-time.

    In astrophysics and cosmology the implications are even more dramatic. It was Nottale who first introduced the notion of gravitational Planck constant as hbargr= GMm/v0, where v0<1 has an interpretation as a velocity parameter in units c=1. This would be true for GMm/v0 ≥ 1. The interpretation of hbargr in the TGD framework is as an effective Planck constant associated with the space-time sheets mediating the gravitational interaction between masses M and m. The huge value of hbargr means that the integer hbargr/hbar0, interpreted as the number of sheets of the covering, is gigantic and that the Universe possesses gravitational quantum coherence in super-astronomical scales for large masses. This changes the view about gravitons and suggests that gravitational radiation is emitted as dark gravitons which decay to pulses of ordinary gravitons, replacing the continuous flow of gravitational radiation.

  3. Why would Nature like to have a large effective value of Planck constant? A possible answer relies on the observation that in perturbation theory the expansion proceeds in powers of the gauge coupling strengths α=g2/4πhbar. If the effective value of hbar replaces its real value, as one might expect to happen for multi-sheeted particles behaving like a single particle, α is scaled down and the perturbative expansion converges for the new particles. One could say that Mother Nature loves theoreticians and comes to the rescue in their attempts to calculate. In quantum gravitation the problem is especially acute since the dimensionless parameter GMm/hbar has a gigantic value. Replacing hbar with hbargr=GMm/v0 the coupling strength becomes v0<1.
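The E = hf argument of item 2 can be made quantitative; the 10 Hz frequency below is an illustrative EEG-band choice, and body temperature is used for the thermal threshold:

```python
# An ELF photon in the EEG range is far below the thermal energy at
# physiological temperature unless the effective Planck constant is
# scaled up by a huge factor.
h  = 6.626e-34        # J s
kB = 1.381e-23        # J/K
f  = 10.0             # Hz, an alpha-band EEG frequency (illustrative)
T  = 310.0            # K, body temperature

E_photon  = h * f
E_thermal = kB * T
print(E_thermal / E_photon)   # ~6e11: the scaling of h_eff needed to reach kT
```

This is the sense in which "a large value of Planck constant changes the situation": with h_eff larger by roughly this factor, a single ELF quantum carries an energy above the thermal threshold.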

Space-time correlates for the hierarchy of Planck constants

The hierarchy of Planck constants was introduced to TGD originally as an additional postulate and formulated as the existence of a hierarchy of imbedding spaces defined as Cartesian products of singular coverings of M4 and CP2 with the numbers of sheets given by integers na and nb, and hbar=nhbar0, n=nanb.

With the advent of zero energy ontology, it became clear that the notion of singular covering space of the imbedding space could only be a convenient auxiliary notion. Singular means that the sheets fuse together at the boundary of the multi-sheeted region. The effective covering space emerges naturally from the vacuum degeneracy of Kähler action meaning that all deformations of canonically imbedded M4 in M4×CP2 have vanishing action up to fourth order in small perturbations. This is clear from the fact that the induced Kähler form is quadratic in the gradients of CP2 coordinates and Kähler action is essentially Maxwell action for the induced Kähler form. The vacuum degeneracy implies that the correspondence between the canonical momentum currents ∂LK/∂(∂αhk) defining the modified gamma matrices and the gradients ∂α hk is not one-to-one. The same canonical momentum current corresponds to several values of the gradients of imbedding space coordinates. At the partonic 2-surfaces at the light-like boundaries of CD carrying the elementary particle quantum numbers this implies that the two normal derivatives of hk are many-valued functions of the canonical momentum currents in normal directions.

A multi-furcation is in question, and multi-furcations are indeed generic in highly non-linear systems; Kähler action is an extreme example of a non-linear system. What does multi-furcation mean in quantum theory? The branches of a multi-furcation are obviously analogous to single particle states. In quantum theory second quantization means that one constructs not only single particle states but also the many-particle states formed from them. At the space-time level single particle states would correspond to the N branches bi of a multi-furcation carrying fermion number. Two-particle states would correspond to a 2-fold covering consisting of 2 branches bi and bj of the multi-furcation. An N-particle state would correspond to an N-sheeted covering with all branches present and carrying elementary particle quantum numbers. The branches coincide at the partonic 2-surface but since their normal space data are different they correspond to different tensor product factors of the state space. Also now the factorization N= nanb occurs but now na and nb would relate to branchings in the direction of the space-like 3-surface and the light-like 3-surface rather than M4 and CP2 as in the original hypothesis.

Multi-furcations relate closely to the quantum criticality of Kähler action. Feigenbaum bifurcations represent a toy example of a system which via successive bifurcations approaches chaos. Now more general multi-furcations, in which each branch of a given multi-furcation can multi-furcate further, are possible unless one poses additional conditions. This allows one to identify an additional aspect of the geometric arrow of time. Either the positive or the negative energy part of the zero energy state is "prepared", meaning that a single n-sub-furcation of an N-furcation is selected. The most general state of this kind involves a superposition of various n-sub-furcations.


Mathematical Speculations Inspired by the Hierarchy of Planck Constants

This chapter contains the purely mathematical speculations about the hierarchy of Planck constants (actually only an effective hierarchy if the recent interpretation is correct) as separate from the material describing the physical ideas, key mathematical concepts, and the basic applications. These mathematical speculations emerged during the first stormy years in the evolution of the ideas about Planck constant and must be taken with a big grain of salt. I feel myself rather conservative as compared to the fellow who produced this stuff 7 years ago. This all is of course very relative. Many readers might experience the recent me as a reckless speculator.

The first speculative question is about a possible relationship between the Jones inclusions of hyperfinite factors of type II1 (hyper-finite factors are von Neumann algebras emerging naturally in the TGD framework) and the hierarchy of Planck constants. The basic idea is that the discrete groups assignable to the inclusions could correspond to the discrete groups acting in the effective covering spaces of the imbedding space assignable to the hierarchy of Planck constants.

There are also speculations relating the hierarchy of Planck constants, McKay correspondence, and Jones inclusions. Even Farey sequences, Riemann hypothesis and N-tangles are discussed. Depending on the reader, these speculations might be experienced as irritating or entertaining. It would be interesting to go through this stuff in the light of the recent understanding of the effective hierarchy of Planck constants to see what portion of it survives.


Negentropy Maximization Principle and TGD Inspired Theory of Consciousness

In the TGD Universe the moments of consciousness are associated with quantum jumps between quantum histories. The proposal is that the dynamics of consciousness is governed by Negentropy Maximization Principle (NMP), which states that the information content of conscious experience is maximal. The formulation of NMP is the basic topic of this chapter.

NMP codes for the dynamics of standard state function reduction and states that the state function reduction process following the U-process gives rise to a maximal reduction of entanglement entropy at each step. In the generic case this implies at each step a decomposition of the system into unique unentangled subsystems, and the process repeats itself for these subsystems. The process stops when the resulting subsystem cannot be decomposed into a pair of free systems, since energy conservation makes the reduction of entanglement kinematically impossible in the case of bound states. The natural assumption is that the self loses consciousness when it entangles via bound state entanglement.

There is an important exception to this vision based on ordinary Shannon entropy. There exists an infinite hierarchy of number theoretical entropies making sense for rational or even algebraic entanglement probabilities. In this case the entanglement entropy can be negative, so that NMP favors the generation of negentropic entanglement (NE), which is not bound state entanglement in the standard sense, since the condition that state function reduction leads to an eigenstate of the density matrix requires the final state density matrix to be a projection operator.
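A minimal sketch of such a number theoretic entropy, assuming the p-adic variant of the Shannon formula Sp = -Σn Pn log(|Pn|p) with the standard p-adic norm (my reading of the definition, so the conventions here are an assumption):

```python
import math
from fractions import Fraction

def p_adic_norm(x: Fraction, p: int) -> Fraction:
    """|x|_p = p**-k, where p**k is the exact power of p dividing x."""
    if x == 0:
        return Fraction(0)
    k = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:
        num //= p; k += 1
    while den % p == 0:
        den //= p; k -= 1
    return Fraction(p) ** -k

def number_theoretic_entropy(probs, p):
    """S_p = -sum_n P_n log(|P_n|_p); unlike Shannon entropy
    this can be negative, i.e. negentropic."""
    return -sum(P * math.log(p_adic_norm(P, p)) for P in probs)

probs = [Fraction(1, 4)] * 4               # maximally entangled 4-dim case
print(number_theoretic_entropy(probs, 2))  # negative: -2 log 2
```

For probabilities Pn = 1/4 the ordinary Shannon entropy is +2 log 2, while the 2-adic entropy is -2 log 2: the same entanglement carries information rather than uncertainty in the number theoretic sense.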

NE might serve as a correlate for emotions like love and experience of understanding. The reduction of ordinary entanglement entropy to a random final state implies second law at the level of the ensemble. For the generation of NE the outcome of the reduction is not random: the prediction is that second law is not a universal truth holding true in all scales. Since number theoretic entropies are natural in the intersection of real and p-adic worlds, this suggests that life resides in this intersection. The existence of effectively bound states with no binding energy might have important implications for understanding the stability of the basic bio-polymers and the key aspects of metabolism. A natural assumption is that the self experiences an expansion of consciousness as it entangles in this manner. Quite generally, an infinite self hierarchy with the entire Universe at the top is predicted.

There are two options to consider. The strong form of NMP would demand maximal negentropy gain: this would not allow morally responsible free will if ethics is defined in terms of evolution as an increase of NE resources. The weak form of NMP would allow the self to choose also a lower-dimensional sub-space of the final state sub-space defined by the projector for the strong form of NMP. The weak form turns out to have several highly desirable consequences: it favours dimensions of the final state space coming as powers of a prime, and in particular dimensions which are primes near powers of a prime: as a special case, the p-adic length scale hypothesis follows. The weak form of NMP allows also quantum computations which halt, unlike the strong form of NMP.
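As a concrete special case of "primes near powers of a prime", the p-adic length scale hypothesis singles out primes p ≈ 2^k, with Mersenne primes 2^k - 1 as the most important examples. The scan below is plain arithmetic, not TGD-specific code:

```python
def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Exponents k for which 2^k - 1 is a Mersenne prime:
mersenne_exponents = [k for k in range(2, 20) if is_prime(2**k - 1)]
print(mersenne_exponents)   # [2, 3, 5, 7, 13, 17, 19]
```

Each such prime defines a preferred p-adic length scale; note that k = 11 is absent since 2^11 - 1 = 2047 = 23 × 89 is composite.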

Besides number theoretic negentropies there are also other new elements as compared to the earlier formulation of NMP.

  1. ZEO modifies dramatically the formulation of NMP since the U-matrix acts between zero energy states and can be regarded as a collection of orthonormal M-matrices, which generalize the ordinary S-matrix and define what might be called a complex square root of the density matrix, so that a kind of square root of thermodynamics at the single particle level, justifying also the p-adic mass calculations based on p-adic thermodynamics, is in question.
  2. The hierarchy of Planck constants labelling a hierarchy of quantum criticalities is a further new element having important implications for consciousness and biology.
  3. Hyper-finite factors of type II1 represent an additional technical complication requiring separate treatment of NMP taking into account finite measurement resolution realized in terms of inclusions of these factors.
NMP has a wide range of important implications.
  1. In particular, one must give up the standard view about second law and replace it with NMP, taking into account the hierarchy of CDs assigned with ZEO and the dark matter hierarchy labelled by the values of Planck constants, as well as the effects due to NE. The breaking of second law in the standard sense is expected to take place and to be crucial for the understanding of evolution.
  2. Self hierarchy, having the hierarchy of CDs as an imbedding space correlate, leads naturally to a description of the contents of consciousness analogous to thermodynamics, except that the entropy is replaced with negentropy.
  3. In the case of living matter NMP allows one to understand the origin of metabolism. NMP demands that the self generates somehow negentropy: otherwise a state function reduction to the opposite boundary of CD takes place and means death and re-incarnation of the self. Metabolism as gathering of nutrients, which by definition carry NE, is the manner to avoid this fate. This leads to a vision about the role of NE in the generation of sensory qualia and a connection with metabolism. Metabolites would carry NE and each metabolite would correspond to a particular quale (not only energy but also other quantum numbers would correspond to metabolites). That primary qualia would be associated with nutrient flow is not actually surprising!
  4. NE leads to a vision about cognition. Negentropically entangled state consisting of a superposition of pairs can be interpreted as a conscious abstraction or rule: negentropically entangled Schrödinger cat knows that it is better to keep the bottle closed.
  5. NMP implies continual generation of NE. One might refer to this ever expanding universal library as "Akashic records". NE could be experienced directly during the repeated state function reductions to the passive boundary of CD - that is during the life cycle of the sub-self defining the mental image. Another, less feasible option is that an interaction free measurement is required to assign a conscious experience to NE. As mentioned, the qualia characterizing the metabolite carrying the NE could characterize this conscious experience.
  6. A connection of NE with fuzzy qubits and quantum groups is highly suggestive. The implications are highly non-trivial also for quantum computation, allowed by the weak form of NMP, since NE is by definition stable and lasts the lifetime of the self in question.

Back to the table of contents

Quantum criticality and dark matter

Quantum criticality is one of the cornerstone assumptions of TGD. The value of the Kähler coupling strength fixes quantum TGD and is analogous to a critical temperature. The TGD Universe would be quantum critical. What this means is, however, far from obvious, and I have pondered the notion repeatedly both from the point of view of mathematical description and of phenomenology. Superfluids exhibit rather mysterious looking effects, such as the fountain effect, and what looks like quantum coherence between superfluid containers which should be classically isolated. These findings serve as a motivation for the proposal that the genuine superfluid portion of a superfluid corresponds to a large heff phase, at least near criticality, and that also in other phase-transition-like phenomena a transition to a dark phase occurs in the vicinity of criticality.

Back to the table of contents

About Nottale's formula for hgr and the possibility that Planck length lP and CP2 length R are identical, giving G= R²/ℏeff

Nottale's formula for the gravitational Planck constant ℏgr= GMm/v0 involves a parameter v0 with dimensions of velocity. I have worked with the quantum interpretation of the formula, but the physical origin of v0 - or equivalently of the dimensionless parameter β0=v0/c (to be used in the sequel) - has remained open hitherto. In this chapter a possible interpretation based on the many-sheeted space-time concept, many-sheeted cosmology, and zero energy ontology (ZEO) is discussed. In ZEO the non-changing parts of zero energy states are assigned to the passive boundary of the CD, and β0 should be assigned to it.
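As an order-of-magnitude illustration of how enormous ℏgr is compared with ℏ, one can evaluate Nottale's formula for the Sun-Earth system. The value v0 ≈ 145 km/s is Nottale's fitted value and is taken here as an assumption:

```python
# Order-of-magnitude evaluation of Nottale's hbar_gr = G*M*m/v0 for the
# Sun-Earth system. v0 ~ 1.45e5 m/s is Nottale's fitted value (assumption).
G = 6.674e-11        # m^3 kg^-1 s^-2
hbar = 1.055e-34     # J s
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg
m_earth = 5.972e24   # kg
v0 = 1.447e5         # m/s (assumed Nottale value)

hbar_gr = G * M_sun * m_earth / v0
n = hbar_gr / hbar           # ratio hbar_gr/hbar: astronomically large
beta0 = v0 / c               # the dimensionless parameter of the text
print(f"beta0 = {beta0:.2e}, hbar_gr/hbar = {n:.2e}")
```

The ratio ℏgr/ℏ comes out of order 10⁷³, illustrating why ℏgr can only characterize the magnetic body of the system rather than ordinary quanta.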

There are two measures for the size of the system. The M4 size LM4 is identifiable as the maximum of the radial M4 distance from the tip of the CD associated with the center of mass of the system, along the light-like geodesic at the boundary of the CD. The system also has a size Lind defined in terms of the induced metric of the space-time surface, which is space-like at the boundary of CD. One has Lind<LH. The identification β0= LM4/LH does not allow the identification LH=LM4. LH would however naturally correspond to the size of the magnetic body of the system, in turn identifiable as the size of the CD.

One can deduce an estimate for β0 by approximating the space-time surface as a Robertson-Walker cosmology, expected to be a good approximation near the passive light-like boundary of the CD. The resulting formula is tested for the planetary system and for Earth. The dark matter assignable to Earth can be identified as the innermost part of the inner core, with a volume which is .01 per cent of the volume of Earth. Also the consistency of the Bohr quantization for dark and ordinary matter is discussed and leads to a number-theoretical condition on the ratio of the ordinary and dark masses.

β0/4π is analogous to a gravitational fine structure constant for heff=hgr. Could one see it as a fundamental coupling parameter appearing also in other interactions at quantum criticality, where the ordinary perturbation series diverges? Remarkably, the value of G does not appear at all in the perturbative expansion in this regime! Could G have several values? This suggests the generalization G= lP²/ℏ → G= R²/ℏeff, so that G would indeed have a spectrum, and that the Planck length lP would be equal to the CP2 radius R, so that only one fundamental length would be associated with twistorialization. The ordinary Newton's constant would be given by G= R²/ℏeff with heff/h0 having a value in the range 10⁷-10⁸.

The second topic of the chapter relates to the fact that measurements of G give differing results, with differences between measurements larger than the measurement accuracy. This suggests that some new physics might be involved. In the TGD framework the hierarchy of Planck constants heff=nh0, h=6h0, together with the condition that the theory contains the CP2 size scale R as its only fundamental length scale, suggests that Newton's constant is given by G= R²/ℏeff, where R replaces the Planck length ( lP= (ℏ G)^1/2 → lP=R) and ℏeff/h is in the range 10⁶-10⁷. The spectrum of Newton's constant is consistent with Newton's equations if the scaling of ℏeff inducing the scaling of G is accompanied by an opposite scaling of the M4 coordinates in M4× CP2: the dark matter hierarchy would correspond to a discrete hierarchy of scales given by a breaking of scale invariance. In the special case heff=hgr=GMm/v0, quantum critical dynamics has the gravitational fine structure constant (v0/c)/4π as coupling constant, and it has no dependence on the value of G or on the masses M and m.
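Inverting the relation G = R²/ℏeff together with G = lP²/ℏ gives R = lP (ℏeff/ℏ)^(1/2). A minimal sketch, using the range 10⁶-10⁷ quoted in this paragraph and the standard Planck length:

```python
# Sketch: if G = lP^2/hbar = R^2/hbar_eff, then R = lP * sqrt(hbar_eff/hbar).
# The ratio range 1e6-1e7 is taken from the text; lP is the standard
# Planck length.
lP = 1.616e-35  # Planck length, m

for ratio in (1e6, 1e7):
    R = lP * ratio ** 0.5
    print(f"hbar_eff/hbar = {ratio:.0e}  ->  R = {R:.2e} m")
```

The CP2 radius R would then be a few thousand Planck lengths, of order 10⁻³² m.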

In this chapter I consider a possible interpretation, in terms of varying heff, for the finding of a Chinese research group which has measured two values of G differing by 47 ppm. Also a model for the fountain effect of superfluidity is discussed, as de-localization of the wave function and an increase of the maximal height of the vertical orbit due to the change of the gravitational acceleration g at the surface of Earth induced by a change of heff due to superfluidity. Also the Podkletnov effect is considered. TGD inspired theory of consciousness allows one to speculate about levitation experiences, possibly induced by the modification of Geff at the flux tubes of some part of the magnetic body accompanying the biological body in TGD based quantum biology.

Back to the table of contents

TGD View about Quasars

The work of Rudolph Schild and his colleagues Darryl Letier and Stanley Robertson (among others) suggests that quasars are not supermassive blackholes but something else - MECOs, magnetic eternally collapsing objects having no horizon and possessing magnetic moment. Schild et al argue that the same applies to galactic blackhole candidates and active galactic nuclei, perhaps even to ordinary blackholes as Abhas Mitra, the developer of the notion of MECO proposes.

In the sequel a TGD inspired view about quasars is proposed, relying on the general model for how galaxies are generated as the energy of thickened cosmic strings decays to ordinary matter. Quasars would not be blackhole like objects but would serve as an analog of the decay of the inflaton field, producing the galactic matter. The energy of the string like object would replace galactic dark matter and automatically predicts a flat velocity spectrum.

TGD is assumed to have the standard model and GRT as its QFT limit in long length scales. Could MECOs provide this limit? It seems that the answer is negative: MECOs represent still-collapsing objects. The energy of the inflaton field is replaced with the sum of the magnetic energy of the cosmic string and a negative volume energy, both of which decrease as the thickness of the flux tube increases. The liberated energy transforms to ordinary particles and their dark variants in the TGD sense. Time reversal of a blackhole would be a more appropriate interpretation. One can of course ask whether the blackhole candidates in galactic nuclei are time reversals of quasars in the TGD sense.

The writing of the article also led to considerable understanding of two key aspects of TGD. The understanding of the twistor lift and the p-adic evolution of the cosmological constant improved considerably. Also the understanding of the gravitational Planck constant and of the notion of space-time as a covering space became much more detailed, in turn allowing a much more refined view about the anatomy of the magnetic body.

Back to the table of contents

Holography and Quantum Error Correcting Codes: TGD View

Preskill et al. suggest a highly interesting representation of holography in terms of quantum error correction codes. The idea is that the time= constant section of AdS, which is a hyperbolic space allowing tessellations, can define tensor networks. So called perfect tensors are the building bricks of the tensor networks providing a representation for holography and at the same time defining error correcting codes by mapping localized interior states (logical qubits) to highly entangled non-local boundary states (physical qubits).

There are three observations that set bells ringing and actually motivated this article.

  1. Perfect tensors define entanglement which in the TGD framework corresponds to negentropic entanglement, playing a key role in the TGD inspired theory of consciousness and of living matter.
  2. In the TGD framework the hyperbolic tessellations are realized at hyperbolic spaces H3(a) defining the light-cone proper time hyperboloids of the M4 light-cone.
  3. TGD replaces AdS/CFT correspondence with strong form of holography.
A very attractive idea is that in living matter magnetic flux tube networks defining quantum computational networks provide a realization of tensor networks, realizing also the holographic error correction mechanism: negentropic entanglement - perfect tensors - would be the key element. As I have proposed, these flux tube networks would define a kind of central nervous system making it possible for living matter to consciously experience its biological body using the magnetic body.

These networks would also give rise to the counterpart of condensed matter physics of dark matter at the level of the magnetic body: the replacement of lattices based on subgroups of the translation group with an infinite number of tessellations means that this analog of condensed matter physics describes quantum complexity.

Back to the table of contents


Recent Status of Lepto-Hadron Hypothesis

TGD strongly suggests the existence of lepto-hadron physics. Lepto-hadrons are bound states of color excited leptons, and the anomalous production of e+e- pairs in heavy ion collisions finds a nice explanation as resulting from the decays of lepto-hadrons with basic condensate level k=127 and having a typical mass scale of one MeV. The recent indications of the existence of a new fermion with the quantum numbers of the muon neutrino and the anomaly observed in the decay of ortho-positronium give further support for the lepto-hadron hypothesis. There is also evidence for anomalous production of low energy photons and e+e- pairs in hadronic collisions. The previous work (which contained some errors) is summarized and developed further.

The identification of leptohadrons as a particular instance in the predicted hierarchy of dark matters, interacting directly only via graviton exchange, allows one to circumvent the lethal counter-arguments against the leptohadron hypothesis (Z0 decay width and production of colored lepton jets in e+e- annihilation) even without the assumption about the loss of asymptotic freedom.

The PCAC hypothesis and its σ model realization lead to a model containing only the coupling of the lepto-pion to the axial vector current as a free parameter. The prediction for the e+e- production cross section is of the correct order of magnitude only provided one assumes that lepto-pions first decay to a lepto-nucleon pair eex+eex-, and that the lepto-nucleons, having the quantum numbers of the electron and a mass only slightly larger than the electron mass, decay to lepton and photon. The peculiar production characteristics are correctly predicted. There is some evidence that the resonances decay to a final state containing n>2 particles, and an experimental demonstration that lepto-nucleon pairs are indeed in question would be a breakthrough for TGD.

During the 18 years after the first published version of the model, evidence for colored μ has also emerged. Towards the end of 2008 the CDF anomaly gave strong support for the colored excitation of τ. The lifetime of the light long-lived state identified as a charged τ-pion comes out correctly, and the identification of the reported 3 new particles as p-adically scaled up variants of the neutral τ-pion predicts their masses correctly. The observed muon jets can be understood in terms of the special reaction kinematics for the decays of the neutral τ-pion to 3 τ-pions with a mass scale smaller by a factor 1/2 and therefore almost at rest. A spectrum of new particles is predicted. The discussion of the CDF anomaly led to a modification and generalization of the original model for lepto-pion production, and the predicted production cross section is consistent with the experimental estimate.

Back to the table of contents

TGD and Nuclear Physics

This chapter is devoted to the possible implications of TGD for nuclear physics. In the original version of the chapter the focus was on the attempt to resolve the problems caused by the incorrect interpretation of the predicted long ranged weak gauge fields. What seems to be a breakthrough in this respect came only quite recently (2005), more than a decade after the first version of this chapter, and is based on the TGD based view about dark matter inspired by developments in the mathematical understanding of quantum TGD. In this approach condensed matter nuclei can either be ordinary, that is behave essentially like standard model nuclei, or be in a dark matter phase, in which case they generate long ranged dark weak gauge fields responsible for the large parity breaking effects in living matter. This approach trivially resolves the objections against long range classical weak fields.

The basic criterion for the transition to the dark matter phase, having by definition a large value of hbar, is the condition α Q1Q2≈1 for the appropriate gauge interaction, expressing the fact that the perturbation series does not converge. The increase of hbar makes the perturbation series convergent, since the value of α is reduced, but leaves the lowest order classical predictions invariant.
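A minimal numerical sketch of this criterion for the electromagnetic case: with α ≈ 1/137 the condition αQ1Q2 ≈ 1 is reached for a charge product of order 137, and since α scales as 1/ℏ, a scaling ℏ → kℏ with k ≈ αQ1Q2 restores convergence. The charge product 1000 below is an arbitrary illustrative value:

```python
# Criticality criterion alpha*Q1*Q2 ~ 1 for the em interaction, and the
# minimal hbar scaling restoring perturbative convergence.
alpha_em = 1 / 137.036  # fine structure constant

def critical_charge_product(alpha):
    """Charge product Q1*Q2 at which alpha*Q1*Q2 = 1."""
    return 1 / alpha

Zc = critical_charge_product(alpha_em)  # ~137 for em interactions

# Scaling hbar -> k*hbar scales alpha -> alpha/k; for a charge product
# Z1Z2 the minimal k giving (alpha/k)*Z1Z2 < 1 is k = alpha*Z1Z2.
Z1Z2 = 1000  # illustrative value only
k_min = alpha_em * Z1Z2
print(f"critical Q1*Q2 ~ {Zc:.0f}, minimal hbar scaling for Q1*Q2={Z1Z2}: {k_min:.1f}")
```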

This criterion can be applied to color force and inspires the hypothesis that valence quarks inside nucleons correspond to large hbar phase whereas sea quark space-time sheets correspond to the ordinary value of hbar. This hypothesis is combined with the earlier model of strong nuclear force based on the assumption that long color bonds with p-adically scaled down quarks with mass of order MeV at their ends are responsible for the nuclear strong force.

1. Is strong force due to color bonds between exotic quark pairs?

The basic assumptions are the following.

  1. Valence quarks correspond to a large hbar phase with p-adic length scale L(keff=129)= L(107)/v0≈ 2¹¹L(107)≈ 5× 10⁻¹² m, whereas sea quarks correspond to ordinary hbar and define the standard size of nucleons.

  2. Color bonds with length of order L(127)≈ 2.5× 10⁻¹² m, and having quarks with ordinary hbar and p-adically scaled down masses mq(dark)≈ v0mq at their ends, define a kind of rubber bands connecting nucleons. The p-adic length scale of exotic quarks differs by a factor 2 from that of dark valence quarks, so that the length scales in question can couple naturally. This large length scale, like other p-adic length scales, corresponds to the size of the topologically quantized field body associated with the system, be it quark, nucleon, or nucleus.
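The quoted scales follow from the p-adic length scale hypothesis L(k) = 2^((k-107)/2) L(107). A quick consistency check, taking L(107) ≈ 2.4×10⁻¹⁵ m (nucleon size) as the assumed input:

```python
# p-adic length scale hypothesis: L(k) = 2**((k-107)/2) * L(107).
# L(107) ~ 2.4e-15 m (nucleon scale) is an assumed input value here.
L107 = 2.4e-15  # m (assumption)

def L(k):
    """p-adic length scale for prime p ~ 2^k."""
    return 2 ** ((k - 107) / 2) * L107

print(f"L(129) = {L(129):.1e} m")      # ~5e-12 m: dark valence quark scale
print(f"L(127) = {L(127):.1e} m")      # ~2.5e-12 m: color bond scale
print(f"L(129)/L(127) = {L(129)/L(127)}")  # the factor 2 quoted in the text
```

Both quoted numbers, and the factor-2 relation between the two scales, come out consistently.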

Valence quarks and even exotic quarks can be dark with respect to both color and weak interactions, but not with respect to electromagnetic interactions. The model for binding energies suggests darkness with respect to weak interactions, with weak boson masses scaled down by a factor v0. Weak interactions still remain weak. Quarks and nucleons, as defined by their k=107 sea quark portions, would condense at a scaled up weak space-time sheet with keff=111 having p-adic size 10⁻¹⁴ m. The estimate for the atomic number of the heaviest possible nucleus comes out correctly.

The wave functions of the nucleons fix the boundary values of the wave functionals of the color magnetic flux tubes idealizable as strings. In the terminology of M-theory nucleons correspond to small branes and color magnetic flux tubes to strings connecting them.

2. General features of strong interactions

This picture allows one to understand the general features of strong interactions.

  • Quantum classical correspondence and the assumption that the relevant space-time surfaces have 2-dimensional CP2 projection imply Abelianization. The strong isospin group can be identified as the SU(2) subgroup of the color group acting as isotropies of space-time surfaces, and the U(1) holonomy of the color gauge potential defines a preferred direction of strong isospin. Exotic color isospin corresponds to strong isospin. The correlation of exotic color with the weak isospin of the nucleon is strongly suggested by quantum classical correspondence.

  • Both color singlet spin 0 pion type bonds and colored spin 1 bonds are allowed; in the latter case the color magnetic spin-spin interaction between the exotic quark and anti-quark is negative. p-p and n-n bonds correspond to oppositely colored spin 1 bonds, and p-n bonds to colorless spin 0 bonds for which the binding energy is 3 times higher. The presence of colored bonds forces the presence of a neutralizing dark gluon condensate favoring states with N-P>0.

  • Shell model based on harmonic oscillator potential follows naturally from this picture in which the magnetic flux tubes connecting nucleons take the role of springs. Spin-orbit interaction can be understood in terms of the color force in the same way as it is understood in atomic physics.

3. Nuclear binding energies

  • The binding energies per nucleon for A≤ 4 nuclei can be understood if they form closed string like structures, nuclear strings, so that only two color bonds per nucleon are possible. This could be understood if dark valence quarks and exotic quarks, possessing much smaller mass, behave as if they were identical fermions. p-Adic mass calculations support this assumption. Also the average behavior of the binding energy for heavier nuclei is predicted correctly.

  • For nuclei with P=N all color bonds can be pion type bonds and they have thus maximal color magnetic spin-spin interaction energy. The increase of color Coulombic binding energy between colored exotic quark pairs and dark gluons however favors N>P and explains also the formation of neutron halo outside k=111 space-time sheet.

  • Spin-orbit interaction provides the standard explanation for magic numbers. If the maximum of the binding energy per nucleon is taken as a criterion for magic, also Z=N=4,6,12 are magic. The alternative TGD based explanation for magic numbers Z=N=4,6,8,12,20 would be in terms of regular Platonic solids. Experimentally also other magic numbers such as N=14,16,30,32 are known for neutrons. The linking of nuclear strings provides a possible mechanism producing new magic nuclei from lighter magic nuclei and could explain these magic numbers and provide an alternative explanation for higher shell model magic numbers 28,50,82,126.

4. Stringy description of nuclear reactions

The view about nucleus as a collection of linked nuclear strings suggests a stringy description of nuclear reactions. Microscopically the nuclear reactions would correspond to re-distribution of exotic quarks between the nucleons in reacting nuclei.

5. Anomalies and new nuclear physics

The TGD based explanation of the neutron halo has already been mentioned. The recently observed tetra-neutron states are difficult to understand in the standard nuclear physics framework, since Fermi statistics does not allow such a state. The identification of the tetra-neutron as an alpha particle containing two negatively charged color bonds allows one to circumvent the problem. A large variety of exotic nuclei containing charged color bonds is predicted.

The proposed model explains the anomaly associated with tritium beta decay. What has been observed is that the spectrum intensity of electrons has a narrow bump near the endpoint energy. Also the maximum energy E0 of electrons is shifted downwards. I have considered two explanations for the anomaly. The original models were TGD variants of models involving a belt of dark neutrinos or antineutrinos along the orbit of Earth. Only recently (towards the end of the year 2008) did I realize that the nuclear string model provides a much more elegant explanation of the anomaly and also has the potential to explain much more general anomalies.

Cold fusion has not been taken seriously by the physics community, but the situation has begun to change gradually. There is increasing evidence for the occurrence of nuclear transmutations of heavier elements besides the production of 4He and 3H, whereas the production rate of 3He and neutrons is very low. These characteristics are not consistent with standard nuclear physics predictions. Also the Coulomb wall, the absence of gamma rays, and the lack of a mechanism transferring nuclear energy to the electrolyte have been used as arguments against cold fusion. The TGD based model relying on the notion of charged color bonds explains the anomalous characteristics of cold fusion.

Back to the table of contents

Nuclear String Hypothesis

The nuclear string model, in the form discussed in this chapter, now allows one to understand the nuclear binding energies of both A>4 and A≤4 nuclei in terms of three fractal variants of QCD. The model also explains giant resonances and so-called pygmy resonances in terms of de-coherence of Bose-Einstein condensates of exotic pion like color bosons into sub-condensates.

1. Background

Nuclear string hypothesis is one of the most dramatic almost-predictions of TGD. The hypothesis in its original form assumes that nucleons inside the nucleus organize into closed nuclear strings, with neighboring nucleons of the string connected by exotic meson bonds consisting of a color magnetic flux tube with quark and anti-quark at its ends. The lengths of the flux tubes correspond to the p-adic length scale of the electron, and therefore the mass scale of the exotic mesons is around 1 MeV, in accordance with the general scale of nuclear binding energies. The long lengths of the em flux tubes increase the distance between nucleons and reduce the Coulomb repulsion.

A fractally scaled up variant of ordinary QCD with respect to the p-adic length scale would be in question, and the usual wisdom about ordinary pions and other mesons as the origin of the nuclear force would simply be wrong in the TGD framework, as the large mass scale of the ordinary pion indeed suggests. The presence of exotic light mesons in nuclei has been proposed also by Chris Illert, based on evidence for charge fractionization effects in nuclear decays.

2. A>4 nuclei as nuclear strings consisting of A< 4 nuclei

During the last weeks a more refined version of the nuclear string hypothesis has evolved.

  1. The first refinement of the hypothesis is that 4He nuclei and A<4 nuclei, and possibly also nucleons, appear as the basic building blocks of nuclear strings; these building blocks can in turn be regarded as strings of nucleons. The large number of stable lightest isotopes of the form A=4n supports the hypothesis that the number of 4He nuclei is maximal. Even the weak decay characteristics might be reduced to those for A<4 nuclei using this hypothesis.

  2. One can understand the behavior of nuclear binding energies surprisingly well from the assumptions that the total strong binding energy associated with A≤ 4 building blocks is additive for nuclear strings, and that the addition of neutrons tends to reduce the Coulombic energy per string length by increasing the length of the nuclear string, implying increased binding energy and stabilization of the nucleus.

  3. In the TGD framework the tetra-neutron is interpreted as a variant of the alpha particle obtained by replacing two meson-like stringy bonds connecting neighboring nucleons of the nuclear string with their negatively charged variants. For heavier nuclei the tetra-neutron is needed as an additional building brick, and the local maxima of the binding energy EB per nucleon as a function of neutron number are consistent with the presence of tetra-neutrons. The additivity of magic numbers 2, 8, 20, 28, 50, 82, 126 predicted by the nuclear string hypothesis is also consistent with experimental facts, and new magic numbers are predicted.

3. Bose-Einstein condensation of color bonds as a mechanism of nuclear binding

The attempt to understand the variation of the nuclear binding energy and its maximum for Fe leads to a quantitative model of nuclei lighter than Fe as color bound Bose-Einstein condensates of 4He nuclei, or rather, of pion like colored states associated with the color flux tubes connecting the 4He nuclei.

  1. The crucial element of the model is that the color contribution to the binding energy is proportional to n², where n is the number of color bonds. Fermi statistics explains the reduction of EB for nuclei heavier than Fe. A detailed estimate favors the harmonic oscillator model over the free nucleon model, with the oscillator strength having an interpretation in terms of string tension.

  2. A fractal scaling argument allows one to understand 4He and lighter nuclei as strings formed from nucleons bound together by color bonds. Three fractally scaled variants of QCD, corresponding to A>4 nuclei, A=4 nuclei and A<4 nuclei, are thus involved. The binding energies of the lighter nuclei are also predicted surprisingly accurately by applying simple p-adic scaling to the parameters of the model for the electromagnetic and color binding energies in heavier nuclei.

4. Giant dipole resonance as de-coherence of Bose-Einstein condensate of color bonds

Giant (dipole) resonances and so-called pygmy resonances, interpreted in terms of de-coherence of the Bose-Einstein condensates associated with A≤ 4 nuclei and with the nuclear string formed from A≤ 4 nuclei, provide a unique test for the model. The key observation is that the splitting of the Bose-Einstein condensate into pieces costs a precisely defined energy due to the n² dependence of the total binding energy.

  1. For 4He de-coherence the model predicts a singlet line at 12.74 MeV and a triplet (25.48, 27.30, 29.12) MeV at ≈ 27 MeV, spanning a 4 MeV wide range, which is of the same order as the width of the giant dipole resonance for nuclei with full shells.

  2. The de-coherence at the level of the nuclear string predicts 1 MeV wide bands 1.4 MeV above the basic lines. The bands decompose into lines with precisely predicted energies, which also contribute to the width. The predictions are in surprisingly good agreement with experimental values. The so-called pygmy resonance appearing in neutron rich nuclei can be understood as de-coherence for A=3 nuclei. A doublet (7.52, 8.46) MeV at ≈ 8 MeV is predicted. At least the prediction for the position is correct.
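A quick arithmetic sanity check of the quoted line positions (this checks only the stated numbers, not the underlying n² model): the lowest triplet line is twice the singlet, the triplet is equally spaced and spans roughly 4 MeV, and the pygmy doublet centers at about 8 MeV:

```python
# Quoted de-coherence line positions (MeV) from the text.
singlet = 12.74                  # 4He de-coherence singlet
triplet = (25.48, 27.30, 29.12)  # 4He de-coherence triplet
doublet = (7.52, 8.46)           # pygmy resonance doublet (A=3)

assert abs(triplet[0] - 2 * singlet) < 1e-9      # lowest line = 2 x singlet
spacings = [b - a for a, b in zip(triplet, triplet[1:])]
span = triplet[-1] - triplet[0]
center = sum(doublet) / 2
print(f"triplet spacings = {spacings}")          # equal ~1.82 MeV steps
print(f"triplet span = {span:.2f} MeV")          # ~3.6 MeV ("4 MeV wide")
print(f"doublet center = {center:.2f} MeV")      # ~8 MeV
```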

5. Dark nuclear strings as analogs of DNA-, RNA- and amino-acid sequences and baryonic realization of genetic code

A speculative picture proposing a connection between homeopathy, water memory, and the phantom DNA effect is discussed, and on the basis of this connection a vision about how the tqc hardware represented by the genome is actively developed by subjecting it to evolutionary pressures represented by a virtual world representation of the physical environment. The speculation inspired by this vision is that the genetic code as well as DNA-, RNA- and amino-acid sequences should have a representation in terms of nuclear strings. The model for dark baryons indeed leads to an identification of these analogs, and the basic numbers of the genetic code, including the numbers of amino acids coded by a given number of codons, are predicted correctly. Hence it seems that the genetic code is universal rather than being an accidental outcome of biological evolution.

Back to the table of contents

Cold Fusion Again

During the years I have developed two models of cold fusion, and in this chapter these models are combined. The basic idea of the TGD based model of cold fusion is that cold fusion occurs in two steps. First dark nuclei (large heff=n× h) with much lower binding energy than ordinary nuclei are formed at magnetic flux tubes, possibly carrying monopole flux. These nuclei can leak out of the system along the magnetic flux tubes. Under some circumstances these dark nuclei can transform to ordinary nuclei and give rise to detectable fusion products.

An essential additional condition is that the dark protons can decay to neutrons rapidly enough by exchanges of dark weak bosons, which are effectively massless below the atomic length scale. This allows one to overcome the Coulomb wall and explains why the final state nuclei are stable and why the decay to ordinary nuclei does not yield only protons. Thus it seems that this model, combined with the TGD variant of the Widom-Larsen model, could explain the existing data nicely.

I will describe the steps leading to the TGD inspired model for cold fusion, combining the earlier TGD variant of the Widom-Larsen model with the model inspired by the TGD based model of Pollack's fourth phase of water, using as input data the findings from laser pulse induced cold fusion discovered by Leif Holmlid and collaborators. I consider briefly also alternative options (models assuming surface plasmon polariton and heavy electron). After that I apply the TGD inspired model to some cases (Pons-Fleischmann effect, bubble fusion, and LeClair effect). The model explains the strange findings about cold fusion - in particular the fact that only stable nuclei are produced - and suggests that also ordinary nuclear reactions might have a more fundamental description in terms of a similar model.

Back to the table of contents

Dark Nuclear Physics and Condensed Matter

In this chapter the possible effects of dark matter in nuclear physics and condensed matter physics are considered. The spirit of the discussion is necessarily rather speculative, since the vision about the hierarchy of Planck constants is only 5 years old. The most general form of the hierarchy would involve both singular coverings and factor spaces of CD (causal diamond of M4, defined as the intersection of future and past directed light-cones) and CP2. There are grave objections against the allowance of factor spaces; in this case the Planck constant could be smaller than its standard value, and there are very few experimental indications for this. Quite recently came the realization that the hierarchy of Planck constants might emerge from the basic quantum TGD as a consequence of the extreme non-linearity of the field equations, implying that the correspondence between the derivatives of imbedding space coordinates and canonical momenta is many-to-one. This makes natural the introduction of covering spaces of CD and CP2. The Planck constant would be effectively replaced with a multiple of the ordinary Planck constant defined by the number of sheets of the covering. The space-like 3-surfaces at the ends of the causal diamond and the light-like 3-surfaces defined by wormhole throats carrying elementary particle quantum numbers would be quantum critical in the sense of being unstable against decay to many-sheeted structures. Charge fractionization could be understood in this scenario. Biological evolution would have the increase of the Planck constant as one aspect. The crucial scaling of the size of CD by Planck constant can be justified by a simple argument. Note that primary p-adic length scales would scale as ℏ^(1/2) rather than ℏ, as assumed in the original model.

1. What does darkness mean?

Dark matter is identified as matter with a non-standard value of Planck constant. The weak form of darkness is that only some field bodies of the particle, consisting of flux quanta mediating bound state interactions between particles, become dark. One can assign to each interaction a field body (em, Z0, W, gluonic, gravitational), and a p-adic prime and the value of Planck constant characterize the size of the particular field body. One might even think that particle mass can be assigned to its em field body and that the Compton length of the particle corresponds to the size scale of the em field body. Complex combinations of dark field bodies become possible, and the dream is that one could understand various phases of matter in terms of these combinations.

Nuclear string model suggests that the color flux tubes and weak flux quanta associated with nuclei can become dark in this sense and have a size of order atomic radius, so that dark nuclear physics would have direct relevance for condensed matter physics. If this happens, it becomes impossible to make a reductionistic separation between nuclear physics, condensed matter physics, and chemistry anymore.

2. What are dark nucleons?

The basic hypothesis is that nuclei can make a phase transition to a dark phase in which the size of both quarks and nuclei is measured in Angstroms. For the less radical option this transition could happen only for the color, weak, and em field bodies. Super-nuclei consisting of protons connected by dark color bonds, with inter-nucleon distance of order atomic radius, might be crucial for understanding the properties of water and perhaps even the properties of ordinary condensed matter. A large hbar phase for the weak field body of D and Pd nuclei with the size scale of an atom would explain the selection rules of cold fusion.

3. Anomalous properties of water and dark nuclear physics

Direct support for the partial darkness of water comes from the chemical formula H1.5O supported by neutron and electron diffraction in attosecond time scale. The explanation would be that one fourth of the protons combine to form super-nuclei, with protons connected by color bonds and having distance considerably larger than atomic radius.

The crucial property of water is the presence of molecular clusters. Tetrahedral clusters allow an interpretation in terms of magic Z=8 protonic dark nuclei. The icosahedral clusters consisting of 20 tetrahedral clusters in turn have an interpretation as magic dark dark nuclei: the presence of the dark dark matter would explain a large portion of the anomalies associated with water and the unique role of water in biology. In living matter also higher levels of the dark matter hierarchy are predicted to be present. The observed nuclear transmutations suggest that also light weak bosons are present.

4. Implications of the partial darkness of condensed matter

The model for partially dark condensed matter inspired by the nuclear string model and the model of cold fusion inspired by it allows one to understand the low compressibility of condensed matter as being due to the repulsive weak force between exotic quarks, explains large parity breaking effects in living matter, and suggests a profound modification of the notion of chemical bond with most important implications for bio-chemistry and the understanding of bio-chemical evolution.

Back to the table of contents

Dark Forces and Living Matter

The unavoidable presence of classical long ranged weak (and also color) gauge fields in TGD Universe has been a continual source of worries for more than two decades. The basic question has been whether the Z0 charges of elementary particles are screened in electro-weak length scale or not. The same question must be raised in the case of color charges. For a long time the hypothesis was that the charges are fed to larger space-time sheets in this length scale rather than screened by vacuum charges, so that an effective screening results in electro-weak length scale. This hypothesis turned out to be a failure and was replaced with the idea that the non-linearity of field equations (only the topological half of Maxwell's equations holds true) implies the generation of vacuum charge densities responsible for the screening.

The weak form of electric-magnetic duality led to the identification of the long-sought mechanism causing the weak screening in electroweak scales. The basic implication of the duality is that the Kähler electric charges of wormhole throats representing particles are proportional to their Kähler magnetic charges, so that the CP2 projections of the wormhole throats are homologically non-trivial. The Kähler magnetic charges do not create long range monopole fields if they are neutralized by wormhole throats carrying opposite monopole charges and a weak isospin neutralizing the axial isospin of the particle's wormhole throat. One could speak of confinement of weak isospin. The weak field bodies of elementary fermions would be replaced with string like objects with a length of order W boson Compton length. Electro-magnetic flux would be fed to the electromagnetic field body, from which it would be fed further to larger space-time sheets. A similar mechanism could apply in the case of color quantum numbers. Weak charges would therefore be screened for ordinary matter in electro-weak length scale, whereas dark electro-weak bosons would correspond to a much longer symmetry breaking length scale for the weak field body. Large values of Planck constant would make it possible to zoom up elementary particles and study their internal structure without any need for gigantic accelerators.

In this chapter possible implications of the dark weak force for the understanding of living matter are discussed. The basic question is how classical Z0 fields could make themselves visible. Large parity breaking effects in living matter suggest in which direction one should look for the answer. One possible answer is based on the observation that for vacuum extremals classical electromagnetic and Z0 fields are proportional to each other, which means that the standard electromagnetic couplings of dark fermions are replaced with effective couplings in which the contribution of the classical Z0 force dominates. This modifies dramatically the model for the cell membrane as a Josephson junction and raises the scale of Josephson energies from the IR range just above thermal threshold to the visible and ultraviolet. The amazing finding is that the Josephson energies for biologically important ions correspond to the energies assigned to the peak frequencies in the biological activity spectrum of photoreceptors in the retina. This suggests that almost vacuum extremals and thus also classical Z0 fields play a central role in the understanding of the functioning of the cell membrane and of sensory qualia. This would also explain the large parity breaking effects in living matter.

A further conjecture is that EEG and its predicted fractally scaled variants, with the same energies in the visible and UV range but different scales of Josephson frequencies, correspond to Josephson photons with various values of Planck constant. The decay of dark ELF photons with energies of visible photons would give rise to bunches of ordinary ELF photons. Biophotons in turn could correspond to ordinary visible photons resulting from the phase transition of these dark photons to photons with the ordinary value of Planck constant. This leads to a very detailed view about the role of dark electromagnetic radiation in biomatter and also to a model for how sensory qualia are realized. The general conclusion might be that most effects due to the dark weak force are associated with almost vacuum extremals.
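The claim that a dark ELF photon can carry the energy of a visible photon follows from E = h_eff·f = r·h·f. A small sketch (my own illustration, not from the source; the 10 Hz EEG frequency and 2 eV target energy are example values) shows the order of magnitude of r required:

```python
H_EV_S = 4.135667696e-15  # ordinary Planck constant in eV*s (CODATA)

def required_r(f_hz, target_ev):
    """r = h_eff/h needed for a photon of frequency f_hz to carry target_ev."""
    return target_ev / (H_EV_S * f_hz)

# A 10 Hz EEG photon carrying the ~2 eV energy of a red visible photon
# would require r of order 5*10^13.
r = required_r(10.0, 2.0)
```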

Back to the table of contents

Super-Conductivity in Many-Sheeted Space-Time

In this chapter a model for high Tc super-conductivity as a quantum critical phenomenon is developed. The model relies on the notions of quantum criticality, dynamical quantized Planck constant requiring a generalization of the 8-D imbedding space to a book like structure, and many-sheeted space-time. In particular, the notion of magnetic flux tube as a carrier of supra current is a central concept.

With a sufficient amount of twisting and weaving of these basic ideas one ends up with a concrete model for high Tc superconductors as quantum critical superconductors consistent with the qualitative facts that I am personally aware of. The following minimal model looks like the most realistic option found hitherto.

  1. The general idea is that magnetic flux tubes are carriers of supra currents. In anti-ferromagnetic phases these flux tube structures form small closed loops so that the system behaves as an insulator. Some mechanism leading to a formation of long flux tubes must exist. Doping creates holes located around stripes, which become positively charged and attract electrons to the flux tubes.

  2. The higher critical temperature Tc1 corresponds to the formation of local configurations of parallel spins assigned to the holes of stripes, giving rise to local dipole fields with size scale of the order of the length of the stripe. Conducting electrons form Cooper pairs at the magnetic flux tube structures associated with these dipole fields. The elongated structure of the dipoles favors angular momentum L=2 for the pairs. The presence of the magnetic field favors Cooper pairs with spin S=1.

  3. Stripes can be seen as 1-D metals with delocalized electrons. The interaction responsible for the energy gap corresponds to the transversal oscillations of the magnetic flux tubes inducing oscillations of the nuclei of the stripe. These transverse phonons have spin and their exchange is a good candidate for the interaction giving rise to a mass gap. This could explain the BCS type aspects of high Tc super-conductivity.

  4. Above Tc supra currents are possible only in the length scale of the flux tubes of the dipoles, which is of the order of the stripe length. The reconnections between neighboring flux tube structures induced by the transverse fluctuations give rise to longer flux tube structures making finite conductivity possible. These reconnections occur with a certain probability p(T,L) depending on the temperature and the distance L between the stripes. By criticality p(T,L) depends on the dimensionless variable x=TL/hbar only: p=p(x). At the critical temperature Tc transverse fluctuations have large amplitude and make p(xc) so large that very long flux tubes are created and supra currents can run. The phenomenon is completely analogous to percolation.

  5. The critical temperature Tc = xc·hbar/L is predicted to be proportional to hbar and inversely proportional to L (which indeed seems to be the case). If flux tubes correspond to a large value of hbar, one can understand the high value of Tc. Both Cooper pairs and magnetic flux tube structures represent dark matter in the TGD sense.

  6. The model allows one to interpret the characteristic spectral lines in terms of the excitation energy of the transversal fluctuations and the gap energy of the Cooper pair. The observed 50 meV threshold for the onset of photon absorption suggests that below Tc also S=0 Cooper pairs are possible, with gap energy about 9 meV, whereas S=1 Cooper pairs would have gap energy about 27 meV. The flux tube model indeed predicts that S=0 Cooper pairs become stable below Tc since they cannot anymore transform to S=1 pairs. Their presence could explain the BCS type aspects of high Tc super-conductivity. The estimate for hbar/hbar0 = r from the critical temperature Tc1 is about r=3, contrary to the original expectations inspired by the model of living system as a super-conductor suggesting a much higher value. An unexpected prediction is that the coherence length is actually r times longer than the coherence length predicted by conventional theory, so that a type I super-conductor could be in question, with stripes serving as duals for the defects of a type I super-conductor in nearly critical magnetic field, replaced now by the ferromagnetic phase.

  7. TGD predicts preferred values for r=hbar/hbar0, and the applications to bio-systems favor powers of r=2^11. r=2^11 predicts that the electron Compton length is of the order of atomic size scale. Bio-superconductivity could involve electrons with r=2^22 having a size characterized by the thickness of the lipid layer of the cell membrane.

At the qualitative level the model explains various strange features of high Tc superconductors. One can understand the high value of Tc and the ambivalent character of high Tc super-conductors, the existence of the pseudogap and scaling laws for observables above Tc, the role of stripes and doping and the existence of a critical doping, etc.
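The scaling law of item 5 above, Tc = xc·hbar/L, can be made concrete with a toy function (a sketch under my own assumptions; hbar0 is set to 1 so only ratios are meaningful, and the parameter values are arbitrary):

```python
def critical_temperature(x_c, r, L):
    """Toy version of Tc = x_c * hbar_eff / L with hbar_eff = r * hbar_0.

    Works in units where hbar_0 = 1; only ratios of Tc values are meaningful.
    """
    return x_c * r / L

# Tripling Planck constant (the r ~ 3 estimate of item 6) triples Tc,
# while doubling the inter-stripe distance L halves it.
base = critical_temperature(1.0, 1, 1.0)
assert critical_temperature(1.0, 3, 1.0) == 3 * base
assert critical_temperature(1.0, 1, 2.0) == base / 2
```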

Back to the table of contents

Quantum Hall effect and Hierarchy of Planck Constants

In this chapter I try to formulate more precisely the recent TGD based view about fractional quantum Hall effect (FQHE). This view is much more realistic than the original rough scenario, which neglected the existing rather detailed understanding. The spectrum of ν and the mechanism producing it are the same as in the composite fermion approach. The new elements relate to the not so well-understood aspects of FQHE, namely charge fractionization, the emergence of braid statistics, and the non-abelianity of braid statistics.

  1. The starting point is the composite fermion model, so that the basic predictions are the same. Now magnetic vortices correspond to (Kähler) magnetic flux tubes carrying a unit of magnetic flux. The magnetic field inside the flux tube would be created by a delocalized electron at the boundary of the vortex. One can raise two questions.

    Could the boundary of the macroscopic system carrying the anyonic phase be identified as a macroscopic analog of a partonic 2-surface serving as a boundary between Minkowskian and Euclidian regions of the space-time sheet? If so, the space-time sheet assignable to the macroscopic system in question would have Euclidian signature and would be analogous to a blackhole or to a line of a generalized Feynman diagram.

    Could the boundary of the vortex be identifiable as a light-like boundary separating the Minkowskian magnetic flux tube from the Euclidian interior of the macroscopic system, and be also analogous to a wormhole throat? If so, both macroscopic objects and magnetic vortices would be rather exotic geometric objects not possible in the general relativity framework.

  2. Taking the composite fermion model as a starting point one obtains standard predictions for the filling fractions. One should also understand charge fractionalization and fractional braiding statistics. Here the vacuum degeneracy of Kähler action suggests the explanation. Vacuum degeneracy implies that the correspondence between the normal component of the canonical momentum current and normal derivatives of imbedding space coordinates is 1-to-n. This kind of branching results in multi-furcations induced by variations of the system parameters, and the scaling of the external magnetic field represents one such variation.
  3. At the orbits of wormhole throats, which can have even macroscopic M4 projections, one has 1→ na correspondence and at the space-like ends of the space-time surface at light-like boundaries of causal diamond one has 1→ nb correspondence. This implies that at partonic 2-surfaces defined as the intersections of these two kinds of 3-surfaces one has 1→ na× nb correspondence. This correspondence can be described by using a local singular n-fold covering of the imbedding space. Unlike in the original approach, the covering space is only a convenient auxiliary tool rather than fundamental notion.
  4. The fractionalization of charge can be understood as follows. A delocalization of electron charge to the n sheets of the multi-furcation takes place, and a single sheet is analogous to a sheet of the Riemann surface of the function z^(1/n) and carries fractional charge q=e/n, n=na×nb. Fractionalization applies also to other quantum numbers. One can also have many-electron states with several delocalized electrons: in this case one obtains a more general charge fractionalization: q=νe.
  5. Also the fractional braid statistics can be understood. For ordinary statistics rotations of M4 rotate entire partonic 2-surfaces. For braid statistics rotations of M4 (and particle exchange) induce a flow of braid ends along the partonic 2-surface. If the singular local covering is analogous to the Riemann surface of z^(1/n), the braid rotation by ΔΦ=2π, where Φ corresponds to the M4 angle, leads to a second branch of the multi-furcation, and one can give up the usual quantization condition for angular momentum. For the natural angle coordinate φ of the n-branched covering Δφ=2π corresponds to ΔΦ=n×2π. If one identifies the sheets of the multi-furcation and therefore uses Φ as the angle coordinate, single valued angular momentum eigenstates become in general n-valued, angular momentum in braid statistics becomes fractional, and one obtains fractional braid statistics for angular momentum.
  6. How can one understand the exceptional values ν=5/2, 7/2 of the filling fraction? The non-abelian braid group representations can be interpreted as higher-dimensional projective representations of the permutation group: for ordinary statistics only Abelian representations are possible. It seems that the minimum number of braids is n>2 from the condition of non-abelianity of braid group representations. The condition that ordinary statistics is fermionic gives n>3. The minimum value n=4 is consistent with the fractional charge e/4.

    The model introduces a Z4 valued topological quantum number characterizing flux tubes. This also makes possible non-Abelian braid statistics. The interpretation of this quantum number as a Z4 valued momentum characterizing the four delocalized states of the flux tube at the sheets of the 4-furcation suggests itself strongly. Topology would correspond to that of a 4-fold covering space of imbedding space serving as a convenient auxiliary tool. The more standard explanation is that Z4=Z2×Z2 such that the Z2:s correspond to the presence or absence of a neutral Majorana fermion in the two Cooper pair like states formed by flux tubes.

    What remains to be understood is the emergence of a non-abelian gauge group realizing non-Abelian fractional statistics in the gauge theory framework. TGD predicts the possibility of dynamical gauge groups, and maybe this kind of gauge group indeed emerges. Dynamical gauge groups emerge also for stacks of N branes, and the n sheets of the multifurcation are analogous to the N sheets in the stack for many-electron states.
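The standard composite fermion (Jain) series mentioned in item 1 can be enumerated directly; the short sketch below (mine, not from the source) also shows why ν=5/2 falls outside the series, motivating the separate discussion of item 6:

```python
from fractions import Fraction

def jain_fractions(p, n_max):
    """Composite fermion (Jain) filling fractions nu = n / (2*p*n +- 1)."""
    out = set()
    for n in range(1, n_max + 1):
        out.add(Fraction(n, 2 * p * n + 1))
        out.add(Fraction(n, 2 * p * n - 1))
    return out

nus = jain_fractions(p=1, n_max=10)
# The familiar fractions 1/3, 2/5, 3/7, 2/3 all appear...
assert {Fraction(1, 3), Fraction(2, 5), Fraction(3, 7), Fraction(2, 3)} <= nus
# ...but the exceptional nu = 5/2 does not, as noted in item 6.
assert Fraction(5, 2) not in nus
```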

Back to the table of contents

A Possible Explanation of Shnoll Effect

Shnoll and collaborators have discovered strange repeating patterns of random fluctuations of physical observables such as the number n of nuclear decays in a given time interval. Periodically occurring peaks for the distribution of the number N(n) of measurements producing n events in a series of measurements as a function of n are observed instead of a single peak. The positions of the peaks are not random, and the patterns depend on position and time, varying periodically in time scales possibly assignable to Earth-Sun and Earth-Moon gravitational interaction.

These observations suggest a modification of the expected probability distributions, but it is very difficult to imagine any physical mechanism in the standard physics framework. Rather, a universal deformation of predicted probability distributions would be in question, requiring something analogous to the transition from classical physics to quantum physics.

The hint about the nature of the modification comes from the TGD inspired quantum measurement theory proposing a description of the notion of finite measurement resolution in terms of inclusions of so called hyper-finite factors of type II1 (HFFs) and closely related quantum groups. Also p-adic physics (another key element of TGD) is expected to be involved. A modification of a given probability distribution P(n|λi) for a positive integer valued variable n characterized by rational-valued parameters λi is obtained by replacing n and the integers characterizing λi with so called quantum integers depending on the quantum phase qm=exp(i2π/m). The quantum integer nq must be defined as the product of the quantum counterparts pq of the primes p appearing in the prime decomposition of n. One has pq = sin(2πp/m)/sin(2π/m) for p ≠ P and pq = P for p = P. Here m must satisfy m ≥ 3, m ≠ p, and m ≠ 2p.
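The quantum integer construction can be written out explicitly. The sketch below follows my reading of the formulas above (the example values m=5 and P=3 are arbitrary choices satisfying the stated constraints):

```python
import math

def prime_factors(n):
    """Prime factorization of n > 0 by trial division: {prime: multiplicity}."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def quantum_prime(p, m, P):
    """p_q = sin(2*pi*p/m)/sin(2*pi/m), except the p-adic prime P maps to itself."""
    if p == P:
        return float(P)
    return math.sin(2 * math.pi * p / m) / math.sin(2 * math.pi / m)

def quantum_integer(n, m, P):
    """n_q as the product of quantum primes over the prime decomposition of n."""
    result = 1.0
    for p, k in prime_factors(n).items():
        result *= quantum_prime(p, m, P) ** k
    return result

# With m = 5, P = 3: the quantum 2 is sin(4*pi/5)/sin(2*pi/5) ~ 0.618,
# and quantum counterparts can indeed be negative, e.g. for p = 13.
```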

The quantum counterparts of positive integers can be negative. Therefore the quantum distribution is defined first as a p-adic valued distribution and then mapped by so called canonical identification I to a real distribution. I is the map taking p-adic -1 to P and powers P^n to P^-n while mapping other quantum primes to themselves, and one requires that the mean value of n is the same for the distribution and its quantum variant. The map I satisfies I(∑ P^n)=∑ I(P^n). The resulting distribution has peaks located periodically with periods coming as powers of P. Also periodicities with peaks corresponding to n=n+n-, with n+q > 0 and fixed n-q < 0, appear.
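The canonical identification I can also be sketched in code (again my own illustration; the truncation depth is an arbitrary choice). It maps ∑ a_k P^k to ∑ a_k P^-k, and the p-adic -1 = (P-1)(1 + P + P^2 + ...) indeed maps to P, since (P-1)·∑ P^-k = (P-1)·P/(P-1) = P:

```python
def canonical_identification(n, P):
    """I(n) for a non-negative integer n: base-P digits a_k, summed as a_k * P**(-k)."""
    value, k = 0.0, 0
    while n > 0:
        value += (n % P) * P ** (-k)
        n //= P
        k += 1
    return value

def image_of_minus_one(P, terms=60):
    """Term-by-term image of p-adic -1 = (P-1)*(1 + P + P^2 + ...) under I."""
    return (P - 1) * sum(P ** (-k) for k in range(terms))

# I(P^k) = P^(-k): for P = 5, I(25) = 1/25. And I(-1) converges to P = 5.
```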

The periodic dependence of the distributions would be most naturally assignable to the gravitational interaction of Earth with Sun and Moon and therefore to the periodic variation of the Earth-Sun and Earth-Moon distances. The TGD inspired proposal is that the p-adic prime P and the integer m characterizing the quantum distribution are determined by a process analogous to a state function reduction, and their most probable values depend on the deviation of the distance R through the formulas Δp/p ≈ kpΔR/R and Δm/m ≈ kmΔR/R. The p-adic primes assignable to elementary particles are very large, unlike the primes which could characterize the empirical distributions. The hierarchy of Planck constants allows the gravitational Planck constant assignable to the space-time sheets mediating gravitational interactions to have gigantic values, and this allows p-adicity with small values of the p-adic prime P.

Back to the table of contents

To the index page