PART I: HYPER-FINITE FACTORS AND HIERARCHY OF PLANCK CONSTANTS
The work with the TGD-inspired model for topological quantum computation led to the realization that von Neumann algebras, in particular the so-called hyper-finite factors of type II1, seem to provide the mathematics needed to develop a more explicit view about the construction of S-matrix. The original discussion has transformed over the years from free speculation, reflecting in many respects my ignorance about the mathematics involved, to a more realistic view about the role of these algebras in quantum TGD. The discussion of this chapter is restricted to the basic notions; only a short mention is made of the TGD applications discussed in a second chapter.
The goal of von Neumann was to generalize the algebra of quantum mechanical observables. The basic ideas behind the von Neumann algebra are dictated by physics. The algebra elements allow Hermitian conjugation * and observables correspond to Hermitian operators. Any measurable function f(A) of operator A belongs to the algebra and one can say that non-commutative measure theory is in question.
The predictions of quantum theory are expressible in terms of traces of observables. The density matrix defining the expectations of observables in an ensemble is the basic example. The highly non-trivial requirement of von Neumann was that identical a priori probabilities for the detection of the states of an infinite-state system must make sense. Since quantum mechanical expectation values are expressible in terms of operator traces, this requires that the unit operator has unit trace: tr(Id)=1.
In the finite-dimensional case it is easy to build observables out of minimal projections to 1-dimensional eigenspaces of observables. In the infinite-dimensional case the probability of projection to a 1-dimensional sub-space vanishes if each state is equally probable. The notion of observable must thus be modified by excluding 1-dimensional minimal projections and allowing only projections whose trace would be infinite if one used the straightforward generalization of the matrix algebra trace, namely the dimension of the projection.
The non-trivial implication of the fact that traces of projections are never larger than one is that the eigenspaces of the density matrix must be infinite-dimensional for non-vanishing projection probabilities. Quantum measurements can lead with a finite probability only to mixed states with a density matrix which is a projection operator to an infinite-dimensional subspace. The simple von Neumann algebras for which the unit operator has unit trace are known as factors of type II1.
The definitions adopted by von Neumann however allow more general algebras. Type In algebras correspond to finite-dimensional matrix algebras with finite traces, whereas type I∞, associated with a separable infinite-dimensional Hilbert space, does not allow a bounded trace. For algebras of type III non-trivial traces are always infinite and the notion of trace becomes useless, being replaced by the notion of state, which is a generalization of the notion of thermodynamical state. The fascinating feature of this notion of state is that it defines a unique modular automorphism of the factor, defined apart from a unitary inner automorphism, and the question is whether this notion or its generalization might be relevant for the construction of M-matrix in TGD. It however seems that in the TGD framework, based on Zero Energy Ontology identifiable as a "square root" of thermodynamics, a square root of the thermodynamical state is needed.
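For orientation, the factor types mentioned above are distinguished by the range of values the trace takes on projections P (the standard Murray-von Neumann classification, with the trace normalized so that tr(Id)=1 for the finite types):

```latex
\begin{aligned}
\mathrm{I}_n :\;& \operatorname{tr}(P) \in \{0, \tfrac{1}{n}, \tfrac{2}{n}, \dots, 1\}
   && \text{(finite matrix algebras)}\\
\mathrm{I}_\infty :\;& \operatorname{tr}(P) \in \{0, 1, 2, \dots, \infty\}
   && \text{(no bounded trace)}\\
\mathrm{II}_1 :\;& \operatorname{tr}(P) \in [0, 1], \quad \operatorname{tr}(\mathrm{Id}) = 1
   && \text{(continuous range of ``dimensions'')}\\
\mathrm{II}_\infty :\;& \operatorname{tr}(P) \in [0, \infty] && \\
\mathrm{III} :\;& \operatorname{tr}(P) \in \{0, \infty\}
   && \text{(trace useless; replaced by states)}
\end{aligned}
```

The II1 case is the one singled out by the requirement of identical a priori probabilities: the trace varies continuously, so "dimensions" of projections need not be integers.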
The inclusions of hyper-finite factors define an excellent candidate for the description of finite measurement resolution, with the included factor representing the degrees of freedom below measurement resolution. This would also give a connection to the notion of quantum group, whose physical interpretation has remained unclear. This idea is central to the proposed applications to quantum TGD discussed in a separate chapter.
The work with TGD inspired model for quantum computation led to the realization that von Neumann algebras, in particular hyper-finite factors, could provide the mathematics needed to develop a more explicit view about the construction of M-matrix generalizing the notion of S-matrix in zero energy ontology (ZEO). In this chapter I will discuss various aspects of hyper-finite factors and their possible physical interpretation in TGD framework.
1. Hyper-finite factors in quantum TGD
The following argument suggests that von Neumann algebras known as hyper-finite factors (HFFs) of type III1, appearing in relativistic quantum field theories, also provide the proper mathematical framework for quantum TGD.
2. Hyper-finite factors and M-matrix
HFFs of type III1 provide a general vision about M-matrix.
3. Connes tensor product as a realization of finite measurement resolution
The inclusions N⊂M of factors allow an attractive mathematical description of finite measurement resolution in terms of Connes tensor product, but do not fix the M-matrix, contrary to the original optimistic belief.
4. Analogs of quantum matrix groups from finite measurement resolution?
The notion of quantum group replaces ordinary matrices with matrices with non-commutative elements. In the TGD framework I have proposed that the notion should relate to the inclusions of von Neumann algebras, which make it possible to describe mathematically the notion of finite measurement resolution.
In this chapter I will consider the notion of quantum matrix inspired by the recent view about quantum TGD, which provides a concrete representation and physical interpretation of quantum groups in terms of finite measurement resolution. The basic idea is to replace complex matrix elements with operators expressible as products of non-negative Hermitian operators and unitary operators, analogous to the products of modulus and phase as a representation for complex numbers.
The condition that the determinant and sub-determinants exist is crucial for the well-definedness of the eigenvalue problem in the generalized sense. The weak definition of determinant, meaning its development with respect to a fixed row or column, does not pose additional conditions. The strong definition of determinant requires its invariance under permutations of rows and columns. The permutation of rows/columns turns out to have an interpretation as braiding for the Hermitian operators defined by the moduli of the operator valued matrix elements. The commutativity of all sub-determinants is essential for the replacement of eigenvalues with eigenvalue spectra of Hermitian operators; the sub-determinants define a mutually commuting set of operators.
The resulting quantum matrices define a more general structure than quantum group but provide a concrete representation and interpretation for a quantum group in terms of finite measurement resolution if q is a root of unity. For q=+/- 1 (Bose-Einstein or Fermi-Dirac statistics) one obtains quantum matrices for which the determinant is, apart from a possible sign change, invariant under the permutations of both rows and columns. One could also understand the fractal structure of inclusion sequences of hyper-finite factors resulting from recursively replacing operators appearing as matrix elements with quantum matrices.
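For comparison, the standard quantum group SL_q(2), which the quantum matrices above generalize, replaces the commuting matrix elements of a 2×2 matrix with generators a, b, c, d satisfying (in the standard conventions, quoted here only for orientation):

```latex
\begin{aligned}
& ab = q\,ba, \quad ac = q\,ca, \quad bd = q\,db, \quad cd = q\,dc,\\
& bc = cb, \quad ad - da = (q - q^{-1})\,bc,\\
& {\det}_q = ad - q\,bc \quad \text{(central; set equal to 1 for } SL_q(2)\text{)}.
\end{aligned}
\end{aligned}
```

Note that for q = ±1 one has ad = da and the elements commute up to sign, so det_q reduces to the ordinary determinant up to a sign factor, in line with the Bose-Einstein/Fermi-Dirac statement above.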
5. Quantum spinors and fuzzy quantum mechanics
The notion of quantum spinor leads to a quantum mechanical description of fuzzy probabilities. For quantum spinors state function reduction cannot be performed unless the quantum deformation parameter equals q=1. The reason is that the components of a quantum spinor do not commute: it is however possible to measure the commuting operators representing the moduli squared of the components, giving the probabilities associated with "true" and "false". The universal eigenvalue spectrum for probabilities does not in general contain (1,0), so that quantum qubits are inherently fuzzy. State function reduction would occur only after a transition to the q=1 phase, and decoherence is not a problem as long as it does not induce this transition.
The quantization of Planck constant has been the basic theme of TGD since 2005. The basic idea was stimulated by the finding of Nottale that planetary orbits could be seen as Bohr orbits with an enormous value of Planck constant given by hbargr = GM1M2/v0, where the velocity parameter v0 has the approximate value v0 ≈ 2^(-11) for the inner planets. This inspired the ideas that the quantization is due to a condensation of ordinary matter around dark matter concentrated near Bohr orbits and that dark matter is in a macroscopic quantum phase in astrophysical scales. The second crucial empirical input were the anomalies associated with living matter. The recent version of the chapter represents the evolution of the ideas about the quantization of Planck constant from the perspective given by seven years' work with the idea. A very concise summary of the situation is as follows.
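Nottale's formula is easy to check numerically. The sketch below (my own illustration, using standard solar-system values) computes the Bohr radii r_n = n²·GM/v0² that follow from hbar_gr = GMm/v0 with v0 = 2^(-11)·c, and compares them to the inner-planet orbits:

```python
# Bohr quantization with hbar_gr = G*M*m/v0: angular momentum L = n*hbar_gr
# gives r_n = n^2 * G*M / v0^2.  The test mass m drops out, so the
# quantization is universal, as required for a dark-matter interpretation.
G = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg
c = 2.998e8            # m/s
AU = 1.496e11          # m
v0 = c / 2**11         # Nottale's velocity parameter for the inner planets

def bohr_radius_au(n: int) -> float:
    """Radius of the n:th Bohr orbit in astronomical units."""
    return n**2 * G * M_sun / v0**2 / AU

for n, planet, r_obs in [(3, "Mercury", 0.39), (4, "Venus", 0.72), (5, "Earth", 1.00)]:
    print(f"n={n} {planet}: predicted {bohr_radius_au(n):.2f} AU, observed {r_obs} AU")
```

The predicted radii (roughly 0.37, 0.66, and 1.04 AU for n = 3, 4, 5) agree with the inner-planet orbits at the 10 per cent level, which is the level of agreement Nottale reported.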
Basic physical ideas
The basic phenomenological rules are simple and there is no need to modify them.
Space-time correlates for the hierarchy of Planck constants
The hierarchy of Planck constants was originally introduced to TGD as an additional postulate and formulated as the existence of a hierarchy of imbedding spaces defined as Cartesian products of singular coverings of M4 and CP2 with the numbers of sheets given by integers na and nb, and hbar = n·hbar0 with n = na·nb.
With the advent of zero energy ontology, it became clear that the notion of singular covering space of the imbedding space could be only a convenient auxiliary notion. Singular means that the sheets fuse together at the boundary of the multi-sheeted region. The effective covering space emerges naturally from the vacuum degeneracy of Kähler action, meaning that all deformations of the canonically imbedded M4 in M4×CP2 have vanishing action up to fourth order in the small perturbation. This is clear from the fact that the induced Kähler form is quadratic in the gradients of CP2 coordinates and Kähler action is essentially Maxwell action for the induced Kähler form. The vacuum degeneracy implies that the correspondence between the canonical momentum currents ∂LK/∂(∂αhk) defining the modified gamma matrices and the gradients ∂αhk is not one-to-one. The same canonical momentum current corresponds to several values of the gradients of imbedding space coordinates. At the partonic 2-surfaces at the light-like boundaries of CD carrying the elementary particle quantum numbers this implies that the two normal derivatives of hk are many-valued functions of the canonical momentum currents in normal directions.
A multi-furcation is in question, and multi-furcations are indeed generic in highly non-linear systems; Kähler action is an extreme example of a non-linear system. What does multi-furcation mean in quantum theory? The branches of a multi-furcation are obviously analogous to single particle states. In quantum theory second quantization means that one constructs not only single particle states but also the many-particle states formed from them. At the space-time level single particle states would correspond to the N branches bi of the multi-furcation carrying fermion number. Two-particle states would correspond to a 2-fold covering consisting of 2 branches bi and bj of the multi-furcation. An N-particle state would correspond to an N-sheeted covering with all branches present and carrying elementary particle quantum numbers. The branches coincide at the partonic 2-surface, but since their normal space data are different they correspond to different tensor product factors of the state space. Also now the factorization N = nanb occurs, but now na and nb would relate to branching in the direction of the space-like 3-surface and the light-like 3-surface rather than M4 and CP2 as in the original hypothesis.
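The state counting implied by this picture is simple combinatorics: if an n-particle state corresponds to a choice of n branches out of the N branches of the multi-furcation, the number of such states is binomial. A purely combinatorial toy sketch (my own illustration, not part of the text):

```python
from math import comb

def nparticle_states(N: int, n: int) -> int:
    """Number of n-particle states from an N-furcation: choices of n
    branches b_i out of N, each chosen branch carrying fermion number."""
    return comb(N, n)

N = 4
counts = [nparticle_states(N, n) for n in range(N + 1)]
print(counts)       # one count per particle number n = 0..N
print(sum(counts))  # total dimension 2^N of the many-particle state space
```

The total dimension 2^N is the analog of the fermionic Fock space built on N single-particle modes, matching the second-quantization analogy drawn above.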
Multi-furcations relate closely to the quantum criticality of Kähler action. Feigenbaum bifurcations represent a toy example of a system which via successive bifurcations approaches chaos. Now more general multi-furcations, in which each branch of a given multi-furcation can multi-furcate further, are possible unless one poses additional conditions. This makes it possible to identify an additional aspect of the geometric arrow of time. Either the positive or the negative energy part of the zero energy state is "prepared", meaning that a single n-sub-furcation of the N-furcation is selected. The most general state of this kind involves a superposition of various n-sub-furcations.
This chapter contains the purely mathematical speculations about the hierarchy of Planck constants (actually only an effective hierarchy if the recent interpretation is correct) kept separate from the material describing the physical ideas, key mathematical concepts, and the basic applications. These mathematical speculations emerged during the first stormy years in the evolution of the ideas about Planck constant and must be taken with a big grain of salt. I feel myself rather conservative as compared to the fellow who produced this stuff 7 years ago. This all is of course very relative. Many readers might experience this recent me as a reckless speculator.
The first speculative question is about a possible relationship between Jones inclusions of hyper-finite factors of type II1 (hyper-finite factors are von Neumann algebras emerging naturally in the TGD framework) and the hierarchy of Planck constants. The basic idea is that the discrete groups assignable to the inclusions could correspond to the discrete groups acting in the effective covering spaces of the imbedding space assignable to the hierarchy of Planck constants.
There are also speculations relating to the hierarchy of Planck constants, McKay correspondence, and Jones inclusions. Even Farey sequences, Riemann hypothesis and N-tangles are discussed. Depending on the reader, these speculations might be experienced as irritating or entertaining. It would be interesting to go through this stuff in the light of the recent understanding of the effective hierarchy of Planck constants to see what portion of it survives.
Negentropy Maximization Principle and TGD Inspired Theory of Consciousness
In the TGD Universe the moments of consciousness are associated with quantum jumps between quantum histories. The proposal is that the dynamics of consciousness is governed by Negentropy Maximization Principle (NMP), which states that the information content of conscious experience is maximal. The formulation of NMP is the basic topic of this chapter.
NMP codes for the dynamics of standard state function reduction and states that the state function reduction process following the U-process gives rise to a maximal reduction of entanglement entropy at each step. In the generic case this implies at each step a decomposition of the system into unique unentangled subsystems, and the process repeats itself for these subsystems. The process stops when the resulting subsystem cannot be decomposed into a pair of free systems, since energy conservation makes the reduction of entanglement kinematically impossible in the case of bound states. The natural assumption is that the self loses consciousness when it entangles via bound state entanglement.
There is an important exception to this vision based on ordinary Shannon entropy. There exists an infinite hierarchy of number theoretical entropies making sense for rational or even algebraic entanglement probabilities. In this case the entanglement entropy can be negative, so that NMP favors the generation of negentropic entanglement (NE), which is not bound state entanglement in the standard sense, since the condition that state function reduction leads to an eigenstate of the density matrix requires the final state density matrix to be a projection operator.
NE might serve as a correlate for emotions like love and the experience of understanding. The reduction of ordinary entanglement entropy to a random final state implies the second law at the level of the ensemble. For the generation of NE the outcome of the reduction is not random: the prediction is that the second law is not a universal truth holding in all scales. Since number theoretic entropies are natural in the intersection of real and p-adic worlds, this suggests that life resides in this intersection. The existence of effectively bound states with no binding energy might have important implications for understanding the stability of basic bio-polymers and the key aspects of metabolism. A natural assumption is that the self experiences expansion of consciousness as it entangles in this manner. Quite generally, an infinite self hierarchy with the entire Universe at the top is predicted.
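The number theoretic entropies mentioned above are directly computable: one replaces log(p_i) in the Shannon formula with the logarithm of the p-adic norm |p_i|_p, which makes sense for rational probabilities. A minimal sketch (my own illustration of the standard p-adic norm; the NE interpretation is the text's):

```python
from fractions import Fraction
from math import log

def padic_norm(q: Fraction, p: int) -> Fraction:
    """|q|_p = p^(-k) for q = p^k * a/b with a and b coprime to p."""
    k, num, den = 0, q.numerator, q.denominator
    while num % p == 0:
        num //= p
        k += 1
    while den % p == 0:
        den //= p
        k -= 1
    return Fraction(p) ** (-k)

def number_theoretic_entropy(probs, p):
    """S_p = -sum p_i * log|p_i|_p.  Unlike Shannon entropy this can be
    negative: negative entropy means the entanglement carries information."""
    return -sum(float(q) * log(padic_norm(q, p)) for q in probs)

# Maximally entangled state with 4 = 2^2 equal probabilities, p = 2:
probs = [Fraction(1, 4)] * 4
print(number_theoretic_entropy(probs, 2))  # negative, equal to -log(4)
```

For uniform probabilities 1/p^m the 2-adic (here p=2) entropy equals -m·log(p), so NMP, which maximizes negentropy, favors exactly this kind of rational entanglement.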
There are two options to consider. The strong form of NMP would demand maximal negentropy gain: this would not allow morally responsible free will if ethics is defined in terms of evolution as increase of NE resources. The weak form of NMP would allow the self to choose also a lower-dimensional sub-space of the final state sub-space defined by the projector that the strong form of NMP would dictate. The weak form turns out to have several highly desirable consequences: it favours dimensions of the final state space coming as powers of prime, and in particular dimensions which are primes near powers of prime: as a special case, the p-adic length scale hypothesis follows. The weak form of NMP also allows quantum computations which halt, unlike the strong form of NMP.
Besides number theoretic negentropies there are also other new elements as compared to the earlier formulation of NMP.
Quantum criticality and dark matter
Quantum criticality is one of the cornerstone assumptions of TGD. The value of Kähler coupling strength fixes quantum TGD and is analogous to critical temperature. The TGD Universe would be quantum critical. What this means is however far from obvious, and I have pondered the notion repeatedly both from the point of view of mathematical description and of phenomenology. Superfluids exhibit rather mysterious looking effects, such as the fountain effect and what looks like quantum coherence of superfluid containers which should be classically isolated. These findings serve as a motivation for the proposal that the genuine superfluid portion of a superfluid corresponds to a large heff phase near criticality, and that also in other phase-transition-like phenomena a phase transition to a dark phase occurs in the vicinity of criticality.
About Nottale's formula for hgr and the possibility that Planck length lP and CP2 length R are identical, giving G = R^2/ℏeff
Nottale's formula for the gravitational Planck constant hbargr = GMm/v0 involves a parameter v0 with dimensions of velocity. I have worked with the quantum interpretation of the formula, but the physical origin of v0 - or equivalently of the dimensionless parameter β0 = v0/c (to be used in the sequel) appearing in the formula - has hitherto remained open. In this chapter a possible interpretation based on the many-sheeted space-time concept, many-sheeted cosmology, and zero energy ontology (ZEO) is discussed. In ZEO the non-changing parts of zero energy states are assigned to the passive boundary of CD and β0 should be assigned to it.
There are two measures for the size of the system. The M4 size LM4 is identifiable as the maximum of the radial M4 distance from the tip of CD associated with the center of mass of the system along the light-like geodesic at the boundary of CD. The system also has a size Lind defined in terms of the induced metric of the space-time surface, which is space-like at the boundary of CD. One has Lind < LH. The identification β0 = LM4/LH does not allow the identification LH = LM4. LH would however naturally correspond to the size of the magnetic body of the system, in turn identifiable as the size of CD.
One can deduce an estimate for β0 by approximating the space-time surface as a Robertson-Walker cosmology, expected to be a good approximation near the passive light-like boundary of CD. The resulting formula is tested for the planetary system and Earth. The dark matter assignable to Earth can be identified as the innermost part of the inner core, with a volume which is .01 per cent of the volume of Earth. Also the consistency of the Bohr quantization for dark and ordinary matter is discussed and leads to a number theoretical condition on the ratio of the ordinary and dark masses.
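The quoted volume fraction translates into a radius that can be checked against the known structure of the Earth. A back-of-the-envelope sanity check (standard Earth radii; the arithmetic is my own illustration):

```python
# A volume fraction of .01 per cent (1e-4) of the Earth corresponds to a
# radius r = R_E * (1e-4)^(1/3), to be compared with the inner core radius.
R_earth = 6371.0        # km, mean radius of Earth
R_inner_core = 1220.0   # km, approximate radius of the inner core
fraction = 1e-4         # .01 per cent of Earth's volume

r_dark = R_earth * fraction ** (1 / 3)
print(f"radius of the dark-matter region: {r_dark:.0f} km")
print(f"fraction of inner core radius:    {r_dark / R_inner_core:.2f}")
```

The resulting radius, roughly 300 km, is about a quarter of the inner core radius, consistent with the identification as the "innermost part of the inner core".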
β0/4π is analogous to a gravitational fine structure constant for heff = hgr. Could one see it as a fundamental coupling parameter appearing also in other interactions at quantum criticality, in which ordinary perturbation series diverges? Remarkably, the value of G does not appear at all in the perturbative expansion in this region! Could G have several values? This suggests the generalization G = lP^2/hbar → G = R^2/hbareff, so that G would indeed have a spectrum and Planck length lP would be equal to the CP2 radius R, so that only one fundamental length would be associated with twistorialization. The ordinary Newton's constant would be given by G = R^2/hbareff with heff/h0 having a value in the range 10^7-10^8.
The second topic of the chapter relates to the fact that measurements of G give differing results, with differences between measurements larger than the measurement accuracy. This suggests that there might be some new physics involved. In the TGD framework the hierarchy of Planck constants heff = nh0, h = 6h0, together with the condition that the theory contains the CP2 size scale R as the only fundamental length scale, suggests the possibility that Newton's constant is given by G = R^2/hbareff, where R replaces Planck length (lP = (ℏG)^(1/2) → lP = R) and hbareff/h is in the range 10^6-10^7. The spectrum of Newton's constant is consistent with Newton's equations if the scaling of hbareff inducing a scaling of G is accompanied by an opposite scaling of M4 coordinates in M4×CP2: the dark matter hierarchy would correspond to a discrete hierarchy of scales given by a breaking of scale invariance. In the special case heff = hgr = GMm/v0 the quantum critical dynamics has the gravitational fine structure constant (v0/c)/4π as coupling constant, and it has no dependence on the value of G or the masses M and m.
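The relation G = R^2/hbareff can be checked dimensionally and numerically once factors of c (suppressed by the natural units of the text) are restored: it reads G = R^2·c^3/hbareff, i.e. hbareff/hbar = (R/lP)^2. A sketch of the implied CP2 scale (standard constants; my own arithmetic):

```python
from math import sqrt

# G = R^2/hbar_eff in natural units; restoring c this is G = R^2*c^3/hbar_eff,
# so hbar_eff/hbar = (R/l_P)^2 with l_P = sqrt(hbar*G/c^3) the Planck length.
hbar = 1.0546e-34   # J s
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
l_P = sqrt(hbar * G / c**3)

for n in (1e6, 1e7):    # candidate values of heff/h0 quoted in the text
    R = sqrt(n) * l_P   # CP2 length scale reproducing the ordinary G
    print(f"heff/h0 = {n:.0e}:  R = {R:.2e} m  ({R / l_P:.0f} l_P)")
```

For heff/h0 ~ 10^7 one gets R ~ 5×10^-32 m, i.e. a CP2 radius a few thousand Planck lengths, which is the order of magnitude TGD's p-adic mass calculations assign to R.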
In this chapter I consider a possible interpretation for the finding of a Chinese research group measuring two different values of G differing by 47 ppm in terms of varying heff. Also a model for the fountain effect of superfluidity as a de-localization of the wave function and an increase of the maximal height of the vertical orbit, due to the change of the gravitational acceleration g at the surface of Earth induced by a change of heff due to superfluidity, is discussed. Also the Podkletnov effect is considered. TGD inspired theory of consciousness makes it possible to speculate about levitation experiences, possibly induced by the modification of Geff at the flux tubes for some part of the magnetic body accompanying the biological body in TGD based quantum biology.
TGD View about Quasars
The work of Rudolph Schild and his colleagues Darryl Leiter and Stanley Robertson (among others) suggests that quasars are not supermassive blackholes but something else - MECOs, magnetic eternally collapsing objects having no horizon and possessing a magnetic moment. Schild et al argue that the same applies to galactic blackhole candidates and active galactic nuclei, perhaps even to ordinary blackholes, as Abhas Mitra, the developer of the notion of MECO, proposes.
In the sequel a TGD inspired view about quasars is proposed, relying on the general model for how galaxies are generated as the energy of thickened cosmic strings decays to ordinary matter. Quasars would not be blackhole like objects but would serve as an analog of the decay of the inflaton field producing the galactic matter. The energy of the string like object would replace galactic dark matter and automatically predict a flat velocity spectrum.
TGD is assumed to have the standard model and GRT as its QFT limit in long length scales. Could MECOs provide this limit? It seems that the answer is negative: MECOs represent still-collapsing objects. The energy of the inflaton field is replaced with the sum of the magnetic energy of the cosmic string and a negative volume energy, which both decrease as the thickness of the flux tube increases. The liberated energy transforms to ordinary particles and their dark variants in the TGD sense. A time reversal of a blackhole would be a more appropriate interpretation. One can of course ask whether the blackhole candidates in galactic nuclei are time reversals of quasars in the TGD sense.
The writing of the article also led to a considerable understanding of two key aspects of TGD. The understanding of the twistor lift and the p-adic evolution of cosmological constant improved considerably. Also the understanding of the gravitational Planck constant and the notion of space-time as a covering space became much more detailed, in turn allowing a much more refined view about the anatomy of the magnetic body.
Holography and Quantum Error Correcting Codes: TGD View
Preskill et al. suggest a highly interesting representation of holography in terms of quantum error correcting codes. The idea is that a time = constant section of AdS, which is a hyperbolic space allowing tessellations, can define tensor networks. So-called perfect tensors are the building bricks of the tensor networks providing a representation for holography and at the same time defining error correcting codes by mapping localized interior states (logical qubits) to highly entangled non-local boundary states (physical qubits).
There are three observations that set bells ringing and actually motivated this article.
These networks would also give rise to the counterpart of condensed matter physics of dark matter at the level of the magnetic body: the replacement of lattices based on subgroups of the translation group with an infinite number of tessellations means that this analog of condensed matter physics describes quantum complexity.
PART II: P-ADIC LENGTH SCALE HIERARCHY AND DARK MATTER HIERARCHY
Recent Status of Lepto-Hadron Hypothesis
TGD strongly suggests the existence of lepto-hadron physics. Lepto-hadrons are bound states of color excited leptons, and the anomalous production of e+e- pairs in heavy ion collisions finds a nice explanation as resulting from the decays of lepto-hadrons with basic condensate level k=127 and having a typical mass scale of one MeV. The recent indications of the existence of a new fermion with the quantum numbers of muon neutrino and the anomaly observed in the decay of ortho-positronium give further support for the lepto-hadron hypothesis. There is also evidence for anomalous production of low energy photons and e+e- pairs in hadronic collisions. The previous work (which contained some errors) is summarized and developed further.
The identification of leptohadrons as a particular instance in the predicted hierarchy of dark matters, interacting directly only via graviton exchange, makes it possible to circumvent the lethal counter-arguments against the leptohadron hypothesis (Z0 decay width and production of colored lepton jets in e+e- annihilation) even without an assumption about the loss of asymptotic freedom.
PCAC hypothesis and its σ model realization lead to a model containing only the coupling of the lepto-pion to the axial vector current as a free parameter. The prediction for the e+e- production cross section is of the correct order of magnitude only provided one assumes that lepto-pions decay first to a lepto-nucleon pair eex+eex-, and that the lepto-nucleons, having the quantum numbers of the electron and a mass only slightly larger than the electron mass, decay to lepton and photon. The peculiar production characteristics are correctly predicted. There is some evidence that the resonances decay to a final state containing n>2 particles, and an experimental demonstration that lepto-nucleon pairs are indeed in question would be a breakthrough for TGD.
During the 18 years after the first published version of the model, evidence for colored μ has also emerged. Towards the end of 2008 the CDF anomaly gave strong support for the colored excitation of τ. The lifetime of the light long-lived state identified as a charged τ-pion comes out correctly, and the identification of the reported 3 new particles as p-adically scaled up variants of the neutral τ-pion predicts their masses correctly. The observed muon jets can be understood in terms of the special reaction kinematics for the decays of the neutral τ-pion to 3 τ-pions with a mass scale smaller by a factor 1/2 and therefore almost at rest. A spectrum of new particles is predicted. The discussion of the CDF anomaly led to a modification and generalization of the original model for lepto-pion production, and the predicted production cross section is consistent with the experimental estimate.
TGD and Nuclear Physics
This chapter is devoted to the possible implications of TGD for nuclear physics. In the original version of the chapter the focus was on the attempt to resolve the problems caused by the incorrect interpretation of the predicted long ranged weak gauge fields. What seems to be a breakthrough in this respect came only quite recently (2005), more than a decade after the first version of this chapter, and is based on the TGD based view about dark matter inspired by the developments in the mathematical understanding of quantum TGD. In this approach condensed matter nuclei can either be ordinary, that is behave essentially like standard model nuclei, or be in a dark matter phase, in which case they generate long ranged dark weak gauge fields responsible for the large parity breaking effects in living matter. This approach trivially resolves the objections against long range classical weak fields.
The basic criterion for the transition to the dark matter phase, having by definition a large value of hbar, is the condition αQ1Q2 ≈ 1 for the appropriate gauge interactions, expressing the fact that the perturbation series does not converge. The increase of hbar makes the perturbation series converge, since the value of α is reduced, but leaves the lowest order classical predictions invariant.
This criterion can be applied to the color force and inspires the hypothesis that valence quarks inside nucleons correspond to a large hbar phase whereas sea quark space-time sheets correspond to the ordinary value of hbar. This hypothesis is combined with the earlier model of the strong nuclear force based on the assumption that long color bonds, with p-adically scaled down quarks with masses of order MeV at their ends, are responsible for the nuclear strong force.
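The criterion αQ1Q2 ≈ 1 is easy to evaluate in the electromagnetic case. A hedged sketch (my own numbers; the scaling rule relies on the statement above that α, being proportional to 1/hbar, is reduced when hbar increases):

```python
from math import ceil

alpha_em = 1 / 137.036   # fine structure constant, alpha = e^2/(4*pi*hbar*c)

def scaling_needed(Q1: int, Q2: int) -> int:
    """Smallest integer n for scaling hbar -> n*hbar such that the effective
    coupling alpha*Q1*Q2/n drops to <= 1; alpha is proportional to 1/hbar."""
    return max(1, ceil(alpha_em * Q1 * Q2))

print(scaling_needed(1, 1))    # hydrogen-like system: ordinary hbar suffices
print(scaling_needed(92, 92))  # two uranium nuclei: a large hbar phase needed
```

For elementary charges the criterion is never met (α·1·1 ≪ 1), while for two heavy nuclei with Z ≈ 92 one has αZ² ≈ 62, so a transition to a phase with hbar scaled up by a factor of order 10² would restore convergence.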
The basic assumptions are the following.
The wave functions of the nucleons fix the boundary values of the wave functionals of the color magnetic flux tubes idealizable as strings. In the terminology of M-theory nucleons correspond to small branes and color magnetic flux tubes to strings connecting them.
This picture makes it possible to understand the general features of strong interactions.
The view about the nucleus as a collection of linked nuclear strings suggests a stringy description of nuclear reactions. Microscopically the nuclear reactions would correspond to a re-distribution of exotic quarks between the nucleons in the reacting nuclei.
The TGD based explanation of the neutron halo has already been mentioned. The recently observed tetra-neutron states are difficult to understand in the standard nuclear physics framework, since Fermi statistics does not allow this kind of state. The identification of the tetra-neutron as an alpha particle containing two negatively charged color bonds makes it possible to circumvent the problem. A large variety of exotic nuclei containing charged color bonds is predicted.
The proposed model explains the anomaly associated with tritium beta decay. What has been observed is that the intensity of the electron spectrum has a narrow bump near the endpoint energy. Also the maximum energy E0 of the electrons is shifted downwards. I have considered two explanations for the anomaly. The original models were TGD variants of models involving a belt of dark neutrinos or antineutrinos along the orbit of Earth. Only recently (towards the end of the year 2008) did I realize that the nuclear string model provides a much more elegant explanation of the anomaly and also has the potential to explain much more general anomalies.
Cold fusion has not been taken seriously by the physics community, but the situation has begun to change gradually. There is increasing evidence for the occurrence of nuclear transmutations of heavier elements besides the production of 4He and 3H, whereas the production rate of 3He and neutrons is very low. These characteristics are not consistent with the standard nuclear physics predictions. Also the Coulomb wall, the absence of gamma rays, and the lack of a mechanism transferring nuclear energy to the electrolyte have been used as arguments against cold fusion. The TGD based model relying on the notion of charged color bonds explains the anomalous characteristics of cold fusion.
Nuclear String Hypothesis
The nuclear string model in the form discussed in this chapter makes it possible to understand the nuclear binding energies of both A>4 nuclei and A≤4 nuclei in terms of three fractal variants of QCD. The model also explains giant resonances and so called pygmy resonances in terms of de-coherence of Bose-Einstein condensates of exotic pion like color bosons to sub-condensates.
Nuclear string hypothesis is one of the most dramatic almost-predictions of TGD. The hypothesis in its original form assumes that nucleons inside the nucleus organize to closed nuclear strings, with neighboring nucleons of the string connected by exotic meson bonds consisting of a color magnetic flux tube with a quark and an antiquark at its ends. The lengths of the flux tubes correspond to the p-adic length scale of electron, and therefore the mass scale of the exotic mesons is around 1 MeV, in accordance with the general scale of nuclear binding energies. The long lengths of the em flux tubes increase the distance between nucleons and reduce Coulomb repulsion.
A fractally scaled up variant of ordinary QCD with respect to p-adic length scale would be in question and the usual wisdom about ordinary pions and other mesons as the origin of nuclear force would be simply wrong in TGD framework as the large mass scale of ordinary pion indeed suggests. The presence of exotic light mesons in nuclei has been proposed also by Chris Illert based on evidence for charge fractionization effects in nuclear decays.
2. A>4 nuclei as nuclear strings consisting of A≤4 nuclei
During the last few weeks a more refined version of the nuclear string hypothesis has evolved.
3. Bose-Einstein condensation of color bonds as a mechanism of nuclear binding
The attempt to understand the variation of the nuclear binding energy and its maximum for Fe leads to a quantitative model of nuclei lighter than Fe as color bound Bose-Einstein condensates of 4He nuclei or rather, of pion like colored states associated with color flux tubes connecting 4He nuclei.
Giant (dipole) resonances and so called pygmy resonances, interpreted in terms of de-coherence of the Bose-Einstein condensates associated with A≤4 nuclei and with the nuclear string formed from A≤4 nuclei, provide a unique test for the model. The key observation is that the splitting of the Bose-Einstein condensate to pieces costs a precisely defined energy due to the n^2 dependence of the total binding energy.
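As a back-of-the-envelope illustration of the last observation (my arithmetic, not the source's; e0 is a hypothetical unit energy): if the total binding energy of an n-boson condensate is E(n) = n^2 e0, splitting it into fragments of sizes k and n-k costs a precisely quantized amount E(n) - E(k) - E(n-k) = 2k(n-k) e0.

```python
def split_cost(n, k, e0=1.0):
    """Energy cost of splitting an n-boson condensate into pieces of size
    k and n-k, assuming the total binding energy scales as E(n) = n**2 * e0
    (the n^2 law stated in the text; e0 is a hypothetical unit energy)."""
    energy = lambda m: m * m * e0
    return energy(n) - energy(k) - energy(n - k)

# Algebraically n**2 - k**2 - (n-k)**2 = 2*k*(n-k): the cost grows with
# the product of the two fragment sizes and vanishes only for k = 0 or k = n.
```

Each admissible splitting thus picks out a discrete energy, which is what makes the resonance spectrum a sharp test of the model.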
A speculative picture proposing a connection between homeopathy, water memory, and the phantom DNA effect is discussed, and on the basis of this connection a vision is developed about how the tqc hardware represented by the genome is actively developed by subjecting it to evolutionary pressures represented by a virtual world representation of the physical environment. The speculation inspired by this vision is that the genetic code as well as DNA-, RNA- and amino-acid sequences should have a representation in terms of nuclear strings. The model for dark baryons indeed leads to an identification of these analogs, and the basic numbers of the genetic code, including the numbers of amino acids coded by a given number of codons, are predicted correctly. Hence it seems that the genetic code is universal rather than being an accidental outcome of biological evolution.
Cold Fusion Again
During the years I have developed two models of cold fusion and in this chapter these models are combined together. The basic idea of the TGD based model of cold fusion is that cold fusion occurs in two steps. First dark nuclei (with large heff=n×h) with much lower binding energy than ordinary nuclei are formed at magnetic flux tubes possibly carrying monopole flux. These nuclei can leak out of the system along the magnetic flux tubes. Under some circumstances these dark nuclei can transform to ordinary nuclei and give rise to detectable fusion products.
An essential additional condition is that the dark protons can decay to neutrons rapidly enough by exchanges of dark weak bosons, which are effectively massless below atomic length scale. This makes it possible to overcome the Coulomb wall and explains why the final state nuclei are stable and why the decay to ordinary nuclei does not yield only protons. Thus it seems that this model combined with the TGD variant of the Widom-Larsen model could explain the existing data nicely.
I will describe the steps leading to the TGD inspired model of cold fusion, which combines the earlier TGD variant of the Widom-Larsen model with a model inspired by Pollack's fourth phase of water, using as input data the findings from laser pulse induced cold fusion discovered by Leif Holmlid and collaborators. I also consider briefly alternative options (models assuming surface plasmon polariton and heavy electron). After that I apply the TGD inspired model to some cases (Pons-Fleischmann effect, bubble fusion, and LeClair effect). The model explains the strange findings about cold fusion - in particular the fact that only stable nuclei are produced - and suggests that also ordinary nuclear reactions might have a more fundamental description in terms of a similar model.
Dark Nuclear Physics and Condensed Matter
In this chapter the possible effects of dark matter in nuclear physics and condensed matter physics are considered. The spirit of the discussion is necessarily rather speculative since the vision about the hierarchy of Planck constants is only 5 years old. The most general form of the hierarchy would involve both singular coverings and factor spaces of CD (causal diamond of M4, defined as the intersection of future and past directed light-cones) and CP2. There are grave objections against the allowance of factor spaces: in this case Planck constant could be smaller than its standard value, and there are very few experimental indications for this. Quite recently came the realization that the hierarchy of Planck constants might emerge from basic quantum TGD as a consequence of the extreme non-linearity of field equations, implying that the correspondence between the derivatives of imbedding space coordinates and canonical momenta is many-to-one. This makes natural the introduction of covering spaces of CD and CP2. Planck constant would be effectively replaced with a multiple of the ordinary Planck constant defined by the number of the sheets of the covering. The space-like 3-surfaces at the ends of the causal diamond and the light-like 3-surfaces defined by wormhole throats carrying elementary particle quantum numbers would be quantum critical in the sense of being unstable against decay to many-sheeted structures. Charge fractionization could be understood in this scenario. Biological evolution would have the increase of the Planck constant as one aspect. The crucial scaling of the size of CD by Planck constant can be justified by a simple argument. Note that primary p-adic length scales would scale as hbar^(1/2) rather than hbar as assumed in the original model.
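The difference made by the last remark can be sketched numerically (my illustration, not from the source): with heff = n×h, a primary p-adic length scale would scale by n^(1/2) rather than by n.

```python
import math

def scaled_length(L, n, power=0.5):
    """Scale a primary p-adic length scale L when Planck constant becomes
    heff = n*h.  The text states the scaling goes as hbar**(1/2)
    (power=0.5), not as hbar (power=1) as assumed in the original model."""
    return L * n ** power

# For n = 4 the length scale doubles under the hbar**(1/2) law,
# whereas the linear law would quadruple it.
```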
1. What does darkness mean?
Dark matter is identified as matter with a non-standard value of Planck constant. The weak form of darkness is that only some field bodies of the particle, consisting of flux quanta mediating bound state interactions between particles, become dark. One can assign to each interaction a field body (em, Z0, W, gluonic, gravitational) and a p-adic prime, and the value of Planck constant characterizes the size of the particular field body. One might even think that particle mass can be assigned with its em field body and that the Compton length of the particle corresponds to the size scale of the em field body. Complex combinations of dark field bodies become possible, and the dream is that one could understand various phases of matter in terms of these combinations.
Nuclear string model suggests that the color flux tubes and weak flux quanta associated with nuclei can become dark in this sense and have a size of the order of atomic radius, so that dark nuclear physics would have a direct relevance for condensed matter physics. If this happens, it becomes impossible to make a reductionistic separation between nuclear physics and condensed matter physics and chemistry anymore.
2. What are dark nucleons?
The basic hypothesis is that nuclei can make a phase transition to a dark phase in which the size of both quarks and nuclei is measured in Angstroms. For the less radical option this transition could happen only for the color, weak, and em field bodies. Super-nuclei formed from protons connected by dark color bonds, with inter-nucleon distance of the order of atomic radius, might be crucial for understanding the properties of water and perhaps even the properties of ordinary condensed matter. A large hbar phase for the weak field body of D and Pd nuclei, with the size scale of atom, would explain the selection rules of cold fusion.
3. Anomalous properties of water and dark nuclear physics
A direct support for the partial darkness of water comes from the H1.5O chemical formula supported by neutron and electron diffraction in attosecond time scale. The explanation would be that one fourth of the protons combine to form super-nuclei, with the protons connected by color bonds and having a distance considerably larger than atomic radius.
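The arithmetic behind the H1.5O formula is simple, a sketch under the stated assumption that the dark quarter of the protons is invisible to the fast diffraction probe:

```python
def effective_h_ratio(dark_fraction):
    """Effective H-to-O ratio seen by attosecond-scale diffraction when a
    given fraction of the water protons is dark (invisible to the probe);
    ordinary water has 2 hydrogens per oxygen."""
    return 2.0 * (1.0 - dark_fraction)

# A dark fraction of one fourth reproduces the observed H1.5O formula.
```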
The crucial property of water is the presence of molecular clusters. Tetrahedral clusters allow an interpretation in terms of magic Z=8 protonic dark nuclei. The icosahedral clusters consisting of 20 tetrahedral clusters in turn have an interpretation as magic dark dark nuclei: the presence of the dark dark matter explains a large portion of the anomalies associated with water and explains the unique role of water in biology. In living matter also higher levels of the dark matter hierarchy are predicted to be present. The observed nuclear transmutations suggest that also light weak bosons are present.
4. Implications of the partial darkness of condensed matter
The model for partially dark condensed matter inspired by the nuclear string model and the model of cold fusion inspired by it makes it possible to understand the low compressibility of condensed matter as being due to the repulsive weak force between exotic quarks, explains large parity breaking effects in living matter, and suggests a profound modification of the notion of chemical bond with most important implications for bio-chemistry and the understanding of bio-chemical evolution.
Dark Forces and Living Matter
The unavoidable presence of classical long ranged weak (and also color) gauge fields in TGD Universe has been a continual source of worries for more than two decades. The basic question has been whether the Z0 charges of elementary particles are screened in electro-weak length scale or not. The same question must be raised in the case of color charges. For a long time the hypothesis was that the charges are fed to larger space-time sheets in this length scale rather than screened by vacuum charges, so that an effective screening results in electro-weak length scale. This hypothesis turned out to be a failure and was replaced with the idea that the non-linearity of field equations (only the topological half of Maxwell's equations holds true) implies the generation of vacuum charge densities responsible for the screening.
The weak form of electric-magnetic duality led to the identification of the long sought-for mechanism causing the weak screening in electro-weak scales. The basic implication of the duality is that the Kähler electric charges of wormhole throats representing particles are proportional to Kähler magnetic charges, so that the CP2 projections of the wormhole throats are homologically non-trivial. The Kähler magnetic charges do not create long range monopole fields if they are neutralized by wormhole throats carrying opposite monopole charges and a weak isospin neutralizing the axial isospin of the particle's wormhole throat. One could speak of confinement of weak isospin. The weak field bodies of elementary fermions would be replaced with string like objects with a length of order W boson Compton length. Electromagnetic flux would be fed to the electromagnetic field body, where it would be fed further to larger space-time sheets. A similar mechanism could apply in the case of color quantum numbers. Weak charges would therefore be screened for ordinary matter in electro-weak length scale, but dark electro-weak bosons would correspond to a much longer symmetry breaking length scale for the weak field body. Large values of Planck constant would make it possible to zoom up elementary particles and study their internal structure without any need for gigantic accelerators.
In this chapter possible implications of the dark weak force for the understanding of living matter are discussed. The basic question is how classical Z0 fields could make themselves visible. Large parity breaking effects in living matter suggest in which direction one should look for the answer to the question. One possible answer is based on the observation that for vacuum extremals classical electromagnetic and Z0 fields are proportional to each other, and this means that the standard electromagnetic couplings of dark fermions are replaced with effective couplings in which the contribution of the classical Z0 force dominates. This modifies dramatically the model for the cell membrane as a Josephson junction and raises the scale of Josephson energies from the IR range just above thermal threshold to visible and ultraviolet. The amazing finding is that the Josephson energies for biologically important ions correspond to the energies assigned to the peak frequencies in the biological activity spectrum of photoreceptors in the retina. This suggests that almost vacuum extremals and thus also classical Z0 fields are in a central role in the understanding of the functioning of the cell membrane and of sensory qualia. This would also explain the large parity breaking effects in living matter.
A further conjecture is that EEG and its predicted fractally scaled variants, which have the same energies in visible and UV range but different scales of Josephson frequencies, correspond to Josephson photons with various values of Planck constant. The decay of dark ELF photons with energies of visible photons would give rise to bunches of ordinary ELF photons. Biophotons in turn could correspond to ordinary visible photons resulting from the phase transition of these photons to photons with the ordinary value of Planck constant. This leads to a very detailed view about the role of dark electromagnetic radiation in biomatter and also to a model for how sensory qualia are realized. The general conclusion might be that most effects due to the dark weak force are associated with almost vacuum extremals.
Super-Conductivity in Many-Sheeted Space-Time
In this chapter a model for high Tc super-conductivity as a quantum critical phenomenon is developed. The model relies on the notions of quantum criticality, dynamical quantized Planck constant requiring a generalization of the 8-D imbedding space to a book like structure, and many-sheeted space-time. In particular, the notion of magnetic flux tube as a carrier of supra current is of central importance.
With a sufficient amount of twisting and weaving of these basic ideas one ends up with a concrete model for high Tc superconductors as quantum critical superconductors, consistent with the qualitative facts that I am personally aware of. The following minimal model looks like the most realistic option found hitherto.
At the qualitative level the model explains various strange features of high Tc superconductors. One can understand the high value of Tc and the ambivalent character of high Tc superconductors, the existence of the pseudogap and scaling laws for observables above Tc, the role of stripes and doping and the existence of a critical doping, etc.
Quantum Hall effect and Hierarchy of Planck Constants
In this chapter I try to formulate more precisely the recent TGD based view about fractional quantum Hall effect (FQHE). This view is much more realistic than the original rough scenario, which neglected the existing rather detailed understanding. The spectrum of ν and the mechanism producing it are the same as in the composite fermion approach. The new elements relate to the less well-understood aspects of FQHE, namely charge fractionization, the emergence of braid statistics, and the non-abelianity of braid statistics.
A Possible Explanation of Shnoll Effect
Shnoll and collaborators have discovered strange repeating patterns of random fluctuations of physical observables such as the number n of nuclear decays in a given time interval. Periodically occurring peaks for the distribution of the number N(n) of measurements producing n events in a series of measurements as a function of n are observed instead of a single peak. The positions of the peaks are not random, and the patterns depend on position and time, varying periodically in time scales possibly assignable to Earth-Sun and Earth-Moon gravitational interaction.
These observations suggest a modification of the expected probability distributions, but it is very difficult to imagine any physical mechanism for this in the standard physics framework. Rather, a universal deformation of the predicted probability distributions would be in question, requiring something analogous to the transition from classical physics to quantum physics.
The hint about the nature of the modification comes from the TGD inspired quantum measurement theory proposing a description of the notion of finite measurement resolution in terms of inclusions of so called hyper-finite factors of type II1 (HFFs) and closely related quantum groups. Also p-adic physics, another key element of TGD, is expected to be involved. A modification of a given probability distribution P(n| λi) for a positive integer valued variable n characterized by rational-valued parameters λi is obtained by replacing n and the integers characterizing λi with so called quantum integers depending on the quantum phase qm = exp(i2π/m). The quantum integer nq must be defined as the product of the quantum counterparts pq of the primes p appearing in the prime decomposition of n. One has pq = sin(2π p/m)/sin(2π/m) for p ≠ P and pq = P for p = P. Here m must satisfy m ≥ 3, m ≠ p, and m ≠ 2p.
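The definition of the quantum integer can be rendered numerically as follows (my illustration; the choices of m and of the distinguished prime P in any call are hypothetical):

```python
import math

def prime_factors(n):
    """Prime factors of n with multiplicity, by trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def quantum_integer(n, m, P=None):
    """Quantum integer n_q: the product of the quantum counterparts p_q of
    the primes p in the decomposition of n, with
    p_q = sin(2*pi*p/m) / sin(2*pi/m) for p != P and p_q = P for p = P.
    The text requires m >= 3, m != p, and m != 2p for the primes involved."""
    s = math.sin(2 * math.pi / m)
    nq = 1.0
    for p in prime_factors(n):
        nq *= P if p == P else math.sin(2 * math.pi * p / m) / s
    return nq
```

Note that quantum integers can come out negative, e.g. quantum_integer(3, 4) gives sin(3π/2)/sin(π/2) = -1, which is exactly why the p-adic detour described next is needed.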
The quantum counterparts of positive integers can be negative. Therefore the quantum distribution is defined first as a p-adic valued distribution and then mapped by so called canonical identification I to a real distribution, by the map taking p-adic -1 to P and powers P^n to P^(-n) and other quantum primes to themselves, and by requiring that the mean value of n is the same for the distribution and its quantum variant. The map I satisfies I(∑ P^n) = ∑ I(P^n). The resulting distribution has peaks located periodically with periods coming as powers of P. Also periodicities with peaks corresponding to n = n+n-, with (n+)q > 0 and fixed (n-)q < 0, appear.
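A simplified schematic of the canonical identification on a quantum integer written as sign × P^k × r, where r collects the quantum primes different from P (my own rendering; a full p-adic valued implementation is beyond a sketch like this):

```python
def canonical_image(sign, k, r, P):
    """Real image under the canonical identification I of a quantum
    integer of the assumed form sign * P**k * r:  p-adic -1 maps to P,
    P**k maps to P**(-k), and the remaining factor r maps to itself."""
    minus_one_image = P if sign < 0 else 1.0
    return minus_one_image * P ** (-k) * r
```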
The periodic dependence of the distributions would be most naturally assignable to the gravitational interaction of Earth with Sun and Moon and therefore to the periodic variation of the Earth-Sun and Earth-Moon distances. The TGD inspired proposal is that the p-adic prime P and the integer m characterizing the quantum distribution are determined by a process analogous to a state function reduction, and that their most probable values depend on the deviation ΔR of the distance R through the formulas Δp/p ≈ kp ΔR/R and Δm/m ≈ km ΔR/R. The p-adic primes assignable to elementary particles are very large, unlike the primes which could characterize the empirical distributions. The hierarchy of Planck constants allows the gravitational Planck constant assignable to the space-time sheets mediating gravitational interactions to have gigantic values, and this allows p-adicity with small values of the p-adic prime P.