# ABSTRACTS OF HYPER-FINITE FACTORS, P-ADIC LENGTH SCALE HYPOTHESIS, AND DARK MATTER HIERARCHY

## PART I: HYPER-FINITE FACTORS AND HIERARCHY OF PLANCK CONSTANTS

### Was von Neumann Right After All?

The work with the TGD inspired model for topological quantum computation led to the realization that von Neumann algebras, in particular the so-called hyper-finite factors of type II1, seem to provide the mathematics needed to develop a more explicit view about the construction of S-matrix. The original discussion has transformed during the years from free speculation, reflecting in many aspects my ignorance about the mathematics involved, to a more realistic view about the role of these algebras in quantum TGD. The discussion in this chapter is restricted to the basic notions; TGD applications are mentioned only briefly and are discussed in the second chapter. The goal of von Neumann was to generalize the algebra of quantum mechanical observables. The basic ideas behind the von Neumann algebra are dictated by physics. The algebra elements allow Hermitian conjugation * and observables correspond to Hermitian operators. Any measurable function f(A) of an operator A belongs to the algebra, so that one can say that a non-commutative measure theory is in question. The predictions of quantum theory are expressible in terms of traces of observables. The density matrix defining expectations of observables in an ensemble is the basic example. The highly non-trivial requirement of von Neumann was that identical a priori probabilities for the detection of the states of an infinite state system must make sense. Since quantum mechanical expectation values are expressible in terms of operator traces, this requires that the unit operator has unit trace: tr(Id)=1. In the finite-dimensional case it is easy to build observables out of minimal projections to 1-dimensional eigenspaces of observables. In the infinite-dimensional case the probability of projection to a 1-dimensional subspace vanishes if each state is equally probable.
The notion of observable must thus be modified by excluding 1-dimensional minimal projections and allowing only projections whose trace would be infinite in the straightforward generalization of the matrix algebra trace as the dimension of the projection. The non-trivial implication of the fact that traces of projections are never larger than one is that the eigenspaces of the density matrix must be infinite-dimensional for non-vanishing projection probabilities. Quantum measurements can lead with finite probability only to mixed states with a density matrix which is a projection operator to an infinite-dimensional subspace. The simple von Neumann algebras for which the unit operator has unit trace are known as factors of type II1. The definitions adopted by von Neumann however allow more general algebras. Type I_n algebras correspond to finite-dimensional matrix algebras with finite traces, whereas type I_∞, associated with a separable infinite-dimensional Hilbert space, does not allow a bounded trace. For algebras of type III non-trivial traces are always infinite and the notion of trace becomes useless, being replaced by the notion of state, which is a generalization of the notion of thermodynamical state. The fascinating feature of this notion of state is that it defines a unique modular automorphism of the factor, defined apart from a unitary inner automorphism, and the question is whether this notion or its generalization might be relevant for the construction of M-matrix in TGD. It however seems that in the TGD framework based on Zero Energy Ontology, identifiable as a "square root" of thermodynamics, a square root of thermodynamical state is needed. The inclusions of hyper-finite factors define an excellent candidate for the description of finite measurement resolution, with the included factor representing the degrees of freedom below measurement resolution. This would also give a connection to the notion of quantum group, whose physical interpretation has remained unclear.
This idea is central to the proposed applications to quantum TGD discussed in a separate chapter.
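The defining property tr(Id) = 1 and the vanishing weight of minimal projections can be illustrated with a finite-dimensional toy sketch (the hyper-finite II1 factor arises as a suitable limit of matrix algebras equipped with the normalized trace; the code below is only this finite analogy, not TGD-specific machinery):

```python
import numpy as np

def normalized_trace(A):
    """Normalized matrix trace tr_n(A) = Tr(A)/n, so that tr_n(Id) = 1."""
    return np.trace(A) / A.shape[0]

for n in (2, 8, 64, 1024):
    identity = np.eye(n)
    minimal = np.zeros((n, n))
    minimal[0, 0] = 1.0                     # projection to a 1-D subspace
    half = np.diag([1.0] * (n // 2) + [0.0] * (n - n // 2))  # rank n/2
    # tr_n(Id) stays 1 and a rank-n/2 projection keeps trace 1/2,
    # but the minimal projection's trace 1/n tends to 0 as n grows:
    print(n, normalized_trace(identity), normalized_trace(half),
          normalized_trace(minimal))
```

In the n → ∞ limit the 1-dimensional projection carries vanishing trace, which is why the II1 factor must exclude minimal projections from the observables, as described above.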

### Mathematical Speculations Inspired by the Hierarchy of Planck Constants

This chapter contains the purely mathematical speculations about the hierarchy of Planck constants (actually only an effective hierarchy if the recent interpretation is correct), separated from the material describing the physical ideas, key mathematical concepts, and the basic applications. These mathematical speculations emerged during the first stormy years in the evolution of the ideas about Planck constant and must be taken with a big grain of salt. I feel rather conservative as compared to the fellow who produced this stuff 7 years ago. This all is of course very relative. Many readers might experience this recent me as a reckless speculator. The first speculative question concerns a possible relationship between Jones inclusions of hyper-finite factors of type II1 (hyper-finite factors are von Neumann algebras emerging naturally in the TGD framework) and the hierarchy of Planck constants. The basic idea is that the discrete groups assignable to inclusions could correspond to discrete groups acting in the effective covering spaces of the imbedding space assignable to the hierarchy of Planck constants. There are also speculations relating the hierarchy of Planck constants, McKay correspondence, and Jones inclusions. Even Farey sequences, Riemann hypothesis, and N-tangles are discussed. Depending on the reader, these speculations might be experienced as irritating or entertaining. It would be interesting to go through this stuff in the light of the recent understanding of the effective hierarchy of Planck constants to see what portion of it survives.

### Quantum criticality and dark matter

Quantum criticality is one of the cornerstone assumptions of TGD. The value of Kähler coupling strength fixes quantum TGD and is analogous to critical temperature. TGD Universe would be quantum critical. What this means is however far from obvious, and I have pondered the notion repeatedly, both from the point of view of mathematical description and phenomenology. Superfluids exhibit rather mysterious looking effects, such as the fountain effect and what looks like quantum coherence between superfluid containers which should be classically isolated. These findings serve as a motivation for the proposal that the genuinely superfluid portion of a superfluid corresponds, at least near criticality, to a large heff phase, and that also in other phase-transition-like phenomena a phase transition to a dark phase occurs in the vicinity of criticality.

### About the Nottale's formula for hgr and the possibility that Planck length lP and CP2 length R are identical giving G = R2/ℏeff

Nottale's formula for the gravitational Planck constant ℏgr = GMm/v0 involves a parameter v0 with dimensions of velocity. I have worked with the quantum interpretation of the formula, but the physical origin of v0 - or equivalently of the dimensionless parameter β0 = v0/c (to be used in the sequel) appearing in the formula - has hitherto remained open. In this chapter a possible interpretation based on the many-sheeted space-time concept, many-sheeted cosmology, and zero energy ontology (ZEO) is discussed. In ZEO the non-changing parts of zero energy states are assigned to the passive boundary of CD, and β0 should be assigned to it. There are two measures for the size of the system. The M4 size LM4 is identifiable as the maximum of the radial M4 distance from the tip of CD associated with the center of mass of the system along the light-like geodesic at the boundary of CD. The system also has a size Lind defined in terms of the induced metric of the space-time surface, which is space-like at the boundary of CD. One has Lind
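The formula ℏgr = GMm/v0 can be evaluated numerically; below is a minimal sketch for the Sun-Earth system, assuming for illustration the frequently quoted value β0 = v0/c = 2^-11 (this particular value is an assumption of the sketch, not fixed by the abstract above):

```python
# Illustrative evaluation of Nottale's formula hbar_gr = G*M*m/v0
# for the Sun-Earth system. beta0 = 2**-11 is an assumed value.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
hbar = 1.055e-34     # J s
M_sun = 1.989e30     # kg
m_earth = 5.972e24   # kg

beta0 = 2.0 ** -11
v0 = beta0 * c                       # ~1.46e5 m/s
hbar_gr = G * M_sun * m_earth / v0   # gravitational Planck constant
ratio = hbar_gr / hbar
print(f"hbar_gr/hbar ~ {ratio:.2e}")
```

The ratio comes out enormous (of the order of 10^73), which is the point of the formula: gravitational quantum coherence in astrophysical length scales.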

### TGD View about Quasars

The work of Rudolph Schild and his colleagues Darryl Leiter and Stanley Robertson (among others) suggests that quasars are not supermassive blackholes but something else - MECOs, magnetic eternally collapsing objects having no horizon and possessing a magnetic moment. Schild et al argue that the same applies to galactic blackhole candidates and active galactic nuclei, perhaps even to ordinary blackholes, as Abhas Mitra, the developer of the notion of MECO, proposes. In the sequel a TGD inspired view about quasars is proposed, relying on the general model for how galaxies are generated as the energy of thickened cosmic strings decays to ordinary matter. Quasars would not be blackhole-like objects but would serve as an analog of the decay of the inflaton field producing the galactic matter. The energy of the string-like object would replace galactic dark matter and automatically predict a flat velocity spectrum. TGD is assumed to have standard model and GRT as its QFT limit in long length scales. Could MECOs provide this limit? It seems that the answer is negative: MECOs represent still-collapsing objects. The energy of the inflaton field is replaced with the sum of the magnetic energy of the cosmic string and a negative volume energy, which both decrease as the thickness of the flux tube increases. The liberated energy transforms to ordinary particles and their dark variants in the TGD sense. Time reversal of blackhole would be a more appropriate interpretation. One can of course ask whether the blackhole candidates in galactic nuclei are time reversals of quasars in the TGD sense. The writing of the article also led to a considerable understanding of two key aspects of TGD: the understanding of the twistor lift and of the p-adic evolution of cosmological constant improved considerably.
Also the understanding of the gravitational Planck constant and of the notion of space-time as a covering space became much more detailed, in turn allowing a much more refined view about the anatomy of the magnetic body.

### Holography and Quantum Error Correcting Codes: TGD View

Preskill et al suggest a highly interesting representation of holography in terms of quantum error correcting codes. The idea is that the time = constant section of AdS, which is a hyperbolic space allowing tessellations, can define tensor networks. So-called perfect tensors are the building bricks of the tensor networks providing a representation for holography and at the same time defining error correcting codes by mapping localized interior states (logical qubits) to highly entangled non-local boundary states (physical qubits). There are three observations that set bells ringing and actually motivated this article. Perfect tensors define entanglement which in the TGD framework corresponds to negentropic entanglement playing a key role in the TGD inspired theory of consciousness and of living matter. In the TGD framework the hyperbolic tessellations are realized at hyperbolic spaces H3(a) defining light-cone proper time hyperboloids of the M4 light-cone. TGD replaces AdS/CFT correspondence with strong form of holography. A very attractive idea is that in living matter magnetic flux tube networks defining quantum computational networks provide a realization of tensor networks realizing also the holographic error correction mechanism: negentropic entanglement - perfect tensors - would be the key element. As I have proposed, these flux tube networks would define a kind of central nervous system making it possible for living matter to consciously experience its biological body using the magnetic body. These networks would also give rise to the counterpart of condensed matter physics of dark matter at the level of the magnetic body: the replacement of lattices based on subgroups of the translation group with an infinite number of tessellations means that this analog of condensed matter physics describes quantum complexity.
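The notion of a perfect tensor can be made concrete with a small sketch. The example below is a standard one from the quantum information literature, not TGD-specific: a 4-leg qutrit tensor built from a classical code, for which every balanced split of the legs is maximally entangled - exactly the property used as a building brick of holographic error correcting codes.

```python
import itertools
import numpy as np

# A 4-leg perfect tensor on qutrits (an AME(4,3) state):
# psi_{ijkl} is non-zero iff k = i+j and l = i+2j (mod 3).
psi = np.zeros((3, 3, 3, 3))
for i, j in itertools.product(range(3), range(3)):
    psi[i, j, (i + j) % 3, (i + 2 * j) % 3] = 1 / 3  # normalized amplitude

# "Perfect" = every 2|2 bipartition of the legs is maximally entangled:
for cut in [(0, 1, 2, 3), (0, 2, 1, 3), (0, 3, 1, 2)]:
    m = psi.transpose(cut).reshape(9, 9)
    s = np.linalg.svd(m, compute_uv=False)
    # all Schmidt coefficients equal -> maximally mixed reduced state
    assert np.allclose(s, 1 / 3)
print("all 2|2 bipartitions maximally entangled: perfect tensor")
```

Viewed as a map from any two legs to the other two, this tensor is (proportional to) a unitary, which is what lets networks of such tensors push bulk "logical" information isometrically to the boundary.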

## PART II: P-ADIC LENGTH SCALE HIERARCHY AND DARK MATTER HIERARCHY

### Cold Fusion Again

During the years I have developed two models of cold fusion, and in this chapter these models are combined together. The basic idea of the TGD based model of cold fusion is that cold fusion occurs in two steps. First, dark nuclei (large heff = n×h) with much lower binding energy than ordinary nuclei are formed at magnetic flux tubes, possibly carrying monopole flux. These nuclei can leak out of the system along magnetic flux tubes. Under some circumstances these dark nuclei can transform to ordinary nuclei and give rise to detectable fusion products. An essential additional condition is that the dark protons can decay to neutrons rapidly enough by exchanges of dark weak bosons effectively massless below atomic length scale. This makes it possible to overcome the Coulomb wall and explains why the final state nuclei are stable and why the decay to ordinary nuclei does not yield only protons. Thus it seems that this model, combined with the TGD variant of the Widom-Larsen model, could nicely explain the existing data. I will describe the steps leading to the TGD inspired model for cold fusion combining the earlier TGD variant of the Widom-Larsen model with the model inspired by the TGD inspired model of Pollack's fourth phase of water, using as input data the findings from laser pulse induced cold fusion discovered by Leif Holmlid and collaborators. I also briefly consider alternative options (models assuming surface plasmon polariton and heavy electron). After that I apply the TGD inspired model to some cases (Pons-Fleischmann effect, bubble fusion, and the LeClair effect). The model explains the strange findings about cold fusion - in particular the fact that only stable nuclei are produced - and suggests that also ordinary nuclear reactions might have a more fundamental description in terms of a similar model.

### Dark Nuclear Physics and Condensed Matter

In this chapter the possible effects of dark matter in nuclear physics and condensed matter physics are considered. The spirit of the discussion is necessarily rather speculative, since the vision about the hierarchy of Planck constants is only 5 years old. The most general form of the hierarchy would involve both singular coverings and factor spaces of CD (causal diamond of M4, defined as the intersection of future and past directed light-cones) and CP2. There are grave objections against the allowance of factor spaces. In this case Planck constant could be smaller than its standard value, and there are very few experimental indications for this. Quite recently came the realization that the hierarchy of Planck constants might emerge from the basic quantum TGD as a consequence of the extreme non-linearity of the field equations, implying that the correspondence between the derivatives of imbedding space coordinates and canonical momenta is many-to-one. This makes natural the introduction of covering spaces of CD and CP2. Planck constant would be effectively replaced with a multiple of the ordinary Planck constant defined by the number of the sheets of the covering. The space-like 3-surfaces at the ends of the causal diamond and the light-like 3-surfaces defined by wormhole throats carrying elementary particle quantum numbers would be quantum critical in the sense of being unstable against decay to many-sheeted structures. Charge fractionization could be understood in this scenario. Biological evolution would have the increase of the Planck constant as one aspect. The crucial scaling of the size of CD by Planck constant can be justified by a simple argument. Note that primary p-adic length scales would scale as hbar^(1/2) rather than hbar as assumed in the original model.

1. What does darkness mean?

Dark matter is identified as matter with a non-standard value of Planck constant.
The weak form of darkness is that only some field bodies of the particle, consisting of flux quanta mediating bound state interactions between particles, become dark. One can assign to each interaction a field body (em, Z0, W, gluonic, gravitational), and a p-adic prime and the value of Planck constant characterize the size of the particular field body. One might even think that particle mass can be assigned with its em field body and that the Compton length of the particle corresponds to the size scale of the em field body. Complex combinations of dark field bodies become possible, and the dream is that one could understand various phases of matter in terms of these combinations. Nuclear string model suggests that the color flux tubes and weak flux quanta associated with nuclei can become dark in this sense and acquire a size of the order of atomic radius, so that dark nuclear physics would have a direct relevance for condensed matter physics. If this happens, it becomes impossible to make a reductionistic separation between nuclear physics and condensed matter physics and chemistry anymore.

2. What are dark nucleons?

The basic hypothesis is that nuclei can make a phase transition to a dark phase in which the size of both quarks and nuclei is measured in Angstroms. For the less radical option this transition could happen only for the color, weak, and em field bodies. Super-nuclei formed from protons connected by dark color bonds, with inter-nucleon distance of the order of atomic radius, might be crucial for understanding the properties of water and perhaps even the properties of ordinary condensed matter. A large hbar phase for the weak field body of D and Pd nuclei, with the size scale of an atom, would explain the selection rules of cold fusion.

3. Anomalous properties of water and dark nuclear physics

Direct support for the partial darkness of water comes from the H1.5O chemical formula supported by neutron and electron diffraction in the attosecond time scale.
The explanation would be that one fourth of the protons combine to form super-nuclei with protons connected by color bonds and having distance sufficiently larger than atomic radius. A crucial property of water is the presence of molecular clusters. Tetrahedral clusters allow an interpretation in terms of magic Z=8 protonic dark nuclei. The icosahedral clusters consisting of 20 tetrahedral clusters in turn have an interpretation as magic dark dark nuclei: the presence of the dark dark matter would explain a large portion of the anomalies associated with water and explain the unique role of water in biology. In living matter also higher levels of the dark matter hierarchy are predicted to be present. The observed nuclear transmutations suggest that also light weak bosons are present.

4. Implications of the partial darkness of condensed matter

The model for partially dark condensed matter inspired by the nuclear string model and the model of cold fusion inspired by it makes it possible to understand the low compressibility of condensed matter as being due to the repulsive weak force between exotic quarks, explains large parity breaking effects in living matter, and suggests a profound modification of the notion of chemical bond, having most important implications for bio-chemistry and the understanding of bio-chemical evolution.
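The H1.5O arithmetic referred to above is simple enough to sketch explicitly: if a fraction f of water's protons is dark and therefore invisible to neutron and electron diffraction in the attosecond time scale, the apparent stoichiometry H_x O has x = 2(1 - f).

```python
# Sketch of the arithmetic behind the effective H1.5O formula: a
# fraction f of the hydrogen (protons) in water is dark and invisible
# to attosecond-scale diffraction, so each O appears to carry 2*(1-f)
# hydrogens instead of 2.
def apparent_hydrogen_per_oxygen(dark_fraction):
    return 2.0 * (1.0 - dark_fraction)

x = apparent_hydrogen_per_oxygen(0.25)   # one fourth of protons dark
print(f"H{x}O")
```

With f = 1/4 the apparent formula is exactly H1.5O, matching the diffraction result quoted in the abstract.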

### Dark Forces and Living Matter

The unavoidable presence of classical long ranged weak (and also color) gauge fields in TGD Universe has been a continual source of worries for more than two decades. The basic question has been whether the Z0 charges of elementary particles are screened in electro-weak length scale or not. The same question must be raised in the case of color charges. For a long time the hypothesis was that the charges are fed to larger space-time sheets in this length scale rather than screened by vacuum charges, so that an effective screening results in electro-weak length scale. This hypothesis turned out to be a failure and was replaced with the idea that the non-linearity of field equations (only the topological half of Maxwell's equations holds true) implies the generation of vacuum charge densities responsible for the screening. The weak form of electric-magnetic duality led to the identification of the long-sought mechanism causing the weak screening in electro-weak scales. The basic implication of the duality is that the Kähler electric charges of wormhole throats representing particles are proportional to Kähler magnetic charges, so that the CP2 projections of the wormhole throats are homologically non-trivial. The Kähler magnetic charges do not create long range monopole fields if they are neutralized by wormhole throats carrying opposite monopole charges and weak isospin neutralizing the axial isospin of the particle's wormhole throat. One could speak of confinement of weak isospin. The weak field bodies of elementary fermions would be replaced with string-like objects with a length of the order of W boson Compton length. Electro-magnetic flux would be fed to the electromagnetic field body where it would be fed to larger space-time sheets. A similar mechanism could apply in the case of color quantum numbers.
Weak charges would therefore be screened for ordinary matter in electro-weak length scale, but dark electro-weak bosons would correspond to a much longer symmetry breaking length scale for the weak field body. Large values of Planck constant would make it possible to zoom up elementary particles and study their internal structure without any need for gigantic accelerators. In this chapter possible implications of the dark weak force for the understanding of living matter are discussed. The basic question is how classical Z0 fields could make themselves visible. Large parity breaking effects in living matter suggest the direction in which one should look for the answer. One possible answer is based on the observation that for vacuum extremals classical electromagnetic and Z0 fields are proportional to each other, and this means that the standard electromagnetic couplings of dark fermions are replaced with effective couplings in which the contribution of the classical Z0 force dominates. This modifies dramatically the model for the cell membrane as a Josephson junction and raises the scale of Josephson energies from the IR range just above thermal threshold to visible and ultraviolet. The amazing finding is that the Josephson energies for biologically important ions correspond to the energies assigned to the peak frequencies in the biological activity spectrum of photoreceptors in the retina. This suggests that nearly vacuum extremals and thus also classical Z0 fields are in a central role in the understanding of the functioning of the cell membrane and of sensory qualia. This would also explain the large parity breaking effects in living matter. A further conjecture is that EEG and its predicted fractally scaled variants, with the same energies in the visible and UV range but different scales of Josephson frequencies, correspond to Josephson photons with various values of Planck constant.
The decay of dark ELF photons with energies of visible photons would give rise to bunches of ordinary ELF photons. Biophotons in turn could correspond to ordinary visible photons resulting from the phase transition of dark photons to photons with the ordinary value of Planck constant. This leads to a very detailed view about the role of dark electromagnetic radiation in biomatter and also to a model for how sensory qualia are realized. The general conclusion might be that most effects due to the dark weak force are associated with nearly vacuum extremals.
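The idea of a dark ELF photon carrying a visible-photon energy rests on E = heff·f = r·h·f, and the required size of r is easy to sketch. In the order-of-magnitude estimate below, the 10 Hz (EEG alpha band) frequency and the 2 eV (red light) target energy are illustrative assumptions, not values taken from the abstract:

```python
# Order-of-magnitude sketch: how large must r = h_eff/h be for an
# ELF photon to carry a visible-photon energy, E = r * h * f?
h_eV = 4.1357e-15          # Planck constant in eV*s
f_elf = 10.0               # Hz, an assumed EEG-range frequency
E_target = 2.0             # eV, an assumed visible-photon energy

r = E_target / (h_eV * f_elf)   # required h_eff/h
print(f"required h_eff/h ~ {r:.2e}")
# The decay of one such dark photon to ordinary ELF photons would
# then yield a bunch of roughly r quanta, as described in the text.
```

The required ratio comes out near 5×10^13, illustrating why gigantic values of heff are needed in this picture.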

### Super-Conductivity in Many-Sheeted Space-Time

In this chapter a model for high Tc super-conductivity as a quantum critical phenomenon is developed. The model relies on the notions of quantum criticality, dynamical quantized Planck constant requiring a generalization of the 8-D imbedding space to a book-like structure, and many-sheeted space-time. In particular, the notion of magnetic flux tube as a carrier of supra current is a central concept. With a sufficient amount of twisting and weaving of these basic ideas one ends up with a concrete model for high Tc superconductors as quantum critical superconductors, consistent with the qualitative facts that I am personally aware of. The following minimal model looks like the most realistic option found hitherto. The general idea is that magnetic flux tubes are carriers of supra currents. In anti-ferromagnetic phases these flux tube structures form small closed loops, so that the system behaves as an insulator. Some mechanism leading to the formation of long flux tubes must exist. Doping creates holes located around stripes, which become positively charged and attract electrons to the flux tubes. The higher critical temperature Tc1 corresponds to the formation of local configurations of parallel spins assigned to the holes of stripes, giving rise to local dipole fields with size scale of the order of the length of the stripe. Conducting electrons form Cooper pairs at the magnetic flux tube structures associated with these dipole fields. The elongated structure of the dipoles favors angular momentum L=2 for the pairs. The presence of magnetic field favors Cooper pairs with spin S=1. Stripes can be seen as 1-D metals with delocalized electrons. The interaction responsible for the energy gap corresponds to the transversal oscillations of the magnetic flux tubes inducing oscillations of the nuclei of the stripe. These transverse phonons have spin and their exchange is a good candidate for the interaction giving rise to a mass gap.
This could explain the BCS type aspects of high Tc super-conductivity. Above Tc supra currents are possible only in the length scale of the flux tubes of the dipoles, which is of the order of the stripe length. The reconnections between neighboring flux tube structures induced by the transverse fluctuations give rise to longer flux tube structures making finite conductivity possible. These occur with a certain probability p(T,L) depending on temperature and the distance L between the stripes. By criticality p(T,L) depends only on the dimensionless variable x = TL/hbar: p = p(x). At the critical temperature Tc the transverse fluctuations have large amplitude and make p(xc) so large that very long flux tubes are created and supra currents can run. The phenomenon is completely analogous to percolation. The critical temperature Tc = xc·hbar/L is predicted to be proportional to hbar and inversely proportional to L (which is indeed the case). If flux tubes correspond to a large value of hbar, one can understand the high value of Tc. Both Cooper pairs and magnetic flux tube structures represent dark matter in the TGD sense. The model makes it possible to interpret the characteristic spectral lines in terms of the excitation energy of the transversal fluctuations and the gap energy of the Cooper pair. The observed 50 meV threshold for the onset of photon absorption suggests that below Tc also S=0 Cooper pairs are possible and have gap energy of about 9 meV, whereas S=1 Cooper pairs would have gap energy of about 27 meV. The flux tube model indeed predicts that S=0 Cooper pairs become stable below Tc since they cannot anymore transform to S=1 pairs. Their presence could explain the BCS type aspects of high Tc super-conductivity. The estimate for hbar/hbar0 = r from the critical temperature Tc1 is about r=3, contrary to the original expectations inspired by the model of living system as a super-conductor suggesting a much higher value.
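The percolation analogy can be illustrated with a 1-D toy sketch (a caricature for intuition only, not the author's model): treat the stripes as sites on a line and let a reconnection between neighboring flux tube structures occur with probability p; long supra-current-carrying chains appear only when p approaches 1.

```python
import random

# Toy 1-D "flux tube percolation": neighboring stripes reconnect with
# probability p; supra currents need long connected chains, whose
# typical length ~ 1/(1-p) diverges as p -> 1 (the percolation-like
# onset described in the text).
def longest_chain(p, n_sites=10_000, seed=0):
    rng = random.Random(seed)        # fixed seed for reproducibility
    longest = run = 1
    for _ in range(n_sites - 1):
        run = run + 1 if rng.random() < p else 1
        longest = max(longest, run)
    return longest

for p in (0.2, 0.9, 0.99):
    print(p, longest_chain(p))
```

The sharp growth of the longest chain as p(x) increases toward 1 is the 1-D shadow of the percolation transition at Tc invoked above.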
An unexpected prediction is that the coherence length is actually r times longer than the coherence length predicted by conventional theory, so that a type I super-conductor could be in question, with stripes serving as duals for the defects of a type I super-conductor in a nearly critical magnetic field, replaced now by ferromagnetic phase. TGD predicts preferred values for r = hbar/hbar0, and the applications to bio-systems favor powers of r = 2^11. r = 2^11 predicts that the electron Compton length is of the order of atomic size scale. Bio-superconductivity could involve electrons with r = 2^22 having size characterized by the thickness of the lipid layer of the cell membrane. At the qualitative level the model explains various strange features of high Tc superconductors. One can understand the high value of Tc and the ambivalent character of high Tc superconductors, the existence of pseudogap and scaling laws for observables above Tc, the role of stripes and doping and the existence of a critical doping, etc.
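The size scaling quoted above (λ → r·λ for r = heff/h) can be sketched numerically. The use of the reduced electron Compton length below is an assumption of the sketch; the abstract does not specify which convention is meant:

```python
# Illustrative scaling of the electron Compton length by r = h_eff/h.
lambda_e = 3.862e-13     # reduced electron Compton length, meters

for r in (2 ** 11, 2 ** 22):
    scaled = r * lambda_e
    print(f"r = 2^{r.bit_length() - 1}: scaled length ~ {scaled:.2e} m")
```

With r = 2^11 the scaled length lands near 0.8 nm, i.e. the atomic size scale mentioned in the text; r = 2^22 takes it further up by another factor of 2^11.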

### Quantum Hall effect and Hierarchy of Planck Constants

In this chapter I try to formulate more precisely the recent TGD based view about the fractional quantum Hall effect (FQHE). This view is much more realistic than the original rough scenario, which neglected the existing rather detailed understanding. The spectrum of ν and the mechanism producing it are the same as in the composite fermion approach. The new elements relate to the not so well understood aspects of FQHE, namely charge fractionization, the emergence of braid statistics, and the non-abelianity of braid statistics. The starting point is the composite fermion model, so that the basic predictions are the same. Now magnetic vortices correspond to (Kähler) magnetic flux tubes carrying a unit of magnetic flux. The magnetic field inside the flux tube would be created by a delocalized electron at the boundary of the vortex. One can raise two questions. Could the boundary of the macroscopic system carrying the anyonic phase have an identification as a macroscopic analog of a partonic 2-surface serving as a boundary between Minkowskian and Euclidian regions of the space-time sheet? If so, the space-time sheet assignable to the macroscopic system in question would have Euclidian signature, and would be analogous to a blackhole or to a line of a generalized Feynman diagram. Could the boundary of the vortex be identifiable as a light-like boundary separating the Minkowskian magnetic flux tube from the Euclidian interior of the macroscopic system, and be also analogous to a wormhole throat? If so, both macroscopic objects and magnetic vortices would be rather exotic geometric objects not possible in the general relativity framework. Taking the composite fermion model as a starting point one obtains the standard predictions for the filling fractions. One should also understand charge fractionization and fractional braiding statistics. Here the vacuum degeneracy of Kähler action suggests the explanation.
Vacuum degeneracy implies that the correspondence between the normal component of the canonical momentum current and the normal derivatives of imbedding space coordinates is 1-to-n. These kinds of branchings result in multi-furcations induced by variations of the system parameters, and the scaling of the external magnetic field represents one such variation. At the orbits of wormhole throats, which can have even macroscopic M4 projections, one has a 1→na correspondence, and at the space-like ends of the space-time surface at the light-like boundaries of the causal diamond one has a 1→nb correspondence. This implies that at partonic 2-surfaces, defined as the intersections of these two kinds of 3-surfaces, one has a 1→na×nb correspondence. This correspondence can be described by using a local singular n-fold covering of the imbedding space. Unlike in the original approach, the covering space is only a convenient auxiliary tool rather than a fundamental notion. The fractionization of charge can be understood as follows. A delocalization of electron charge to the n sheets of the multi-furcation takes place, and a single sheet is analogous to a sheet of the Riemann surface of the function z^(1/n) and carries fractional charge q = e/n, n = na×nb. Fractionization applies also to other quantum numbers. One can also have many-electron states with several delocalized electrons: in this case one obtains a more general charge fractionization: q = νe. Also the fractional braid statistics can be understood. For ordinary statistics rotations of M4 rotate entire partonic 2-surfaces. For braid statistics rotations of M4 (and particle exchange) induce a flow of braid ends along the partonic 2-surface. If the singular local covering is analogous to the Riemann surface of z^(1/n), the braid rotation by ΔΦ = 2π, where Φ corresponds to the M4 angle, leads to a second branch of the multi-furcation and one can give up the usual quantization condition for angular momentum.
For the natural angle coordinate φ of the n-branched covering, Δφ = 2π corresponds to ΔΦ = n×2π. If one identifies the sheets of the multi-furcation and therefore uses Φ as the angle coordinate, single-valued angular momentum eigenstates become in general n-valued, angular momentum in braid statistics becomes fractional, and one obtains fractional braid statistics for angular momentum. How to understand the exceptional values ν = 5/2, 7/2 of the filling fraction? The non-abelian braid group representations can be interpreted as higher-dimensional projective representations of the permutation group: for ordinary statistics only Abelian representations are possible. It seems that the minimum number of braids is n > 2 from the condition of non-abelianity of braid group representations. The condition that ordinary statistics is fermionic gives n > 3. The minimum value is n = 4, consistent with the fractional charge e/4. The model introduces a Z4 valued topological quantum number characterizing the flux tubes. This also makes non-Abelian braid statistics possible. The interpretation of this quantum number as a Z4 valued momentum characterizing the four delocalized states of the flux tube at the sheets of the 4-furcation suggests itself strongly. The topology would correspond to that of a 4-fold covering space of the imbedding space serving as a convenient auxiliary tool. The more standard explanation is that Z4 = Z2×Z2 such that the Z2:s correspond to the presence or absence of a neutral Majorana fermion in the two Cooper-pair-like states formed by the flux tubes. What remains to be understood is the emergence of a non-abelian gauge group realizing non-Abelian fractional statistics in the gauge theory framework. TGD predicts the possibility of dynamical gauge groups, and maybe this kind of gauge group indeed emerges. Dynamical gauge groups emerge also for stacks of N branes, and the n sheets of the multi-furcation are analogous to the N sheets in the stack for many-electron states.
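The statement that the spectrum of ν is the same as in the composite fermion approach can be made concrete: the composite fermion model predicts the Jain sequences ν = n/(2pn ± 1). The sketch below enumerates them (a standard result of the composite fermion approach, quoted for illustration; the TGD-specific content of the abstract concerns fractionization and braid statistics, not this spectrum):

```python
from fractions import Fraction

# Jain sequences of filling fractions nu = n/(2*p*n +/- 1) predicted
# by the composite fermion model referred to in the text.
def jain_fractions(p_max=2, n_max=4):
    fracs = set()
    for p in range(1, p_max + 1):
        for n in range(1, n_max + 1):
            fracs.add(Fraction(n, 2 * p * n + 1))
            fracs.add(Fraction(n, 2 * p * n - 1))
    return sorted(fracs)

print(jain_fractions())
# Denominators 2*p*n +/- 1 are always odd, so nu = 5/2 never appears
# in these sequences - which is why the exceptional even-denominator
# fractions call for the separate non-abelian explanation above.
```

The prominent odd-denominator fractions 1/3, 2/5, 2/3, ... all appear, while ν = 5/2 and 7/2 fall outside the sequences, matching the abstract's treatment of them as exceptional.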