What's new in HYPER-FINITE FACTORS, P-ADIC LENGTH SCALE HYPOTHESIS, AND DARK MATTER HIERARCHY. Note: Newest contributions are at the top! |
Year 2017 |
How is ionization possible in living matter?

The appearance of ions in living matter looks mysterious. The same is true for ions in electrolytes. It is easy to talk about cold plasma but much more difficult to answer the question how this cold plasma can be created. Usually the formation of plasma involves ionization, which requires a high temperature of the order of the atomic binding energy for the valence electrons of the atom. For hydrogen atom the binding energy is around 13 eV, which corresponds to a temperature of roughly 1.5×10^5 Kelvin - almost three orders of magnitude higher than room temperature! In an electrolyte the presence of rather weak electric fields cannot explain why the ionization takes place. For some reason chemists and biologists do not spend much time pondering fundamentals, and theoreticians enjoying a monthly salary have a highly irreverent attitude towards these disciplines as intellectual entertainment of lower life-forms. Therefore also this question has been swept under the rug and stayed there. The TGD based explanation for the paradox is simple. If the value of heff/h=n for the valence electrons is high enough, the binding energy, which is proportional to 1/n², becomes so small that a photon with rather low energy, say an infrared (IR) photon, can ionize the dark atom. One can say that the atoms in this state are quantum critical: a small perturbation can ionize them.
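A minimal numerical sketch of this scaling, assuming the baseline n0 = 6 for ordinary atoms suggested elsewhere in this collection and the hydrogen ground state binding energy 13.6 eV; the values of n used are purely illustrative:

```python
# Binding energy of a dark hydrogen atom: E(n) = E0 * (n0/n)^2,
# with n = heff/h and n0 = 6 assumed for ordinary atoms (TGD hypothesis).
E0 = 13.6   # eV, ordinary hydrogen ground-state binding energy
n0 = 6      # assumed value of heff/h for ordinary atoms

for n in [6, 12, 24, 48]:
    E = E0 * (n0 / n) ** 2
    print(f"n = {n:3d}: binding energy = {E:6.3f} eV")

# Output: 13.6 eV (UV) for n=6, 3.4 eV (visible) for n=12,
# 0.85 eV and 0.21 eV (infrared) for n=24 and n=48:
# an IR photon suffices to ionize a sufficiently dark atom.
```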
|
About the biological role of low valence ions

A comment about the role of biologically important ions is in order. As a rule they tend to have low valence, especially those whose cyclotron frequencies in B_end = 0.2 Gauss seem to be biologically important. The possibly existing valence bonds between atoms towards the left end of the rows of the periodic table (Li, Na, K, Ca, Mg, ...) - if they exist at all - have low valence and a low value of n satisfying n ≥ 6 (note that the valence of the bond is the valence of the atom with the higher valence).
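As a check on the numbers involved, a small script computing cyclotron frequencies in B_end = 0.2 Gauss for some of these ions, using the standard formula f = qB/2πm; the ion masses are approximated by A times the atomic mass unit:

```python
import math

# Cyclotron frequency f = q*B/(2*pi*m) in the endogenous field B_end = 0.2 Gauss.
B = 0.2e-4          # T (0.2 Gauss)
e = 1.602e-19       # C
u = 1.66054e-27     # kg, atomic mass unit

ions = {"Li+": (7, 1), "Na+": (23, 1), "K+": (39, 1),
        "Ca++": (40, 2), "Mg++": (24, 2)}

for name, (A, q) in ions.items():
    f = q * e * B / (2 * math.pi * A * u)
    print(f"{name:5s}: f_c = {f:5.1f} Hz")

# E.g. Ca++ gives ~15 Hz, in the EEG frequency range.
```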
|
Valence bond theory from the hierarchy of Planck constants

The idea that valence bonds, or at least some of them, correspond to a non-standard value of heff/h=n (see this) is very attractive. It could make it possible to understand what chemical bonds really are and allow a detailed view about how reductionism fails in the sequence of transitions from atomic physics to molecular physics to chemistry to biochemistry.
|
Misbehaving Ruthenium atoms

The understanding of dark matter in the TGD sense has been evolving rapidly recently. Dark matter at magnetic flux tubes appears to be part of ordinary chemistry and even more a part of organic chemistry. Non-equilibrium thermodynamics has also popped up as a natural application, as tensor networks formed from flux tubes carrying dark matter perform quantum phase transitions. The ideas about how to generate systems with life-like properties are getting rather precise. Dark matter and flux tubes are suddenly everywhere! Also this piece of text relates to this revolution. In FB I received a link to a highly interesting article. The title of the article was "Breakthrough could launch organic electronics beyond cell phone screens" and is tailored to catch the attention of the techno-oriented reader. My attention was however caught for different reasons. The proposed technology would rely on the observation that Ruthenium atoms do not behave as they are expected to behave. Ru atoms appear as dimers of two Ru atoms in the system considered. Free Ru atoms with one valence electron are however needed: they would become ions by giving up their valence electrons, and these electrons would serve as current carriers making the organic material in question a semiconductor. Irradiation by UV light was found to split Ruthenium dimers into single Ru atoms. If the total energy of the Ru dimer is smaller than that for two Ru atoms, thermodynamics predicts that the Ru atoms recombine to dimers after the irradiation ceases. This did however not happen! Can one understand the mystery in the TGD framework?
|
Positron anomaly nine years later

The old PAMELA experiment and perhaps newer ones by Fermi-LAT and AMS-02 have discovered lots of positrons in the cosmic rays, with a flux generally higher than expected. The energies of the positrons show a steady rise in the range [10,100] GeV and presumably the rise will continue. Such positrons may originate from dark matter and could amount to an "almost direct detection" of the particles that make up dark matter. There are also other interpretations.

1. Dark matter explanations for the positron excess

Consider first new physics explanations postulating dark matter.
2. Standard physics explanation for the positron excess

One of the standard physics explanations is that the positrons emerge from pulsars. The beams from pulsars contain electrons accelerated to very high energies in the gigantic magnetic field of the pulsar. This beam collides with the matter surrounding the pulsar and both gamma rays and positrons are generated in these interactions. The standard physics proposal has been put to a test. One can predict the intensity of gamma rays coming from pulsars using standard model physics and deduce from it the density of electrons needed to generate it. Both positrons and gamma rays would be created when electrons from the pulsar are accelerated to very high energies in the enormous magnetic field of the pulsar and collide with the surrounding matter. This is like a particle accelerator. The energies of the produced gamma rays and also positrons extend to the TeV range, which corresponds to the energy range of the LHC. It turns out that the flux of electrons implied by the gamma ray intensity is too low to explain the flux of positrons detected by PAMELA and some other experiments: see the popular article and the research article "Extended gamma-ray sources around pulsars constrain the origin of the positron flux at Earth" in Science.

3. TGD based model for positron excess

Also TGD suggests an explanation for the positron excess (I learned about the PAMELA experiment on my birthday and it was an excellent birthday present!). TGD allows a hierarchy of scaled up copies of hadron physics labelled by ordinary Mersenne primes M_n = 2^n − 1 or by Gaussian Mersennes M_G,n = (1+i)^n − 1. Ordinary hadron physics would correspond to M_107.
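A back-of-the-envelope illustration of the scaled-up hadron physics hypothesis, assuming the p-adic length scale hypothesis in the form used in TGD (mass scales behave as 2^(−k/2)); the choice of the copy M_89 and the use of the ordinary pion mass are assumptions made here for illustration only:

```python
# Scaled-up hadron physics: p-adic mass scale behaves as 2^(-k/2),
# so masses of the M_89 copy are 2^((107-89)/2) = 2^9 = 512 times
# the masses of ordinary M_107 hadron physics.
m_pion_107 = 0.135  # GeV, ordinary neutral pion mass
scaling = 2 ** ((107 - 89) / 2)
m_pion_89 = scaling * m_pion_107
print(f"scaling factor = {scaling:.0f}")
print(f"M_89 pion mass ~ {m_pion_89:.0f} GeV")
# ~69 GeV: decays of such a pion-like state could in principle feed
# positrons into the observed [10,100] GeV range.
```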
4. Other evidence for dark pion-like states

There is also other evidence for pion-like states that are dark in the TGD sense.
|
Mysteriously disappearing valence electrons of rare Earth metals and hierarchy of Planck constants

The evidence for the hierarchy of Planck constants heff/h=n labelling dark matter as phases with a non-standard value of Planck constant is accumulating. The latest piece of evidence comes from a well-known mystery (not to me until now!) related to rare Earth metals. Some valence electrons of these atoms mystically "disappear" when the atom is heated. This transition is known as the Lifshitz transition. The popular article Where did those electrons go? Decades-old mystery solved claims that the mystery of disappearing valence electrons is finally resolved. The popular article is inspired by the article Lifshitz transition from valence fluctuations in YbAl3 by Chatterjee et al published in Nature Communications.

Dark matter and hierarchy of Planck constants

The mysterious disappearance of valence electrons brings in mind dark atoms with Planck constant heff=n×h. Dark matter corresponds in the TGD Universe to a hierarchy with levels labelled by the value of heff. One prediction is that the binding energy of a dark atom is proportional to 1/heff² and thus behaves like 1/n² and decreases with n. n=1 is the first guess for ordinary atoms but just a guess. The claim of Randell Mills is that hydrogen has exotic ground states with larger binding energy. A closer examination suggests n=n0=6 for the ordinary states of atoms. The exotic states would have n<6 and therefore a higher binding energy scale (see this and this). This leads to a model of biocatalysis in which reacting molecules contain dark hydrogen atoms with a non-standard value of n larger than usual so that their binding energy is lower. When the dark atom or electron becomes ordinary, binding energy is liberated and can kick the molecules over the potential wall otherwise preventing the reaction from occurring. After that the energy is returned and the atom becomes dark again. Dark atoms would be catalytic switches. Metabolic energy feed would take care of creating the dark states. In fact, heff/h=n serves as a kind of intelligence quotient for a system in the TGD inspired theory of consciousness.

Are the mysteriously disappearing valence electrons in rare earth metals dark?

Could the heating of the rare earth atoms transform some valence electrons to dark electrons with heff/h=n larger than for the ordinary atom? The natural guess is that thermal energy kicks the valence electron to a dark orbital with a smaller binding energy. The prediction is that there should be critical temperatures behaving like T_cr = T_0(1 − n_0²/n²) (see the numerical sketch at the end of this section). Also transitions between different dark states are possible. These transitions might also be induced by irradiating the atom with photons with the transition energy between different dark states having the same quantum numbers.

ORMEs as one manner to end up with dark matter in the TGD sense

I ended up with the discovery of the dark matter hierarchy, and eventually with adelic physics where heff/h=n has a number theoretic interpretation, along several roads starting from anomalous findings. One of these roads began from the claim about the existence of a strange form of matter by David Hudson. Hudson associated with these strange materials several names: White Gold, monoatomic elements, and ORMEs (orbitally re-arranged metallic elements). Any colleague without suicidal tendencies would of course refuse to touch anything like White Gold even with a 10 meter long pole but I had nothing to lose anymore. My question was how to explain these elements if they are actually real.
If all valence electrons of this kind of element are dark, these elements have effectively full electron shells as far as ordinary electrons are considered and behave chemically like noble gases in short length scales, and do not form molecules. Therefore "monoatomic element" is a justified term. Of course, only the electrons of the outermost shell could be dark, and in this case the element would behave chemically - and also look - like an atom with smaller atomic number Z. So called Rydberg atoms, for which valence electrons are believed to reside on very large orbitals, could actually be dark atoms in the proposed sense. Obviously also ORME is an appropriate term since some valence electrons have re-arranged orbitally. White Gold would be Gold but with a dark valence electron. The electron configuration of Gold is [Xe] 4f^14 5d^10 6s^1. There is a single unpaired electron with principal quantum number m=6, and this would be dark for White Gold, which would thus be chemically like Platinum (Pt), which indeed has white color.

Biologically important ions as analogs of ORMEs

In TGD inspired biology the biologically important ions H+, Li+, Na+, K+, Ca++, Mg++ are assumed to be dark in the proposed sense. But I have not specified darkness in a precise sense. Could these ions have dark valence electrons with scaled up Compton length, forming macroscopic quantum phases? For instance, Cooper pairs could become possible and make possible high Tc superconductivity with the members of a Cooper pair at parallel flux tubes. The earlier proposal that dark hydrogen atoms make possible biocatalysis becomes more detailed: at higher evolutionary levels also the heavier dark atoms behaving like noble gases would become important in bio-catalysis. Interestingly, Rydberg atoms have been proposed to be important for biology and they could actually be dark atoms.

To sum up, if the TGD view is correct, an entire spectroscopy of dark atoms and partially dark molecules is waiting to be discovered, and irradiation by light with energies corresponding to the excitation energies of dark states could be the manner to generate dark atomic matter. Huge progress in quantum biology could also take place. But are colleagues mature enough to check whether the TGD view is correct? See the chapter Quantum criticality and dark matter. |
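A minimal numerical sketch of the critical temperature spectrum T_cr = T_0(1 − n_0²/n²) proposed above, assuming n_0 = 6; T_0 is left as a free reference temperature since no value is fixed for it in the text:

```python
# Critical temperatures for thermally induced n0 -> n transitions of
# valence electrons, T_cr = T_0 * (1 - n0^2/n^2), with n0 = 6 assumed.
n0 = 6

for n in [7, 8, 9, 12, 18, 36]:
    ratio = 1 - (n0 / n) ** 2
    print(f"n = {n:2d}: T_cr = {ratio:5.3f} * T_0")

# The spectrum accumulates towards T_0: most of the "disappearance"
# temperature is already reached for the lowest dark levels.
```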
Dark nuclear synthesis and stellar evolution

The temperature of the solar core is rather near to the scale of the dark nuclear binding energy. This coincidence inspires interesting questions about dark nucleosynthesis in stellar evolution.

1. Some questions inspired by a numerical coincidence

The temperature at the solar core is about T = 1.5×10^7 K, corresponding to the thermal energy E = 3kT/2 ≈ 2 keV, which is also obtained by the scaling factor 2^−11 from the energy ∼ 5 MeV, the binding energy scale for ordinary nuclei. That the temperature in the stellar core is of the same order of magnitude as the dark nuclear binding energy is a highly intriguing finding, and it encourages to ask whether dark nuclear fusion could be the key step in the production of ordinary nuclei and what the relation of dark nucleosynthesis to ordinary nucleosynthesis is.
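The coincidence in numbers, spelled out (standard constants; the factor 2^−11 and the 5 MeV scale are taken from the text above):

```python
# Solar core thermal energy vs. dark nuclear binding energy scale.
k_B = 8.617e-5          # eV/K, Boltzmann constant
T_core = 1.5e7          # K, solar core temperature
E_thermal = 1.5 * k_B * T_core        # 3kT/2
E_dark = 5e6 * 2 ** -11               # 5 MeV scaled down by 2^-11

print(f"thermal energy at solar core: {E_thermal/1e3:.2f} keV")
print(f"dark nuclear binding scale:   {E_dark/1e3:.2f} keV")
# Both come out at ~2 keV: the coincidence motivating the questions above.
```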
The presence of dark nucleosynthesis could modify the views about star formation, in particular about energy production in protostars and pre-main-sequence stars (PMS) following protostars in stellar evolution. In protostars and PMSs the temperature is not yet high enough for the burning of hydrogen to 4He, and according to the standard model the energy radiated by the star consists of the gravitational energy liberated during the gravitational contraction. Could dark nucleosynthesis provide a new mechanism of energy production and could this energy be transferred from the protostar/PMS as dark energy along dark magnetic flux tubes? Can one imagine any empirical evidence for the presence of dark nucleosynthesis in protostars and PMSs?
|
Summary of the model of dark nucleosynthesis

The books of Steven Krivit (see Hacking the atom, Fusion fiasco, and Lost history) have been of enormous help in polishing the details of the model of dark nucleosynthesis explaining the mysterious aspects of what has been called cold fusion or LENR (low energy nuclear reactions). Here is a summary of the model of dark nucleosynthesis. Recall the basic ideas behind it.
One can raise interesting questions about the relation of dark nucleosynthesis to ordinary nucleosynthesis.
See the chapter Cold fusion again or the article with the same title. |
The lost history from TGD perspective

The third volume in "Explorations in Nuclear Research" is about lost history (see this): roughly the period 1910-1930, during which there was not yet any sharp distinction between chemistry and nuclear physics. After 1930 the experimentation became active, using radioactive sources and particle accelerators making nuclear reactions possible. The lost history suggests that the methods used determine to an unexpected degree which findings are accepted as real. After 1940 hot fusion as a possible manner to liberate nuclear energy became a topic of study but we are still waiting for the commercial applications. One can say that the findings about nuclear transmutations during the period 1912-1927 became lost history, although most of these findings were published in highly respected journals and received also media attention. The interested reader can find in the book detailed stories about the persons involved. This allows also to peek into the kitchen side of science and to realize that the written history can contain surprising misidentifications of the milestones in the history of science. The author discusses in detail an example of this: Rutherford is generally regarded as the discoverer of the first nuclear transmutation but even Rutherford himself did not make this claim. It is interesting to look at what the vision about the anomalous nuclear effects based on dark nucleosynthesis can say about the lost history and whether these findings can provide new information to tighten up the TGD based model, which is only qualitative. Therefore I go through the list given in the beginning of the book from the perspective of dark nucleosynthesis. Before continuing it is good to first recall the basic ideas behind dark nucleosynthesis.
During the period 1912-1914 several independent scientists discovered the production of the noble gases 4He, neon (Ne), and argon (Ar) using high voltage electrical discharges in vacuum or through hydrogen gas at low pressures in cathode-ray tubes. Also an unidentified element with mass number 3 was discovered. It was later identified as tritium. Two of the researchers were Nobel laureates. In 1922 two researchers in the University of Chicago reported production of 4He. Sir Joseph John Thomson explained the production of 4He using an occlusion hypothesis. I understand occlusion as a contamination of the tungsten wire by 4He. The question is why not also hydrogen. Why would noble gases have been produced? It is known that noble gases tend to stay near surfaces. In one experiment it was found that the 4He production stopped after a few days; maybe some kind of saturation was achieved. This suggests that isotopes with relatively high mass numbers were produced from dark proton sequences (possibly containing also neutrons resulting from dark weak decays). The resulting noble gases were caught near the electrodes and therefore only their production was observed.

Production of 4He in the experiments of Wendle and Irion

In 1922 Wendle and Irion published results from the study of exploding current wires. Their arrangement involved a high voltage of about 3×10^4 V and a dielectric breakdown through an air gap between the electrodes producing a sudden current peak in a current wire made of tungsten (W, with (Z,A)=(74,184) for the most abundant isotope) at a temperature of about T=2×10^4 C, which corresponds to a thermal energy 3kT/2 of about 3 eV. Production of 4He was detected.

Remark: The temperature at the solar core is about 1.5×10^7 K, corresponding to an energy of about 2 keV, 3 orders of magnitude higher than the temperature used. This temperature is obtained by the scaling factor 2^−11 from 5 MeV, which is the binding energy scale for ordinary nuclei. That this temperature corresponds to the binding energy scale of dark nuclei might not be an accident.

The interpretation of the experimentalists was that the observed 4He came from the decay of tungsten, made unstable by the high temperature. This explanation is of course not consistent with what we know about nuclear physics. No error in the experimental procedure was found. Three attempts to replicate the experiment of Wendle and Irion were made, with a negative result. The book discusses these attempts in detail and demonstrates that they were not faithful to the original experimental arrangement. Rutherford explained the production of 4He in terms of the 4He occlusion hypothesis of Thomson. In the explosion the 4He contaminant would have been liberated. But why just a helium contamination, why not hydrogen? By the above argument one could argue that 4He as a noble gas could indeed form stable contaminants. 80 years later Urutskoev repeated the experiment with exploding wires and observed besides 4He also other isotopes. The experiments of Urutskoev demonstrated that there are 4 peaks for the production rate of elements as a function of atomic number Z. Furthermore, the amount of mass assignable to the transmuted elements is nearly the mass lost from the cathode. Hence also cathode nuclei should end up at the flux tubes. How could dark nucleosynthesis explain the findings? The simplest model relies on a modification of the occlusion hypothesis: a hydrogen contaminant was present, and the formation of dark nuclei from the protons of hydrogen at flux tubes took place in the exploding wire.
The nuclei of noble gases tended to remain in the system and 4He was observed.

Production of Au and Pt in arc discharges in Mercury vapor

In 1924 the German chemist Miethe, better known as the discoverer of 3-color photography, found trace amounts of Gold (Au) and possibly Platinum (Pt) in a Mercury (Hg) vapor photography lamp. Scientists in Amsterdam repeated the experiment but using lead (Pb) instead of Hg and observed production of Hg and Thallium (Tl). The same year a prominent Japanese scientist, Nagaoka, reported production of Au and something having the appearance of Pt. Nagaoka used an electric arc discharge between tungsten (W) electrodes bathed in a dielectric liquid "laced" with liquid Hg. The nuclear charge and mass number (Z,A) for the most abundant isotopes of the elements involved are given in the table below.

element:  W         Pt        Au        Hg        Tl        Pb
(Z,A):    (74,184)  (78,195)  (79,197)  (80,202)  (81,205)  (82,208)
Could dark nucleosynthesis explain the observations? Two mechanisms for producing heavier nuclei can be imagined, both relying on the formation of dark nuclei from the nuclei of the electrode metal and dark protons, and on their subsequent transformation to ordinary nuclei.
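A bookkeeping sketch of the first mechanism, using the isotope table above: if dark proton sequences attach to tungsten nuclei and some dark protons undergo dark weak decay to neutrons before the transformation to an ordinary nucleus, the observed products should differ from W by a small ΔZ and a somewhat larger ΔA. The script only checks this arithmetic; it is not a reaction model:

```python
# Difference in charge and mass number between tungsten and the
# transmutation products, assuming dark proton sequences (with some
# dark protons beta-decaying to neutrons) attach to W nuclei.
W = (74, 184)
products = {"Pt": (78, 195), "Au": (79, 197), "Hg": (80, 202)}

for name, (Z, A) in products.items():
    dZ, dA = Z - W[0], A - W[1]
    print(f"W -> {name}: needs {dZ} extra protons, "
          f"{dA - dZ} extra neutrons (dA = {dA})")

# Small dZ with dA > dZ is consistent with proton sequences in which
# part of the dark protons have undergone dark weak decay to neutrons.
```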
In 1926 the German chemists Paneth and Peters pumped hydrogen gas into a chamber with finely divided palladium powder and reported the transmutation of hydrogen to helium. This experiment resembles the "cold fusion" experiment of Pons and Fleischmann in 1989. The explanation would be the formation of dark 4He nuclei consisting of dark protons and their transformation to ordinary 4He nuclei. See the chapter Cold fusion again or the article with the same title. See also the article Cold fusion, low energy nuclear reactions, or dark nuclear synthesis? |
What is the IQ of a neutron star?

"Humans and Supernova-Born Neutron Stars Have Similar Structures, Discover Scientists" is the title of a popular article about the finding that neutron stars and eukaryotic (not only human) cells contain geometrically similar structures. In cells the cytoplasm between the cell nucleus and the cell membrane contains a complex highly folded membrane structure known as the endoplasmic reticulum (ER). ER in turn contains stacks of evenly spaced sheets connected by helical ramps. They resemble multistory parking garages (see the illustration of the popular article). These structures are referred to as parking places for ribosomes, which are the machinery for the translation of mRNA to amino acid sequences. The size scale of these structures must be in the range 1-100 microns. Computer simulations for neutron stars predict geometrically similar structures, whose size is however a million times larger and therefore must be in the range of 1-100 meters. The soft condensed-matter physicist Greg Huber from U.C. Santa Barbara and nuclear physicist Charles Horowitz from Indiana University have worked together to explore the shapes (see this and this). The physical principles leading to these structures look quite different. On the nuclear physics side one has strong and electromagnetic interactions at the microscopic level, and in the model used they give rise to these geometric structures in macroscopic scales. In living matter the model assumes basically entropic forces, and the basic variational principle is the minimization of the free energy of the system - the second law of thermodynamics for a system coupled to a thermal bath at constant temperature. The proposal is that some deeper principle might be behind these intriguing structural similarities. In the TGD framework one is forced to challenge the basic principles behind these models as really fundamental principles and to consider deeper reasons for the geometric similarity. One ends up challenging even the belief that neutron stars are just dead matter.
|
More about dark nucleosynthesis

In the sequel a more detailed view about dark nucleosynthesis is developed using the information provided by the first book of Krivit. This information allows to make also the nuclear string model much more detailed and to connect CF/LENR with the so called X boson anomaly and other nuclear anomalies.

1. Not only sequences of dark protons but also of dark nucleons are involved

Are only dark proton sequences at magnetic flux tubes involved, or can these sequences consist of nuclei so that one would have a nucleus consisting of nuclei? From the first book I learned that the experiments of Urutskoev demonstrate that there are 4 peaks for the production rate of elements as a function of atomic number Z. Furthermore, the amount of mass assignable to the transmuted elements is nearly the mass lost from the cathode. Hence also cathode nuclei should end up at the flux tubes.
2. How are dark nuclei transformed to ordinary nuclei?

What happens in the transformation of dark nuclei to ordinary ones? Nuclear binding energy is liberated, but how does this occur? If gamma rays are generated, one should also now invent a mechanism transforming gamma rays to thermal radiation. The findings of Holmlid provide valuable information here and lead to a detailed qualitative view about the process, and also allow to sharpen the model for ordinary nuclei.
See the chapter Cold fusion again or the article Cold fusion, low energy nuclear reactions, or dark nuclear synthesis?
|
Comparison of the Widom-Larsen model with TGD inspired models of CF/LENR or whatever it is

I cannot avoid the temptation to compare WL to my own dilettante models, for which also WL has served as an inspiration. I have two models explaining these phenomena in my own TGD Universe. Both models rely on the hierarchy of Planck constants heff=n×h (see this and this) explaining dark matter as ordinary matter in heff=n×h phases emerging at quantum criticality. heff implies scaled up Compton lengths and other quantal lengths, making possible quantum coherence in longer scales than usually. The hierarchy of Planck constants heff=n×h has now a rather strong theoretical basis and reduces to number theory (see this). Quantum criticality would be essential for the phenomenon and could explain the critical doping fraction of the cathode by D nuclei. Quantum criticality could help to explain the difficulties in replicating the effect.

1. Simple modification of WL does not work

The first model is a modification of WL and relies on a dark variant of weak interactions. In this case LENR would be an appropriate term.
2. Dark nucleosynthesis

Also the second TGD inspired model involves the heff hierarchy. Now LENR is not an appropriate term: the most interesting things would occur at the level of dark nuclear physics, which is now a key part of TGD inspired quantum biology.
One can of course wonder whether even "transmutation" is an appropriate term now. Dark nucleosynthesis - which could in fact be the mechanism of ordinary nucleosynthesis outside stellar interiors, explaining how elements heavier than iron are produced - might be a more appropriate term. See the chapter Cold fusion again or the article Cold fusion, low energy nuclear reactions, or dark nuclear synthesis? |
Three books about cold fusion/LENR

Steven Krivit has written three books - or one book in three parts, as you wish - about cold fusion (shortly CF in the sequel), or low energy nuclear reactions (LENR), which is the prevailing term nowadays and the one preferred by Krivit. The term "cold fusion" can be defended only for historical reasons: the process cannot be cold fusion. LENR relies on the Widom-Larsen model (WL), trying to explain the observations using only the existing nuclear and weak interaction physics. Whether LENR is here to stay is still an open question. TGD suggests that even this interpretation is not appropriate: the nuclear physics involved would be dark and associated with heff=n×h phases of ordinary matter having an identification as dark matter. Even the term "nuclear transmutation" would be challenged in the TGD framework, and "dark nuclear synthesis" looks a more appropriate term. The books were a very pleasant surprise for many reasons, and I have been able to develop my own earlier overall view by adding important details and missing pieces and by coming to understand the relationship to the Widom-Larsen model (WL).

1. What are the books about?

There are three books.
For instance, while reading the book, I realized that my own references to the literature have been somewhat random and not always appropriate. I do not have any systematic overall view about what has been done in the field: here the book makes a wonderful service. It was a real surprise to find that the first evidence for transmutation/isotope shifts emerged already about a century ago, and also how soon isotope shifts were re-discovered after the Pons-Fleischmann discovery. The insistence on the D+D → 4He fusion model remains for an outsider as mysterious as the refusal of mainstream nuclear physicists to consider the possibility of new nuclear physics. One new valuable bit of information was the evidence that it is the cathode material that transforms to the isotope shifted nuclei: this helped to develop my own model in more detail.

Remark: A comment concerning the terminology. I agree with the author that cold fusion is not a precise or even correct term. I have myself taken CF as nothing more than a letter sequence and defended this practice to myself as a historical convention. My conviction is that the phenomenon in question is not nuclear fusion, but I am not at all convinced that it is LENR either. Dark nucleosynthesis is my own proposal.

What did I learn from the books? Needless to say, the books are extremely interesting, for both layman and scientist - say physicist or chemist. The books provide a very thorough view about the history of the subject. There is also an extensive list of references to the literature. Since I am not an experimentalist and feel myself a dilettante in this field as a theoretician, I am unable to check the correctness and reliability of the data represented. In any case, the overall view is consistent with what I have learned about the situation during the years. My opinion about WL is however different. I have been working with ideas related to CF/LENR (or nuclear transmutations), but the books provided also completely new information and I became aware of some new critical points. I have had a rather imbalanced view about transmutations/isotopic shifts and it was a surprise to see that they were discovered already in 1989, when Fleischmann and Pons published their work. Even more, the premature discovery of transmutations a century ago (1910-1930), interpreted by Darwin as a collective effect, was new to me. Articles about transmutations were published in prestigious journals like Nature and Naturwissenschaften. The written history is however the history of winners, and all traces of this episode disappeared from the history books of physics after the standard model of nuclear physics, which assumes that nuclear physics and condensed matter physics are totally isolated disciplines, was established. The developments after the establishment of the standard model relying on the GUT paradigm look to me surprisingly similar. Sternglass - still a graduate student - wrote around 1947 to Einstein about his preliminary ideas concerning the possibility of transforming protons to neutrons in strong electric fields. It came as a surprise to Sternglass that Einstein supported his ideas. I must say that this increased my respect for Einstein even further. Einstein's physical intuition was marvellous. In 1951 Sternglass found that in strong voltages in the keV range protons could be transformed to neutrons with an unexpectedly high rate. This is strange since the process is kinematically impossible for free protons: it can however be seen as support for the WL model.
Also scientists are humans with their human weaknesses and strengths, and the history of CF/LENR is full of examples of both the light and the dark sides of human nature. Researchers are fighting for funding, and the successful production of energy was also the dream of many people involved. There were also people who saw CF/LENR as a quick manner to become a millionaire. Getting a glimpse of this dark side was rewarding. The author knows most of the influential people who have worked in the field, and this gives special authenticity to the books. It was a great service for the reader that the basic view about what happened was stated clearly in the introduction. I noticed also that with some background one can pick up any section and start to read: this is a service for a reader like me. I would have perhaps divided the material into separate parts, but probably the author's less bureaucratic choice leaving room for surprise is better after all.

Who should read these books? The books would be a treasure for any physicist ready to challenge the prevailing prejudices and learn about what science is as seen from the kitchen side. Probably this period will be seen in the future as very much analogous to the period leading to the birth of atomic physics and quantum theory. Also a layman could enjoy reading the books; especially the stories about the people involved - both scientists and those funding the research and the academic power holders - are fascinating. The history of cold fusion is a drama which one can see as a fight between Good and Evil, eventually realizing that also Good can divide into Good and Evil. This story teaches a lot about the role of egos in all branches of science and in all human activities. Highly rationally behaving science professionals can suddenly start to behave completely irrationally when their egos feel being under threat. My hope is that the books could wake up the mainstream colleagues to finally realize that CF/LENR - or whatever you wish to call it - is not pseudoscience. Most workers in the field are highly competent, intellectually honest, and have had such a deep passion for understanding Nature that they have been ready to suffer all the humiliations that the academic hegemony can offer to dissidents. The results about nuclear transmutations are genuine and pose a strong challenge for the existing physics, and in my opinion force us to give up the naive reductionistic paradigm. People building unified theories of physics should be keenly aware of these phenomena challenging the reductionistic paradigm even at the level of nuclear and condensed matter physics.

2. The problems of WL

For me the first book, representing the state of CF/LENR as it was around 2004, was the most interesting. In his first book Krivit sees the 1990-2004 period as a gradual transition from the cold fusion paradigm to the realization that nuclear transmutations occur and that the fusion model does not explain this process. The basic assumption of the simplest fusion model was that the fusion D+D → 4He explains the production of heat. This excluded the possibility that the phenomenon could take place also in light water, with deuterium replaced by hydrogen. It however turned out that also ordinary water allows the process. The basic difficulty is of course the Coulomb wall, but the model has also difficulties with the reaction signatures, and the production rate of 4He is too low to explain the heat production. Furthermore, gamma rays accompanying 4He production were not observed.
The occurrence of transmutations is a further problem. Production of Li was observed already in 1989, and later the Russian trio Kucherov, Savvatinova, and Karabut detected tritium, 4He, and heavy elements. They also observed modifications at the surface of the cathode down to a depth of 0.1-1 micrometers. Krivit sees LENR as a more realistic approach to the phenomena involved. In LENR the Widom-Larsen model (WL) is the starting point. This would involve no new nuclear physics. I also see WL as a natural starting point, but I am skeptical about understanding CF/LENR in terms of existing physics. Some new physics seems to be required, and I have been doing intense propaganda for a particular kind of new physics (see this). WL assumes that the weak process proton (p) → neutron (n), occurring via e + p → n + ν (e denotes electron and ν neutrino), is the key step in cold fusion. After this step the neutron finds its way to the nucleus easily and the process continues in the conventional sense as an analog of the r-process assumed to give rise to elements heavier than iron in supernova explosions, and leads to the observed nuclear transmutations. Essentially one proton is added in each step, decomposing to four sub-steps involving the beta decay n → p and its reversal. There are however problems.
See the chapter Cold fusion again "Hyper-finite Factors and Dark Matter Hierarchy" or the article Cold fusion, low energy nuclear reactions, or dark nuclear synthesis? |
How to demonstrate quantum superposition of classical gravitational fields?

There was a rather interesting article in Nature (see this) by Marletto and Vedral about the possibility of demonstrating the quantum nature of gravitational fields by using weak measurement of a classical gravitational field affecting it only very weakly. There is also an article in arXiv by the same authors (see this). The approach relies on quantum information theory. The gravitational field would serve as a measurement interaction, and the weak measurements would be applied to a gravitational witness serving as probe - the technical term is ancilla. The authors claim that weak measurements giving rise to an analog of the Zeno effect could be used to test whether the quantum superposition of classical gravitational fields (QSGR) does take place. One can however argue that the extreme weakness of gravitation implies that other interactions and thermal perturbations mask it completely in the standard physics framework. Also the decoherence of gravitational quantum states could be argued to make the test impossible. One must however take these objections with a big grain of salt. After all, we do not have a theory of quantum gravity, and all assumptions made about quantum gravity might not be correct. For instance, the vision about reduction to Planck length scale might be wrong. There is also the mystery of dark matter, which might force a considerable modification of the views about dark matter. Furthermore, General Relativity itself has conceptual problems: in particular, the classical conservation laws playing a crucial role in quantum field theories are lost. Superstrings were a promising candidate for a quantum theory of gravitation but failed as a physical theory. In TGD - which was born as an attempt to solve the energy problem of General Relativity and soon extended to a theory unifying gravitation and standard model interactions and also generalizing string models - the situation might however change. In zero energy ontology (ZEO) the sequence of weak measurements is more or less equivalent to the existence of self, identified as a generalized Zeno effect! The value of heff/h=n characterizes the flux tubes mediating various interactions and can be very large for gravitational flux tubes (heff is proportional to GMm/v_0, where v_0 < c has dimensions of velocity, and M and m are the masses at the ends of the flux tube) with Mm > v_0 m_Pl² (m_Pl denotes Planck mass) at their ends. This means a long coherence time characterized in terms of the scale of the causal diamond (CD). The lifetime T of self is proportional to heff, so that for a gravitational self T is very long as compared to that for an electromagnetic self. Selves could correspond to sub-selves of self identifiable as sensory mental images, so that sensory perception would correspond to weak measurements; for gravitation the times would be long: we indeed feel the gravitational force all the time. Consciousness and life would provide a basic proof for QSGR (note that a large neuron has mass of order Planck mass!). See the article How to demonstrate quantum superposition of classical gravitational fields? or the chapter Quantum criticality and dark matter. |
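A rough numerical illustration of the gravitational Planck constant mentioned above, ℏ_gr = GMm/v_0. The choice of masses (Earth and proton) and the value v_0 = 2^−11 c are illustrative assumptions only, not values fixed in the text:

```python
# Gravitational Planck constant h_gr = G*M*m/v0 compared to hbar.
G = 6.674e-11       # m^3 kg^-1 s^-2
hbar = 1.055e-34    # J s
c = 3.0e8           # m/s

M = 5.97e24         # kg, Earth mass (illustrative choice)
m = 1.67e-27        # kg, proton mass (illustrative choice)
v0 = c / 2 ** 11    # assumed velocity parameter, ~1.5e5 m/s

n = G * M * m / (v0 * hbar)
print(f"heff/h = h_gr/h ~ {n:.1e}")
# ~4e16: gravitational flux tubes would carry enormous values of n,
# hence the very long coherence times claimed above.
```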
Anomalous neutron production from an arc current in gaseous hydrogen

I learned about a nuclear physics anomaly new to me (actually the anomaly is 64 years old) from an article of Norman and Dunning-Davies in Research Gate (see this). Neutrons are produced from an arc current in hydrogen gas with a rate dramatically exceeding the rate predicted by the standard model of electroweak interactions, in which the production should occur through e⁻ + p → n + ν by weak boson exchange. The low electron energies make the process also kinematically impossible. An additional strange finding, due to Borghi and Santilli, is that the neutron production can in some cases be delayed by several hours. Furthermore, according to Santilli neutron production occurs only for hydrogen but not for heavier nuclei. In the following I sum up the history of the anomaly, following closely the representation of Norman and Dunning-Davies (see this): this article gives references and details and is strongly recommended. This includes the pioneering work of Sternglass in 1951, the experiments of Don Carlo Borghi in the late 1960s, and the rather recent experiments of Ruggiero Santilli (see this).

The pioneering experiment of Sternglass

The initial anomalously large production of neutrons using a current arc in hydrogen gas was performed by Earnest Sternglass in 1951 while completing his Ph.D. thesis at Cornell. He wrote to Einstein about his inexplicable results, which seemed to occur in conditions lacking sufficient energy to synthesize the neutrons that his experiments had indeed somehow apparently created. Although Einstein firmly advised that the results must be published even though they apparently contradicted standard theory, Sternglass refused due to the stultifying preponderance of contrary opinion, and so his results were preemptively excluded under orthodox pressure within the discipline, leaving them unpublished. Edward Trounson, a physicist working at the Naval Ordnance Laboratory, repeated the experiment and again gained successful results, but they, too, were not published. One cannot avoid the question what physics would look like today if Sternglass had published or managed to publish his results. One must however remember that the first indications for cold fusion also emerged surprisingly early but did not receive any attention, and that cold fusion researchers were for decades labelled as next to criminals. Maybe the extreme conservatism following the revolution in theoretical physics during the first decades of the previous century would have prevented his work from receiving the attention that it would have deserved.

The experiments of Don Carlo Borghi

The Italian priest-physicist Don Carlo Borghi, in collaboration with experimentalists from the University of Recife, Brazil, claimed in the late 1960s to have achieved the laboratory synthesis of neutrons from protons and electrons. C. Borghi, C. Giori, and A. Dall'Olio published in 1993 an article entitled "Experimental evidence of emission of neutrons from cold hydrogen plasma" in Yad. Fiz. 56 and Phys. At. Nucl. 56 (7). Don Borghi's experiment was conducted via a cylindrical metallic chamber (called "klystron") filled up with a partially ionized hydrogen gas at a fraction of 1 bar pressure, traversed by an electric arc with about 500 V and 10 mA as well as by microwaves with 10^10 Hz frequency. Note that the energies of the electrons would be below 0.5 keV and non-relativistic.
In the cylindrical exterior of the chamber the experimentalists placed various materials suitable to become radioactive when subjected to a neutron flux (such as gold, silver and others). Following exposures of the order of weeks, the experimentalists reported nuclear transmutations due to a claimed neutron flux of the order of 10^4 cps, apparently confirmed by beta emissions not present in the original material. Don Borghi's claim remained un-noticed for decades due to its incompatibility with the prevailing view about weak interactions. The process e⁻ + p → n + ν is also forbidden by conservation of energy unless the cm kinetic energy of the proton and the electron is larger than ΔE = m_n − m_p − m_e = 0.78 MeV. This requires highly relativistic electrons. Also the cross section for the reaction proceeding by exchange of a W boson is extremely small at low energies (about 10^−20 barn; barn = 10^−28 m² represents the natural scale for cross sections in nuclear physics). Some new physics must be involved if the effect is real. The situation is strongly reminiscent of cold fusion (or low energy nuclear reactions, LENR), which many mainstream nuclear physicists still regard as pseudoscience.

Santilli's experiments

Ruggero Santilli (see this) replicated the experiments of Don Borghi. Both in the experiments of Don Carlo Borghi and those of Santilli, delayed neutron synthesis was sometimes observed. Santilli analyzes several alternative proposals explaining the anomaly, and suggests that a new spin zero bound state of electron and proton, with rest mass below the sum of the proton and electron masses, absorbed by nuclei which then decay radioactively, could explain the anomaly. The energy needed to overcome the kinematic barrier could come from the energy liberated by the electric arc. The problem of the model is that it has no connection with the standard model. According to Santilli:

"A first series of measurements was initiated with Klystron I on July 28, 2006, at 2 p.m. Following flushing of air, the klystron was filled up with commercial grade hydrogen at 25 psi pressure. We first used detector PM1703GN to verify that the background radiations were solely consisting of photon counts of 5-7 μR/h without any neutron count; we delivered a DC electric arc at 27 V and 30 A (namely with power much bigger than that of the arc used in Don Borghi's tests...), at about 0.125" gap for about 3 s; we waited for one hour until the electrodes had cooled down, and then placed detector PM1703GN against the PVC cylinder. This resulted in the detection of photons at the rate of 10 - 15 μR/hr expected from the residual excitation of the tips of the electrodes, but no neutron count at all. However, about three hours following the test, detector PM1703GN entered into sonic and vibration alarms, specifically, for neutron detections off the instrument maximum of 99 cps at about 5' distance from the klystron while no anomalous photon emission was measured. The detector was moved outside the laboratory and the neutron counts returned to zero. The detector was then returned to the laboratory and we were surprised to see it entering again into sonic and vibrational alarms at about 5' away from the arc chamber with the neutron count off scale without appreciable detection of photons, at which point the laboratory was evacuated for safety.
After waiting for 30 minutes (double the neutron's lifetime), we were surprised to see detector PM1703GN go off scale again in neutron counts at a distance of 10' from the experimental set up, and the laboratory was closed for the day."

TGD based model

The basic problems to be solved are the following.
The TGD explanation (see this) could be the same for Tesla's findings, for cold fusion (see this), for the Pollack effect (see this), and for the anomalous production of neutrons. Even electrolysis would involve the Pollack effect and new physics in an essential manner. Could this model explain the anomalous neutron production and its strange features?
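To make the kinematic obstruction quantitative: a sketch of the threshold for e⁻ + p → n + ν with the proton at rest, using standard relativistic kinematics and particle masses; the 27 V arc figure is taken from Santilli's quote above:

```python
# Threshold electron kinetic energy for e- + p -> n + nu on a proton
# at rest: s = m_e^2 + m_p^2 + 2*E_e*m_p must reach m_n^2.
m_p, m_n, m_e = 938.272, 939.565, 0.511   # MeV

E_e = (m_n**2 - m_p**2 - m_e**2) / (2 * m_p)   # total electron energy
T_e = E_e - m_e                                 # kinetic energy
print(f"threshold kinetic energy: {T_e:.3f} MeV")

# ~0.78 MeV, while a 27 V arc gives electrons only ~27 eV:
# more than 4 orders of magnitude below threshold, so the standard
# weak process cannot produce the observed neutrons.
```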
|
Non-local production of photon pairs as support for the heff/h=n hypothesis

Again a new anomaly! Photon pairs have been created by a new mechanism. The photons emerge at different points! See this. Could this give support for the TGD based general model of an elementary particle as a string like object (flux tube), with the first end (wormhole contact) carrying the quantum numbers - in the case of a gauge boson, a fermion and an antifermion at the opposite throats of the contact? The second end would carry a neutrino-right-handed-neutrino pair neutralizing the possible weak isospin. This would give only local decays. Also the emissions of photons from a charged particle would be local. Could the bosonic particle be a mixture of two states? For the first state the flux tube would have the fermion and antifermion at the same end of the flux tube: only local decays. For the second state the fermion and antifermion would reside at the ends of the flux tube, at throats associated with different wormhole contacts. This second state would give rise to non-local two-photon emissions. Mesons of hadron physics would correspond to this kind of states, and in old-fashioned hadron physics one speaks about photon-vector meson mixing in the description of photon-hadron interactions. If the Planck constant heff/h=n of the emitting particle is large, the distance between the photon emissions would be long. The non-local decays could both make the exotic decays visible and allow to deduce the value of n! This would however require the transformation of the emitted dark photons to ordinary ones (the same would happen when dark photons transform to biophotons). Can one say anything about the length of the flux tube? A magnetic flux tube contains a fermionic string. The length of this string is of the order of the Compton length and of the order of the p-adic length scale. What about the photon itself - could it have non-local fermion-antifermion decays based on the same mechanism? What the length of the photonic string is, is not clear. Photon is massless - no scales! One identification of the length would be as the wavelength defining also the p-adic length scale.

To sum up: the nonlocal decays and emissions could lend strong support for both the flux tube identification of particles and the hierarchy of Planck constants. It might be possible to even measure the value of n associated with the quantum critical state by detecting decays of this kind. For a summary of earlier postings see Latest progress in TGD. Articles and other material related to TGD. For details see the chapter Quantum criticality and dark matter. |
Hierarchy of Planck constants, space-time surfaces as covering spaces, and adelic physics

From the beginning it was clear that heff/h=n corresponds to the number of sheets of a covering space of some kind. First the covering was assigned with the causal diamonds. Later I assigned it with space-time surfaces, but the details of the covering remained unclear. The final identification emerged only in the beginning of 2017. Number theoretical universality (NTU) leads to the notion of adelic space-time surface (monadic manifold) involving a discretization in an extension of rationals defining a particular level in the hierarchy of adeles defining an evolutionary hierarchy. The first formulation was proposed here and a more elegant formulation here. The key constraint is NTU for the adelic space-time containing sheets in the real sector and various p-adic sectors, which are extensions of p-adic number fields induced by an extension of rationals, which can contain also powers of a root of e inducing a finite-D extension of p-adic numbers (e^p is an ordinary p-adic number in Q_p). One identifies the numbers in the extension of rationals as common to all number fields and demands that the imbedding space has a discretization in an extension of rationals in the sense that the preferred coordinates of the imbedding space implied by isometries belong to the extension of rationals for the points of the number theoretic discretization. This implies that the versions of isometries with group parameters in the extension of rationals act as discrete versions of symmetries. The correspondence between the real and p-adic variants of the imbedding space is extremely discontinuous for a given adelic imbedding space (there is a hierarchy of them, with levels characterized by extensions of rationals). Space-time surfaces typically contain a rather small set of points in the extension (x^n + y^n = z^n has no rational solutions for n > 2!). Hence one expects that a discretization with a finite cutoff length at the space-time level could be enough for the sufficiently low space-time dimension D=4. After that one assigns in the real sector an open set to each point of the discretization, and these open sets define a manifold covering. In the p-adic sector one can assign the 8th Cartesian power of ordinary p-adic numbers to each point of the number theoretic discretization. This gives both the discretization and a smooth local manifold structure. What is important is that the Galois group of the extension acts on these discretizations, and one obtains from a given discretization a covering space with the number of sheets equal to a factor of the order of the Galois group, typically equal to the order of the Galois group itself. heff/h=n was identified from the beginning as the dimension of a poly-sheeted covering assignable to the space-time surface. The number n of sheets would naturally be a factor of the order of the Galois group, implying that heff/h=n is bound to increase during the number theoretic evolution, so that the algebraic complexity increases. Note that WCW decomposes into sectors corresponding to the extensions of rationals, and the dimension of the extension is bound to increase in the long run by localizations to various sectors in self measurements (see this). The dark matter hierarchy represents number theoretical/adelic physics and therefore has now a rather rigorous mathematical justification.
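To make the statement "n is a factor of the order of the Galois group" concrete, a small sketch using three standard textbook examples of Galois groups; the polynomials and group orders are standard algebra facts, while their use as illustrations of heff/h=n is an assumption made here:

```python
# Allowed values of heff/h = n as divisors of the order of the Galois
# group of the extension of rationals. Textbook examples:
#   x^2 - 2  ->  Z_2,        order 2
#   x^4 + 1  ->  Z_2 x Z_2,  order 4 (8th cyclotomic polynomial)
#   x^3 - 2  ->  S_3,        order 6 (splitting field Q(2^(1/3), i*sqrt(3)))
examples = {"x^2 - 2": 2, "x^4 + 1": 4, "x^3 - 2": 6}

def divisors(k):
    return [d for d in range(1, k + 1) if k % d == 0]

for poly, order in examples.items():
    print(f"{poly}: |Gal| = {order}, allowed n = {divisors(order)}")
```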
It is however good to recall that the heff/h=n hypothesis emerged from an experimental anomaly: radiation at ELF frequencies had quantal effects on vertebrate brain, impossible in standard quantum theory since the energies E=hf of the photons are ridiculously small as compared to the thermal energy. Indeed, since n is a positive integer, evolution is analogous to diffusion on a half-line and n unavoidably increases in the long run, just as a particle diffuses farther away from the origin (by looking at what gradually happens near the paper basket one understands what this means). The increase of n implies the increase of the maximal negentropy and thus of negentropy. Negentropy Maximization Principle (NMP) follows from adelic physics alone and there is no need to postulate it separately. Things get better in the long run although we do not live in the best possible world, as Leibniz, who first introduced the notion of monad, suggested! For details see the chapter Quantum criticality and dark matter.
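The size of the anomaly behind the hypothesis, in numbers: a sketch comparing the energy of an ELF photon to the thermal energy at physiological temperature, and the value of n needed to lift the dark photon energy E = n·hf to that scale (the 10 Hz frequency is an illustrative ELF choice):

```python
# ELF photon energy vs. thermal energy at body temperature.
h = 4.136e-15       # eV s, Planck constant
k_B = 8.617e-5      # eV/K
f = 10.0            # Hz, illustrative ELF (EEG range) frequency
T = 310.0           # K, physiological temperature

E_photon = h * f
E_thermal = k_B * T
n_needed = E_thermal / E_photon
print(f"E = hf         = {E_photon:.2e} eV")
print(f"E_thermal ~ kT = {E_thermal:.3f} eV")
print(f"n = heff/h needed to reach thermal scale ~ {n_needed:.1e}")
# n ~ 6e11: quantal ELF effects require a huge value of heff/h.
```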
Time crystals, macroscopic quantum coherence, and adelic physics

Time crystals (see this) were proposed by Frank Wilczek in 2012. The idea is that there is a periodic collective motion, so that one can see the system as an analog of a 3-D crystal with time appearing as the fourth lattice dimension. One can learn more about real life time crystals here. The first crystal was created by Monroe et al (see this) and involved magnetization. By adding a periodic driving force it was possible to generate spin flips inducing a collective spin flip as a kind of domino effect. The surprise was that the period was twice the original period, and small changes of the driving frequency did not affect the period. One had something more than a forced oscillation - a genuine time crystal. The period of the driving force - the Floquet period - was 74-75 μs, and the system was measured for N=100 Floquet periods or about 7.4-7.5 milliseconds (1 ms happens to be of the same order of magnitude as the duration of a nerve pulse). I failed to find a comment about the size of the system. With quantum biological intuition I would guess something like the size of a large neuron: about 100 micrometers. The second law does not favor time crystals. The time in which single particle motions are thermalized is expected to be rather short. In the case of condensed matter systems the time scale would not be much larger than the inverse of a typical rate for a typical atomic transition. The rate for the 2P → 1S transition of the hydrogen atom estimated here gives a general idea. The decay rate is proportional to ω³d², where ω = ΔE/ℏ is the frequency corresponding to the energy difference between the states, and d is the dipole moment, of order e·a_0 with a_0 the Bohr radius. The average lifetime as the inverse of the decay rate would be 1.6 ns and is expected to give a general order of magnitude estimate (see the numerical check at the end of this posting). The proposal is that the systems in question emerge in non-equilibrium thermodynamics, which indeed predicts a master-slave hierarchy of time and length scales, with masters providing the slowly changing background in which the slaves are forced to move. I am not specialist enough to express any strong opinions about the thermodynamical explanation. What does TGD say about the situation?
For details see the chapter Quantum criticality and dark matter. |
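A numerical check of the 2P → 1S lifetime estimate quoted above, using the standard dipole formula Γ = ω³|d|²/(3πε₀ℏc³) and the textbook matrix element ⟨1s|z|2p⟩ = (128√2/243)a₀:

```python
import math

# Hydrogen 2p -> 1s spontaneous decay rate from the dipole formula
# Gamma = w^3 |d|^2 / (3*pi*eps0*hbar*c^3).
eps0 = 8.854e-12            # F/m
hbar = 1.055e-34            # J s
c = 3.0e8                   # m/s
e = 1.602e-19               # C
a0 = 5.292e-11              # m, Bohr radius

E = 10.2 * 1.602e-19        # J, Lyman-alpha transition energy (10.2 eV)
w = E / hbar                # angular frequency
d = (128 * math.sqrt(2) / 243) * e * a0   # dipole matrix element <1s|ez|2p>

Gamma = w**3 * d**2 / (3 * math.pi * eps0 * hbar * c**3)
print(f"decay rate   = {Gamma:.2e} 1/s")
print(f"lifetime tau = {1/Gamma*1e9:.2f} ns")   # ~1.6 ns, as quoted above
```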
Why metabolism and what happens in bio-catalysis?

The TGD view about dark matter gives also a strong grasp on metabolism and bio-catalysis - the key elements of biology.

Why is metabolic energy needed?

The simplest and at the same time most difficult question that an innocent student can ask in biology class is: "Why must we eat?". Or using more physics oriented language: "Why must we get metabolic energy?". The answer of the teacher might be that we do not eat to get energy but to get order. The stuff that we eat contains ordered energy: we eat order. But order in standard physics is lack of entropy, lack of disorder. The student could get nosy and argue that excretion produces the same outcome as eating but is not enough for survival. We could go to a deeper level and ask why metabolic energy is needed in biochemistry. Suppose we do this in the TGD Universe with dark matter identified as phases characterized by heff/h=n.
Bio-catalysis is a key mechanism of biology, and its extreme efficacy remains to be understood. Enzymes are proteins and ribozymes are RNA sequences acting as biocatalysts. What does catalysis demand?
Hydrogen atom allows also large heff/h=n variants with n>6, with the scale of the energy spectrum behaving as (6/n)^2 if n=6 holds true for visible matter. The reduction of n as the flux tube contracts would liberate binding energy, which could be used to promote the catalysis. The notion of high energy phosphate bond is a somewhat mysterious concept. There are claims that there is no such bond. I have spent a considerable amount of time pondering this problem. Could phosphate contain a (dark) hydrogen atom able to go to a state with a smaller value of heff/h and liberate the excess binding energy? Could the phosphorylation of the acceptor molecule transfer this dark atom associated with the phosphate of ATP to the acceptor molecule? Could the mysterious high energy phosphate bond correspond to the dark atom state? Metabolic energy would be needed to transform ADP to ATP and would generate the dark atom. Could solar light kick atoms into dark states and in this manner store metabolic energy? Could nutrients carry these dark atoms? Could this energy be liberated as the dark atoms return to ordinary states and be used to drive protons against the potential gradient through ATP synthase - analogous to a turbine of a power plant - transforming ADP to ATP and reproducing the dark atom and thus the "high energy phosphate bond" in ATP? Can one see metabolism as a transfer of dark atoms? Could possible negentropic entanglement disappear and emerge again in ADP→ATP? Here it is essential that the energies of the hydrogen atom depend on hbar_eff = n×hbar as hbar_eff^m with m=-2<0. Hydrogen atoms in dimension D have a Coulomb potential behaving as 1/r^(D-2) by the Gauss law, and the Schrödinger equation predicts for D≠4 that the energies satisfy E_n ∝ (heff/h)^m with m = 2+4/(D-4). For D=4 the formula breaks down since in this case the dependence on hbar is not given by a power law. m is negative only for D=3, where one has m=-2 (see the dimensional analysis below). D=3 would thus be the unique dimension allowing the hydrino-like states making possible bio-catalysis and life in the proposed scenario. It is also essential that the flux tubes are radial flux tubes in the Coulomb field of a charged particle. This makes sense in many-sheeted space-time: electrons would be associated with a pair formed by a flux tube and the 3-D atom, so that only part of the electric flux would interact with the electron touching both space-time sheets. This would give the analog of the Schrödinger equation in a Coulomb potential restricted to the interior of the flux tube. The dimensional analysis for the 1-D Schrödinger equation with Coulomb potential would give also in this case the 1/n^2 dependence. The same applies to states localized to 2-D sheets with a charged ion in the center. This kind of states bring in mind the Rydberg states of the ordinary atom with a large value of n. The condition that the dark binding energy is above the thermal energy gives the condition n ≤ 32 on the value of heff/h=n. The size scale of the largest allowed dark atom would be about 100 nm, 10 times the thickness of the cell membrane.
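The exponent m = 2+4/(D-4) follows from a simple dimensional analysis of the Schrödinger equation with the potential V(r) = -k/r^(D-2); a sketch (standard quantum mechanics, with m_e denoting the particle mass to avoid confusion with the exponent m):

$$\frac{\hbar^2}{m_e a^2} \sim \frac{k}{a^{D-2}} \;\Rightarrow\; a \sim \Big(\frac{k\, m_e}{\hbar^2}\Big)^{1/(D-4)}, \qquad E \sim \frac{\hbar^2}{m_e a^2} \propto \hbar^{2}\Big(\frac{k\, m_e}{\hbar^2}\Big)^{-2/(D-4)} \propto \hbar^{\,2+4/(D-4)}.$$

For D=3 this gives E ∝ hbar^(-2), so that the substitution hbar → n×hbar scales binding energies down by 1/n^2, as used above. For details see the chapter Quantum criticality and dark matter. |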
NMP and selfThe preparation of an article about number theoretic aspects of TGD forced me to go through various related ideas and led to a considerable integration of them. In this note the ideas relating directly to consciousness and cognition are discussed.
The view about Negentropy Maximization Principle (NMP) has co-evolved with the notion of self and I have considered many variants of NMP.
Number theoretic Shannon entropy can serve as a measure for the genuine information assignable to a pair of entangled systems. Entanglement with coefficients in the extension is always negentropic if the entanglement negentropy comes from p-adic sectors only. It can be negentropic if negentropy is defined as the difference of the p-adic negentropy and the real entropy. The diagonalized density matrix need not belong to the algebraic extension, since the probabilities defining its diagonal elements are eigenvalues of the density matrix - roots of an N:th order polynomial, which in the generic case requires an N-dimensional algebraic extension of rationals. One can argue that since diagonalization is not possible, also the state function reduction selecting one of the eigenstates is impossible unless a phase transition increasing the dimension of the algebraic extension used occurs simultaneously. This kind of NE could give rise to cognitive entanglement. There is also a special kind of NE, which can result if one requires that the density matrix serves as a universal observable in state function reduction. The outcome of the reduction must be an eigen-space of the density matrix, which is a projector to this subspace acting as identity matrix inside it. This kind of NE allows all unitarily related bases as eigenstate bases (the unitary transformations must belong to the algebraic extension). This kind of NE could serve as a correlate for "enlightened" states of consciousness. Schrödinger's cat would be in this kind of state stably in a superposition of dead and alive, and any state basis obtained by a unitary rotation from this basis would be equally good. One can say that there are no discriminations in this state, and this is what is claimed about "enlightened" states too. The vision about number theoretical evolution suggests that NMP forces the generation of NE resources as NE assignable to the "passive" boundary of CD, for which no changes occur during the sequence of state function reductions defining self. It would define the unchanging self as negentropy resources, which could be regarded as a kind of Akashic records. During the next "re-incarnation", after the first reduction to the opposite boundary of CD, the NE associated with the reduced state would serve as new Akashic records for the time-reversed self. If NMP reduces to the statistical increase of heff/h=n, the conscious information content of the Universe increases in the statistical sense. In the best possible world of SNMP it would increase steadily. Does NMP reduce to number theory? The heretic question that emerged quite recently is whether NMP is actually needed at all! Is NMP a separate principle, or could NMP be reduced to mere number theory? Consider first the possibility that NMP is not needed at all as a separate principle.
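To make the number theoretic entropy concrete, here is a minimal sketch (Python; the sign convention - a positive value means genuine information - and the helper functions are my illustration) computing N_p = Σ_k p_k log|p_k|_p for rational entanglement probabilities:

```python
from fractions import Fraction
from math import log

def p_adic_norm(x: Fraction, p: int) -> Fraction:
    """|x|_p = p**(-k), where p**k is the exact power of p appearing in x."""
    k, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p; k += 1
    while den % p == 0:
        den //= p; k -= 1
    return Fraction(p) ** (-k)

def p_adic_negentropy(probs, p):
    """N_p = sum_k p_k * log|p_k|_p; positive when p divides the denominators."""
    return sum(pk * log(p_adic_norm(pk, p)) for pk in probs)

def real_entropy(probs):
    return -sum(pk * log(pk) for pk in probs)

probs = [Fraction(1, 4)] * 4          # maximally entangled 4-dimensional case
print(p_adic_negentropy(probs, 2))    # log 4 > 0: genuine information
print(real_entropy(probs))            # log 4: the real Shannon entropy of the same state
```

In this maximally entangled example the 2-adic negentropy equals the real entropy, so the difference of the two mentioned above vanishes; for generic rational probabilities the two differ.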
Hitherto I have postulated NMP as a separate principle. The strong form of NMP (SNMP) states that negentropy does not decrease in "big" state function reductions corresponding to the death and re-incarnation of self. One can however argue that SNMP is not realistic. SNMP would force the Universe to be the best possible one, and this does not seem to be the case. Also ethically responsible free will would be very restricted, since self would be forced always to do the best deed, that is, to maximally increase the negentropy serving as the information resources of the Universe. Giving up separate NMP altogether would allow one to have also "Good" and "Evil". This forces one to consider what I have christened the weak form of NMP (WNMP). Instead of the maximal dimension corresponding to an N-dimensional projector, self can choose also lower-dimensional sub-spaces, and a 1-D sub-space corresponds to the vanishing entanglement and negentropy assumed in standard quantum measurement theory. As a matter of fact, this can also lead to a larger negentropy gain, since negentropy depends strongly on the largest power of p dividing the dimension of the resulting eigen sub-space of the density matrix (see the illustration below). This could apply also to the purely number theoretical reduction of NMP. WNMP suggests how to understand the notions of Good and Evil. The various choices in the state function reduction would correspond to a Boolean algebra, which suggests an interpretation in terms of what might be called emotional intelligence. It also turns out that one can understand how the p-adic length scale hypothesis - actually its generalization - emerges from WNMP.
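The remark about the negentropy gain is easy to illustrate numerically. With the convention that a D-dimensional projector carries negentropy k·log(p) for the largest prime power p^k dividing D (my reading of the text; the helper function is an illustration):

```python
from math import log

def projector_negentropy(D: int) -> float:
    """Largest k*log(p) over the prime-power factors p**k of the dimension D."""
    best, n, p = 0.0, D, 2
    while p * p <= n:
        k = 0
        while n % p == 0:
            n //= p; k += 1
        if k:
            best = max(best, k * log(p))
        p += 1
    if n > 1:
        best = max(best, log(n))   # leftover prime factor
    return best

# Full 6-dimensional eigen space versus a lower-dimensional 4-D sub-space:
print(projector_negentropy(6))  # log 3 ~ 1.10   (6 = 2*3)
print(projector_negentropy(4))  # 2*log 2 ~ 1.39 - the smaller sub-space wins
```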
For details see the chapter Negentropy Maximization Principle or the article Re-examination of the basic notions of TGD inspired theory of consciousness. |
WCW and the notion of intentional free willThe preparation of an article about number theoretic aspects of TGD forced me to go through various related ideas and led to a considerable integration of them. In this note the ideas relating directly to consciousness and cognition are discussed.
The original definition of self was as a subsystem able to remain unentangled under the state function reductions associated with subsequent quantum jumps. The density matrix was assumed to define the universal observable. Note that a density matrix, which is a power series of a product of matrices representing commuting observables, has in the generic case eigenstates which are simultaneous eigenstates of all the observables. A second aspect of self was assumed to be the integration of subsequent quantum jumps to a coherent whole giving rise to the experienced flow of time. The precise identification of self allowing one to understand both of these aspects turned out to be a difficult problem. I became aware of the solution of the problem in terms of zero energy ontology (ZEO) only rather recently (2014).
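The parenthetical remark about commuting observables can be checked numerically: build two commuting observables, form a density matrix as a power series of their product, and verify that its eigenvectors diagonalize both (a sketch with numpy/scipy; the 4-dimensional example is mine):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Two commuting observables: both diagonal in the same random orthonormal basis.
U, _ = np.linalg.qr(rng.normal(size=(4, 4)))
A = U @ np.diag([1.0, 2.0, 3.0, 4.0]) @ U.T
B = U @ np.diag([0.5, 0.1, 0.7, 0.3]) @ U.T

# Density matrix as a power series (here exp) of the product AB, normalized to trace 1.
rho = expm(A @ B)
rho /= np.trace(rho)

# The eigenvectors of rho are simultaneous eigenvectors of A and B:
_, V = np.linalg.eigh(rho)
for M in (A, B):
    D = V.T @ M @ V
    print(np.allclose(D, np.diag(np.diag(D))))  # True, True
```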
For details see the chapter Negentropy Maximization Principle or the article Re-examination of the basic notions of TGD inspired theory of consciousness. |
Anomalies of water as evidence for dark matter in TGD senseThe motivation for this brief comment came from a popular article telling that a new phase of water has been discovered in the temperature range 50-60 °C (see this). Also Gerald Pollack (see this) has introduced what he calls the fourth phase of water. For instance, in this phase water consists of hexagonal layers with effective H1.5O stoichiometry, and the phase has a high negative charge. This phase plays a key role in TGD based quantum biology. These two fourth phases of water could relate to each other if there exists a deeper mechanism explaining both of them and the various anomalies of water. Martin Chaplin (see this) has an extensive web page about the various properties of water. The physics of water is full of anomalous features and therefore the page is a treasure trove for anyone ready to give up the reductionistic dogma. The site discusses the structure, thermodynamics, and chemistry of water. Even academically dangerous topics such as water memory and homeopathy are discussed. One learns from this site that the physics of water involves numerous anomalies (see this). The structural, dynamic and thermodynamic anomalies form a nested structure in the density-temperature plane. For liquid water at the atmospheric pressure of 1 bar the anomalies appear in the temperature interval 0-100 °C. Hydrogen bonding creating a cohesion between water molecules distinguishes water from other substances. Hydrogen bonds induce the clustering of water molecules in liquid water. Hydrogen bonding is also highly relevant for the phase diagram of H2O coding for the various thermodynamical properties of water (see this). In biochemistry hydrogen bonding is involved with hydration. Bio-molecules - say amino-acids - are classified into hydrophobic, hydrophilic, and amphiphilic ones, and this characterization determines to a high extent the behavior of the molecule in a liquid water environment. Protein folding represents one example of this. The anomalies are often thought to reduce to hydrogen bonding. Whether this is the case is not obvious to me, and this is why I find water such a fascinating substance. TGD indeed suggests that water decomposes into ordinary water and dark water consisting of phases with effective Planck constant heff=n×h residing at magnetic flux tubes. Hydrogen bonds would be associated with short and rigid flux tubes, but for larger values of n the flux tubes would be longer by a factor n and have a string tension behaving as 1/n, so that they would be softer and could be loopy. The portion of water molecules connected by flux tubes carrying dark matter could be identified as dark water, and the rest would be ordinary water. This model allows one to understand the various anomalies. The anomalies are largest at the physiological temperature 37 °C, which conforms with the vision about the role of dark matter and dark water in living matter, since the fraction of dark water would be highest at this temperature. The anomalies discussed are the density anomalies, the anomalies of specific heat and compressibility, and the Mpemba effect. I discussed these anomalies already a decade ago. The recent view about dark matter allows however a much more detailed modelling. For details see the chapter Dark Nuclear Physics and Condensed Matter or the article The anomalies of water as evidence for the existence of dark matter in TGD sense. |
About number theoretic aspects of NMPThere is something in NMP that I still do not understand: every time I begin to explain what NMP is I have this unpleasant gut feeling. I have the habit of making a fresh start every time rather than pretending that everything is crystal clear. I have indeed considered very many variants of NMP. In the following I will consider two variants of NMP. The second variant reduces to pure number theory in the adelic framework inspired by the number theoretic vision. It is certainly the simplest one since it says nothing explicit about negentropy. The second variant says essentially the same as the "strong form of NMP" when the reduction occurs to an eigen-space of the density matrix. I will not consider zero energy ontology (ZEO) related aspects nor the aspects related to the hierarchy of subsystems and selves, since I dare regard these as "engineering" aspects. What should NMP state?
The notion of entanglement negentropy
State function reduction as universal measurement interaction between any two systems
NMP as a purely number theoretic constraint? Let us consider the possibility that NMP reduces to the number theoretic condition tending to stabilize generic entanglement.
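The stabilizing mechanism is easy to demonstrate: the entanglement probabilities are roots of the characteristic polynomial of the density matrix, and for a generic rational density matrix these roots lie outside the rationals, so that the reduction must wait for the extension. A sketch with sympy (the example matrix is mine):

```python
from sympy import Matrix, Rational

# A generic density matrix with rational entries (symmetric, trace 1, positive).
rho = Matrix([[Rational(2, 3), Rational(1, 4)],
              [Rational(1, 4), Rational(1, 3)]])

print(rho.charpoly().as_expr())  # lambda**2 - lambda + 23/144: rational coefficients
print(rho.eigenvals())           # 1/2 +- sqrt(13)/12: the probabilities are irrational

# State function reduction to an eigenstate would require extending the rationals
# by sqrt(13); until that extension is in use, the entanglement is stable.
```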
For background see the chapter Negentropy Maximization Principle or the article About number theoretic aspects of NMP. |