DNA, speech, music, and ordinary sound

Peter Gariaev's group has made rather dramatic claims about DNA over the years. The reported findings have served as inspiration in the development of the TGD based view about living matter (see this, this, this, this and this).

  1. The group has proposed that the statistical distributions of nucleotides and codons in the intronic portion of DNA resemble the distributions of letters and words in natural languages. For instance, it is proposed that Zipf's law, which holds for natural languages, applies also to the distribution of codons in the intronic portion of DNA. One can order the words of a natural language by their popularity. Zipf's law states that the frequency of a word is inversely proportional to its popularity rank: in a long enough text the n:th most popular word appears roughly 1/n times as often as the most popular one.
  2. It has also been claimed that DNA can be reprogrammed using modulated laser light or even radio waves. I understand that reprogramming means a modified gene expression. Gariaev's group indeed proposes that the meaning of the third nucleotide (having a rather low significance in the DNA-amino acid correspondence) in the genetic codon depends on the context, giving rise to a context dependent translation to amino acids. This is certainly a well-known fact for certain variants of the genetic code. This context dependence might make the re-programming possible. The notion of dark DNA allows one to consider a much more radical possibility based on the transcription of dark DNA to mRNA followed by translation to amino acids. This could effectively replace genes with new ones.
  3. Also the modulation of the laser light by speech is claimed to have the re-programming effect. The broad band em wave spectrum resulting from the scattering of red laser light on DNA is reported to have rather dramatic biological effects. The long wavelength part of this spectrum can be recorded and transformed to sound waves, and these sound waves are claimed to have the same biological effects as the light. The proposal is that acoustic solitons propagating along DNA represent this effect on DNA.
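The Zipf-law claim in point 1 is easy to state operationally. A minimal sketch in Python of how one could test it on an intronic sequence (the toy sequence below is hand-made for illustration and is not expected to follow Zipf's law; for a sequence that did, the rank-times-count products would be roughly constant):

```python
from collections import Counter

def zipf_profile(sequence):
    """Split a DNA sequence into non-overlapping codons (triplets),
    rank them by frequency, and return (rank, count) pairs.
    Under Zipf's law, count(rank) ~ count(1) / rank."""
    codons = [sequence[i:i + 3] for i in range(0, len(sequence) - 2, 3)]
    counts = Counter(codons)
    ranked = sorted(counts.values(), reverse=True)
    return list(enumerate(ranked, start=1))

# Toy sequence; a real test would use the intronic portion of an actual genome.
seq = "ATGGCGATGATGGCGTTTATGGCGTTT" * 10
profile = zipf_profile(seq)

# Zipf's law predicts rank * count ~ constant across ranks.
zipf_products = [rank * count for rank, count in profile]
```

For a real test one would feed in actual intronic DNA and inspect whether the rank-times-count products stay roughly constant over many ranks, as they do for word frequencies in long texts.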
I do not have the competence to make statements about the plausibility of these claims. The TGD view about quantum biology also makes rather strong claims. The natural question is however whether a justification for the claims of Gariaev and collaborators could be found in the TGD framework. In particular, what can one say about the possible effects of sound on DNA? One intriguing fact about sound perception is that music and speech have meaning whereas generic sounds do not. Could one say something interesting about how this meaning is generated at the level of DNA?

Basic picture

Before continuing it is good to restate the basic TGD inspired ideas about the generation of meaning.

  1. The generation of negentropic entanglement is the correlate for the experience of meaning. In the model inspired by Becker's findings discussed in the earlier posting, the generation of negentropic entanglement involves a generation of supra currents along flux tubes moving in the electric field parallel to them. This is a critical phenomenon taking place when the voltage along the flux tube is near a critical value. The generation of the nerve pulse near the critical value of the resting potential is one example of this criticality. Becker's direct currents involved with the healing of wounds are another example.

    The flow of the supra current gives rise to the acceleration of charges along the flux tubes and to the generation of Cooper pairs or even many-electron systems at smaller space-time sheets in a negentropically entangled state carrying a metabolic energy quantum as zero point kinetic energy. The period of negentropic entanglement gives rise to a conscious experience to which one can assign various attributes such as understanding, attention, and so on. Negentropic entanglement would measure the information contained by a rule having as its instances the state pairs in the quantum superposition defining the entangled state. When the period of negentropic entanglement ceases, the metabolic energy is liberated.

  2. Remote activation of DNA by analogs of laser beams is another essential piece of TGD inspired quantum biology (see this). In the proposed addressing mechanism a collection of frequencies serves as a password activating intronic portions of DNA. This would take place via a resonance for the proposed interaction between photons and dark supra currents flowing along magnetic flux tubes and perhaps also along DNA strands or flux tubes parallel to them. The interaction would involve the superposition of the electric fields of photons (massless extremals) with the electric fields parallel to the flux tubes, so that the massless extremals serving as correlates for laser beams would traverse the flux tube in the orthogonal direction.
  3. The flux tubes, and more generally the flux sheets labelled by the value of Planck constant, along which the radiation arrives would be transversal to DNA and contain DNA strands. This kind of flux tubes and sheets also define the connections to the magnetic body, and form parts of it. A given flux sheet would naturally select the portion of DNA which is activated by the radiation: it could be a portion of the intronic part of DNA activating in turn a gene. These flux tubes and sheets could be connected to the lipids of nuclear and cell membranes - also cell membranes of other cells - as assumed in the model of DNA as topological quantum computer. The sheets could also give rise to a hierarchy of genomes: besides the genome one would have a super-genome in which individual genomes are integrated by flux sheets to a large coherently expressed structure containing them like a page of a book contains lines of text. These pages would in turn be organized to a book - a hyper-genome as I called it. One could have also libraries, etc. There would be a fractal flux-quanta-inside-flux-quanta structure.

Phonons and photons in TGD Universe

Consider next phonons and their coupling to photons in TGD Universe.

  1. Sound waves could quite well transform to electromagnetic radiation since living matter is a piezoelectric crystal transforming sound to radiation and vice versa. Microwave hearing represents an example of this kind of transformation. This would require that photons of a given energy and varying value of Planck constant couple to phonons with the same energy, Planck constant, and frequency.
  2. Whether one can assign to phonons a non-standard value of Planck constant is not quite clear, but there seems to be no reason preventing this. If so, even the phonons of audible sounds would have energies above the thermal threshold and have direct quantal effects on living matter if they have the same Planck constant as the photons with the same frequency.
  3. Acoustic phonons represent longitudinal waves and their coupling to photons would require longitudinal photons. In Maxwell's electrodynamics these are not possible, but in the TGD framework the photon is predicted to have a small mass so that also longitudinal photons are possible.
  4. For general condensed matter systems one can also have optical phonons for which the polarization is orthogonal to the wave vector, and these could couple to ordinary photons. The motion of the charged particles in the electromagnetic field of a massless extremal (topological light ray) would be a situation in which phonons and photons accompany each other. This would make the piezoelectric mechanism possible.
Under these assumptions collections of audible frequencies could also represent passwords activating the intronic portion of the genome and leading to gene expression or some other activities. If one believes in the hypothesis that DNA acts like a topological quantum computer based on braid strand connections between the nucleotides in the intronic portion of DNA and the lipids of the nuclear and/or cell membranes, also topological quantum computation type processes could be activated by collections of sound frequencies (see this).
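A back-of-the-envelope check related to point 2 of the list above: an ordinary photon (or phonon) at an audible frequency carries an energy far below the thermal energy at body temperature, so a large ratio h_eff/h would be needed for direct quantal effects. The numbers below use standard constants; the required ratio is plain arithmetic, not a TGD prediction.

```python
# Standard constants (SI units).
h = 6.62607015e-34    # Planck constant, J*s
k_B = 1.380649e-23    # Boltzmann constant, J/K

f = 440.0             # an audible frequency (concert A), Hz
T = 310.0             # body temperature, K

E_photon = h * f      # energy of an ordinary quantum at 440 Hz
E_thermal = k_B * T   # thermal energy scale at body temperature

# Ratio h_eff/h needed for the quantum energy to reach the thermal scale.
ratio = E_thermal / E_photon
```

The required ratio comes out of order 10^10, which gives a feeling for how large the non-standard values of Planck constant would have to be in this picture.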

What distinguishes speech and music from sounds without meaning?

Speech and music are very special forms of sound in that they have a direct meaning. The more one thinks about these facts, the more non-trivial they look. For music - say singing - the frequency of the carrier wave is piecewise constant whereas for speech the amplitude modulation is important. In fact, by slowing down recorded speech, one gets the impression that the carrier frequency is actually modulated as in a chirp (the frequency goes down and covers a range of frequencies). What is the mechanism giving speech and music their meaning and in this manner distinguishing them from other sounds?

Besides the frequency also the phase is important for both the speech and music experience. Speech and reversed speech sound quite different although the intensity in frequency space is the same. Therefore the relative phases associated with the Fourier coefficients of the various frequencies must be important. For music, simple rational multiples of the fundamental define the scale. Could it be that also the frequencies relevant to the comprehension of speech correspond to these rational multiples?
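The claim that speech and reversed speech share the same intensity in frequency space while differing in phases can be checked numerically: time reversal of a real signal conjugates its Fourier transform up to a linear phase, which preserves magnitudes but changes relative phases. A minimal sketch with a random signal standing in for a speech waveform:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)    # stand-in for a recorded speech waveform
x_rev = x[::-1]                  # the same waveform played backwards

X = np.fft.rfft(x)
X_rev = np.fft.rfft(x_rev)

# Magnitude spectra agree: the intensity in frequency space is identical...
same_intensity = np.allclose(np.abs(X), np.abs(X_rev))
# ...but the relative phases differ, which is what makes reversed speech
# sound so different.
same_phases = np.allclose(np.angle(X), np.angle(X_rev))
```

The first flag comes out true and the second false, in line with the observation that the ear must be sensitive to more than the power spectrum.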

Suppose that one indeed believes in the proposed vision based on the fundamental role of negentropic entanglement in the generation of meaning and takes seriously the proposed mechanisms for generating it. Can one understand why music and speech differ from general sounds and what distinguishes them from each other?

  1. With these assumptions a sound wave containing a suitable collection of frequencies would indeed activate the intronic portion of DNA by generating negentropic entanglement. Also other dark flux tubes than those assignable to DNA are involved. For instance, the hair cells responsible for the hearing of sounds around particular frequencies could involve flux tubes and utilize a similar mechanism. Allowing only hair cells would define the conservative option. On the other hand, one could well claim that what happens in the ear has nothing to do with the understanding of speech and music, which could take place only at the level of neuronal nuclei.
  2. Could the direct interaction of sound waves with magnetic flux tubes generate the experiences of speech and music - in other words, assign meaning to sounds? The criterion for a sound to have an interpretation as speech or music would be that it contains the resonance frequencies needed to activate the DNA, or more generally to generate dark supra currents creating Cooper pairs and in this manner loading metabolic energy storages. This would apply to both speech and musical sounds.
  3. The pitch of speech and musical sound can vary. We are aware of the key of a music piece and of the modulations of the key, remember the starting key, and find it highly satisfactory to return to the "home" defined by the original key. This would imply that the overall scale of the collection of frequencies can be varied and that the pitch of the speech defines a natural expectation value of this scale. For persons possessing so-called absolute pitch this scaling symmetry would be broken in a well-defined sense.
  4. Musical scales involve frequencies coming as rational multiples of the basic frequency. Octaves - power-of-two multiples of the frequency - can be said to be equivalent as far as the musical experience is considered. One might understand the special role of rational multiples of the basic frequency if the Fourier components have the same phase periodically so that the experience is invariant under discrete time translations. This requires commensurable frequencies expressible as rational multiples of the same fundamental frequency. The preferred role of p-adic primes coming as powers of two could relate to the octave phenomenon.
  5. Are the relative phases of the different Fourier components important for the music experience? If one requires a periodic occurrence of the maximal possible intensity (maximal constructive interference), then the relative phases must vanish at the values of time corresponding to the maximal possible intensity. What seems essential is that the presence of commensurate frequencies gives rise to a time translation invariant sensation whereas speech consists of pulses.
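The role of commensurate frequencies in points 4 and 5 can be illustrated numerically: a superposition of sinusoids at rational multiples of a fundamental repeats exactly after a common period, whereas an irrational frequency ratio destroys this discrete time translation invariance. A small sketch (the fundamental and the ratios are arbitrary choices for illustration):

```python
import numpy as np

f0 = 110.0                        # fundamental frequency, Hz
ratios = [1.0, 3 / 2, 2.0]        # commensurate multiples: a simple "chord"

def wave(t):
    """Superposition of sinusoids at rational multiples of f0, all in phase
    at t = 0 (maximal constructive interference)."""
    return sum(np.sin(2 * np.pi * r * f0 * t) for r in ratios)

t = np.linspace(0.0, 0.05, 2000)
T = 2.0 / f0                      # common period: whole numbers of cycles
                                  # of f0, (3/2) f0 and 2 f0 fit into it
periodic = np.allclose(wave(t), wave(t + T))

# An irrational multiple (sqrt(2) f0) never realigns with this period.
irr = np.sin(2 * np.pi * np.sqrt(2) * f0 * t)
irr_shifted = np.sin(2 * np.pi * np.sqrt(2) * f0 * (t + T))
aperiodic = not np.allclose(irr, irr_shifted)
```

The commensurate superposition is invariant under the discrete time translation T, the irrational one is not, which is exactly the distinction the text draws between a scale-like sound and a generic one.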

Are speech and music quantum duals like position and momentum?

Frequencies are crucial for the music experience. In the case of speech the relative phases are very important, as the example of reversed speech demonstrates. How a given phoneme is heard is determined to a high degree by the frequency spectrum at the beginning of the phoneme (this is what distinguishes between consonants). Vowels are nearer to notes in vocalization. Speech consists of pulses, and destructive interference between different frequencies is required to generate the pulses and the different pulse shapes, so that phase information is important. At least the harmonics of the basic rational multiples of the fundamental are necessary for speech.

One can criticize the previous discussion for being completely classical. Phase and frequency are in wave mechanics canonically conjugate variables analogous to position and momentum. Is it really possible to understand the difference between music and speech purely classically by assuming that one can assign to sound waves both frequencies and phases simultaneously - just as one assigns to a particle sharp values of both momentum and position? Or should one use one representation or the other: either in terms of the numbers of phonons in different modes labelled by frequencies, or in terms of coherent states of phonons with ill-defined phonon numbers but well-defined amplitudes? Could the coherent states serve as the analogs of classical sound waves? Speech would be as near as possible to classical sound and music would be quantal. Of course, there is a large variety of alternative choices of basis states between these two extremes, as a specialist in quantum optics could tell.
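For reference, the two extreme bases alluded to here can be written down explicitly in standard quantum optics notation, with a the annihilation operator of a single phonon mode:

```latex
% Fock (number) state: sharp phonon number n, completely undefined phase.
\hat{n}\,|n\rangle = n\,|n\rangle , \qquad \hat{n} = \hat{a}^{\dagger}\hat{a}

% Coherent state: superposition of all phonon numbers with a well-defined
% complex amplitude alpha, and hence a well-defined phase.
|\alpha\rangle = e^{-|\alpha|^{2}/2} \sum_{n=0}^{\infty}
\frac{\alpha^{n}}{\sqrt{n!}}\,|n\rangle ,
\qquad \hat{a}\,|\alpha\rangle = \alpha\,|\alpha\rangle
```

A Fock state has a sharp phonon number but a completely undefined phase, while a coherent state has a well-defined amplitude and phase at the cost of an ill-defined phonon number; this is the sense in which coherent states are the closest analogs of classical sound waves.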

Suppose that this picture is more or less correct. What could be the minimal scenario allowing one to understand the differences between speech and music?

  1. Only a subset of frequencies could activate DNA (or, if one wants to be conservative, the hair cells) also in the case of speech. One could still pick up the important frequencies for which the ratios are simple rational numbers, as in the case of the musical scale, plus their harmonics. If this assumption is correct, then speech from which all frequencies except the harmonics of the simple rational multiples of the fundamental are removed should still be comprehensible as speech. The pitch of the speech would determine a good candidate for the fundamental frequency.
  2. The harmonics of the frequencies activating DNA would be crucial for speech. Harmonics are present also in music, and their distribution allows one to distinguish between different instruments and persons. The deviation of musical notes from ideal Fock states would correspond to this.
  3. The naive guess is that the simple rational multiples of the fundamental and the possibility of having their harmonics could be reflected in the structure of the intronic portions of DNA as repetitive structures of various sizes. This cannot be the case, since the wavelengths of ordinary photons would be so small that the energies would be in the keV range. Neither is this expected to be the case. It is the magnetic flux tubes and sheets traversing the DNA which carry the radiation, and the natural lengths assignable to these flux quanta should correspond to the wavelengths. The larger the flux quantum, the lower the frequency and the larger the value of Planck constant. For a given flux tube length the harmonics of the fundamental would appear naturally.

    The DNA strands and the flux tubes and sheets form a kind of electromagnetic music instrument with the flux quanta taking the role of guitar strings and the DNA strands and other structures, such as lipids and possibly other molecules to which the flux tubes get attached, taking the role of the frets of a guitar. This analogy suggests that for wavelengths measured in micrometers the basic frequencies correspond to the distances between the "frets" defined by cell and nuclear membranes in the tissue, in the scale of the organism. This would relate the spectrum of resonance frequencies to the spectrum of distances between DNAs in the tissue.

    For wavelengths corresponding to very large values of Planck constant, giving rise to frequencies in the VLF and ELF range and corresponding also to audible frequencies, the preferred wavelengths would correspond to lengths of flux quanta in the Earth size scale. One should understand whether the quantization of these lengths in simple rational ratios could take place for the preferred extremals.

  4. Could the pulse shape associated with massless extremals (MEs, topological light rays) allow one to distinguish classically between speech and music at the level of space-time correlates? Linear superposition of Fourier components in the direction of the ME is possible, and this allows one to speak about pulse shape. It also allows the notions of coherent state and Fock state for a given direction of the wave vector. Essential would be the restriction of the superposition of fields to a single direction of propagation, to be distinguished from the superposition of the effects of fields associated with different space-time sheets on a multiply topologically condensed particle. Maybe this would allow one to make testable predictions.
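A quick consistency check of the length scales invoked in point 3 above (plain lambda = c/f, nothing TGD-specific): electromagnetic wavelengths at ELF and audible frequencies are indeed of the order of the Earth's size.

```python
c = 2.99792458e8    # speed of light, m/s
R_E = 6.371e6       # Earth radius, m

# Wavelength lambda = c/f for a Schumann-resonance frequency, an ELF
# frequency, and an audible frequency.
for f in (7.8, 60.0, 440.0):                  # Hz
    wavelength = c / f                        # m
    print(f"{f:7.1f} Hz -> {wavelength:.3e} m "
          f"({wavelength / R_E:.1f} Earth radii)")
```

At 7.8 Hz the wavelength is a few Earth circumferences in order of magnitude, and even at 440 Hz it is still hundreds of kilometers, so flux quanta in the Earth size scale would match this frequency range.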

This text can be found at my homepage in the article titled Quantum Model for the Direct Currents of Becker. See also the chapter Quantum Mind, Magnetic Body, and Biological Body.