What's new in

TGD Inspired Theory of Consciousness

Note: Newest contributions are at the top!



Year 2018



Two manners to learn and what goes wrong with vulgar skeptics?

I had an "entertaining" discussion with two fellows - I call them A and B - which taught me a lot, I hope also A and B, and actually gave a good example of two kinds of learning: learning by conditioning and learning by discovery. It also led to a possible understanding of what goes wrong in what I would call the ultra-skeptic cognitive syndrome.

[This discussion by the way gave me good laughs. A summarized his academic background by "studied strings", and B was a Bachelor in computer science pretending to be an M-theorist. They tried to demonstrate that I am a crackpot. They carried out an "investigation" following the principles of the investigations made for witch candidates in the Middle Ages. The victim had two options: she drowns, or she does not, in which case she is burned at the stake.]

The highly emotional discussion was initiated by a totally nonsensical hype about transferring the consciousness of C. elegans to a computer program (see this). I said that the news was hype, and this raised the rage of A and B. The following considerations have very little to do with that article. Note however that I have done some work on AI in general and even with the basic ideas of deep learning. For instance, two years ago we had a collaboration about AI, the IIT approach to consciousness, and a possible connection with remote mental interactions together with Lian Sidorov and Ben Goertzel, who is behind the Sophia robot. There are two chapters related to this (see this and this). I think that the latter chapter is published in a book by Goertzel. There is also a critical article inspired by the Sophia robot, about which Ben Goertzel wrote an enthusiastic article and sent it to Lian Sidorov and me (this).

1. Learning by conditioning

Returning to learning: the first kind of learning is learning by conditioning, which deep learning algorithms try to mechanize. The second kind of learning is learning by discovery. The latter is impossible for computers because they obey a deterministic algorithm and are unable to do anything creative.

Emotions play a strong role in learning by conditioning in living systems; in its simplest form it is the learning of X-good and X-bad type associations helping C. elegans to survive in a cruel world. In the case of humans this kind of association can be extremely dangerous, as for instance the course of events in the USA has shown.

A very large part of our learning is just the forming of associations: this is what Pavlov's dogs did. In school we learn to associate the symbol "6" with "2×3=". In our youth we also learned algorithms for addition, subtraction, multiplication, and division, and even for finding the roots of a second-order polynomial. Often this is called learning of mathematics. Later some mathematically gifted ones however discovered that this is just simple conditioning to an algorithm and has very little to do with genuine mathematical thinking. The discovery of the algorithm itself would be mathematical thinking. The skill of coding an algorithm - usually a given one - is itself an algorithm, and it too can be coded in AI.

If we are good enough at getting conditioned, we get a place in a university and learn science. This also involves learning simple conditionings of the X-good and X-bad type. In this learning, social feedback from others reinforces the learning: who would not like to earn the respect of others!

For X-bad conditionings X can be homeopathy, water memory, cold fusion, telepathy, remote viewing, non-reductionistic/non-physicalistic world view, quantum theories of consciousness, TOEs other than M-theory, etc... For X-good conditionings X can be physicalism, reductionism, strong AI, superstrings, Witten, etc...

The student also learns to utter simple sentences demonstrating that he has learned the desired conditionings. This is important for a career. Proud parents who hear the baby say its first word encourage the child. In the same manner the environment reinforces the learning of "correct" opinions by positive feedback. The discussion with A and B gave quite a collection of these simple sentences. "I guessed that he is a crank" from A is a good example, intended to express the long life experience and wisdom of the youngster.

These conditionings also make it easy to "recognize" whether someone is a crank/crackpot/etc... and even to carry out personal investigations - analogous to the witchcraft investigations of the Middle Ages - of whether someone is a crank or not. This is what A and B in their young and foolish arrogance indeed decided to carry out.

2. Learning by Eureka experience

There is also a second kind of learning: learning by discovery. Computers are not able to do this. I mentioned in the discussion what happens when you look at a certain kind of image consisting of mere random-looking spots in a plane. After enough staring a beautiful 3-D pattern suddenly emerges. This is a miracle-like phenomenon, a Eureka experience. The quantum consciousness based explanation is the emergence of quantum coherence in the scale of the neuronal cognitive representation in the visual cortex at least. A new 3-D mental image emerges from a purely 2-D one. One goes outside of the context.

The increase of dimension might provide an important hint about what happens more generally: in the TGD based model this would indeed occur for the dimension of the extension of rationals in the Eureka quantum jump. Physically this would correspond to an increase of the effective Planck constant heff = n×h0, h = 6×h0, assignable to the mental image created by the image. n is indeed the dimension of the extension of rationals and would increase; also the scale of quantum coherence would increase from that of a single spot to that of the entire picture.

This kind of learning by Eureka is probably very common for children: they are said to be geniuses. Later the increasing dominance of learning by conditioning often eliminates this mode of learning, and the worst outcome is a mainstream scientist who is a hard-nosed skeptic. Solving genuine problems is the manner to gain these learning experiences, but they come only now and then. Some of them are really big: during my professional career there have been - I would guess - about 10 really big experiences of this kind, involving the discovery of a new principle or a totally new physical idea.

3. How to understand what is wrong with vulgar skeptics?

The discussion was very interesting since it led me to ponder why it is so hopeless to explain something extremely simple to skeptics. There is a beautiful connection with learning based on the Eureka experience. Physically this corresponds in TGD to a phase transition increasing the scale of quantum coherence and the algebraic complexity: more technically, the effective Planck constant heff increases at some levels. More intelligent mental images become possible, and the Eureka experience happens, as in the situation when a chaotic 2-D set of points becomes a beautiful 3-D object.

Biological evolution at the level of species is based on this: we humans are more intelligent than fruit flies. This evolution occurs at all levels - also at the level of individuals, but it is not politically correct to say this aloud. Some of us are in their intellectual evolution at a higher level than others, either congenitally or by our own efforts or both. This of course creates bitter feelings. Intellectual superiority irritates and induces hatred. This is why so many intellectuals spend most of their life in jail.

Take seeing as an example. If a person has become blind in adulthood, he understands that he is blind and also what it feels like to see. A congenitally blind person also believes that he is blind: this is because most people in his environment tell him that it is possible to see and that he is blind. He does not however feel what it is like to see. Suppose now that most of us are blind and then someone comes and tells us that he sees. How many would believe him? They cannot feel what it is like to see. Very probably they conclude that this fellow is a miserable crank.

Suppose now that a certain person - call him MP - has used 4 decades to develop a TOE based on a generalization of the superstring model, made 5 years before the first superstring revolution, and explaining also consciousness. MP tries his best to explain his TOE to a couple of skeptics but finds it hopeless. They even arrange an "investigation" following the best traditions of the witch hunt to demonstrate his crackpotness. And indeed, they conclude that they were correct: all that this person writes is totally incoherent nonsense, just like the 2-D set of random points.

These two young fellows are simply intellectually blind, since their personal hierarchy of Planck constants does not contain the required higher values. A Eureka experience would be required. MP could of course cheat and tell them that he believes in superstrings and give a hint that he is a good friend of Witten. This would help, but would only lead to pretended understanding. The fellows would take MP seriously only because MP agrees with Witten and claims to be a friend of Witten, but still they would not have the slightest idea of what TGD is. They cannot feel what it is like to understand TGD.

The only hope is a personal intellectual evolution increasing the needed Planck constants in the personal hierarchy. This is possible only if these fellows admit that they are intellectually blind in some respects, but if they are young arrogant skeptics they furiously deny this, and therefore also the possibility of personal intellectual evolution.

For background see the chapter Conscious Information and Intelligence. See also the article Two manners to learn and what goes wrong with vulgar skeptics?.



Maxwell's demon from TGD viewpoint

On Facebook I received a link to an interesting popular Science News article titled A New Information Engine is Pushing the Boundaries of Thermodynamics. The article told about progress in generalizing the conventional second law of thermodynamics to take information as an additional parameter.

The Carnot engine is the standard practical application. One has two systems A and B, both in thermal equilibrium but with different temperatures TA and TB ≥ TA. By the second law there is a heat flow Q from B to A, and the Carnot engine transforms some of this heat to work. Carnot's law gives an upper bound for the efficiency of the engine: η = W/Q ≤ (TB-TA)/TB. The possibility to transform information to work forces a generalization of Carnot's law.
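As a numerical sanity check of the Carnot bound, a minimal sketch (the reservoir temperatures below are hypothetical, chosen only for illustration):

```python
def carnot_efficiency_bound(T_cold: float, T_hot: float) -> float:
    """Upper bound (TB - TA)/TB for the efficiency W/Q of a Carnot engine
    running between temperatures TA = T_cold and TB = T_hot (in kelvin)."""
    if not 0 < T_cold <= T_hot:
        raise ValueError("require 0 < T_cold <= T_hot")
    return (T_hot - T_cold) / T_hot

# Hypothetical reservoirs: TA = 300 K, TB = 500 K.
print(carnot_efficiency_bound(300.0, 500.0))  # 0.4: at most 40% of Q becomes work
```

Note that the bound vanishes when TA = TB: without a temperature difference no work can be extracted, which is exactly the loophole the demon (and information) appears to open.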

Since information is basically conscious information, this generalization is highly interesting from the point of view of quantum theories of consciousness and quantum biology. Certainly the generalization is highly non-trivial, especially so in the standard physics framework, where only entropy is defined at the fundamental level, is regarded as ensemble entropy, and basically has very little to do with conscious information. Therefore the argumentation is a kind of artwork.

1. Maxwell's demon in its original form

Maxwell's demon appears in a thought experiment in which one considers a system consisting of two volumes A and B of gas in thermal equilibrium at the same temperature. At the boundary between A and B, which has a small hole, sits a demon checking whether a molecule coming from A has a velocity above some threshold: if so, it allows the molecule to pass to B. The demon also monitors the molecules coming from B, and if the velocity is below the threshold it allows the molecule to pass to A. As a consequence, temperature and pressure differences develop between A and B. The pressure difference can do work, much like the voltage between the cathode and anode of a battery. One can indeed add a tube, analogous to a wire, between the ends of the entire system; the pressure difference causes a flow of mass, thus doing work: one has a pump.
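The sorting step of the thought experiment is easy to simulate. The toy model below (my own illustrative sketch; the speed distribution, threshold, and step count are arbitrary choices) starts both volumes with the same speed distribution and lets a demon pass fast molecules A→B and slow molecules B→A; a temperature difference, measured by the mean squared speed, develops.

```python
import random

def maxwell_demon(n=2000, steps=20000, seed=1):
    """Toy 1-D gas: the demon lets fast molecules pass A->B and slow ones B->A."""
    random.seed(seed)
    # Both volumes start with the same speed distribution (same temperature).
    A = [abs(random.gauss(0.0, 1.0)) for _ in range(n)]
    B = [abs(random.gauss(0.0, 1.0)) for _ in range(n)]
    threshold = 1.0
    for _ in range(steps):
        if A and random.random() < 0.5:
            i = random.randrange(len(A))
            if A[i] > threshold:          # fast molecule arrives from A: pass to B
                B.append(A.pop(i))
        elif B:
            i = random.randrange(len(B))
            if B[i] < threshold:          # slow molecule arrives from B: pass to A
                A.append(B.pop(i))
    mean_sq = lambda v: sum(x * x for x in v) / len(v)
    return mean_sq(A), mean_sq(B)         # proportional to temperatures TA, TB

TA, TB = maxwell_demon()
print(TA < TB)  # a temperature difference has developed between A and B
```

Of course the simulation only reproduces the paradoxical sorting; the cost of the demon's measurements, which is the point of the resolution discussed below, is not modeled.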

The result is in conflict with the second law, and one can ask what goes wrong. From the Wikipedia article one learns that a lot of arguments have been presented pro and con Maxwell's demon. A biologist might answer immediately: the demon must measure the states of the molecules, and this requires cognition and memory, which in turn require metabolic energy. When one takes this into account, the paradox should disappear and the second law should remain true in a generalized form in which one takes into account the needed metabolic energy.

2. Experimental realization of Maxwell's demon

The popular article describes an experiment actualizing Maxwell's demon carried out by Govind Paneru, Dong Yun Lee, Tsvi Tlusty, and Hyuk Kyu Pak. Below is the abstract of the article Lossless Brownian Information Engine published in Phys Rev Letters (see this).

We report on a lossless information engine that converts nearly all available information from an error-free feedback protocol into mechanical work. Combining high-precision detection at a resolution of 1 nm with ultrafast feedback control, the engine is tuned to extract the maximum work from information on the position of a Brownian particle. We show that the work produced by the engine achieves a bound set by a generalized second law of thermodynamics, demonstrating for the first time the sharpness of this bound. We validate a generalized Jarzynski equality for error-free feedback-controlled information engines.

Unfortunately, the article is behind a paywall and I failed to find it in the arXiv. The popular article uses notions like "particle trapped by light at room temperature" and photodiode as "light trap" without really defining what these expressions mean. For instance, it is said that the light trap would follow particles moving in a definite direction (from A to B in Maxwell's thought experiment). I must admit that I am not at all sure what the precise meaning of this statement is.

3. TGD view about the situation

TGD inspired theory of consciousness can be regarded as a quantum measurement theory based on zero energy ontology (ZEO), and it is interesting to try to analyze the experiment in this conceptual framework.

3.1 TGD view about the experiment

The natural quantum interpretation is that the photodiode following the particle is performing repeated quantum measurements, which in standard quantum theory do not affect the state of the particle after the first measurement. From the viewpoint of TGD inspired consciousness, which can be regarded as a generalization of quantum measurement theory forced by zero energy ontology (ZEO), the situation could be as follows.

  1. The photodiode following the particle would be like a conscious entity directing its attention to the particle and keeping it in focus. In the TGD Universe directed attention has as classical space-time correlates flux tubes connecting the attendee and the target of attention: in the ER-EPR correspondence the flux tubes are replaced with wormholes, which suit the GRT based framework better. Flux tubes also make possible entanglement between the attendee and the target. The two systems become a single system during the period of attention, and one could say that the attention separates the particle from the rest.
  2. Directed attention costs metabolic energy. The same would be true also now - the photodiode indeed requires an energy feed. Directed attention creates a mental image, and the conscious entity associated with the mental image can be regarded as a generalized Zeno effect or as a sequence of weak measurements.

    Tracking would thus mean that the particle's momentum is measured repeatedly, so that the particle is forced to continue with the same momentum. Gradually this would affect the thermal distribution and generate temperature and pressure gradients. Directed attention could also be seen as a mechanism of volition in quantum biology.

  3. This looks nice, but one can ask what about the collisions of the particle with the other molecules of the gas: don't they interfere with the Zeno effect? If the period between repeated measurements is shorter than the average time between the collisions of particles, this is not a problem. But is there any effect in this case? The directed attention or a sequence of quantum measurements could separate the particle from the environment by de-entangling it from the environment. Could it be that collisions would not occur during this period, so that the attendee and the target would form a subsystem de-entangled from the rest of the world?

3.2 ZEO variant of Maxwell's demon

Zero energy ontology (ZEO) forces one to consider a different arrangement producing energy somewhat like a perpetuum mobile, but without breaking the conservation of energy in any obvious manner. The idea pops into my mind occasionally, and I reject it every time and will do so again.

  1. Zero energy states (ZESs) are like physical events: pairs of positive and negative energy states with energies E and -E: this codes for energy conservation.
  2. One can have a quantum superposition of ZESs with different values of the energy E and with an average value <E> of the energy. In a state function reduction <E> can change, and in principle this does not break the conservation of energy, since one still has a superposition of pairs with energies E and -E.
  3. For instance, the probabilities for the states with energy E could be given by a thermal distribution parameterized by a temperature parameter T: one would have a "square root" of the thermodynamic distribution for the energies. A "square root" of thermodynamics is indeed forced by ZEO. One would have essentially entanglement in the time direction. Single particle states would realize the square root of a thermodynamical ensemble, which would no longer be a fictive notion.

    The coefficients of the state pairs would also have phases, and these phases would bring in something new and very probably very important in living matter. A system characterized by temperature T would not be as uninteresting as we think: there could be hidden phase information.
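The "square root of thermodynamics" idea can be illustrated with a small sketch (my own illustrative code, not from the source): amplitudes whose squared moduli give Boltzmann probabilities, decorated with random phases that carry extra information invisible to the probabilities; the energies and temperature below are arbitrary.

```python
import cmath
import math
import random

def sqrt_thermodynamics(energies, T, seed=7):
    """Amplitudes c_E = sqrt(exp(-E/T)/Z) * exp(i*phi): the squared moduli
    reproduce the Boltzmann distribution, while the phases are hidden extra
    information that drops out of the probabilities."""
    random.seed(seed)
    Z = sum(math.exp(-E / T) for E in energies)
    amps = [cmath.rect(math.sqrt(math.exp(-E / T) / Z),
                       random.uniform(0.0, 2.0 * math.pi)) for E in energies]
    probs = [abs(c) ** 2 for c in amps]                # phases drop out here
    E_avg = sum(p * E for p, E in zip(probs, energies))
    return amps, probs, E_avg

amps, probs, E_avg = sqrt_thermodynamics([0.0, 1.0, 2.0], T=1.0)
print(round(sum(probs), 6))  # 1.0: a normalized state with thermal weights
```

The point of the sketch is only that the same |c_E|² distribution is compatible with many different phase assignments, so a state "characterized by temperature T" can still hide phase information.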

If T increases in a state function reduction, then <E> increases. The reduction could also measure the value of E. Could the system increase its <E> in state function reductions? My proposed answer is "No".

In ordinary thermodynamics energy should be fed from the environment to increase <E>: how would the environment enter the game now?

  1. A state function reduction always reduces the entanglement of the system S with the environment, call it Senv. Could the increase of <E> be compensated by the opposite change of <E> in Senv? Indeed, the conservation of energy for a single state is expected to have a statistical counterpart: energy would come from the environment as a kind of metabolic energy. Therefore also the "square root of thermodynamics" would prevent a perpetuum mobile.
  2. This would be the case if the reduction measures the energy of the entire system Stot = S + Senv, so that Stot is always in an energy eigenstate with eigenvalue Etot, and Etot does not change in the reductions or in the unitary evolutions between them. Can one pose this condition?

3.3 Time reversal and apparent breaking of second law in zero energy ontology (ZEO)

ZEO based theory of consciousness (see this) forces one to also consider a genuine breaking of the second law.

  1. In ZEO a self as a conscious entity corresponds to a generalized Zeno effect, or equivalently to a sequence of analogs of weak measurements as "small" state function reductions. The state at the passive boundary of the causal diamond (CD) is unaffected, as are the members of the state pairs at it.

    The second boundary of the CD (the active boundary) shifts farther away from the passive one, and the members of the state pairs at it change, giving rise to the conscious experience of the self. Clock time, identified as the temporal distance between the tips of the CD, increases. This gives rise to the correspondence between clock time and subjective time identified as the sequence of weak reductions.

  2. Also "large" state function reductions are possible, and indeed unavoidable. The roles of the active and passive boundaries are exchanged, and a time reversal occurs for the clock time. One can say that the self dies and re-incarnates as a time-reversed self.

    At the next re-incarnation a self with the original arrow of clock time would be reborn and continue life from a time value shifted towards the future from the moment of death: its identity as a physical system could however be very different. One can of course wonder whether sleep could mean a life in the opposite direction of clock time, and wake-up a reincarnation in the usual sense.

    The time-reversed self need not have conscious memories of its former life cycle: only the collections of un-entangled subsystems at the passive boundary carry information about this period. A continuation of conscious experience could however take place in a different sense: the contents of consciousness associated with the magnetic body of the self could survive death, as near-death experiences indeed suggest.

  3. The time-reversed system obeys the second law, but with the arrow of time opposite to the normal one. The Italian physicist Fantappie already proposed that this occurs routinely in living matter, and he christened the entropy of time-reversed systems syntropy. Processes like the spontaneous assembly of complex molecules from their building bricks could be controlled by time-reversed selves.

    In TGD inspired biology motor actions could be seen as the generation of a signal propagating backwards in time, defining a sub-system with a reversed arrow of time and inducing the activity preceding the motor activity before the conscious decision leading to it is made: this with respect to geometric time. There are many effects supporting the occurrence of these time reversals.

  4. How does the possibility of time reversals relate to the second law? One might argue that the second law emerges from the non-determinism of state function reduction alone. The second law would transform to its temporal mirror image when one looks at the system from outside with an unchanged arrow of clock time.

    But does the second law continue to hold in a statistical sense as one takes the average over several incarnations? One might think that this is the case, since the generalized Zeno effect generalizes the ordinary Zeno effect, and in the limit of positive energy ontology one would effectively have a sequence of ordinary state function reductions leading to the second law.

3.4 Negentropy Maximation Principle (NMP)

TGD also predicts what I call the Negentropy Maximization Principle (NMP).

  1. The entanglement coefficients belong to an extension of rationals allowing interpretation both as real numbers and as p-adic numbers in the extensions of p-adics induced by the extension of rationals defining the adele.

    One can assign an ordinary entanglement entropy to the real sector of the adele and an entanglement negentropy to the p-adic sectors of adelic physics: for the latter the analog of the ordinary Shannon entropy is negative, and thus an interpretation as conscious information is possible. The information is assigned to the pairing defined by the entanglement, whereas the entropy is associated with the loss of precise knowledge about the state of a particle in an entangled state.

  2. One can also consider the difference between the sum of the p-adic entanglement negentropies and the real entanglement entropy as the negentropy. This quantity can be positive for algebraic extensions of rationals, and its maximal value increases with the complexity of the extension and with the p-adic prime.

    Also the information defined in this manner would increase during the evolution assignable to the gradual increase of the dimension of the algebraic extension of rationals, which can take place in "large" state function reductions (re-incarnations of the self): if the eigenvalues of the density matrix are algebraic numbers in an extension of the extension of rationals, a "large" state function reduction must take place.

  3. NMP would hold true in a statistical sense - being mathematically very much analogous to the second law - and would relate to evolution. In particular, one can understand why the emergence of intelligent systems is - rather paradoxically - accompanied by the generation of entropy. To have a large entanglement negentropy in the p-adic sectors one must have a large entanglement entropy in the real sector, since the same entanglement defines both.
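The sign flip between the real and p-adic entropies can be made concrete with a small sketch (my own illustrative implementation, not code from the source): for rational entanglement probabilities one replaces the probability inside the logarithm by its p-adic norm, S_p = -Σ q_k log|q_k|_p, and the result can be negative, i.e. a negentropy.

```python
import math
from fractions import Fraction

def p_adic_valuation(q: Fraction, p: int) -> int:
    """Exponent k with q = p^k * (m/n), where p divides neither m nor n."""
    if q == 0:
        raise ValueError("valuation of 0 is undefined")
    k, num, den = 0, q.numerator, q.denominator
    while num % p == 0:
        num //= p; k += 1
    while den % p == 0:
        den //= p; k -= 1
    return k

def real_entropy(probs):
    """Ordinary Shannon entropy -sum q log q (always >= 0)."""
    return -sum(float(q) * math.log(float(q)) for q in probs)

def p_adic_entropy(probs, p):
    """S_p = -sum q log|q|_p with the p-adic norm |q|_p = p^(-v_p(q))."""
    return -sum(float(q) * (-p_adic_valuation(q, p)) * math.log(p) for q in probs)

probs = [Fraction(1, 2), Fraction(1, 2)]  # maximally entangled 2-state system
S_real = real_entropy(probs)              # +log 2: real entanglement entropy
S_2 = p_adic_entropy(probs, 2)            # -log 2: the 2-adic "entropy" is negative
```

Here |1/2|_2 = 2, so the 2-adic entropy is -log 2 and the negentropy -S_2 = log 2 is positive, illustrating the claim that the same entanglement carries real entropy and p-adic information.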

3.5 Dark matter as phases of matter labelled by the hierarchy of Planck constants

The hierarchy of Planck constants heff/h=n is a further key notion in TGD inspired quantum biology.

  1. The hierarchy of Planck constants heff/h=n, implied by adelic physics as the physics of both sensory experience (real numbers) and cognition (p-adic number fields), is a basic prediction of TGD (see this). The Planck constant characterizes the dimension of the algebraic extension of rationals characterizing the cognitive representations, and is bound to increase, since the number of extensions with dimension larger than a given dimension is infinite whereas the number with a smaller dimension is finite.
  2. The ability to generate negentropy increases during evolution. A system need not however generate negentropy and can even reduce it. In a statistical sense the negentropic resources however increase: things get better in the long run. In biology the metabolic energy feed brings to the system molecules having valence bonds with heff/h=n larger than that for atoms (see this); this increases the ability of the system to generate negentropy, and in a statistical sense this leads to the increase of negentropy.

For details see the chapter About the Nature of Time or the article Maxwell's demon from TGD viewpoint.



To the index page