What's new in

Physics in Many-Sheeted Space-Time

Note: Newest contributions are at the top!



Year 2012



Could correlation functions, S-matrix, and coupling constant evolution be coded by the statistical properties of preferred extremals?

Quantum classical correspondence states that all aspects of quantum states should have correlates in the geometry of preferred extremals. In particular, various elementary particle propagators should have a representation as properties of preferred extremals. This would make it possible to realize the old dream of being able to say something interesting about coupling constant evolution although it is not yet possible to calculate the M-matrices and U-matrix. Hitherto everything that has been said about coupling constant evolution has consisted of rather speculative arguments, except for the general vision that it reduces to a discrete evolution defined by p-adic length scales. General first principle definitions are much more valuable than ad hoc guesses even if the latter give rise to explicit formulas.

In quantum TGD and also at its QFT limit various correlation functions in a given quantum state code for its properties. These correlation functions should have counterparts in the geometry of preferred extremals. Even more: these classical counterparts for a given preferred extremal ought to be identical with the quantum correlation functions for the superposition of preferred extremals.

  1. The marvelous implication of quantum ergodicity would be that one could calculate everything solely classically using classical intuition - the only intuition that we have. Quantum ergodicity would also solve the paradox raised by quantum classical correspondence for momentum eigenstates. Any preferred extremal in the superposition defining a momentum eigenstate should code for the momentum characterizing the superposition itself. This is indeed possible if every extremal in the superposition codes the momentum into the properties of classical correlation functions, which are identical for all of them.
  2. The only manner to possibly achieve quantum ergodicity is in terms of the statistical properties of the preferred extremals. It should be possible to generalize the ergodic theorem stating that the properties of a statistical ensemble are represented by a single space-time evolution in the ensemble of time evolutions. Quantum superposition of classical worlds would effectively reduce to a single classical world as far as classical correlation functions are considered. The notion of finite measurement resolution suggests that one must state this more precisely by adding that classical correlation functions are calculated with given UV and IR resolutions, the UV cutoff being defined by the smallest CD and the IR cutoff by the largest CD present.
  3. The skeptic inside me immediately argues that the TGD Universe is a 4-D spin glass so that this quantum ergodic theorem must be broken. In the case of ordinary spin glasses one has not only a statistical average for a fixed Hamiltonian but a statistical average over Hamiltonians: there is a probability distribution over the coupling parameters appearing in the Hamiltonian. Maybe the quantum counterpart of this is needed to predict the physically measurable correlation functions.

    Could this average be an ordinary classical statistical average over quantum states with different classical correlation functions? This kind of average is indeed taken in the density matrix formalism. Or could it be that the square root of thermodynamics defined by ZEO automatically gives rise to this average? The eigenvalues of the "hermitian square root" of the density matrix would code for components of the state characterized by different classical correlation functions. One could assign these contributions to different "phases".

  4. Quantum classical correspondence in the statistical sense would be very much like holography (now an individual classical state represents the entire quantum state). Quantum ergodicity would pose a rather strong constraint on quantum states. This symmetry principle could actually fix the spectrum of zero energy states to a high degree and therefore fix the M-matrices, given by products of a hermitian square root of the density matrix and a unitary S-matrix, and the unitary U-matrix having M-matrices as its orthonormal rows.
  5. In TGD inspired theory of consciousness the counterpart of quantum ergodicity is the postulate that the space-time geometry provides a symbolic representation for the quantum states and also for the contents of consciousness assignable to quantum jumps between quantum states. Quantum ergodicity would realize this strongly self-referential looking condition. The positive and negative energy parts of a zero energy state would be analogous to the initial and final states of a quantum jump, and the classical correlation functions would code for the contents of consciousness just as written formulas code for the thoughts of a mathematician and provide sensory feedback.
How should classical correlation functions be defined?
  1. General Coordinate Invariance and Lorentz invariance are the basic constraints on the definition. These are achieved for space-time regions with Minkowskian signature and 4-D M4 projection if linear Minkowski coordinates are used. This is equivalent with the contraction of the indices of tensor fields with the space-time projections of M4 Killing vector fields representing translations. Accepting this generalization, there is no need to restrict oneself to 4-D M4 projection, and one can also consider Euclidian regions identifiable as lines of generalized Feynman diagrams.

    Quantum ergodicity very probably however forces one to restrict the consideration to Minkowskian and Euclidian space-time regions and the various phases associated with them. Also CP2 Killing vector fields can be projected to the space-time surface and give a representation for classical gluon fields. These in turn can be contracted with M4 Killing vectors giving rise to gluon fields as analogs of graviton fields but with the second polarization index replaced with a color index.

  2. The standard definition of the correlation functions associated with classical time evolution is the appropriate starting point. The correlation function GXY(τ) for two dynamical variables X(t) and Y(t) is defined as the average GXY(τ)=∫T X(t)Y(t+τ)dt/T over an interval of length T, and one can also consider the limit T→ ∞. In the recent case one would replace τ with the difference m1-m2=m of the M4 coordinates of two points at the preferred extremal and integrate over the points of the extremal to get the average. The finite time interval T is replaced with the volume of the causal diamond in a given length scale. A zero energy state with given quantum numbers for the positive and negative energy parts of the state defines the initial and final states between which the fields appearing in the correlation functions are defined.
  3. What correlation functions should be considered? Certainly one could calculate correlation functions for the induced spinor connection giving electro-weak propagators, and correlation functions for CP2 Killing vector fields giving correlation functions for gluon fields using the description in terms of Killing vector fields. If one can uniquely separate from the Fourier transform a term of the form Z/(p2-m2) by its momentum dependence, the coefficient Z can be identified as the coupling constant squared for the corresponding gauge potential component, and one can in principle deduce the coupling constant evolution purely classically. One can imagine calculating spinorial propagators for string world sheets in the same manner. Note that also the dependence on color quantum numbers would be present, so that in principle all that is needed could be calculated for a single preferred extremal without the need to construct the QFT limit and to introduce color quantum numbers of fermions as spin-like quantum numbers (color quantum numbers correspond to a CP2 partial wave for the tip of the CD assigned with the particle).
  4. What about the Higgs like field? TGD in principle allows scalars and pseudo-scalars which could be called Higgs like states. These states are however not necessary for particle massivation, although they can represent particle massivation and must do so if one assumes that the QFT limit exists. p-Adic thermodynamics however describes particle massivation microscopically.

    The problem is that the Higgs like field does not seem to have any obvious space-time correlate. The trace of the second fundamental form is the obvious candidate but vanishes for preferred extremals which are both minimal surfaces and solutions of Einstein Maxwell equations with cosmological constant. If all spinor components except the right handed neutrino are localized at string world sheets for the general solution ansatz of the modified Dirac equation, the second fundamental form of these string world sheets at the level of the imbedding space defines a candidate for a classical Higgs field. A natural expectation is that string world sheets are minimal surfaces of the space-time surface. In general they are however not minimal surfaces of the imbedding space, so that one might achieve a microscopic definition of the classical Higgs field and its vacuum expectation value as an average of a one point correlation function over the string world sheet.
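
The time-average definition of the classical correlation function in item 2 above can be sketched numerically. The following is only an illustration of the averaging procedure, not a computation on an actual preferred extremal: the two sampled "dynamical variables" are hypothetical stand-ins for fields along a one-dimensional coordinate.

```python
import numpy as np

def correlation(X, Y, tau_max):
    """Estimate G_XY(tau) = (1/T) * integral X(t) Y(t+tau) dt from
    uniformly sampled series X, Y of equal length."""
    n = len(X)
    G = []
    for k in range(tau_max + 1):
        # average X(t) Y(t+tau) over the overlap of the two series
        G.append(np.mean(X[:n - k] * Y[k:]))
    return np.array(G)

# Illustration: two phase-shifted harmonics; G oscillates at the
# common frequency, with G(0) close to 0.5*cos(phase shift).
t = np.arange(0, 1000, 1.0)
X = np.cos(0.1 * t)
Y = np.cos(0.1 * (t + 5.0))
G = correlation(X, Y, tau_max=100)
```

In the text the interval T becomes the CD volume and the shift τ the M4 coordinate difference m, but the averaging step is structurally the same.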

For details and background see the chapter Coupling Constant Evolution in Quantum TGD, or the article with the title "Do geometric invariants of preferred extremals define topological invariants of space-time surface and code for quantum physics?".



Could hyperbolic 3-manifolds and hyperbolic lattices be relevant in zero energy ontology?

In zero energy ontology (ZEO) lattices in the 3-D hyperbolic manifold defined by H3 (t2-x2-y2-z2=a2, known as hyperbolic space to distinguish it from other hyperbolic manifolds) emerge naturally. The interpretation of H3 as a cosmic time=constant slice of the space-time of sub-critical Robertson-Walker cosmology (giving the future light-cone of M4 at the limit of vanishing mass density) is relevant now. ZEO leads to an argument stating that once the position of the "lower" tip of the causal diamond (CD) is fixed and defined as the origin, the position of the "upper" tip located at H3 is quantized so that it corresponds to a point of a lattice H3/G, where G is a discrete subgroup of SL(2,C) (a so called Kleinian group). There is evidence for the quantization of cosmic redshifts: a possible interpretation is in terms of hyperbolic lattice structures assignable to dark matter and energy. Quantum coherence in cosmological scales could be in question. This inspires several questions. How does crystallography in H3 relate to the standard crystallography in Euclidian 3-space E3? Are there general results about tessellations of H3? What about hyperbolic counterparts of quasicrystals? In this article standard facts are summarized and some of these questions are briefly discussed.
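
The defining condition t2-x2-y2-z2=a2 and its invariance under the SL(2,C) action (realized as Lorentz transformations) can be checked with a minimal sketch; the point and the boost rapidity below are arbitrary illustrative choices.

```python
import numpy as np

def minkowski_norm2(p):
    """t^2 - x^2 - y^2 - z^2 for a point p = (t, x, y, z)."""
    t, x, y, z = p
    return t**2 - x**2 - y**2 - z**2

def boost_x(eta):
    """Lorentz boost along x with rapidity eta, acting on (t, x, y, z)."""
    c, s = np.cosh(eta), np.sinh(eta)
    return np.array([[c, s, 0, 0],
                     [s, c, 0, 0],
                     [0, 0, 1, 0],
                     [0, 0, 0, 1]])

# A point of H3 with a = 2: t = sqrt(a^2 + |x|^2) lies on the hyperboloid
a = 2.0
x = np.array([1.0, -0.5, 3.0])
p = np.concatenate([[np.sqrt(a**2 + x @ x)], x])

# An element of a discrete subgroup G would act analogously: it maps H3
# to itself, which is what makes the quotient lattice H3/G possible.
q = boost_x(0.7) @ p
```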

For details see the article Could hyperbolic 3-manifolds and hyperbolic lattices be relevant in zero energy ontology? or the chapter TGD and Cosmology.



Do blackholes and Hawking evaporation have TGD counterparts?

The blackhole information paradox is often believed to have a solution in terms of holography, stating in the case of blackholes that the blackhole horizon can serve as a holographic screen representing the information about the surrounding space as a hologram. The situation is however far from settled. The newest challenge is the so called firewall paradox proposed by Polchinski et al (arXiv:1207.3123). Lubos Motl has written several postings about the firewall paradox and they inspired me to look at the situation in the TGD framework.

These paradoxes strengthen the overall impression that blackhole physics indeed represents the limit at which GRT fails and the outcome is a recycling of old arguments leading nowhere. Something very important is lacking. On the other hand, some authors like Susskind claim that the physics of this century more or less reduces to that of blackholes. I however see this endless tinkering with blackholes as a decline of physics. If superstring theory had been a success as a physical theory, we would have got rid of blackholes.

If TGD is to replace GRT, it must also provide new insights into blackholes, blackhole evaporation, the information paradox and the firewall paradox. This inspired me to look for what blackholes and blackhole evaporation could mean in the TGD framework and whether TGD can avoid the paradoxes. This kind of exercise also makes it possible to sharpen the TGD based view about space-time and quantum and to build connections to mainstream views.

For more details see the chapter TGD and Astrophysics or the little article "Do blackholes and Hawking evaporation have TGD counterparts?".



Do galaxies have preferred handedness?

New Scientist tells that spiral galaxies seem to have a tendency to be left handed along two lines of sight, which make an angle of 85 degrees with respect to each other. Galaxies would therefore be like biomolecules, which also have a preferred handedness in living matter.

Handedness in the geometric sense requires that the mirror image of the galaxy is not identical with the galaxy itself. In good approximation galaxies are however rotationally symmetric around the spin axis. In the dynamical sense handedness results if the total angular momentum of the galaxy is non-vanishing. Spiral galaxies indeed have spin.

What has been observed is that along these two lines of sight there are more left- than right-handed galaxies. The length of the line of sight was 1.2 billion ly in the survey of Michael Longo and 3.4 billion ly in the survey of Lior Shamir. The scale of our large void is about .1 billion light years, so that cosmic length scales are in question. The findings could of course be statistical flukes. Future surveys will resolve this issue.
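
Whether such an excess is a statistical fluke is a standard sign-test question. The sketch below uses purely hypothetical counts (not the actual survey numbers) to show how the significance of a left-handed excess could be assessed under the null hypothesis of equal probabilities.

```python
import math

def sign_test_pvalue(n_left, n_total):
    """Two-sided binomial test of the null hypothesis that left- and
    right-handed galaxies are equally likely (p = 1/2)."""
    # probability of a deviation at least this large in either direction
    k = max(n_left, n_total - n_left)
    tail = sum(math.comb(n_total, i) for i in range(k, n_total + 1)) / 2**n_total
    return min(1.0, 2 * tail)

# Hypothetical illustration: a 52% left-handed fraction in a sample of
# 2000 galaxies gives a p-value of roughly 0.08 - suggestive, not decisive.
p = sign_test_pvalue(n_left=1040, n_total=2000)
```

The point of the illustration is that percent-level asymmetries need large samples before a fluke can be excluded, which is why future surveys matter.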

The existence of preferred axes of symmetry in cosmic scales does not fit well with the isotropy and homogeneity assumptions of standard cosmology. The TGD based proposal for the formation of galaxies and other astrophysical structures relies on a fractal network of string like objects defined by Kähler magnetic flux tubes. These magnetic flux tubes were present in primordial cosmology and had a 1-D M4 projection at that time: they indeed defined string world sheets in M4. During the cosmic expansion the thickness of their M4 projections has increased gradually. These string like objects carry dark energy as magnetic energy and also the magnetic fields have become weaker during expansion. These flux tubes could also correspond to a gigantic value of Planck constant. Various astrophysical structures consisting of ordinary and dark matter would have formed via the decay of the magnetic energy of the flux tubes to ordinary and dark particles. The basic difference with respect to the inflationary scenario is that the energy of the inflaton field is replaced with the Kähler magnetic field and identified as dark energy.

Galaxies would be like pearls in a necklace. The dark matter and energy along the galactic necklaces causes a logarithmic 2-D gravitational potential producing constant velocity spectrum for distant stars. The basic prediction is that the galaxies can move freely along the flux tubes: this could explain the observed systematic motions in cosmic scale also challenging the basic assumptions of standard cosmology. Galaxies moving along different flux tubes can also collide if the flux tubes go near each other: this could be caused by their gravitational attraction already during the primordial period. One can imagine a cosmic highway network consisting of flux tubes intersecting at nodes and formed during the primordial period. Galaxies not obeying cosmic traffic rules could collide at crossings;-).

Since the necklace would have been much shorter during the primordial period, the proto galaxies possibly existing already at that time would have been very near to each other and dynamically strongly coupled. Therefore the correlation of the directions of the angular momenta of proto galaxies - roughly in the direction of the long string like flux tube - could be a remnant from this time. This remnant, manifesting itself as a definite handedness, would be stabilized by the conservation of angular momentum after the decoupling of the galaxies from each other. The large value of Planck constant could also make possible quantum coherence in astrophysical scales for dark matter and energy and in this manner explain the correlations.

That there are two axes of this kind would suggest that our galaxy resides at a junction of cosmic highways as the victim of a cosmic traffic accident: that is, in the node at which two cosmic necklaces touch. This is what I suggested in an earlier posting inspired by one particular finding challenging the assumption that galactic dark matter forms a spherical halo.

The finding was that near the galactic center there is a distribution of satellite galaxies and star clusters which rotate around the Milky Way in a plane orthogonal to the plane of the Milky Way. The observation could be interpreted by assuming that two orthogonal magnetic flux tubes (90 degrees is not far from 85 degrees) containing galaxies along them intersect at our galaxy. The newly found distribution of matter would correspond to matter rotating around the flux tube - call it B - in the same way as the matter of our own galaxy rotates around the second flux tube - call it A. These flux tubes could correspond to the lines of sight found in the two surveys.

For background see the chapter Cosmic Strings.



About deformations of known extremals of Kähler action

I have done a considerable amount of speculative guesswork to identify what I have used to call preferred extremals of Kähler action. The problem is that the mathematical problem at hand is extremely non-linear and there is no existing mathematical literature. One must proceed by trying to guess general constraints on the preferred extremals which look physically and mathematically plausible. The hope is that this net of constraints could eventually crystallize into a Eureka! Certainly the recent speculative picture also involves wrong guesses. The need to find an explicit ansatz for the deformations of known extremals based on some common principles has become pressing. The following considerations represent an attempt to combine the existing information to achieve this.

What might be the common features of the deformations of known extremals?

The dream is to discover the deformations of all known extremals by guessing what is common to all of them. One might hope that the following list summarizes at least some common features.

Effective three-dimensionality at the level of action

  1. Holography realized as effective 3-dimensionality also at the level of the action requires that the action reduces to 3-dimensional effective boundary terms. This is achieved if the contraction jαAα vanishes. This is true if jα vanishes or is light-like, or if it is proportional to the instanton current, in which case current conservation requires that the CP2 projection of the space-time surface is 3-dimensional. The first two options for j have a realization for known extremals. The status of the third option - proportionality to the instanton current - has remained unclear.
  2. As I started to work again with the problem, I realized that the instanton current could be replaced with a more general current j=*B∧J or concretely: jα= εαβγδBβJγδ, where B is a vector field and the CP2 projection is 3-dimensional, which it must be in any case. The contractions of j appearing in the field equations vanish automatically with this ansatz.
  3. Almost topological QFT property in turn requires the reduction of effective boundary terms to Chern-Simons terms: this is achieved by boundary conditions expressing weak form of electric magnetic duality. If one generalizes the weak form of electric magnetic duality to J=Φ *J one has B=dΦ and j has a vanishing divergence for 3-D CP2 projection. This is clearly a more general solution ansatz than the one based on proportionality of j with instanton current and would reduce the field equations in concise notation to Tr(THk)=0.
  4. Any of the alternative properties of the Kähler current implies that the field equations reduce to Tr(THk)=0, where T and Hk are shorthands for Maxwellian energy momentum tensor and second fundamental form and the product of tensors is obvious generalization of matrix product involving index contraction.
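
One mechanism behind the automatic vanishing above is the antisymmetry of the permutation symbol: a current of the form jα=εαβγδBβJγδ is automatically orthogonal to B, and with B replaced by A the same argument gives the vanishing of jαAα for the instanton current. A minimal numpy check at a single point with random tensor components (purely illustrative, flat metric assumed):

```python
import numpy as np
from itertools import permutations

# Build the 4-D permutation symbol eps[a, b, c, d]
eps = np.zeros((4, 4, 4, 4))
for perm in permutations(range(4)):
    # sign of the permutation from its inversion count
    inv = sum(perm[i] > perm[j] for i in range(4) for j in range(i + 1, 4))
    eps[perm] = (-1)**inv

rng = np.random.default_rng(0)
B = rng.normal(size=4)        # arbitrary vector field value at a point
F = rng.normal(size=(4, 4))
J = F - F.T                   # arbitrary antisymmetric 2-form value

# j^a = eps^{abcd} B_b J_cd
j = np.einsum('abcd,b,cd->a', eps, B, J)

# j^a B_a vanishes identically: eps is antisymmetric in the slots
# carrying the two copies of B, even though j itself is non-zero.
contraction = j @ B
```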

Could Einstein's equations emerge dynamically?

For jα satisfying one of the three conditions, the field equations have the same form as the equations for minimal surfaces except that the metric g is replaced with Maxwell energy momentum tensor T.

  1. This raises the question about the dynamical generation of a small cosmological constant Λ: T= Λ g would reduce the equations to those for minimal surfaces. For T=Λ g the modified gamma matrices would reduce to induced gamma matrices and the modified Dirac operator would be proportional to the ordinary Dirac operator defined by the induced gamma matrices. One can also consider a weak form of T=Λ g obtained by restricting the consideration to a sub-space of the tangent space so that the space-time surface is only "partially" a minimal surface, but this option is not so elegant although necessary for vacuum extremals other than CP2 type ones.
  2. What is remarkable is that T= Λ g implies that the divergence of T, which in the general case equals jβJβα, vanishes. This is guaranteed by one of the conditions for the Kähler current. Since also the Einstein tensor has a vanishing divergence, one can ask whether T= κ G+Λ g could be the general condition. This would give Einstein's equations with a cosmological term besides the generalization of the minimal surface equations. GRT would emerge dynamically from the non-linear Maxwell's theory although in a slightly different sense than conjectured (see this)! Note that the expression for G involves also second derivatives of the imbedding space coordinates, so that actually a partial differential equation is in question. If the field equations reduce to purely algebraic ones, as the basic conjecture states, it is possible to have Tr(GHk)=0 and Tr(gHk)=0 separately, so that also the minimal surface equations would hold true.

    What is amusing is that the first guess for the action of TGD was the curvature scalar. It gave analogs of Einstein's equations as a definition of conserved four-momentum currents. The recent proposal would give the analog of ordinary Einstein equations as a dynamical constraint relating the Maxwellian energy momentum tensor to the Einstein tensor and the metric.

  3. The minimal surface property is physically extremely nice since the field equations can be interpreted as a non-linear generalization of the massless wave equation: something very natural for a non-linear variant of Maxwell action. The theory would also be very "stringy" although the fundamental action would not be space-time volume. This can however hold true only for Euclidian signature. Note that for CP2 type vacuum extremals the Einstein tensor is proportional to the metric, so that for them the two options are equivalent. For their small deformations the situation changes and it might happen that the presence of G is necessary. The GRT limit of TGD indeed suggests that CP2 type solutions satisfy Einstein's equations with a large cosmological constant and that the small observed value of the cosmological constant is due to averaging and the small volume fraction of regions of Euclidian signature (lines of generalized Feynman diagrams).
  4. For massless extremals and their deformations T= Λ g cannot hold true. The reason is that for massless extremals the energy momentum tensor has a component Tvv which is actually quite essential for the field equations since one has Hkvv=0. Hence for massless extremals and their deformations T=Λ g cannot hold true if the induced metric has Hamilton-Jacobi structure meaning that guu and gvv vanish. A more general relationship of the form T=κ G+Λ g can however be consistent with non-vanishing Tvv but requires that the deformation has at most 3-D CP2 projection (CP2 coordinates do not depend on v).
  5. The non-determinism of vacuum extremals suggests for their non-vacuum deformations a conflict with the conservation laws. In fact, also massless extremals are characterized by a non-determinism with respect to the light-like coordinate, but light-likeness saves the situation. This suggests that the transformation of a properly chosen time coordinate of a vacuum extremal to a light-like coordinate in the induced metric, combined with Einstein's equations in the induced metric of the deformation, could make it possible to handle the non-determinism.

Are complex structure of CP2 and Hamilton-Jacobi structure of M4 respected by the deformations?

The complex structure of CP2 and Hamilton-Jacobi structure of M4 could be central for the understanding of the preferred extremal property algebraically.

  1. There are reasons to believe that the Hermitian structure of the induced metric ((1,1) structure in complex coordinates) could be a crucial property of the preferred extremals in the case of deformations of CP2 type vacuum extremals. The presence of a light-like direction is also an essential element, and 3-dimensionality of the M4 projection could be essential as well. Hence a good guess is that the allowed deformations of CP2 type vacuum extremals are such that the (2,0) and (0,2) components of the induced metric and/or of the energy momentum tensor vanish. This gives rise to the conditions implying Virasoro conditions in string models in quantization:

    gξiξj=0 , gξ*iξ*j=0 , i,j=1,2 .

    Holomorphisms of CP2 preserve the complex structure and Virasoro conditions are expected to generalize to 4-dimensional conditions involving two complex coordinates. This means that the generators have two integer valued indices but otherwise obey an algebra very similar to the Virasoro algebra. Also the super-conformal variant of this algebra is expected to make sense.

    These Virasoro conditions apply in the coordinate space for CP2 type vacuum extremals. One expects that similar conditions hold true also in field space, that is for M4 coordinates.

  2. The integrable decomposition M4(m)=M2(m)+E2(m) of the M4 tangent space into longitudinal and transversal parts (non-physical and physical polarizations) - the Hamilton-Jacobi structure - could be a very general property of preferred extremals and very natural since a non-linear Maxwellian electrodynamics is in question. This decomposition led rather early to the introduction of the analog of complex structure in terms of what I have called Hamilton-Jacobi coordinates (u,v,w,w*) for M4. (u,v) defines a pair of light-like coordinates for the local longitudinal space M2(m) and (w,w*) complex coordinates for E2(m). The metric would not contain any cross terms between M2(m) and E2(m): guw=gvw=guw*=gvw*=0.

    A good guess is that the deformations of massless extremals respect this structure. This condition gives rise to the analog of the constraints leading to Virasoro conditions, stating the vanishing of the non-allowed components of the induced metric: guu=gvv=gww=gw*w*=guw=gvw=guw*=gvw*=0. Again the generators of the algebra would involve two integers, the structure is that of a Virasoro algebra, and also the generalization to a super algebra is expected to make sense. The moduli space of Hamilton-Jacobi structures would be part of the moduli space of the preferred extremals and analogous to the space of all possible choices of complex coordinates. The analogs of infinitesimal holomorphic transformations would preserve the modular parameters and give rise to a 4-dimensional Minkowskian analog of the Virasoro algebra. The conformal algebra acting on CP2 coordinates acts in field degrees of freedom for Minkowskian signature.
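
In standard LaTeX notation (with w* written as \bar{w}) the conditions above amount to a statement about the allowed form of the line element:

```latex
% Vanishing components of the induced metric in
% Hamilton-Jacobi coordinates (u, v, w, \bar{w}):
g_{uu} = g_{vv} = g_{ww} = g_{\bar{w}\bar{w}}
       = g_{uw} = g_{vw} = g_{u\bar{w}} = g_{v\bar{w}} = 0 ,
% so that only the longitudinal and transversal cross terms survive:
ds^{2} = 2\, g_{uv}\, du\, dv \;+\; 2\, g_{w\bar{w}}\, dw\, d\bar{w} .
```
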

Field equations as purely algebraic conditions

If the proposed picture is correct, field equations would reduce basically to purely algebraic conditions stating that the Maxwellian energy momentum tensor has no common index pairs with the second fundamental form. For the deformations of CP2 type vacuum extremals T is a complex tensor of type (1,1) and the second fundamental form Hk a tensor of type (2,0) and (0,2), so that Tr(THk)=0 is true. This requires that the second light-like coordinate of M4 is constant so that the M4 projection is 3-dimensional. For Minkowskian signature of the induced metric Hamilton-Jacobi structure replaces the conformal structure. Here only the dependence of CP2 coordinates on the second light-like coordinate of M2(m) plays a fundamental role. Note that now Tvv is non-vanishing (and light-like). This picture generalizes to the deformations of cosmic strings and even to the case of vacuum extremals.

For background see the chapter Basic Extremals of Kähler action. See also the article About deformations of known extremals of Kähler action.



What small deformations of CP2 type vacuum extremals could be?

I became again interested in finding preferred extremals of Kähler action which would have 4-D CP2 and perhaps also M4 projections. This would correspond to the Maxwell phase that I conjectured a long time ago. Also deformations of CP2 type vacuum extremals would correspond to these extremals. The signature of the induced metric might also be Minkowskian. It however turns out that the solution ansatz requires Euclidian signature and that the M4 projection is 3-D, so that the original hope is not realized.

I proceed to the ansatz by the following arguments.

  1. Effective 3-dimensionality for the action (holography) requires that the action decomposes to a vanishing jαAα term plus a total divergence giving 3-D "boundary" terms. The first term certainly vanishes (giving effective 3-dimensionality and therefore holography) for

    DβJαβ=jα=0 .

    These are empty space Maxwell equations - something extremely natural. Also for the proposed GRT limit these equations are true.

  2. How to obtain empty space Maxwell equations jα=0? The answer is simple: assume self duality or its slight modification:

    J=*J

    holding for CP2 and CP2 type vacuum extremals or a more general condition

    J=k*J ,

    with k some constant not far from unity. * is the Hodge dual involving the 4-D permutation symbol. k=constant requires that the determinant of the induced metric is apart from a constant equal to that of the CP2 metric. It does not require that the induced metric is proportional to the CP2 metric, which is not possible since the M4 contribution to the metric has Minkowskian signature and cannot therefore be proportional to the CP2 metric.

  3. With these assumptions the field equations reduce to equations differing from the minimal surface equations only in that the metric g is replaced by the Maxwellian energy momentum tensor T. Schematically:

    Tr(THk)=0 ,

    where T is the Maxwellian energy momentum tensor and Hk is the second fundamental form - a symmetric 2-tensor defined by covariant derivatives of the gradients of imbedding space coordinates.

  4. It would be nice to have minimal surface equations since they are the non-linear generalization of massless wave equations. This is achieved if one has

    T= Λ g .

    Maxwell energy momentum tensor would be proportional to the metric! One would have dynamically generated cosmological constant! This begins to look really interesting since it appeared also at the proposed GRT limit of TGD.

  5. Very schematically, forgetting indices and being sloppy with signs, the expression for T reads as

    T= JJ -g/4 Tr(JJ) .

    Note that the product of tensors is obtained by generalizing matrix product. This should be proportional to metric.

    Self duality implies that Tr(JJ) is just the instanton density: it does not depend on the metric and is constant.

    For CP2 type vacuum extremals one obtains

    T= -g+g=0 .

    Cosmological constant would vanish in this case.

  6. Could it happen that for deformations a small value of cosmological constant is generated? The condition would reduce to

    JJ= (Λ-1)g .

    Λ must relate to the value of parameter k appearing in the generalized self-duality condition. This would generalize the defining condition for Kähler form

    JJ=-g (i2=-1 geometrically)

    stating that the square of Kähler form is the negative of metric. The only modification would be that index raising is carried out by using the induced metric containing also M4 contribution rather than CP2 metric.

  7. Explicitly:

    Jαμ Jμβ = (Λ-1)gαβ .

    Cosmological constant would measure the breaking of Kähler structure.

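The algebra of items 5-7 can be checked with a small numerical sketch (an illustration, not part of the source): take a 4×4 matrix J0 with J0·J0 = -I as a stand-in for the Kähler form in an orthonormal frame, treat the trace term as the metric-independent instanton density fixed to its CP2 value -4, and verify that J·J = (Λ-1)g gives T = Λg, the vacuum case Λ=0 giving T=0.

```python
import numpy as np

# Standard 4x4 symplectic building block: J0 @ J0 = -I
J0 = np.kron(np.eye(2), np.array([[0.0, 1.0], [-1.0, 0.0]]))
g = np.eye(4)  # flat Euclidean stand-in for the induced metric

def maxwell_T(J, instanton_trace=-4.0):
    # T = J.J - (g/4) Tr(JJ); for a (modified) self-dual J the trace term
    # is the instanton density, metric-independent and here fixed to -4.
    return J @ J - (g / 4.0) * instanton_trace

# Vacuum case: J.J = -g  =>  T = -g + g = 0, vanishing cosmological constant
assert np.allclose(maxwell_T(J0), 0.0)

# Deformed case: J.J = (Lam-1) g  =>  T = Lam * g
Lam = 0.2  # illustrative value
J = np.sqrt(1.0 - Lam) * J0          # then J @ J = (Lam - 1) * I
T = maxwell_T(J)
assert np.allclose(T, Lam * g)
```

The check only exercises the index algebra; the non-trivial physics is of course in whether such J can actually be induced on a preferred extremal.
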
One could try to develop the ansatz into a more detailed form. The most obvious guess is that the induced metric is, apart from a constant conformal factor, the metric of CP2. This would guarantee self-duality apart from a constant factor and jα=0. In complex CP2 coordinates the metric would be a tensor of type (1,1), whereas the CP2 Riemann connection would have only purely holomorphic or anti-holomorphic indices. Therefore the CP2 contributions to Tr(THk) would vanish identically. M4 degrees of freedom however bring in a difficulty. The M4 contribution to the induced metric should be proportional to the CP2 metric and this is impossible due to the different signatures. The M4 contribution to the induced metric breaks its Kähler property.

A more realistic guess, based on the attempt to construct deformations of CP2 type vacuum extremals, is the following.

  1. Physical intuition suggests that M4 coordinates can be chosen so that one has integrable decomposition to longitudinal degrees of freedom parametrized by two light-like coordinates u and v and to transversal polarization degrees of freedom parametrized by complex coordinate w and its conjugate. M4 metric would reduce in these coordinates to a direct sum of longitudinal and transverse parts. I have called these coordinates Hamilton Jacobi coordinates.
  2. w would be a holomorphic function of CP2 coordinates and therefore satisfy the massless wave equation. This would give hopes about a rather general solution ansatz. u and v cannot be holomorphic functions of CP2 coordinates. Unless either u or v is constant, the induced metric would have contributions of type (2,0) and (0,2) coming from u and v, which would break the Kähler structure and complex structure. These contributions would give non-vanishing contributions to all minimal surface equations. Therefore either u or v is constant: the coordinate line for the non-constant coordinate - say u - would be analogous to the M4 projection of a CP2 type vacuum extremal.
  3. With these assumptions the induced metric would remain a (1,1) tensor and one might hope that the Tr(THk) contractions vanish for all variables except u because there are no common index pairs (this holds if the non-vanishing Christoffel symbols for H involve only holomorphic or anti-holomorphic indices in CP2 coordinates). For u one would obtain the massless wave equation expressing the minimal surface property.
  4. The induced metric would contain only the contribution from the transversal degrees of freedom besides the CP2 contribution. The Minkowski contribution has however rank 2 as a CP2 tensor and cannot be proportional to the CP2 metric. It is however enough that its determinant is proportional to the determinant of the CP2 metric with a constant proportionality coefficient. This gives an additional non-linear constraint on the solution. One would have a wave equation for u (also w and its conjugate satisfy massless wave equations) and the determinant condition as an additional constraint.

    The determinant condition reduces, by the linearity of the determinant with respect to its rows, to a sum of conditions in which 0, 1, or 2 rows are replaced by the transversal M4 contribution to the metric, given that the M4 metric decomposes into a direct sum of longitudinal and transversal parts. Derivatives with respect to a particular CP2 complex coordinate appear linearly in this expression; they can depend on u via the dependence of the transversal metric components on u. The challenge is to show that this equation has non-trivial solutions.

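The row-linearity used above can be illustrated with a symbolic check (a generic matrix, not the actual induced metric): replacing one row by a sum of two rows splits the determinant into a sum of two determinants, which is what reduces the determinant condition to a sum of simpler conditions.

```python
import sympy as sp

# Generic symbolic 3x3 matrix and two symbolic replacement rows
a = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'a{i}{j}'))
r1 = sp.Matrix(1, 3, lambda i, j: sp.Symbol(f'r{j}'))
r2 = sp.Matrix(1, 3, lambda i, j: sp.Symbol(f's{j}'))

def with_row(M, row, k):
    # Return a copy of M with row k replaced
    N = M.copy()
    N[k, :] = row
    return N

# Multilinearity: det(... r1+r2 ...) = det(... r1 ...) + det(... r2 ...)
lhs = with_row(a, r1 + r2, 0).det()
rhs = with_row(a, r1, 0).det() + with_row(a, r2, 0).det()
assert sp.expand(lhs - rhs) == 0
```
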
What makes the ansatz attractive is that special solutions of Euclidian Maxwell empty space equations are in question, the equations reduce to non-linear generalizations of Euclidian massless wave equations in Minkowskian coordinate variables, and a cosmological constant pops up dynamically. These properties hold true also for the proposed GRT limit of TGD discussed here.

For background see the chapter Basic Extremals of Kähler action.



Does thermodynamics have a representation at the level of space-time geometry?

R. Kiehn has proposed what he calls Topological Thermodynamics (TTD) as a new formulation of thermodynamics. The basic vision is that thermodynamical equations could be translated to differential geometric statements using the notions of differential forms and Pfaffian systems. That TTD differs from TGD by a single letter is of course not a reason to ask whether some relationship between them might exist. Quantum TGD can however in a well-defined sense be regarded as a square root of thermodynamics in zero energy ontology (ZEO), and this leads one to ask seriously whether TTD might help to understand TGD at a deeper level. The thermodynamical interpretation of space-time dynamics would obviously generalize black hole thermodynamics to the TGD framework, and already earlier some concrete proposals have been made in this direction.

One can raise several questions. Could the preferred extremals of Kähler action code for the square root of thermodynamics? Could induced Kähler gauge potential and Kähler form (essentially Maxwell field) have formal thermodynamic interpretation? The vacuum degeneracy of Kähler action implies 4-D spin glass degeneracy and strongly suggests the failure of strict determinism for the dynamics of Kähler action for non-vacuum extremals too. Could thermodynamical irreversibility and preferred arrow of time allow to characterize the notion of preferred extremal more sharply?

It indeed turns out that one can translate Kiehn's notions to TGD framework rather straightforwardly.

  1. Kiehn's work 1-form corresponds to the induced Kähler gauge potential, implying that the vanishing of the instanton density for the Kähler form becomes a criterion of reversibility, and irreversibility is localized on the (4-D) "lines" of generalized Feynman diagrams, which correspond to Euclidian signature of the induced metric. The localization of heat production to generalized Feynman diagrams conforms nicely with the kinetic equations of thermodynamics based on reaction rates deduced from quantum mechanics. It also conforms with Kiehn's vision that dissipation involves topology change.
  2. Heat produced in a given generalized Feynman diagram is just the integral of instanton density and the condition that the arrow of geometric time has definite sign classically fixes the sign of produced heat to be positive. In this picture the preferred extremals of Kähler action would allow a trinity of interpretations as non-linear Maxwellian dynamics, thermodynamics, and integrable hydrodynamics.
  3. The 4-D spin glass degeneracy of TGD and the implied breaking of ergodicity suggest that the notion of global thermal equilibrium is too naive. The hierarchies of Planck constants and of p-adic length scales suggest a hierarchical structure based on CDs within CDs at the imbedding space level and space-time sheets topologically condensed at larger space-time sheets at the space-time level. The arrow of geometric time for quantum states could vary for sub-CDs and would have thermodynamical space-time correlates realized in terms of distributions of arrows of geometric time for sub-CDs, sub-sub-CDs, etc...
The hydrodynamical character of the classical field equations of TGD means that field equations reduce to local conservation laws for isometry currents and Kähler gauge current. This requires the extension of Kiehn's formalism to include besides forms and the exterior derivative also the induced metric, the index raising operation transforming 1-forms to vector fields, the duality operation transforming k-forms to (n-k)-forms, and the divergence, which vanishes for conserved currents.

For background see the chapter Basic Extremals of Kähler action or the article Does thermodynamics have a representation at the level of space-time geometry?.



Three blows against standard view about galactic dark matter

The standard view about dark matter is in grave difficulties.

  1. The assumption is that galactic dark matter forms a spherical halo around the galaxy: with a suitable distribution this would explain the constant velocity distribution of distant stars. Some time ago NASA reported that the Fermi telescope does not find support for dark matter in this sense in small faint galaxies that orbit our own.
  2. Another blow against standard view came now. A team using the MPG/ESO 2.2-metre telescope at the European Southern Observatory's La Silla Observatory, along with other telescopes, has mapped the motions of more than 400 stars up to 13,000 light-years from the Sun. Also in this case the signature would have been the gravitational effects of dark matter. No evidence for dark matter has been found in this volume. The results will be published in an article entitled "Kinematical and chemical vertical structure of the Galactic thick disk II. A lack of dark matter in the solar neighborhood," by Moni-Bidin et al. to appear in The Astrophysical Journal.
These findings support the TGD based model for galactic dark matter (to be carefully distinguished from dark matter as large hbar phases appearing in much smaller amounts and essential for life in TGD inspired quantum biology). The TGD based model postulates that the dominating contribution resides along long magnetic flux tubes, which have resulted from primordial cosmic strings during cosmic expansion and which contain galaxies around them like pearls in a necklace.

The distribution of dark matter would be concentrated around this string rather than forming a spherical halo around the galaxy. This would give rise to a gravitational acceleration behaving like 1/ρ, where ρ is the transversal distance from the string, explaining the constant velocity spectrum for distant stars. The killer prediction is that galaxies could move along the string direction freely. Large scale motions difficult to understand in standard cosmology have indeed been observed. It has also been known for a long time that galaxies tend to concentrate on linear structures.

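For orientation, the Newtonian side of the 1/ρ claim can be checked with a back-of-envelope sketch (standard gravity of an infinite line mass, not a TGD computation; the 220 km/s flat-curve velocity is a typical observed value used here as an input):

```python
G = 6.674e-11          # m^3 kg^-1 s^-2, Newton's constant
v = 2.2e5              # m/s, typical flat rotation velocity of a spiral galaxy

# Newtonian field of an infinite line mass: a(rho) = 2*G*lam/rho,
# so circular orbits obey v^2 = 2*G*lam independently of rho - a flat
# rotation curve without any spherical halo.
lam = v**2 / (2 * G)   # implied mass per unit length of the string/flux tube
print(f"lambda ~ {lam:.2e} kg/m")
```

The resulting linear density, of order 10^20 kg/m, is an illustration of the scale implied by the observed velocities, not a value taken from the text.
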
The third blow against the theory comes from the observation that the Milky Way has a distribution of satellite galaxies and star clusters, which rotate around the Milky Way in a plane orthogonal to the Milky Way's plane. One can visualize the situation in terms of two orthogonal planes such that one plane contains the Milky Way and the other one the satellite galaxies and globular clusters. The Milky Way itself has a size scale of .1 million light years whereas the newly discovered structure extends from about 33,000 light years to 1 million light years. The study is carried out by astronomers in Bonn University and will be published in the journal Monthly Notices of the Royal Astronomical Society. The lead author is Ph.D. student Marcel Pawlowski.

According to the authors, it is not possible to understand the structure in terms of the standard model for dark matter. This model assumes that galactic dark matter forms a spherical halo around the galaxy. The problem is the planarity of the newly discovered matter distribution. Not only the satellite galaxies and star clusters but also the long streams of material - stars and also gas - left behind them as they orbit around the Milky Way move in this plane. Planarity seems to be a basic aspect of the internal dynamics of the system. As a matter of fact, the quantum view about the formation of galaxies predicts planarity and also allows to understand the approximate planarity of the solar system: a common quantization axis of angular momentum, defined in the recent case by the direction of the string like object, with a gigantic value of gravitational Planck constant defining the unit of angular momentum, would provide a natural explanation for planarity.

The proposal of the researchers is that the situation is an outcome of a collision of two galaxies.

  1. An amusing co-incidence is that the original TGD inspired model for the formation of spiral galaxies assumed that they result when two primordial cosmic strings intersect each other. This would be nothing but the counterpart of closed string vertex giving also rise to reconnection of magnetic flux tubes. Later I gave up this assumption and introduced the model in which galaxies are like pearls in necklace defined by primordial cosmic strings which since then have thickened to magnetic flux tubes. These pearls could themselves correspond to closed string like objects or their decay products. Magnetic energy would transform to matter and would be the analog for the decay of inflaton field energy to particles in inflationary scenarios.
  2. As already noticed, in the TGD Universe galactic dark matter would correspond to the matter assignable to the magnetic flux tube defining the necklace, creating the 1/ρ gravitational acceleration explaining the constant velocity spectrum of distant stars in the galactic plane.
Could one interpret the findings by assuming two big cosmic strings which have collided and decayed after that to matter? Or should one assume that the galaxies existed before the collision?
  1. The collision would have induced the decay of portions of these cosmic strings to ordinary and dark matter with large value of Planck constant. The magnetic energy of the cosmic strings identifiable as dark energy would have produced the matter. It is however not clear why the decay products would have remained in the planes orthogonal to the colliding orthogonal flux tubes. According to the researchers the planar structures must have existed before the collision.
  2. This suggests that the two flux tubes pass near each other and that the galaxies have moved along the flux tubes, collided, and remained stuck to each other by gravitational attraction. The probability of this kind of galactic collisions depends on what one assumes about the distribution of string like objects. Due to their mutual gravitational attraction the flux tubes could be attracted towards each other to form web like structures forming a network of cosmic highways. The Milky Way would represent one particular node at which two highways form a cross-road. In this kind of situation the collisions, resulting as cross-road crashes, could be more frequent than those resulting from encounters of randomly moving strings. The galaxies arriving at this kind of nodes would tend to form a bound state and remain in the node. It could also happen that the second galaxy continues its journey but leaves matter behind in the form of satellite galaxies and globular clusters.

It is encouraging that the TGD based explanation for galactic dark matter survives all these three discoveries meaning grave difficulties for the halo model.

For background see the chapter Cosmic Strings.



Icarus refutes Opera

Icarus collaboration has replicated the measurement of the neutrino velocity. The abstract summarizes the outcome.

The CERN-SPS accelerator has been briefly operated in a new, lower intensity neutrino mode with about 1012 p.o.t. /pulse and with a beam structure made of four LHC-like extractions, each with a narrow width of about 3 ns, separated by 524 ns. This very tightly bunched beam structure represents a substantial progress with respect to the ordinary operation of the CNGS beam, since it allows a very accurate time-of-flight measurement of neutrinos from CERN to LNGS on an event-to-event basis. The ICARUS T600 detector has collected 7 beam-associated events, consistent with the CNGS delivered neutrino flux of 2.2× 1016 p.o.t. and in agreement with the well known characteristics of neutrino events in the LAr-TPC. The time of flight difference between the speed of light and the arriving neutrino LAr-TPC events has been analyzed. The result is compatible with the simultaneous arrival of all events with equal speed, the one of light. This is in a striking difference with the reported result of OPERA that claimed that high energy neutrinos from CERN should arrive at LNGS about 60 ns earlier than expected from luminal speed.

The TGD based explanation for the anomaly would not have been super-luminality but the dependence of the maximal signal velocity on the space-time sheet (see this): the geodesics in the induced metric are not geodesics of the 8-D imbedding space. In principle the time taken to move from point A (say CERN) to point B (say Gran Sasso) depends on the space-time sheets involved. One of these space-time sheets would be that assignable to the particle beam - a good guess is "massless extremal": along this the velocity is in the simplest case (cylindrical "massless extremals") the maximal signal velocity in M4×CP2.

Other space-time sheets involved can be assigned to various systems such as Earth, Sun, and Galaxy, and they contribute to the effect (see this). It is important to understand how the physics of a test particle depends on the presence of parallel space-time sheets. Simultaneous topological condensation to all the sheets is extremely probable so that at the classical level forces are summed. The same happens at the quantum level. The superposition of various fields assignable to parallel space-time sheets is not possible in the TGD framework and is replaced with the superposition of their effects. This allows to resolve one of the strongest objections against the notion of induced gauge field.

The outcome of the ICARUS experiment is not able to kill this prediction since at this moment I am not able to fix the magnitude of the effect. It is really a pity that such a fantastic possibility to wake up the sleeping colleagues is lost. I feel like a parent in a nightmare seeing his child drown and being unable to do anything.

There are other well-established effects in which the dependence of the maximal signal velocity on the space-time sheet is visible: one such effect is the observed slow increase of the time spent by a light ray to propagate to the Moon and back. The explanation is that the effect is not real but due to the change of the unit for velocity defined by the light-velocity assignable to the distant stars. The maximal signal velocity for Robertson-Walker cosmology is gradually increasing, and the anomaly emerges as an apparent anomaly when one assumes that the natural coordinate system assignable to the solar system (Minkowski coordinates) is the natural coordinate system in cosmological scales. The size of the effect is predicted correctly. Since the cosmic signal velocity defining the unit increases, the local maximal signal velocity, which is constant, seems to be decreasing and the measured distance to the Moon seems to be increasing.

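As a rough consistency check (a back-of-envelope sketch with an assumed Hubble constant of about 70 km/s/Mpc; the detailed prediction is in the cited chapter), a drift of the measured Earth-Moon distance at the Hubble rate would indeed be of centimeter-per-year order, the scale relevant for lunar laser ranging:

```python
# Apparent drift if the cosmological light-velocity unit changes at the
# Hubble rate: the measured distance seems to grow as d * H0.
d_moon = 3.844e8            # m, mean Earth-Moon distance
H0 = 2.27e-18               # 1/s, Hubble constant (~70 km/s/Mpc, assumed)
yr = 3.156e7                # s per year

drift = d_moon * H0 * yr    # apparent increase of measured distance per year
print(f"apparent drift ~ {drift*100:.1f} cm/yr")
```
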
For background see the chapter TGD and GRT of "Physics in Many-Sheeted Space-time".



Tachyonic models for neutrino superluminality killed

New Scientist reported about the sad fate of the tachyonic explanation of neutrino superluminality. The argument is extremely simple.

  1. You start by assuming a tachyon having negative mass squared, m(ν)2<0, and assume that a super-luminal velocity is in question. The point is that you know the value of the superluminal velocity: v=(1+ε)c, ε≈ 10-5. You can calculate the energy of the neutrino as

    E= |m(ν)|[-1+ v2/(v2-1)]1/2,

    |m(ν)|=(-m(ν)2)1/2 is the absolute value of formally imaginary neutrino mass.

  2. In good approximation you can write

    E= |m(ν)|[-1+ (2ε)-1]1/2 ≈ |m(ν)| (2ε)-1/2.

    The order of magnitude of |m(ν)| is not far from one eV - this irrespective of whether the neutrino is tachyonic or not. Therefore the energy of the neutrino is very small: not larger than a keV. This is in grave contradiction with what is known: the energy is measured using GeV as a natural unit, so that there is a discrepancy of at least 6 orders of magnitude. One can also apply energy conservation to the decay of a pion to a muon and a neutrino, and this implies that the muon has a gigantic energy: another contradiction.

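The estimate is easy to reproduce numerically (a sketch assuming |m(ν)| = 1 eV as the rough neutrino mass scale):

```python
import math

m = 1.0          # |m(nu)| in eV - rough neutrino mass scale (assumption)
eps = 1e-5       # (v - c)/c of the claimed OPERA superluminality

# Tachyonic dispersion E^2 = p^2 - |m|^2 with v = p/E and c = 1 gives
# E = |m| / (v^2 - 1)^(1/2).
v = 1 + eps
E_exact = m / math.sqrt(v**2 - 1)
E_approx = m / math.sqrt(2 * eps)      # leading order in eps

print(f"E ~ {E_exact:.0f} eV")         # a few hundred eV, nowhere near GeV
assert abs(E_exact - E_approx) / E_exact < 1e-4
```

With |m| of order 1 eV the tachyonic energy comes out in the hundreds of eV, which is the 6-plus orders of magnitude mismatch with the GeV beam energies mentioned above.
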
What is amusing is that this simple kinematic fact was not noticed from the beginning. In any case, this finding kills all tachyonic models of neutrino super-luminality assuming energy conservation, and gives additional support for the TGD based explanation in terms of maximal signal velocity, which depends on the pair of points of the space-time sheet connected by the signal and on the space-time sheet itself, characterizing also the particular kind of particle.

Even better, one can understand also the jitter in the spectrum of the arrival times, which has a width of about 50 ns, in terms of an effect caused by fluctuations of gravitational fields on the maximal signal velocity expressible in terms of the induced metric. The jitter could have an interpretation in terms of gravitational waves inducing fluctuations of the maximal signal velocity c#, which in the static approximation equals c#=c(1+Φgr)1/2, where Φgr is the gravitational potential.

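One can invert the numbers to see how large a fluctuation of c# the 50 ns jitter would require (a rough sketch assuming the roughly 730 km CERN-Gran Sasso baseline):

```python
# Order-of-magnitude check: fractional fluctuation of c# that would
# reproduce a ~50 ns arrival-time jitter over the CERN - Gran Sasso baseline.
L = 7.3e5            # m, CERN - Gran Sasso baseline (about 730 km)
c = 3.0e8            # m/s
t0 = L / c           # nominal flight time, ~2.4 ms

jitter = 50e-9       # s, width of the arrival-time jitter quoted above
dc_over_c = jitter / t0          # required fractional fluctuation of c#
dPhi = 2 * dc_over_c             # c# = c*(1+Phi_gr)^(1/2) => dc#/c ~ dPhi/2
print(f"dc#/c ~ {dc_over_c:.1e}, dPhi_gr ~ {dPhi:.1e}")
```

So a fluctuation of c# at the 10-5 level, i.e. of Φgr at the 10-4-10-5 level, would suffice; whether this is realistic for the proposed fractal gravitational waves is an open question.
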
Surprisingly, effectively super-luminal neutrinos would make possible a gravitational wave detector! The gravitational waves would however be gravitational waves in the TGD sense, having a fractal structure since they would correspond to bursts of gravitons resulting from the decays of large hbar gravitons emitted primarily, rather than to a continuous flow (see this). The ordinary detection criteria very probably exclude this kind of bursts as noise. The measurements of Witte attempting to detect absolute motion indeed observed this kind of motion, identifiable as a motion of Earth with respect to the rest frame of the galaxy but superposed with fractal fluctuations proposed to have an interpretation in terms of gravitational turbulence - gravitational waves.

For details see the earlier posting, the little article Could the measurements trying to detect absolute motion of Earth allow to test sub-manifold gravity? or the chapter TGD and GRT .



The basic objection against TGD

The basic objection against TGD is that induced metrics for space-time surfaces in M4× CP2 form an extremely limited set in the space of all space-time metrics appearing in the path integral formulation of General Relativity. Even special metrics like the metric of a rotating black hole fail to be imbeddable as an induced metric. For instance, one can argue that TGD cannot reproduce the post-Newtonian approximation to General Relativity since it involves linear superposition of gravitational fields of massive objects. As a matter of fact, Holger B. Nielsen - one of the very few colleagues who has shown interest in my work - made this objection at least two decades ago in some conference, and I remember vividly the discussion in which I tried to defend TGD with my poor English.

The objection generalizes also to induced gauge fields expressible solely in terms of CP2 coordinates and their gradients. This argument is not as strong as one might first think, since in the standard model only the classical electromagnetic field plays an important role.

  1. Any electromagnetic gauge potential has in principle a local imbedding in some region. Preferred extremal property poses strong additional constraints, and the linear superposition of massless modes, possible in Maxwell's electrodynamics, is lost.

  2. There are also global constraints leading to topological quantization, which plays a central role in the interpretation of TGD and leads to the notions of field body and magnetic body having non-trivial applications even in non-perturbative hadron physics. For a very large class of preferred extremals, space-time sheets decompose into regions having interpretation as geometric counterparts for massless quanta characterized by local polarization and momentum directions. Therefore it seems that TGD space-time is very quantal. Is it possible to obtain from TGD what we have used to call classical physics at all?

The imbeddability constraint has actually highly desirable implications in cosmology. The enormously tight constraints from imbeddability imply that imbeddable Robertson-Walker cosmologies with infinite duration are sub-critical so that the most pressing problem of General Relativity disappears. Critical and over-critical cosmologies are unique apart from a parameter characterizing their duration and critical cosmology replaces both inflationary cosmology and cosmology characterized by accelerating expansion. In inflationary theories the situation is just the opposite of this: one ends up with fine tuning of inflaton potential in order to obtain recent day cosmology.

Despite these and many other nice implications of the induced field concept and of sub-manifold gravity the basic question remains. Is the imbeddability condition too strong physically? What about the linear superposition of fields, which is exact for Maxwell's electrodynamics in vacuum and a good approximation also in gauge theories? Can one obtain linear superposition in some sense?

  1. Linear superposition for small deformations of gauge fields makes sense also in TGD, but for space-time sheets the field variables would be the deformations of CP2 coordinates, which are scalar fields. One could use preferred complex coordinates, determined up to an SU(3) rotation, to do perturbation theory, but the idea about perturbations of metric and gauge fields would be lost. This does not look promising. Could linear superposition for fields be replaced with something more general but physically equivalent?

  2. This is indeed possible. The basic observation is utterly simple: what we know is that the effects of gauge fields superpose. The assumption that the fields themselves superpose is un-necessary! This is a highly non-trivial lesson in what operationalism means for theoreticians tending to take these kinds of considerations as mere "philosophy".

  3. The hypothesis is that the superposition of effects of gauge fields occurs when the M4 projections of space-time sheets carrying gauge and gravitational fields intersect so that the sheets are extremely near to each other and can touch each other ( CP2 size is the relevant scale).

A more detailed formulation goes as follows.

  1. One can introduce common M4 coordinates for the space-time sheets. A test particle (or real particle) is identifiable as a wormhole contact and is therefore pointlike in excellent approximation. In the intersection region for M4 projections of space-time sheets the particle forms topological sum contacts with all the space-time sheets for which M4 projections intersect.

  2. The test particle experiences the sum of the various gauge potentials of the space-time sheets involved. For Maxwellian gauge fields linear superposition is obtained. For non-Abelian gauge fields the field strengths contain interaction terms between gauge potentials associated with different space-time sheets. Also the quantum generalization is obvious. The sum of the fields induces quantum transitions for states of individual space-time sheets in some sense stationary in their internal gauge potentials.

  3. The linear superposition applies also in the case of gravitation. The induced metric for each space-time sheet can be expressed as a sum of the Minkowski metric and a CP2 part having interpretation as a gravitational field. The natural hypothesis is that in the above kind of situation the effective metric is the sum of the Minkowski metric and the sum of the CP2 contributions from the various sheets. The effective metric for the system is well-defined, and one can among other things calculate a curvature tensor for it, which naturally contains the interaction terms between different space-time sheets. At the Newtonian limit one obtains linear superposition of gravitational potentials. One can also postulate that test particles move along geodesics in the effective metric. These geodesics are not geodesics in the metrics of the individual space-time sheets.

  4. This picture makes it possible to interpret classical physics as the physics based on effective gauge and gravitational fields, applying in the regions where there are many space-time sheets whose M4 projections have non-empty intersections. The loss of quantum coherence would be due to the effective superposition of very many modes having random phases.

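The Newtonian-limit statement of point 3 can be sketched numerically (a toy model, not a TGD computation: each sheet's "CP2 part" is encoded here only through a small potential φ, and the φ values are illustrative):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric

def sheet_metric(phi):
    # Newtonian-limit metric of one sheet: g00 = 1 + 2*phi, gii = -(1 - 2*phi)
    h = np.zeros((4, 4))
    h[0, 0] = 2 * phi
    h[1, 1] = h[2, 2] = h[3, 3] = 2 * phi
    return eta + h

phi1, phi2 = -1e-6, -3e-6    # potentials of two massive objects (toy values)

# Effective metric: Minkowski part counted once, deviations of the sheets summed
g1, g2 = sheet_metric(phi1), sheet_metric(phi2)
g_eff = eta + (g1 - eta) + (g2 - eta)

# At this limit the construction reproduces linear superposition of potentials
assert np.allclose(g_eff, sheet_metric(phi1 + phi2))
```

The point of the toy model is only that summing deviations from the Minkowski metric, rather than the fields themselves, automatically yields the superposed-potential metric at the Newtonian limit.
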
The effective superposition of the CP2 parts of the induced metrics gives rise to an effective metric which is not in general imbeddable in M4× CP2. Therefore many-sheeted space-time makes possible a rather wide repertoire of 4-metrics realized as effective metrics, as one might have expected, and the basic objection can be circumvented. In asymptotic regions, where one can expect single-sheetedness, only a rather narrow repertoire of "archetypal" field patterns of gauge fields and gravitational fields defined by topological field quanta is possible.

The skeptic can argue that this still need not make possible the imbedding of a rotating black hole metric as an induced metric in any physically natural manner. This might be the case, but it need of course not be a catastrophe. We do not really know whether the rotating black hole metric is realized in Nature. I have indeed proposed that TGD predicts new physics in rotating systems. Unfortunately, Gravity Probe B could not check whether this new physics is there, since it was located at the equator where the new effects vanish.

For background and more details see either the article Could the measurements trying to detect absolute motion of Earth allow to test sub-manifold gravity? or the chapter TGD and GRT.



Could the measurements trying to detect absolute motion of Earth allow to test sub-manifold gravity?

The history of the modern measurements of absolute motion is long - more than a century - beginning from the Michelson-Morley experiment of 1887. The reader can find on the web a list of important publications giving an overall view about what has happened. The earliest measurements assumed the aether hypothesis. Cahill identifies the velocity as a velocity with respect to some preferred rest frame and uses relativistic kinematics, although he misleadingly uses the terms absolute velocity and aether. The preferred frame could be the galaxy, or the system defining the rest system in cosmology. It would be easy to dismiss this kind of experiments as attempts to return to the days before Einstein, but this is not the case. It might be possible to gain unexpected information by this kind of measurements. Already the analysis of the CMB spectrum demonstrated that Earth is not at rest in the Robertson-Walker coordinate system used to analyze CMB data, and a similar motion with respect to the galaxy is quite possible and might serve as a rich source of information also in a GRT based theory.

In TGD framework the situation is especially interesting.

  1. Sub-manifold gravity predicts that the effective light-velocity, measured in terms of the M4 time taken for a light signal to propagate from point A to B, depends on the space-time sheet and on the points A and B, in particular the distance between A and B. The maximal signal velocity determined in terms of light-like geodesics has this dependence because light-like geodesics for the space-time surface are in general not light-like geodesics for M4 but light-like curves. The maximal signal velocity is in general smaller than its absolute maximum obtained for light-like geodesics of M4, depends on the particle, and could be larger than for photon space-time sheets. This might explain neutrino super-luminality (see this).

  2. Space-time sheets move with respect to larger space-time sheets, and it makes sense to speak about the motion of the solar system space-time sheet with respect to the galactic space-time sheet; this velocity is in principle measurable. The maximal signal velocity can be defined operationally in terms of the time needed to travel from point A to B, using Minkowski coordinates of the imbedding space as preferred coordinates. It depends on the pair of points involved: basically on the direction of and spatial distance along the effectively light-like geodesic defined by the sum of the perturbations of the induced metric for the space-time sheets involved. The question is whether one could say something interesting about the various experiments carried out to measure the absolute motion, interpreted in terms of the velocity of a space-time sheet with respect to, say, the galactic space-time sheet.

Also in Special Relativity the motion relative to the rest system of a larger system is a natural notion. In the General Relativistic framework the situation should be the same, but the mathematical description is somewhat problematic since Minkowski coordinates are not global due to the loss of Poincare invariance as a global symmetry. In practice one must however introduce linear Minkowski coordinates, and this makes sense only if one interprets the general relativistic space-time as a perturbation of Minkowski space. This background dependence is in conflict with general coordinate invariance. In sub-manifold gravity the situation is different.

Could the measurements performed already by Michelson-Morley and their followers provide support for sub-manifold gravity? This might indeed be the case, as the following arguments demonstrate. The basic results of the analysis are the following.

  1. The basic formulas for interferometer experiments using relativistic kinematics instead of Galilean kinematics are the same as the predictions derived by Cahill from different basic assumptions, and allow one to conclude that already the data of Michelson and Morley show the motion of Earth - not with respect to aether - but with respect to the galactic rest system.

  2. The only difference is the appearance of the maximal signal velocity c# for the space-time sheet, to which various gravitational fields contribute. In the static approximation the sum of the gravitational potentials contributes to c#.

  3. This allows one to utilize the results of Cahill, who has carried out a re-analysis of experiments trying to detect what he calls absolute motion using these formulas. Cahill has also replicated the crucial experiments of De Witte.

  4. The value of the velocity as well as its direction can be determined, and the results from various experiments are consistent with each other. The travel time data show a periodicity due to the rotation of Earth and the motion with respect to a preferred frame identifiable as the galactic rest frame. The tell-tale signature is the periodicity of the sidereal day instead of an exact 24 hour periodicity. The travel time for photons shows fluctuations which might be interpreted in terms of gravitational waves having fractal patterns. The TGD view about gravitons suggests that the emission takes place - not as a continuous stream - but in a burst-wise manner producing a fractal fluctuation spectrum. These fluctuations could show themselves as a jitter also in the neutrino travel times measured by the Opera collaboration.
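The second point above - the static contribution of the summed gravitational potentials to c# - can be illustrated with a minimal numerical sketch. It assumes the weak-field form c# = c0(1 + 2Φ/c02)1/2 discussed later in the text, together with representative values for the solar and galactic potentials at Earth's location; the galactic estimate |Φ| ≈ v2 with a circular velocity of roughly 220 km/s is only an order-of-magnitude guess.

```python
import math

c0 = 2.998e8          # maximal signal velocity in empty Minkowski space (m/s)

# Static weak-field gravitational potentials at Earth's location (m^2/s^2).
GM_sun = 1.327e20     # solar gravitational parameter (m^3/s^2)
r_earth = 1.496e11    # Earth-Sun distance (m)
phi_sun = -GM_sun / r_earth
phi_gal = -(2.2e5) ** 2   # order-of-magnitude guess: |Phi| ~ v_circ^2

# Superposition hypothesis: a test particle touching several space-time
# sheets experiences the sum of their gravitational fields.
phi_tot = phi_sun + phi_gal

# c# = c0 * sqrt(g_tt) with g_tt ~ 1 + 2*Phi/c0^2 in the static approximation.
c_sharp = c0 * math.sqrt(1 + 2 * phi_tot / c0 ** 2)

print(f"solar contribution |Phi|/c0^2   : {abs(phi_sun) / c0**2:.2e}")
print(f"galactic contribution |Phi|/c0^2: {abs(phi_gal) / c0**2:.2e}")
print(f"fractional reduction of c#      : {(c0 - c_sharp) / c0:.2e}")
```

With these inputs the galactic contribution (a few times 10-7) dominates over the solar one (about 10-8), so c# is reduced relative to c0 by roughly half a part per million.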

One must answer several questions before one can make predictions.

  1. The reduction of the light velocity in the case that there are many space-time sheets whose M4 projections intersect is described using common M4 coordinates for the space-time sheets. The induced metric for a given space-time sheet is the sum of the flat M4 metric and a CP2 contribution identified as the classical gravitational field. The hypothesis is that in good approximation a linear superposition of the effects of the gravitational fields holds true, in the sense that a test particle having wormhole throat contacts to these space-time sheets experiences the sum of the gravitational fields of the various sheets. A similar description holds for the induced gauge fields.

    From this one can identify the reduced light velocity in the static situation as c#=(gtt)1/2. In a more realistic, necessarily non-local treatment one calculates the effective light-velocity by assuming that the orbit of a massless state in the geometric optics approximation is a light-like geodesic for the sum of the metric perturbations: this curve is not a light-like geodesic of any of the space-time sheets.

    In general the effective metric defined in this manner is not imbeddable as an induced metric. This description of linear superposition allows one to circumvent the basic objection against TGD, which is that the induced metric and gauge fields are extremely strongly correlated since they are expressible in terms of CP2 coordinates and their gradients, and that the variety of metrics representable as induced metrics is extremely restricted. The same of course applies to gauge fields. This result is extremely important and would deserve a separate blog posting.

  2. How does the reduced light-velocity c# relate to the reduced light velocity in a medium, which is usually described by introducing the notions of free and polarization charges and magnetization and magnetization currents? In the simple situation when the polarization tensor is a scalar, the refractive index n characterizes the reduction of the light velocity: V=c#/n. Since the reduction of the maximal signal velocity due to sub-manifold gravity is purely gravitational while its reduction in a medium has an electromagnetic origin, one can argue that the two notions have nothing to do with each other. Hence c# should be treated as a local concept, possibly depending on the direction of motion, obtained by taking the limit when the light-like geodesic with respect to the effective metric becomes infinitesimally short. This dependence can be deduced by comparing light-like geodesics emanating from a point and calculating the maximal signal velocity as a function of the direction angles of the light-like geodesic and the spatial distance along it.

  3. What happens to the boundary conditions between different media deduced from the structural equations of classical electrodynamics and Maxwell's equations? For instance, does the refraction of light take place also when c# changes? It might of course be that c# changes only in astrophysical scales - maybe at the surfaces of astrophysical objects - and stays constant at the boundaries between two media in laboratory scale, but nevertheless this issue should be understood. The safest guess is that at the kinematic level local Lorentz invariance still holds true, so that the tangential wave vectors, identifiable in terms of massless momentum components, are conserved at boundaries, and one obtains the law of refraction also now.

  4. In the TGD Universe space-time sheets can move with respect to each other, and the larger space-time sheet defines the analog of an absolute reference frame in this kind of situation. Also in cosmology one can assign to the CMB radiation a specific frame, and Earth indeed moves with respect to it rather than being at rest in the global Robertson-Walker coordinate system. For Earth the solar system is one such frame; the galactic rest system is a second such preferred reference frame. To both one can assign linear Minkowski coordinates, which play a special physical role. The obvious question is whether this kind of motion could be detected and whether the measurements carried out to detect absolute motion could allow one to deduce this kind of velocity with respect to the galactic rest system.

  5. The question is how photons in a medium behave when this kind of motion is present. Assume that the medium is characterized by a refractive index n so that one has V=c#/n, and that the space-time sheet moves with respect to a larger one with velocity v characterized by direction angles and magnitude. Here c#<c0 is the maximal signal velocity at the space-time sheet. For definiteness assume that the larger space-time sheet corresponds to the galaxy.

    1. In the measurements of light velocity the light propagates in a medium with velocity V<c#<c0, and the question is how to describe this mathematically. In his experiments Michelson assumed summation of velocities based on Galilean invariance. This is of course wrong, and Special Relativity suggests summation of velocities according to the relativistic formula:

      V → V1(v,u) = [V + vu]/[1 + uVv/c#2] ,

      V = c#/n , u = cos(θ) .

      Here θ is the angle between the light signal and the velocity v. This formula might be justified in the TGD framework: also the photon has a very small but non-vanishing mass, so the summation formula for velocities can be applied. This demands the assumption of local Lorentz invariance made routinely in General Relativity. It also requires that the complex process of repeated absorption and emission of photons is described as propagation of the photon with the reduced velocity.

    2. This predicts two effects which might be seen in the experiments trying to measure the absolute velocity and its direction. Both the solar and galactic gravitational fields and also their perturbations - even gravitational waves - can affect the signal velocity via fluctuations in c# deduced from the superposition of the perturbative contributions of CP2 to the effective induced metric. The second effect is due to the change of the propagation time. This change depends on the propagation direction. Note however that also c# in general has a directional dependence, and only when the components gti vanish is this dependence trivial. In the Newtonian approximation the assumption gti≈ 0 is made and corresponds to the description of the situation in terms of the gravitational potential.
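The velocity-addition formula above is easy to check numerically. A minimal sketch (all numbers purely illustrative; the refractive index of air, n ≈ 1.00029, is used only as an example of a gas): for V = c# the formula returns c# in every direction, as local Lorentz invariance requires, while for V = c#/n with n > 1 the one-way velocity acquires a directional dependence.

```python
import math

def add_velocity(V, v, theta, c_sharp):
    """Relativistic addition of the signal velocity V in the medium and the
    sheet velocity v, for a signal at angle theta to v (formula in the text)."""
    u = math.cos(theta)
    return (V + v * u) / (1 + u * V * v / c_sharp ** 2)

c_sharp = 1.0   # units with c# = 1
v = 1e-3        # sheet velocity, roughly 300 km/s in these units

# Vacuum case V = c#: the result is c# in every direction.
for theta in (0.0, math.pi / 3, math.pi):
    assert abs(add_velocity(c_sharp, v, theta, c_sharp) - c_sharp) < 1e-12

# Medium with n slightly above 1: the one-way velocity now depends on the
# direction, which is what makes the motion detectable in principle.
n = 1.00029
V = c_sharp / n
forward = add_velocity(V, v, 0.0, c_sharp)
backward = add_velocity(V, v, math.pi, c_sharp)
print(f"forward - backward velocity difference: {forward - backward:.3e}")
```

To first order the forward-backward difference is 2v(1 - V2/c#2), so it vanishes exactly in vacuum and is suppressed by the factor (1 - 1/n2) in a dilute gas.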
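The guess made in question 3 - tangential wave vectors conserved at a boundary where c# changes - yields a Snell-type law sin(θ1)/c1 = sin(θ2)/c2. The following sketch is only an illustration of that kinematics; the 10% change of c# across a boundary is an invented number, not a prediction.

```python
import math

def refracted_angle(theta_in, c_in, c_out):
    """Angle of the transmitted ray when the maximal signal velocity changes
    from c_in to c_out, assuming the tangential wave vector
    k_t = (omega / c) * sin(theta) is conserved at the boundary."""
    s = math.sin(theta_in) * c_out / c_in
    if abs(s) > 1.0:
        raise ValueError("total internal reflection")
    return math.asin(s)

# Illustrative values only: signal entering a region where c# is 10% smaller.
theta_in = math.radians(30.0)
theta_out = refracted_angle(theta_in, 1.0, 0.9)
print(f"incident 30.0 deg -> refracted {math.degrees(theta_out):.1f} deg")
```

The ray bends toward the normal when c# decreases, exactly as ordinary light does when entering an optically denser medium.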

For background and more details see either the article Could the measurements trying to detect absolute motion of Earth allow to test sub-manifold gravity? or the chapter TGD and GRT.



To the index page