Wednesday, December 31, 2014

Transition from quantum to classical as a transition from P to NP?

There was an interesting link at Thinking Allowed Original related to computation. The argument is that the Schrödinger equation becomes computationally so complex above some particle number (and the related scale) that as a computational problem it belongs to the class NP above this scale, thus requiring in practice exponentially increasing time as a function of the number of points involved in the discretization defining the calculational accuracy. In class P the dependence would be only polynomial.

For some systems exact solutions are known, and they would be computationally simple. The hydrogen atom and the harmonic oscillator are the standard examples. If NP does not reduce to P by some ingenious computational trick, one can conclude that above some critical particle number the Schrödinger equation cannot be solved to a given accuracy in a time depending only polynomially on the accuracy, characterized by the number of points involved in the discretization.

The author concludes that the world becomes classical above this particle number, to which also a scale is associated if the system has constant density. For instance, macroscopic quantum coherence (superconductivity, superfluidity, and quantum biology!) would be impossible because it would make the solution of the Schrödinger equation computationally so complex that it would not be possible in polynomial time. I definitely refuse to swallow this argument! I do not personally believe in arguments about what is physically possible based on what we can do with our present technology and our limited ideas about what computation is.

Some objections

Below are some more detailed objections to the arguments of the article.

  1. The notion of computation developed by Turing, which began as a model of how the women in the offices of that time did calculations (men preferred warfare to boring calculations!), might be too simplistic. It was not based on physics, and the philosophy of consciousness behind the Turing test is behavioristic. The view describing us as automatons reacting to inputs has been dead for a long time.

    What these women manipulated were basically natural numbers. I believe that reduction to the manipulation of rational numbers, or of numbers in their extensions, is an essential element not only of computation but also of the formation of cognitive representations (to be distinguished from cognition!). The reason would be that cognition has p-adic physics as a physical correlate, and the systems doing calculations are in the intersection of realities and p-adicities, where everything is expressible in terms of rationals or numbers in extensions of rationals. Algebraic geometers have for centuries studied this intersection of cognition and matter, consisting of the subsets of rational or algebraic points of algebraic surfaces. Fermat's last theorem says that the 2-D surface x^n + y^n = z^n in 3-D space has no non-trivial rational points for n > 2.

  2. Very intricate questions relate to what is discretized. The most naive discretization is at the level of space-time, where one has a continuum. A less naive discretization is at the level of the Hilbert space of solutions of the Schrödinger equation, where discretization is very natural. For instance, superpositions of basis states can be restricted to complex rational coefficients; at the level of individual solution ansätze everything remains smooth and continuous in this case.

  3. Problem solving is much more than the application of an algorithm. For instance, the exact solutions of the Schrödinger equation were not found numerically but by discovery involving conceptual thinking totally outside the capabilities of a classical computer. Here, I think, lies the fundamental limitation of the AI approach to the brain.

  4. The Schrödinger equation is assumed to be a universal equation applying in all scales. This is a highly questionable assumption, but it is blindly made even in quantum gravity at the Planck scale, although the Schrödinger equation is non-relativistic. This is one of the strange implications of formula blindness.

Does a phase transition from P to NP occur for some critical particle number?

Above some scale corresponding to a critical particle number N, P would transform to NP in a kind of phase transition. Mathematically this looks imaginable if one talks only about the particle number, which translates to the dimension D = 3N of the effective configuration space E^3N of the N-particle system for which one is solving the Schrödinger equation. One could speak of a critical dimension. A P-to-NP phase transition could occur, but the assumption that it would imply that the world becomes classical looks like nonsense to me.
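
The exponential cost of the naive approach is easy to make concrete. A minimal sketch (my own illustration, not taken from the linked article): if each of the 3N configuration space dimensions is discretized with M points, the number of grid points, and hence the minimal work of a brute-force solver, grows like M^(3N).

```python
# Minimal illustration (not from the linked article): a uniform grid over the
# 3N-dimensional configuration space of an N-particle wave function grows
# exponentially in the particle number N.

def grid_points(n_particles: int, points_per_axis: int = 10) -> int:
    """Number of points in a uniform grid over the 3N-dimensional configuration space."""
    return points_per_axis ** (3 * n_particles)

for n in (1, 2, 5, 10):
    print(f"N = {n:2d}: {grid_points(n):.3e} grid points")
# N =  1: 1.000e+03 grid points
# N =  2: 1.000e+06 grid points
# N =  5: 1.000e+15 grid points
# N = 10: 1.000e+30 grid points
```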

Personally, I do not believe that dimension is relevant at the fundamental level. In my opinion, the symmetries of the system are the relevant thing. For the Schrödinger equation these would be the symmetries of the potential function, and in arbitrary dimensions the potential can have symmetries making the equation exactly solvable. The complexity of the potential function would be the relevant thing. It is easy to believe that a randomly chosen potential function is not equivalent to the Coulomb potential, but whether a computer can ever discover this by performing a discrete algorithm is far from clear to me. Perhaps here is the basic difference between conscious intellect and computer!

In the TGD framework the geometry of the infinite-dimensional world of classical worlds (WCW) exists only if it has a maximal group of isometries, and there are excellent reasons to believe that this geometry, and thus physics, is unique. If hyperfinite factors of type II1 describe the physics of the TGD world, then these infinite-dimensional systems are finite-dimensional in an arbitrarily good approximation. The resulting physics is such that it is calculable. If the calculations we want to do reduce to simulations of physics, P might always be achieved. There could of course also be calculations which do not reduce to P! Finite measurement resolution, to be considered later, could reduce NP to P and also prevent drowning in irrelevant information.

Does a phase transition from P to NP occur in some scale?

The authors assign a critical scale to the critical particle number N. That a transition from quantum to classical would occur at some scale has been a belief for a long time, but we have begun to learn that the idea is wrong. Already superconductivity is in conflict with it, and bio-systems are probably macroscopically quantum coherent. The only logical conclusion is that the world is quantal in all scales. It is our conscious experience that makes it look classical (but only in those aspects in which it is!). When Schrödinger's cat is perceived, it becomes either dead or alive.

  1. I believe that the notion of scale is fundamental, but in the sense of fractality, according to which the world looks roughly the same in all scales. The notion of finite measurement/cognitive resolution is the essential concept implicitly involved. Resolution hierarchies make it possible to avoid the weird conclusion that the world becomes classical in some scale or for some critical particle number. This is not taken into account in the formulation of the Schrödinger equation. The Turing paradigm does not speak about computational precision either.

  2. In quantum field theories the notion of length scale resolution cannot be avoided and is part of the renormalization program, but it has remained the ugly duckling of theoretical physics. I have worked with the problem in the TGD framework and proposed that the von Neumann algebras known as hyperfinite factors of type II1, already mentioned, provide an extremely elegant realization of a hierarchy of resolutions in terms of their inclusions.

  3. Length scale resolution means that one is not interested in details below some resolution scale. There would be an infinite hierarchy of resolution scales. Actually there would be several kinds of them: the p-adic length scale hierarchy, the dark matter hierarchy labelled by the values of the Planck constant heff = n×h, the hierarchy of causal diamonds of ZEO with increasing sizes, and the self hierarchy representing a hierarchy of abstractions. The longer the scale, the more abstract and less detailed the representation of the world by conscious experience: a good boss at the top does not waste time on details inessential for the job.

  4. Sticking to rational numbers in a given measurement resolution is a kind of gauge fixing and is also forced by cognitive representability (what is cognitively representable must be in the intersection of p-adicities and reality!). There are indeed good arguments suggesting that finite measurement resolution, realized in terms of inclusions of hyperfinite factors, can be represented physically as a dynamical gauge invariance. The answers of a computation in finite resolution would be gauge equivalence classes. Usually finite measurement resolution is regarded as a limitation. However, it makes it possible to avoid drowning in inessential information, and the connection with dynamical gauge invariance fits this aspect nicely.

Could one achieve P=NP by some new physics?

  1. It is not known whether P is different from NP even within the Turing paradigm: we cannot exclude the possibility that all NP problems could be solved in polynomial time even using Turing machines. Neither do we know whether there could exist much more effective manners of computation allowing the reduction of NP to P. If this is the case, the argument of the article fails. Here biology might teach something to us.

  2. Zero Energy Ontology (ZEO) suggested a possible mechanism to transform NP to P, which I considered seriously for some time. One can imagine doing the calculation in a finite geometric time by decomposing it into parts which happen in opposite time directions. Instead of proceeding in a single time direction, one goes forth and back in time until the calculation is done. This looked nice, but a problem emerged when I understood in detail how the arrow of time emerges. The quantum state is a superposition over CDs with various sizes, and in a given sequence of state function reductions on a fixed boundary the average distance of the non-fixed boundary from it increases in a statistical sense. This gives rise to self and to the flow of time as an increase of the average CD size. It also means that one cannot do calculations instantaneously by this time zig-zag mechanism.

Is Turing machine the final model for computation?

Turing machines can emulate, that is mimic, each other. Mimicry is a basic manner of learning. Could one generalize computation to simulation? Could simulation allow one to achieve what one achieves by computing? Could a computer act as a simulator of other physical systems? Or can one imagine other kinds of computation-like activities in which one could avoid the unreasonable restrictions of unnecessary numerical accuracy?

In the TGD framework one can consider a view of computation-like processes which need not reduce to the Turing machine or to its quantum analog realized as a unitary evolution of qubits.

  1. A concrete realization of an analog of a topological quantum computer in the TGD Universe might be in terms of braids realized as magnetic flux tubes. The braiding of flux tubes induced by the interaction with the external world would form topological representations of the interaction. For instance, a nerve pulse pattern passing through an axon would generate a braiding of the flux tubes connecting the lipids of the lipid layers of the axonal membrane to the DNA codons of the cell nucleus or to the tubulins of microtubules.

    One could call these braidings quantum computer programs, but the computation would be topological and could be much more general than numerical computation, free from the slavery of precise numbers. Rational numbers in a given resolution, realized in terms of HFFs, would be only representatives of numbers in a given computational resolution. A kind of gauge fixing would be in question.

  2. Communication is part of computation, and reconnections of flux tubes, making possible the transfer of dark photons or dark charged particles between subsystems, would build the communication lines. Large heff would make macroscopic quantum coherence possible. The computer could modify itself by heff-changing phase transitions, which in biology would be essential for basic processes such as DNA replication. Resonant communications, in which communication occurs only between systems with the same cyclotron frequencies, would become possible. This kind of problem solving might be very different from that using a Turing machine or quantum computation as it is usually understood.

  3. Zero energy ontology forces one to change the view of computation. In ZEO a space-time surface connecting the boundaries of a causal diamond would be the analog of a classical computer program. The counterpart of a quantum computation in the TGD Universe would be a self/mental image and would correspond to a sequence of repeated state function reductions to a fixed boundary of the CD, leaving the part of the state at it unchanged. This would define the counterpart of the unitary time evolution of the Schrödinger equation.

  4. In quantum computation the outcome is represented in terms of probabilities. In ZEO these probabilities would be for states associated with ensembles of sub-CDs affected by the classical fields associated with the CD. A kind of quantum-classical correspondence in which field patterns (wavelengths, frequencies) correspond to states would be realized. EEG could be an example of this in the case of the brain.

  5. In classical computation the question is local with respect to time: what is the final state at a later time given the initial state? Field equations such as the Schrödinger equation would give the algorithm to calculate the final state.

    In ZEO the notion of classical computation is different. The question is formulated in terms of boundary values and is non-local with respect to time: can one find a space-time surface connecting two 3-surfaces at the opposite boundaries of the CD as a solution of the field equations?

    How does this view affect the notion of computation? In mathematics, theorem proving corresponds to the first kind of classical computation. Answering the question of whether some theorem holds true or not would be in the spirit of the TGD view.

  6. Can one speak about P and NP if a conscious entity becomes the counterpart of a computation? What about the quantum jump to the opposite boundary of the CD at some level of the hierarchy of selves? Could it correspond to a eureka moment having no representation as a classical computation? Could these eureka moments bring in discovery (of a new axiom) as something transcending the limitations of a scientist who only computes and is bound to remain within the confines of what is already known?

To conclude, the Turing paradigm gains its power from precise axiomatics, and I can only admire the incredibly refined theorems of people studying quantum computation theoretically. In my opinion, however, the Turing paradigm and its quantum counterpart are far too limited approaches, based on questionable assumptions (a behavioristic view of consciousness, the reduction of problem solving to the application of an algorithm, the complete separation of computation from physics and biology), and the less axiomatic approaches of the physicist and the consciousness theorist look more attractive to me.

Tuesday, December 30, 2014

Sensory organs as seats of primary sensory qualia and memory recall as seeing in time direction

There were two very interesting links in Thinking Allowed Original related to neuroscience. Thanks to Ulla for doing all the hard work and thanks also to the participants for providing these links. It is a pity that I do not have time for more comments.

The first link was about the relationship between visual perception and imagination. Sensory input is signalling from bottom to top: from the retina to the cortex (and to the magnetic body in the TGD Universe). Now it has been learned that imagination proceeds from top to bottom.

The upside-down character of imagination is in my opinion a very deep result supporting the TGD view that the sensory organs, rather than the brain, give rise to the primary sensory experience, the qualia; the brain would perform the conceptualization, the division of the perceptive field into named objects, the formation of associations, etc. This requires virtual sensory input as feedback from the brain to the sensory organs, which drives the sensory mental images to standard ones by a kind of iteration. Our view of the world is a work of art, not a one-to-one copy. That our eyes move during REM sleep is one piece of support for this picture. This top-down character of imagination might also relate to the fact that the retina (or some part of it, I do not remember the exact details) is upside down: this looks idiotic from the engineering point of view but could perhaps be understood in the TGD framework. Instead of an optimal input from the outside world one would have an optimal input from the internal world (the dark photon signals, producing biophotons in heff-reducing, photon-energy-conserving phase transitions, could be rather weak).

Imagined pictures would be due to virtual sensory input from the associative cortex (or even the magnetic body) to lower levels and down to the retina, whereas the dark photons would generate virtual sensory input "from within". Ordinary visual input would come from the outside world. During dreams the virtual sensory input would not be masked by genuine sensory input. During hallucinations the virtual sensory input would also be so strong that masking would not occur. Of course, there could also exist a mechanism which halts the virtual sensory input in ordinary waking consciousness so that it does not proceed down to the level of the retina and produce hallucinations: this mechanism could fail in schizophrenics and perhaps also during hypnosis. The analog of this kind of mechanism would be associated with motor imagination and would prevent actual motor actions (say during dreams).

The second link was related to memory: Alzheimer patients can get back their memories. This finding supports the TGD vision of memory inspired by zero energy ontology (ZEO). The TGD brain is 4-dimensional, and basic memories are in the geometric past, where (rather than when ;-)) the event happened. Of course copies can be made, and learning builds these, so that the space-time surface contains a lot of copies of memories. To recall is to send a negative energy dark photon signal to the geometric past; it is reflected back in the time direction from some copy of the memory and produces its image in the geometric now: seeing in the time direction is in question.

The ability of Alzheimer patients to get back their memories would not be due to a re-storing of memories: the primary memories would still be there in the geometric past, and there would be no need to restore or regenerate them. What would be regained is the ability to recall memories, that is the ability to send these negative energy signals to the geometric past and/or to receive them.

Saturday, December 27, 2014

Pioneer and Flyby anomalies as demonstrations of dark matter spheres associated with orbits of planets

There was a very interesting link at Thinking Allowed Original this morning - a lot of thanks to Ulla. The link was about two old anomalies discovered in the solar system: the Pioneer anomaly and the Flyby anomaly, with which I worked years ago. I remember only the general idea that dark matter concentrations at the orbits of planets, or at spheres with radii equal to those of the orbits, could cause the anomalies. So I try to reconstruct everything from scratch, and during the reconstruction I became aware of something new and elegant that I could not discover years ago.

The article says that the Pioneer anomaly is understood. I am not at all convinced about the proposed solution. Several "no new physics" solutions have been tailored over the years, but later it has been found that they do not work.

Suppose that the dark matter is at the surface of a sphere, so that by a well-known textbook theorem it does not create a gravitational force inside it. This is an all-important fact, which I did not use earlier. The model explains both anomalies and also allows one to calculate the total amount of dark matter at the sphere.

  1. Consider first the Pioneer anomaly.

    1. Inside the dark matter sphere with the radius of Jupiter's orbit, the gravitational force caused by the dark matter vanishes. Outside the sphere also the dark matter contributes to the gravitational attraction, and Pioneer's acceleration becomes a little bit smaller, since the dark matter at the sphere containing the orbital radius of Jupiter or Saturn also attracts the spacecraft after the passby. A simple test of the spherical model is the prediction that the mass of Jupiter effectively increases by the amount of dark matter at the sphere after the passby.

    2. The magnitude of the Pioneer anomaly is about Δa/a = 1.3×10^-4 and translates to M_dark/M ≈ 1.3×10^-4. What is highly non-trivial is that the anomalous acceleration is given by the Hubble constant, suggesting a connection with cosmology which fixes the value of the dark mass once the area of the sphere containing it is fixed. This follows as a prediction if the surface mass density is universal and proportional to the Hubble constant. The value of the acceleration is a = 0.8×10^-10 × g, g = 9.81 m/s^2, whereas the MOND model finds the optimal value for the postulated minimal gravitational acceleration to be a_0 = 1.2×10^-10 m/s^2. In the TGD framework it would be assignable to the traversal through a dark matter shell. The ratio of the two accelerations is a/a_0 = 6.54.

      Could one interpret the equality of the two accelerations as an equilibrium condition? The Hubble acceleration associated with the cosmic expansion (the expansion velocity increases with distance) would be compensated by the acceleration due to the gravitational force of the dark matter. The surface density of dark matter follows from Newton's law GM_dark/R^2 = Hc and is given by σ_dark = Hc/(4πG). With Hc = 6.7×10^-10 m/s^2 the approximate value of the dark matter surface density is σ = 0.8 kg/m^2, which is surprisingly large (a short numerical check is given after this list).


    3. TGD inspired quantum biology, which requires that the universal cyclotron energy spectrum of dark photons with heff = hgr transforming to bio-photons is in the visible and UV range for charged particles, gives the estimate M_dark/M_E ≈ 2×10^-4, which is of the same order of magnitude as the estimate for Jupiter. The minimum value of the magnetic field at the flux tubes has been assumed to be B_E = 0.2 Gauss, which is the value of the endogenous magnetic field explaining the effects of ELF em radiation on the vertebrate brain. The two estimates are clearly consistent.

  2. In the Flyby anomaly a spacecraft passes by Earth to gain momentum (Earth acts as a sling) for its travel towards Jupiter. During the flyby a sudden acceleration occurs, but this force is on only during the flyby, not before or after it. The basic point is that the spacecraft passes near Earth, and this is enough to explain the anomaly.

    The spacecraft enters from a region outside the orbit of Earth, and thus also experiences the dark force created by the sphere containing dark matter. After that the spacecraft enters the region inside the dark matter sphere and sees a weaker gravitational force, since the dark matter sphere is outside it and does not contribute. This causes a change in its velocity. After the flyby the spacecraft again experiences the forces caused by both Earth and the dark matter sphere, and the situation is the same as before the flyby. The net effect is a change in the velocity, as observed. From this the total amount of dark matter can be estimated. Also a biology-based argument gives an estimate for the fraction of dark matter in Earth.
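
As a quick sanity check of the numbers quoted above, the following sketch (my own, using standard values of H, c, and G) reproduces the surface density σ_dark = Hc/(4πG) ≈ 0.8 kg/m^2 and the ratio a/a_0 ≈ 6.5:

```python
# Back-of-the-envelope check of the figures quoted in the text (my own sketch;
# H, c, G, g are standard values, the interpretation is the one given above).
import math

G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.998e8     # m/s
H = 2.2e-18     # Hubble constant ~ 68 km/s/Mpc expressed in 1/s
g = 9.81        # m/s^2

Hc = H * c                            # acceleration scale set by cosmology
sigma_dark = Hc / (4 * math.pi * G)   # from G*M_dark/R^2 = Hc on a sphere of radius R

a_pioneer = 0.8e-10 * g               # anomalous acceleration quoted in the text
a0_mond = 1.2e-10                     # MOND acceleration scale, m/s^2

print(f"Hc         = {Hc:.2e} m/s^2")            # ~ 6.6e-10 m/s^2
print(f"sigma_dark = {sigma_dark:.2f} kg/m^2")   # ~ 0.8 kg/m^2
print(f"a/a0       = {a_pioneer / a0_mond:.2f}") # ~ 6.5
```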

This model supports the option in which the dark matter is concentrated on a sphere. The other option is that it is concentrated at a flux tube around the orbit: quantitative calculations would be required to see whether this option can work. One can of course also consider more complex distributions: say, a 1/r distribution giving rise to a constant change in acceleration outside the sphere.

A simple TGD model for the sphere containing the dark matter could be in terms of a boundary defined by a gigantic wormhole contact (at its space-time sheet representing a "line of a generalized Feynman diagram" one has a deformation of a CP2 type vacuum extremal with Euclidian signature of the induced metric), with radius given by the radius of a Bohr orbit for the gravitational Planck constant hgr = GMm/v0. This radius does not depend on the mass of the particle involved and is given by r_n = n^2 GM/v0^2, where 2GM is the Schwarzschild radius, equal to 3 km for the Sun. One has v0/c ≈ 2^-11.
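
A minimal numerical sketch (my own evaluation with standard values, assuming the Bohr radius formula above with v0/c = 2^-11) of the radii this predicts for the Sun:

```python
# Rough evaluation of the gravitational Bohr radii for the Sun (my own sketch,
# assuming r_n = n^2 * (GM/c^2)/v0^2 with v0 = 2^-11 in units of c).
GM_sun_over_c2 = 1.48e3   # GM/c^2 for the Sun in meters (half of the 3 km Schwarzschild radius)
AU = 1.496e11             # astronomical unit in meters
v0 = 2.0 ** -11           # dimensionless velocity parameter v0/c

r1 = GM_sun_over_c2 / v0**2   # lowest Bohr radius
for n in range(1, 6):
    print(f"n = {n}: r_n = {n**2 * r1 / AU:.3f} AU")
# n = 1 gives ~0.04 AU; n = 3 comes out close to Mercury's orbit (~0.39 AU)
# and n = 5 close to Earth's orbit (~1.0 AU).
```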

An interesting possibility is that also the Earth-Moon system contains a spherical shell of dark matter at a distance given by the radius of the Moon's orbit (about 60 Earth radii). If so, the analogs of the two effects could be observed also in the Earth-Moon system, and the testing of the effects would become much easier. This would also mean understanding the formation of the Moon. Also the interior of the Earth (and of the Sun) could contain spherical shells of dark matter, as the TGD inspired model for the spherically symmetric orbit constructed more than two decades ago suggests. One can raise interesting questions. Could matter in other mass scales also be accompanied by dark matter shells at radii equal to Bohr radii in the first approximation, and could these effects be tested? Note that a universal surface density for dark matter predicts that the change of acceleration is universally given by the Hubble acceleration Hc.

For background and details see the chapter TGD and Astrophysics of "Physics in Many-Sheeted Space-time" or the article Pioneer and Flyby anomalies almost ten years later.

Also Matt Strassler realises that LHC does not prove Higgs mechanism

It is nice to find that I am no longer the only one trying to tell the particle physics community that LHC has proven only the existence of the Higgs, not the Higgs vacuum expectation value as the source of particle masses. Particle physicist and particle physics blogger Matt Strassler indeed says what I have been repeatedly saying about the Higgs for two decades. Unfortunately, I missed Strassler's posting, which had already appeared in October - thanks to Ulla for the link at Thinking Allowed Original.

Because of the importance of the message - especially so for particle physicists - I repeat it: the experiments at LHC do not prove anything about the Higgs mechanism, they just demonstrate the existence of a Higgs-like particle. The Higgs can have a gradient coupling to the fermions so that the scattering amplitudes become proportional to the mass. Particle massivation could happen quite differently, and in the TGD framework p-adic thermodynamics would be responsible for it. The Higgs vacuum expectation value in the standard model provides only a parametrisation of the fermion masses: it does not predict anything. That the gauge bosons "eat" the complex Higgs doublet almost totally only says in a popular manner that it is possible to choose a gauge in which only one component of the Higgs is non-vanishing and the weak bosons have a third polarisation state and can be massive.
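
To see in formulas how a gradient coupling can give mass-proportional amplitudes, here is a textbook-style sketch (my own illustration; the specific TGD form of the coupling is not spelled out in the post): a derivative coupling of a scalar to the axial fermion current, after partial integration and use of the Dirac equation, reduces to a Yukawa-type vertex proportional to the fermion mass,

```latex
% Illustrative sketch, not the specific TGD Lagrangian:
\mathcal{L}_{\mathrm{int}}
  = \frac{g}{\Lambda}\,\partial_\mu \phi\, \bar{\psi}\gamma^\mu\gamma_5\psi
  \;\longrightarrow\;
  -\frac{g}{\Lambda}\,\phi\,\partial_\mu\!\left(\bar{\psi}\gamma^\mu\gamma_5\psi\right)
  = -\frac{2 i m g}{\Lambda}\,\phi\,\bar{\psi}\gamma_5\psi ,
% so the effective coupling is proportional to the fermion mass m without
% invoking any vacuum expectation value.
```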

The reader interested in the TGD view of particle massivation, which I discovered twenty years ago, can for instance consult the book "p-Adic length scale hypothesis and dark matter hierarchy".

Friday, December 26, 2014

TGD inspired comments on Mae-Wan Ho's talk about protons in water

At Thinking Allowed Original there is a link to the talk of Mae-Wan Ho at the Conference on the Physics, Chemistry and Biology of Water 2014. It is a very nice presentation, and I learned new facts highly relevant for my own work.

The main point of Mae was that protons make water a conductor, maybe even a superconductor. In the TGD framework the statement would be that dark protons flowing along magnetic flux tubes make this possible. I however believe that also electronic and even ionic Cooper pairs are involved. Mae also believes that water associated with collagen networks behaves as a superfluid in nano-scales. Also this is a very attractive idea, and if the heff = hgr condition holds, as some arguments suggest, then superfluidity allowing macroscopic quantum coherence, with the gravitational Compton length having no dependence on the mass of the particle, becomes possible. One of the effects is the fountain effect, explained elegantly by macroscopic quantum gravitational coherence: water would effectively defy gravitation, and this effect might allow testing of the hypothesis.

CDs and EZs

Mae-Wan Ho talked about and compared two notions: CDs (coherent domains of water with a size of about a micrometer, postulated by quantum field theoreticians, in particular Emilio del Giudice) and EZs (exclusion zones with a size of about 200 micrometers, discovered experimentally by Gerald Pollack and collaborators). By the way, in Zero Energy Ontology (ZEO) I talk about causal diamonds (CDs), which are typically much larger than the CDs of del Giudice et al.

  1. Inside the EZ the water forms a layered structure consisting of hexagonal layers, and the stoichiometry is H1.5O, so that every fourth proton must be outside the EZ (the proton is not accompanied by an electron if charge separation takes place: the EZ is indeed negatively charged, so that one obtains different pHs inside the EZ and in its exterior). This state is experimentally heavier than ordinary water.

  2. So-called tetrahedral or 4-coordinated water is assigned with CDs. CDs and EZs could correspond to two different p-adic length scales in the TGD framework. This state would be less dense than ordinary water. Both CDs and EZs contain a plasma of almost free electrons. CDs are excited to 12.06 eV, just 0.5 eV below the ionization potential of 12.56 eV; 0.5 eV is the nominal value of the metabolic energy quantum - probably not an accident.

TGD inspired model for CDs and EZs

I try my best to summarise some very interesting points of the talk and to develop in more detail the TGD inspired model for EZs and their formation, and the TGD view of metabolism, leading to the prediction of a new form of metabolism involving dark UV photons from the Sun.

  1. The splitting of ordinary water H2O to 2H+ + 2e- + O is a key step in photosynthesis. In particular, it produces the oxygen without which we cannot survive. The splitting process involves two ionizations. The ionization energy of the first electron is 12.56 eV, in the ultraviolet and much above the metabolic energy quantum of around 0.5 eV. How can the splitting of water be achieved at all? This looks like a very real problem!

  2. CDs/EZs could be the solution to the problem. Inside a CD the energy needed for the splitting of water is much smaller, due to the fact that the electrons are almost free, as already mentioned: if the splitting energy equals the so-called formation energy, it is about 0.41 eV for a CD, nothing but the metabolic energy quantum! Also at the interface of an EZ, just above its boundary, the electronic states are excited, and only an energy of 0.51 eV - known as the formation energy - is needed for the splitting. This suggests that metabolic energy quanta are used to generate EZs and/or CDs in the fundamental step of metabolism. Also irradiation at these energies generates CDs/EZs.

  3. My layman logic says that the formation energy of an EZ must correspond to the energy needed to increase the size of the EZ by a minimum amount. In the TGD model this would mean creating one proton-electron pair such that the electron remains inside the EZ, whose size thus increases, and the proton becomes a dark proton at a dark magnetic flux tube. This step would also be a key step in the splitting of water. The splitting of water and the growth of the EZ would be essentially the same process. In the case of a CD it would seem that charge separation takes place inside the CD during the splitting, and the proton can go outside.

    What comes to mind is that the formation of CDs, requiring a large UV excitation energy of 12.06 eV, precedes that of EZs. After the formation of a CD and its almost free electrons, only a metabolic energy quantum per proton is required to kick a single proton to a dark magnetic flux tube. This would conform with the fact that the EZ radius is about 200 times larger than that of a CD, meaning that the volumes are related by a factor 8×10^6 ≈ 2^23 (a short numerical check is given after this list). The formation of an EZ would transform tetrahedral water to the hexagonal H1.5O and suck protons to the magnetic flux tubes, where they become dark protons. If this picture is correct, the proper identification of the formation energy of a CD would be as the absorption energy of the CD, equal to 12.06 eV and in the UV. Recall that the biophoton spectrum extends to UV, and dark photons with this energy could be responsible for the formation of CDs. This would add dark photons transforming to biophotons to the picture.

    The formation of an EZ can be seen as pulling out one ordinary proton from the ordinary water just above the surface of the EZ and making it a dark proton at a magnetic flux tube assignable to the EZ, perhaps connecting it to a neighboring EZ to form a quantum coherent network. The dark protons would serve as current carriers and make water a conductor and perhaps even a superconductor. Even superfluidity can be considered.

  4. The metabolic energy quantum of 0.5 eV can also be assigned with the hydrogen bond. Could the process of generating a dark proton and increasing the size of the EZ by one electron involve the cutting of the hydrogen bond binding the proton to the water outside? If so, then the only thing keeping the excited water inside the CD as a coherent phase would be the bond energy of the hydrogen bonds! Maybe this is too simplistic.

    I have proposed earlier that hydrogen bonds are short magnetic flux tubes, which can undergo an heff-increasing phase transition. These flux tubes could in turn experience reconnections with U-shaped large-heff flux tubes and get connected to the dark web. Mae-Wan Ho also tells that the transfer of a proton from a covalent OH bond to the middle of a hydrogen bond happens with considerable probability. Could this step precede the increase of heff and the reconnection? This would give a connection with hydrogen bonding, about which Mae also talked. These naive models of course cannot be correct in detail, but they give hope of a fusion of existing chemical thinking and the new quantal notions.

  5. A process bringing to mind the formation of EZs occurs as one perturbs a molecular bio-system - that is, feeds energy into it. The system "wakes up" from its "winter sleep": the globular proteins, which are in a resting state with hydrogen bonds at their surface forming a kind of ice layer, unfold, and protein aggregates are formed. The molecular summer begins, and it ceases when the energy feed is over. The cellular winter begins again. Maybe the cellular summer is just the temporary formation of EZ layers around the protein, involving the melting of hydrogen bonds and the generation of dark protons making the system conscious!
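
For concreteness, the simple arithmetic behind the estimates above can be checked directly (a minimal sketch using only the figures quoted in the talk):

```python
# Quick arithmetic check of the figures quoted above (my own sketch).
cd_excitation = 12.06   # eV, excitation energy assigned to CDs
ionization = 12.56      # eV, first ionization energy of water
print(f"difference   = {ionization - cd_excitation:.2f} eV")  # ~0.5 eV, the metabolic quantum

linear_ratio = 200      # EZ size ~200 micrometers vs CD size ~1 micrometer
print(f"volume ratio = {linear_ratio**3:.2e}")   # 8.00e+06
print(f"2^23         = {2**23:.2e}")             # 8.39e+06, the same order of magnitude
```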

Is a new source of metabolic energy needed?

What remains to be understood is the process generating CDs: where could the UV photons with energy 12.06 eV come from? Clearly a new form of metabolism is involved, and the only source of energy seems to be the Sun!

  1. Solar radiation cannot however provide these UV photons as ordinary photons, since UV radiation at these wavelengths is absorbed by the atmosphere. In the TGD framework a reasonable candidate for dark radiation with energies in the UV range is dark cyclotron radiation with energy E = heff×f: biophotons would be produced in the transformation of dark cyclotron photons to ordinary photons (a rough order-of-magnitude estimate is given after this list).

  2. Could part of the solar UV radiation transform to dark UV photons at magnetic flux tubes with size scales even larger than that of Earth, predicted by the model of EEG, and arrive along them through the atmosphere? The presence of a new source of metabolic energy is in principle a testable prediction: is the energy feed from the visible part of solar radiation really enough to cover the metabolic energy needs? Here one must however take into account the fact that the UV energy would be received by water. Water from which CDs are eliminated would not allow photosynthesis.
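
How large must heff be for a cyclotron photon to carry an energy in the 12 eV range? A rough order-of-magnitude sketch (my own, assuming a proton in the endogenous field B = 0.2 Gauss mentioned earlier):

```python
# Order-of-magnitude sketch (my own): how large must heff/h be for a proton
# cyclotron photon in B = 0.2 Gauss to carry ~12 eV, i.e. E = heff*f ~ 12 eV?
import math

q = 1.602e-19    # C, proton charge
m_p = 1.673e-27  # kg, proton mass
h = 6.626e-34    # J s
eV = 1.602e-19   # J
B = 0.2e-4       # T (0.2 Gauss)

f_c = q * B / (2 * math.pi * m_p)   # cyclotron frequency, ~300 Hz
E_ordinary = h * f_c / eV           # ordinary-photon energy in eV
n = 12.0 / E_ordinary               # required heff/h to reach ~12 eV

print(f"f_c ~ {f_c:.0f} Hz, E(ordinary) ~ {E_ordinary:.1e} eV, heff/h ~ {n:.1e}")
```
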
To sum up, if the proposed picture is correct, photosynthesis involves the formation of EZs and cellular respiration the inverse of this process. As discussed earlier, the purpose of metabolic processes would basically be the generation and transfer of negentropic entanglement assignable to large-heff states.

See the article TGD inspired model for the formation of exclusion zones from coherence regions.

Tuesday, December 23, 2014

New finding about pseudo gap in high temperature super-conductivity

There is an interesting news item about high temperature superconductivity at Phys.org. The existence of the so-called pseudogap has been known for a couple of decades. At a temperature Tc1 > Tc, where Tc is the critical temperature for high-Tc superconductivity, a phase transition to a new, poorly understood phase occurs. The pseudogap is assigned to this phase.

The new experimental result, obtained by researchers at Stanford University and the Department of Energy's SLAC National Accelerator Laboratory, is the culmination of 20 years of research aimed at finding out whether the pseudogap helps or hinders superconductivity. The researchers conclude that the pseudogap competes with high-Tc superconductivity. For more information see the article by Makoto Hashimoto et al., Nature Materials, 2 November 2014.

In the TGD based model for high-Tc superconductivity this pseudogap is predicted. Already at Tc1 rather short flux tube pairs carrying Cooper pairs are formed. The flux tubes have the form of a flattened O, looking like U:s at both ends, and form two parallel flux tubes in the middle carrying opposite fluxes. Antiferromagnetism makes the emergence of these closed flux tubes possible. This gives rise to superconductivity, but only in rather short scales. The electrons of a Cooper pair are at the two flux tubes of the U with opposite spins, so that the spin interaction energy is negative and gives rise to Cooper pairs.

At Tc < Tc1 these flux tubes combine by reconnection to form much longer flux tubes, and high-Tc superconductivity appears as superconductivity in long length scales. The value of the effective Planck constant heff = n×h increases in this process. The process is mathematically like percolation, in which water starts to dribble through a layer of porous substance and wets it. The formation of a macroscopic supra current is analogous to the wetting.

In the TGD picture the pseudogap is however a prerequisite for superconductivity: it competes but does not hinder. This is not in conflict with the experimental findings. The competition has a simple interpretation: both reconnections and their reversals occur already above Tc, so that the two phases are present. The system is at criticality. Long-scale superconductivity wins the competition at Tc!

Thursday, December 18, 2014

Latest debate about multiverse

George Ellis and Joe Silk have written a nice article titled "Scientific method: Defend the integrity of physics". The article criticizes the situation in theoretical physics, which in my view is due to the arrogant "the only game in town" attitude making communication extremely difficult, and to the simple and brutal fact that these theories did not work. String models certainly led to the discovery of very powerful and beautiful mathematics: the Universe according to string theory is however certainly not as elegant as Brian Greene claims in his book. It would be surprising if this mathematics did not find applications in future physics.

The article of Ellis and Silk has generated a lot of response. One of the responses is the article by Brian Greene titled "Is String Theory About to Unravel?". There is a strong smell of hype, but Greene admits that the connection with experiment is lacking and feels uneasy with the multiverse notion.

Peter Woit reacted first and agreed with Ellis and Silk: I can only agree with most of what he said. The article of Ellis and Silk also generated some science entertainment: Lubos Motl wrote a really long and really aggressive rant telling that all those who do not blindly believe that string theory is the only game in town are complete idiots and should flush themselves down the toilet where they undoubtedly belong. I cannot avoid associations with the old Tom and Jerry cartoons, in which Tom tried to destroy Jerry with all kinds of explosives. My humble opinion is that explosives are not the method to build a better world.

Sabine Hossenfelder wrote a nice article Does the Scientific Method need Revision?.

What does one mean when one says that string theory does not work?

The basic problem was that the string world sheet is not 4-D space-time, and together with a blind belief in GUT ideology this led to a wrong track leading directly to the landscape catastrophe and the multiverse nonsense. First the ad hoc notion of spontaneous compactification was introduced, and space-time was identified as an actually 10-dimensional object with 6 small dimensions assigned to a Calabi-Yau space. The mathematics of Calabi-Yaus is extremely beautiful, and only this can explain why such an idiotic idea (from the point of view of physics) was taken seriously. The problem is that there is a huge number of Calabi-Yaus. This led to the landscape catastrophe. But still string theory did not work.

Then branes were introduced, and space-time was identified as a 3-brane. Also higher- and lower-dimensional space-times emerged, and one had a multiverse in which anything goes and there is no hope of testing anything. This opened all the flood gates, and an endless variety of brane constructions has emerged. One ended up with the multiverse scenario: the laws of physics depend on the corner of space-time one happens to live in, and this forces the introduction of the anthropic principle if one wants to say something interesting about the universe. Experimentally there is not the slightest indication of a multiverse. Ironically, the real physics seems to be extremely simple!

Why did theoreticians fall into the trap?

I think that this was partially made possible by the powerful tools of algebraic geometry, and I can understand that technically oriented theoreticians loved the application of these magnificent tools developed by mathematicians. Physics was seen as boring low energy phenomenology left for simplistic minds. There was of course also a lot of face saving, and it is perfectly understandable that string model gurus who had gained power, money, and fame did not want to leave the sinking ship.

In hindsight the tragedy of string theory was that it contained a lot of good mathematics, although the proposed physics became gradually more and more nonsensical as the structures became more and more ad hoc and complex. Conformal invariance, from which everything started, is certainly one of the gems with deep physical content. Unfortunately, conformal invariance as such is a 2-D notion and should have been generalized to the 4-dimensional case in a non-trivial manner without losing the infinite-dimensional character of the conformal group.

Could twistors and TGD help out of the dead end?

This is where TGD enters the stage. TGD generalizes conformal invariance and at the same time explains why space-time must be four-dimensional and why 4-D Minkowski space is so unique. What was good was that string models brought a lot of new mathematics to the collective consciousness of the physics community. Although Calabi-Yau manifolds are not physics, the methods of algebraic geometry used to construct them are extremely powerful, and theoreticians have gained impressive knowhow about algebraic geometry. What is lacking is the proper object to which one could apply these methods.

The lack of this kind of powerful tools in the TGD framework has been a source of personal frustration for me: I have the physics, but I do not have the mathematical tools to express it effectively. It seems that the situation is however changing. Twistor spaces associated with 4-D space-times are 6-D like Calabi-Yaus, and the one associated with empty Minkowski space is a Calabi-Yau.

The really important observation is that there are only two twistor spaces with a Kähler structure (possessed also by Calabi-Yaus): those associated with M4 and CP2. This fact fixes TGD completely, both mathematically and physically: H = M4 × CP2 is the only choice. TGD also follows from standard model symmetries as well as from number theoretical arguments involving the classical number fields. A general twistor space has an almost complex or even a complex structure but not a Kähler structure. In any case, the tools of algebraic geometry apply to them. Penrose's motivation for introducing twistors was indeed this: to use the methods of algebraic geometry, not usable at the level of Minkowski space, to solve field equations. These methods indeed work nicely already in the construction of instanton solutions of the Yang-Mills equations, and one can solve these equations in the twistor space of M4.

But what is so fantastic about these two very special twistor spaces? I realized this only during the last week. One of the great challenges of TGD is to construct the solutions of the field equations - the extremals - of Kähler action, since they define an exact part of quantum TGD (in standard QFT this is not the case). The twistor space inspired conjecture is that one can construct solutions of the classical field equations of TGD by constructing not space-time surfaces but their twistor spaces as 6-D surfaces in the twistor space of M4 × CP2! The twistor space property would be equivalent with the extremal property! All structure would be induced also now. This theory would also be a generalization of Witten's twistor string theory, which I proposed years ago but in a slightly different form and without realizing the connection with twistor spaces. Later I became skeptical and gave up this idea, but it can be found in some blog posting.

The transition from 4-D space-time to the 6-D twistor space of course looks at first like an unnecessary complication, but it brings in complex numbers: all the magnificent technology of algebraic geometry becomes available, and string theorists have developed enormous knowhow about it over the years! For instance, all the nice mathematics inspired by Calabi-Yaus, such as mirror symmetry and the associated constructions, could generalize to the category of twistor spaces realized as sub-manifolds. The landscape and the multiverse would reduce to the world of space-time surfaces representing generalized Feynman diagrams! Physics from TGD and mathematical knowhow from string models: I dare claim that this is the only way out of the dead end.

For the twistor revolution in classical TGD see the earlier postings. See also the article Classical part of twistor story.

Wednesday, December 17, 2014

Twistor spaces as TGD counterparts of Calabi-Yau spaces of string models

I gave an overall view of the twistor revolution in classical TGD in the previous posting. In an earlier posting I already summarized the basic ideas, but without a discussion of the connection with string models and of the intuitive recipes for the construction of twistor spaces by lifting space-time surfaces to twistor bundles through the addition of a CP1 fiber, so that twistor spaces can also be interpreted as generalized Feynman diagrams!

In the following I describe the basic picture of lifting space-time surfaces in M4 × CP2 to twistor spaces in CP3 × F3, which get their twistor structure via an induction-like process. This approach means generalizing the induction procedure from the level of space-time surfaces and the imbedding space to the level of the twistor spaces of space-time surfaces and the twistor space of the imbedding space (some time ago I wrote about induced second quantization). The outcome is that the magnificent mathematical knowhow of algebraic geometry utilized in superstring theories becomes available in the TGD framework, and in principle any string theorist can start doing TGD. Of course, Calabi-Yaus are replaced with twistor spaces, and the physical interpretation changes dramatically.

Conditions for twistor spaces as sub-manifolds

Consider the conditions that must be satisfied using local trivializations of the twistor spaces identified as sub-manifolds of CP3× F3 with induced twistor structure. Before continuing let us introduce complex coordinates zi=xi+iyi resp. wi=ui+ivi for CP3 resp. F3.

  1. Six conditions are required, and they must give rise by bundle projection to 4 conditions relating the coordinates in the Cartesian product of the base spaces of the two bundles involved, thus defining a 4-D surface in the Cartesian product of compactified M4 and CP2.
  2. One has a Cartesian product of two fiber spaces with fiber CP1, giving a fiber space with fiber CP1 × CP1. For the 6-D surface the fiber must be a single CP1, so it seems that one must identify the two spheres. Since holomorphy is essential, a holomorphic identification w1 = f(z1) or z1 = f(w1) is the first guess. A stronger condition is that the function f is meromorphic, thus having only a finite number of poles and zeros of finite order, so that a given point of the first sphere is covered a finite number of times by the second one. An even stronger and very natural condition is that the identification is a bijection, so that only Möbius transformations parametrized by SL(2,C) are possible.

  3. Could the Möbius transformation f between the two CP1 fibers depend parametrically on the coordinates z2, z3, so that one would have w1 = f1(z1,z2,z3), where the complex parameters a, b, c, d (ad-bc = 1) of the Möbius transformation depend holomorphically on z2 and z3 (this ansatz is written out explicitly after the list)? Does this mean an analog of local SL(2,C) gauge invariance posing additional conditions? Does this mean that the twistor space as a surface is determined only up to an SL(2,C) gauge transformation?

    What conditions can one pose on the dependence of the parameters a, b, c, d of the Möbius transformation on (z2,z3)? The spheres CP1 defined by the conditions w1 = f(z1,z2,z3) and z1 = g(w1,w2,w3) must be identical. Inverting the first condition one obtains z1 = f^-1(w1,z2,z3). If one requires that this allows an expression of the form z1 = g(w1,w2,w3), one must assume that z2 and z3 can be expressed as holomorphic functions of (w2,w3): zi = fi(wk), i = 2,3, k = 2,3. Of course, a non-holomorphic correspondence cannot be excluded.

  4. Further conditions are obtained by demanding that the known extremals - at least the non-vacuum extremals - are allowed. The known extremals can be classified into CP2 type vacuum extremals with a 1-D light-like curve as M4 projection; vacuum extremals whose CP2 projection is a Lagrangian sub-manifold and thus at most 2-dimensional; massless extremals with a 2-D CP2 projection such that the CP2 coordinates depend in an arbitrary manner on a light-like coordinate defining a local propagation direction and on a space-like coordinate defining a local polarization direction; and string-like objects with a string world sheet as M4 projection (a minimal surface) and a 2-D complex sub-manifold of CP2 as CP2 projection. There are certainly also other extremals, such as magnetic flux tubes resulting as deformations of string-like objects. The number theoretic vision relying on classical number fields suggests a very general construction based on the notion of associativity of the tangent space or co-tangent space.

  5. The conditions coming from these extremals reduce to 4 conditions expressible in the holomorphic case in terms of the base space coordinates (z2,z3) and (w2,w3), and in the more general case in terms of the corresponding real coordinates. It seems that the holomorphic ansatz is not consistent with the existence of vacuum extremals, which however give a vanishing contribution to the transition amplitudes, since the WCW ("world of classical worlds") metric is completely degenerate for them.

    The mere condition that one has a CP1 fiber bundle structure does not force the field equations, since it leaves the dependence between the real coordinates of the base spaces free. Of course, the CP1 bundle structure alone does not imply a twistor space structure. One can ask whether non-vacuum extremals could correspond to holomorphic constraints between (z2,z3) and (w2,w3).

  6. The metric of the twistor space is not Kähler in the general case. However, if it allows a complex structure, there is a Hermitian form ω which defines what is called a balanced Kähler form, satisfying d(ω∧ω) = 2ω∧dω = 0: an ordinary Kähler form satisfying dω = 0 is a special case of this. The natural metric of a compact 6-dimensional twistor space is therefore balanced. Clearly, the mere CP1 bundle structure is not enough for the twistor structure. If the Kähler and symplectic forms are induced from those of CP3 × F3, highly non-trivial conditions are obtained for the imbedding of the twistor space, and one might hope that they are equivalent with those implied by Kähler action at the level of the base space.

  7. A pessimist could argue that the field equations are additional conditions completely independent of the conditions realizing the bundle structure! One cannot exclude this possibility. A mathematician could easily answer the question of whether the proposed CP1 bundle structure with some added conditions is enough to produce a twistor space or not, and whether the field equations could be the additional condition realized using the holomorphic ansatz.
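
For clarity, the holomorphic ansatz referred to in items 2 and 3 above can be written out explicitly (this is only a restatement of the conditions already described, not an additional assumption):

```latex
% Fiber identification as a Möbius transformation with base-dependent parameters:
w_1 = f_1(z_1,z_2,z_3)
    = \frac{a(z_2,z_3)\, z_1 + b(z_2,z_3)}{c(z_2,z_3)\, z_1 + d(z_2,z_3)} ,
\qquad a\, d - b\, c = 1 ,
% with a, b, c, d holomorphic in (z_2,z_3), i.e. a point of SL(2,C) depending
% holomorphically on the base coordinates ("local SL(2,C) gauge freedom").
```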

The physical picture behind TGD is the safest starting point in an attempt to gain some idea about what the twistor spaces look like.
  1. Canonical imbeddings of M4 and CP2 and their disjoint unions are certainly the natural starting point and correspond to canonical imbeddings of CP3 and F3 to CP3× F3.

  2. Deformations of M4 correspond to space-time sheets with Minkowskian signature of the induced metric, and those of CP2 to the lines of generalized Feynman diagrams. The simplest deformations of M4 are vacuum extremals with a CP2 projection which is a Lagrangian manifold.

    Massless extremals represent non-vacuum deformations with a 2-D CP2 projection. The CP2 coordinates depend on a local light-like direction defining the analog of a wave vector and on a local polarization direction orthogonal to it.

    The simplest deformations of CP2 are the CP2 type extremals with a light-like curve as M4 projection; they have the same Kähler form and metric as CP2. These space-time regions have Euclidian signature of the induced metric, and the light-like 3-surfaces separating Euclidian and Minkowskian regions define the parton orbits.

    String-like objects are extremals of type X2 × Y2, with X2 a minimal surface in M4 and Y2 a complex sub-manifold of CP2. Magnetic flux tubes carrying a monopole flux are deformations of these.

    Elementary particles are an important piece of the picture. They have as building bricks wormhole contacts connecting space-time sheets, and the contacts carry monopole flux. This requires at least two wormhole contacts connected by flux tubes carrying opposite fluxes at the two parallel sheets.

  3. Space-time surfaces are constructed using as building bricks space-time sheets, in particular massless extremals, deformed pieces of CP2 defining the lines of generalized Feynman diagrams as orbits of wormhole contacts, and magnetic flux tubes connecting the lines. Space-time surfaces have in the generic case a discrete set of self-intersections, and it is natural to remove them by a connected sum operation. The same applies to twistor spaces as sub-manifolds of CP3 × F3, and this leads to a construction analogous to that used to remove singularities of Calabi-Yau spaces.

Twistor spaces by adding CP1 fiber to space-time surfaces

Physical intuition suggests that it is possible to find the twistor spaces associated with the basic building bricks and to lift this engineering procedure to the level of twistor space, in the sense that the twistor projections of the twistor spaces would give these structures. Lifting would essentially mean assigning a CP1 fiber to the space-time surfaces.

  1. Twistor spaces should decompose into regions for which the metric induced from the CP3 × F3 metric has different signature. In particular, light-like 5-surfaces should replace the light-like 3-surfaces as causal horizons. The signature of the Hermitian metric of the 4-D (in the complex sense) twistor space is (1,1,-1,-1). The Minkowskian variant of CP3 is defined as the projective space SU(2,2)/SU(2,1)×U(1). The causal diamond (CD) (the intersection of future and past directed light-cones) is the key geometric object in zero energy ontology (ZEO), and the generalization to the intersection of twistorial light-cones is suggestive.

  2. The projective twistor space has regions of positive and negative projective norm, which are 3-D complex manifolds. It also has a 5-dimensional sub-space consisting of null twistors, analogous to the light-cone and having one null direction in the induced metric. This light-cone has a conic singularity analogous to the tip of the light-cone of M4.

    These conic singularities are important in the mathematical theory of Calabi-Yau manifolds, since the topology change of Calabi-Yau manifolds via the elimination of the singularity can be associated with them. The S2 bundle character implies the structure of an S2 bundle for the base of the singularity (analogous to the base of an ordinary cone).

  3. Null twistor space corresponds at the level of M4 to the light-cone boundary (the causal diamond has two light-like boundaries). What about the light-like orbits of partonic 2-surfaces whose light-likeness is due to the presence of the CP2 contribution in the induced metric? For them the determinant of the induced 4-metric vanishes so that they are genuine singularities in the metric sense. The deformations of the canonical imbeddings of this sub-space (F3 coordinates constant) leaving its metric degenerate should define the lifts of the light-like orbits of partonic 2-surfaces. The singularity in this case separates regions of different signature of the induced metric.

    It would seem that if the partonic 2-surface begins at the boundary of CD, a conical singularity is not necessary. On the other hand the vertices of generalized Feynman diagrams are 3-surfaces at which three lines of a generalized Feynman diagram are glued together. This singularity is completely analogous to that of an ordinary vertex of a Feynman diagram. These singularities should correspond to gluing together three deformed F3s along their ends.

  4. These considerations suggest that the construction of twistor spaces is a lift of the construction of space-time surfaces, and generalized Feynman diagrammatics should generalize to the level of twistor spaces. What is added is the CP1 fiber, so that the correspondence would be rather concrete.

  5. For instance, elementary particles consisting of pairs of monopole throats connected by flux tubes at the two space-time sheets involved should allow lifting to the twistor level. This means a double connected sum, and this double connected sum should appear also for the deformations of F3 associated with the lines of generalized Feynman diagrams. Lifts of the deformations of magnetic flux tubes, to which one can assign CP3, would in turn connect the two F3s.

  6. A natural conjecture inspired by the number theoretic vision is that Minkowskian and Euclidian space-time regions correspond to associative and co-associative space-time regions. At the level of twistor space these two kinds of regions would correspond to deformations of CP3 and F3. The signature of the twistor norm would be different in these regions, just as the signature of the induced metric is different in the corresponding space-time regions.

    These two kinds of space-time regions should correspond to deformations of disjoint unions of CP3s and F3s, and the multiple connected sum formed from them should project to a multiple connected sum (wormhole contacts with Euclidian signature of the induced metric) for the deformed CP3s. Wormhole contacts could have deformed pieces of F3 as counterparts.
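
To make the signature statements in items 1 and 2 explicit, one can recall the standard Penrose conventions (my notation, not the article's): writing a twistor as a pair of 2-spinors Z=(ω,π), the SU(2,2) invariant norm is

$$ Z\cdot\bar Z = \omega^{A}\bar\pi_{A} + \pi_{A'}\bar\omega^{A'}\,, $$

a Hermitian form of signature (1,1,-1,-1). The regions with Z·\bar Z>0 and Z·\bar Z<0 give the two 3-D complex pieces of projective twistor space, and the 5-D null locus Z·\bar Z=0 separating them is the twistorial analog of the light-cone referred to above.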

Twistor spaces as analogs of Calabi-Yau spaces of super string models

CP3 is also a Calabi-Yau manifold in the strong sense that it allows both a Kähler structure and a complex structure. Witten's twistor string theory considers 2-D (in the real sense) complex surfaces in the twistor space CP3. This inspires some questions.

  1. Could TGD in twistor space formulation be seen as a generalization of this theory?

  2. A generic twistor space is not a Calabi-Yau manifold because it does not have a Kähler structure. Do twistor spaces replace Calabi-Yaus in the TGD framework?

  3. Could twistor spaces be Calabi-Yau manifolds in some weaker sense so that one would have a closer connection with super string models?

Consider the last question.

  1. One can indeed define non-Kähler Calabi-Yau manifolds by keeping the Hermitian metric and giving up the symplectic structure or by keeping the symplectic structure and giving up the Hermitian metric (an almost complex structure is enough). Construction recipes for non-Kähler Calabi-Yau manifolds are discussed here. It is shown that these two ways of giving up the Kähler structure correspond to duals under so-called mirror symmetry, which maps complex and symplectic structures to each other. This construction applies also to twistor spaces and is especially natural for them because of the fiber space structure.

  2. For the modification giving up the symplectic structure, one starts from a smooth Kähler Calabi-Yau 3-fold Y, such as CP3. One assumes a discrete set of disjoint rational curves diffeomorphic to CP1. In the TGD framework they would correspond to special fibers of the twistor space.

    One has singularities in which some rational curves are contracted to a point - in the twistorial case the fiber of the twistor space would contract to a point. This produces a double point singularity, which one can visualize as the vertex at which two cones meet (a sundial should give an idea of what is involved). One deforms the singularity to a smooth complex manifold. One could interpret this as throwing away the common point and replacing it with a connected sum contact: a tube connecting holes drilled at the vertices of the two cones. In TGD one would talk about a wormhole contact.

  3. Suppose the topology looks locally like S3× S2× R+/- near the singularity, such that two copies analogous to the two halves of a cone (sundial) meet at a single point defining the double point singularity (the standard local model is recalled after this list). In the present case S2 would correspond to the fiber of the twistor space, S3 would correspond to a 3-surface, and R+/- would correspond to the time coordinate in the past/future direction. S3 could be replaced with something else.

    The copies of S3× S2 contract to a point at the common end of R+ and R- so that both the base and the fiber contract to a point. The space-time surface would look like a pair of future and past directed light-cones meeting at their tips.

    For the first modification, giving up the symplectic structure, only the fiber S2 is contracted to a point, and S2× D is therefore replaced with the smooth "bottom" of S3. Instead of a sundial one has two balls touching. One drills small holes in the two S3s and connects them by a connected sum contact (wormhole contact). Locally one obtains S3× S3 with k connected sum contacts.

    For the modification giving up the Hermitian structure one contracts only S3 to a point instead of S2. In this case one has locally two CP3s touching (one can think that CPn is obtained by replacing the points of Cn at infinity with the sphere CP1). Again one drills holes and connects them by a connected sum contact to get a k-connected sum of CP3s.

    For k CP1s the outcome looks locally like a k-connected sum of S3× S3 or CP3 with k≥ 2. In the first case one loses the symplectic structure and in the second case the Hermitian structure. The conjecture is that the two manifolds form a mirror pair.

    The general conjecture is that all Calabi-Yau manifolds are obtained using these two modifications. One can ask whether this conjecture could also apply to the construction of twistor spaces representable as surfaces in CP3× F3, so that it would give mirror pairs of twistor spaces.

  4. This smoothing out procedure is actually unavoidable in TGD because the twistor space is a sub-manifold. The 6-D twistor spaces in the 12-D CP3× F3 have in the generic case self-intersections consisting of discrete points. Since the fibers CP1 cannot intersect and since the intersection is a point, it seems that the fibers must contract to a point. In a similar manner the 4-D base spaces should have a local foliation by spheres or some other 3-D objects which contract to a point. One has just the situation described above.

    One can remove these singularities by drilling small holes around the shared point at the two sheets of the twistor space and connecting the resulting boundaries by a connected sum contact. The preservation of the fiber structure might force one to perform the process in such a manner that the local modification of the topology contracts either the 3-D base (S3 in the previous example) or the fiber CP1 to a point.
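
For orientation, the standard local model behind the sundial picture - recalled here from Calabi-Yau theory rather than from the text itself - is the conifold

$$ x_1^2 + x_2^2 + x_3^2 + x_4^2 = 0\,, \qquad (x_1,\dots,x_4)\in C^4\,, $$

a cone whose base is S2× S3. The singular tip can be smoothed in two ways: deforming the equation to x_1^2+...+x_4^2 = ε replaces the tip with an S3, while the small resolution replaces it with a CP1 ≅ S2. These two smoothings appear to correspond to the two modifications described above, one contracting and regrowing the fiber sphere and the other the 3-D base.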

The interpretation of twistor spaces is of course totally different from the interpretation of Calabi-Yaus in superstring models. The landscape problem of superstring models is avoided, and the multiverse of string models is replaced with generalized Feynman diagrams! Different twistor spaces correspond to different space-time surfaces, and one can interpret them in terms of generalized Feynman diagrams since the bundle projection gives the space-time picture. Mirror symmetry means that there are two different Calabi-Yaus giving the same physics. Also now the twistor space for a given space-time surface can have several imbeddings - perhaps mirror pairs define this kind of imbeddings.

To sum up, the construction of space-times as surfaces of H, lifted to that of (almost) complex sub-manifolds in CP3× F3 with induced twistor structure, shares the spirit of the vision that the induction procedure is the key element of classical and quantum TGD. It also gives a deep connection with the mathematical methods applied in super string models, and these methods should be of direct use in TGD.

For background see the new chapter Classical part of twistor story of "Towards M-matrix". See also the article Classical part of twistor story.

Classical TGD and imbedding space twistors

The understanding of the twistor structure of the imbedding space and its relationship to the construction of extremals of Kähler action is certainly among the greatest breakthroughs in the mathematical understanding of TGD in years. One can say that the good physics provided by TGD can now be combined with the marvelous mathematics produced by string theorists by replacing Calabi-Yau manifolds with twistor spaces assignable to space-time surfaces and representable as sub-manifolds of the twistor space CP3× F3 of the imbedding space M4× CP2 (strictly speaking, the M4 twistor space is the non-compact space SU(2,2)/SU(2,1)×U(1)). What is so wonderful is that the enormous collective knowhow involved with algebraic geometry becomes available in TGD, and now this mathematics makes sense physically. My sincere hope is that also colleagues would finally realize that TGD is the only way out of the present blind alley in fundamental physics.

Below is the introduction of the article Classical part of twistor story. I will later add some key pieces of the article.


Twistor Grassmannian formalism has made a breakthrough in N=4 supersymmetric gauge theories, and the Yangian symmetry suggests that much more than a mere technical breakthrough is in question. Twistors seem to be tailor made for TGD, but it seems that the generalization of the twistor structure to that for the 8-D imbedding space H=M4× CP2 is necessary. M4 (and S4 as its Euclidian counterpart) and CP2 are indeed unique in the sense that they are the only 4-D spaces allowing a twistor space with Kähler structure.

The Cartesian product of the twistor spaces CP3 and F3 defines the twistor space for the imbedding space H, and one can ask whether this generalized twistor structure could allow one to understand both quantum TGD and classical TGD defined by the extremals of Kähler action.

In the following I summarize the background and develop a proposal for how to construct extremals of Kähler action in terms of the generalized twistor structure. One ends up with a scenario in which space-time surfaces are lifted to twistor spaces by adding a CP1 fiber, so that the twistor spaces give an alternative representation for generalized Feynman diagrams having as lines space-time surfaces with Euclidian signature of the induced metric and having wormhole contacts as basic building bricks.

There is also a very close analogy with superstring models. Twistor spaces replace Calabi-Yau manifolds, the modification recipe for Calabi-Yau manifolds by removal of singularities can be applied to remove self-intersections of twistor spaces, and mirror symmetry emerges naturally. The overall important implication is that the methods of algebraic geometry used in super-string theories should apply in the TGD framework. The basic problem of TGD has indeed been the lack of existing mathematical methods for realizing quantitatively the view about space-time as a 4-surface.

The physical interpretation is totally different in TGD. The twistor space has space-time as base space rather than forming with it the Cartesian factors of a 10-D space-time. The Calabi-Yau landscape is replaced with the space of twistor spaces of space-time surfaces having an interpretation as generalized Feynman diagrams, and twistor spaces as sub-manifolds of CP3× F3 replace Witten's twistor strings. The space of twistor spaces is the lift of the "world of classical worlds" (WCW) obtained by adding the CP1 fiber to the space-time surfaces, so that the analog of the landscape has a beautiful geometrization.

For background see the new chapter Classical part of twistor story of "Towards M-matrix". See also the article Classical part of twistor story.

Monday, December 15, 2014

The classical part of the twistor story

Twistor Grassmannian formalism has made a breakthrough in N=4 supersymmetric gauge theories, and the Yangian symmetry suggests that much more than a mere technical breakthrough is in question. Twistors seem to be tailor made for TGD, but it seems that the generalization of the twistor structure to that for the 8-D imbedding space H=M4× CP2 is necessary. M4 (and S4 as its Euclidian counterpart) and CP2 are indeed unique in the sense that they are the only 4-D spaces allowing a twistor space with Kähler structure. These twistor structures define the twistor structure for the imbedding space H, and one can ask whether this generalized twistor structure could allow one to understand both quantum TGD and classical TGD defined by the extremals of Kähler action. In the following I summarize the background and develop a proposal for how to construct extremals of Kähler action in terms of the generalized twistor structure.

Summary about background

Consider first some background.

  1. The twistors originally introduced by Penrose (1967) have made a breakthrough during the last decade. First came the twistor string theory proposed by Edward Witten, and then the work of Nima Arkani-Hamed and collaborators led to a revolution in the understanding of the scattering amplitudes of gauge theories. Twistors not only provide an extremely effective calculational method, giving even hopes about explicit formulas for the scattering amplitudes of N=4 supersymmetric gauge theories, but also lead to the identification of a new symmetry: Yangian symmetry, which can be seen as a multilocal generalization of local symmetries.

    This approach, if suitably generalized, is tailor-made also for the needs of TGD. This is why I got seriously interested in whether and how the twistor approach in empty Minkowski space M4 could generalize to the case of H=M4× CP2. In particular, the twistor space associated with H should be just the Cartesian product of those associated with its Cartesian factors. Can one assign a twistor space to CP2?

  2. First a general result: any oriented 4-manifold X with a Riemannian metric allows a 6-dimensional twistor space Z as an almost complex space. If this structure is integrable, Z becomes a complex manifold, whose geometry describes the conformal geometry of X. In the general relativity framework the problem is that the field equations do not imply conformal geometry, and the twistor Grassmann approach certainly requires the complex manifold structure.

  3. One can also consider a stronger condition: what if the twistor space allows also a Kähler structure? The twistor space of empty Minkowski space M4 is the 3-D complex projective space P3 and indeed allows a Kähler structure. Rather remarkably, there are no other space-times with Minkowski signature allowing a twistor space with Kähler structure. Does this mean that the empty Minkowski space of special relativity is much more than a limit at which space-time is empty?

    This also means a problem for GRT. A twistor space with Kähler structure seems to be needed, but general relativity does not allow it. Besides the twistor problem, GRT also has the energy problem: matter makes space-time curved, and the conservation laws and even the definition of energy and momentum are lost since the underlying symmetries giving rise to the conservation laws through Noether's theorem are lost. GRT therefore has two bad mathematical problems, which might explain why the quantization of GRT fails. This would not be surprising since quantum theory is to a high extent representation theory for symmetries, and the symmetries are lost. Twistors would extend these symmetries to Yangian symmetry, but GRT does not allow them.

  4. What about the twistor structure of CP2? CP2 allows a complex structure (the Weyl tensor is self-dual), a Kähler structure plus the accompanying symplectic structure, and also a quaternion structure. One of the really big personal surprises of the last years has been that the CP2 twistor space indeed allows a Kähler structure, meaning the existence of an antisymmetric tensor representing the imaginary unit whose tensor square is the negative of the metric, in turn representing the real unit.

    The article by Nigel Hitchin, a famous mathematical physicist, describes a detailed argument identifying S4 and CP2 as the only compact Riemannian 4-manifolds allowing a Kählerian twistor space. Hitchin sent his discovery for publication in 1979. An amusing coincidence is that I discovered CP2 just that year, after having worked with S2 and found that it does not really allow one to understand standard model quantum numbers and gauge fields. It is difficult to avoid thinking that maybe synchrony is indeed a real phenomenon, as TGD inspired theory of consciousness predicts to be possible but which its creator cannot quite believe: brains at different sides of the globe simultaneously discover something closely related to what some conscious self at a higher level of the hierarchy - using us as instruments of thinking just as we use nerve cells - is intensely pondering.

    Although the 4-sphere S4 allows a twistor space with Kähler structure, it does not itself allow a Kähler structure and cannot serve as a candidate for S in H=M4× S. As a matter of fact, S4 can be seen as a Wick rotation of M4, and indeed its twistor space is P3.

    In the TGD framework a slightly different interpretation suggests itself. The Cartesian products of the intersections of future and past light-cones - causal diamonds (CDs) - with CP2 play a key role in zero energy ontology (ZEO). Sectors of the "world of classical worlds" (WCW) correspond to 4-surfaces inside CD× CP2 defining the region about which a conscious observer can gain conscious information: state function reductions - quantum measurements - take place at its light-like boundaries in accordance with holography. To be more precise, wave functions in the moduli space of CDs are involved, and state function reductions come as sequences taking place at a given fixed boundary. These sequences define the notion of self and give rise to the experience about the flow of time. When one replaces the Minkowski metric with a Euclidian metric, the light-like boundaries of CD are contracted to a point and one obtains the topology of the 4-sphere S4.

  5. The really big surprise was that there are no other compact 4-manifolds with Euclidian signature of metric allowing a twistor space with Kähler structure! The imbedding space H=M4× CP2 is not only physically unique, since it predicts the quantum number spectrum and classical gauge potentials consistent with the standard model, but also mathematically unique!

    After this I dared to predict that TGD will be the theory to follow GRT, since TGD generalizes the string model by bringing in 4-D space-time. The reasons are many: TGD is the only known solution to the two big problems of GRT, the energy problem and the twistor problem. TGD is consistent with standard model physics and leads to a revolution concerning the identification of space-time at the microscopic level: at the macroscopic level it leads to GRT with some anomalies for which there is empirical evidence. TGD avoids the landscape problem of M-theory and anthropic nonsense. I could continue the list, but I think that this is enough.

  6. The twistor space of CP2 is the 3-complex-dimensional flag manifold F3= SU(3)/U(1)× U(1), having an interpretation as the space for the choices of quantization axes for the color hypercharge and isospin. This choice is made in a quantum measurement of these quantum numbers and means localization to a single point in F3. The localization in F3 could be a higher level measurement leading to the choice of quantization axes for the measurement of color quantum numbers.

    An analogous interpretation could make sense for M4 twistors represented as points of P3. A twistor corresponds to a light-like line going through some point of M4, labelled by 4 position coordinates and 2 direction angles (the standard incidence relation is recalled after this list): what higher level quantum measurement could involve a choice of a light-like line going through a point of M4? Could the associated spatial direction specify the spin quantization axis? Could the associated time direction specify a preferred rest frame? Does the choice of the position mean localization in a measurement of position? Do momentum twistors relate to localization in momentum space? These remain fascinating open questions, and I hope that they will lead to considerable progress in the understanding of quantum TGD.

  7. It must be added that the twistor space of CP2 popped up much earlier in a rather unexpected context: I did not of course realize then that it was a twistor space. Topologist Barbara Shipman has proposed a model for honeybee dances leading to the emergence of F3. The model led her to propose that quarks and gluons might have something to do with biology. Because of her position and specialization the proposal was forgiven and forgotten by the community. TGD however suggests both dark matter hierarchies and p-adic hierarchies of physics. For dark hierarchies the masses of particles would be the standard ones, but the Compton scales would be scaled up by heff/h=n. Below the Compton scale one would have effectively massless gauge bosons: this could mean free quarks and massless gluons even at cell length scales. For the p-adic hierarchy mass scales would be scaled up or down from their standard values depending on the value of the p-adic prime.
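
For completeness, the correspondence between twistors and light-like lines mentioned in item 6 can be written in standard Penrose notation (mine, not the article's) as the incidence relation

$$ \omega^{A} = i\, x^{AA'}\pi_{A'}\,. $$

For a fixed null twistor Z=(ω,π) the real points x of M4 satisfying this relation form a light-like line: its direction is fixed by π (the 2 direction angles above) and a point on the line provides the 4 position coordinates.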

Why twistor spaces with Kähler structure?

I have not yet even tried to answer an obvious question: why could the fact that M4 and CP2 have twistor spaces with Kähler structure be so important that it would fix the entire physics? Let us consider a less general question: why would they be so important for classical TGD - the exact part of quantum TGD - defined by the extremals of Kähler action?

  1. Properly generalized conformal symmetries are crucial for the mathematical structure of TGD. Twistor spaces have an almost complex structure, and in these two special cases also complex, Kähler, and symplectic structures (note that the integrability of the almost complex structure to a complex structure requires the self-duality of the Weyl tensor of the 4-D manifold).

    The Cartesian product CP3× F3 of the two twistor spaces with Kähler structure is expected to be fundamental for TGD. The obvious wishful thought is that this space makes possible the construction of the extremals of Kähler action in terms of holomorphic surfaces defining 6-D twistor sub-spaces of CP3× F3, allowing one to circumvent the technical problems due to the signature of M4 encountered at the level of M4× CP2. Years ago I considered the possibility that complex 3-manifolds of CP3× CP3 could have the structure of an S2 fiber space but did not realize that CP2 allows a twistor space with Kähler structure, so that CP3× F3 is a more plausible choice.

  2. It is possible to construct so-called complex symplectic manifolds from Kähler manifolds using as complexified symplectic form ω1+Iω2 (a formula sketch follows this list). Could the twistor spaces in CP3× F3 be seen as complex symplectic sub-manifolds of real dimension 6?

    The safest option is to identify the imaginary unit I as the same imaginary unit as that associated with the complex coordinates of CP3 and F3. At the space-time level, however, complexified quaternions and octonions could allow an alternative formulation. I have indeed proposed that space-time surfaces are associative or co-associative, meaning that the tangent space or normal space at a given point belongs to a quaternionic subspace of the complexified octonions.

  3. Recall that every 4-D orientable Riemannian manifold allows a twistor space as a 6-D bundle with CP1 as fiber and possessing an almost complex structure. The metric and various gauge potentials are obtained by inducing the corresponding bundle structures. Hence the natural guess is that the twistor structure of the space-time surface defined by the induced metric is obtained by induction from that of CP3× F3 by restricting its twistor structure to a 6-D (in the real sense) surface of CP3× F3 with the structure of a twistor space having at least an almost complex structure with CP1 as a fiber. If so, then one can indeed identify the base space as a 4-D space-time surface in M4× CP2 using the bundle projections in the factors CP3 and F3.
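
To spell out the notation of item 2 - purely as my reading, with ω1 and ω2 tentatively identified as the Kähler forms inherited from the CP3 and F3 factors - the complexified symplectic form would be

$$ \omega_c = \omega_1 + I\,\omega_2\,, \qquad d\omega_c = 0\,, $$

and the question would be whether the restriction of ω_c to a 6-D surface of CP3× F3 is non-degenerate, so that the surface becomes a complex symplectic manifold in this sense.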

About the identification of 6-D twistor spaces as sub-manifolds of CP3× F3

How does one identify the 6-D sub-manifolds with the structure of a twistor space? Is this property all that is needed? Can one find a simple solution to this condition? In the following I present the intuitive considerations of a simple-minded physicist. A mathematician could probably make much more interesting comments.

Consider the conditions that must be satisfied using local trivializations of the twistor spaces. Before continuing let us introduce complex coordinates zi=xi+iyi resp. wi=ui+ivi for CP3 resp. F3.

  1. 6 conditions are required, and they must give rise by bundle projection to 4 conditions relating the coordinates in the Cartesian product of the base spaces of the two bundles involved, thus defining a 4-D surface in the Cartesian product of compactified M4 and CP2.

  2. One has the Cartesian product of two fiber spaces with fiber CP1, giving a fiber space with fiber CP11× CP12. For the 6-D surface the fiber must be CP1. It seems that one must identify the two spheres CP1i. Since holomorphy is essential, a holomorphic identification w1=f(z1) or z1=f(w1) is the first guess. A stronger condition is that the function f is meromorphic, having thus only a finite number of poles and zeros of finite order, so that a given point of CP1i is covered a finite number of times by CP1i+1. An even stronger and very natural condition is that the identification is a bijection, so that only Möbius transformations parametrized by SL(2,C) are possible.

  3. Could the Möbius transformation f: CP11 → CP12 depend parametrically on the coordinates z2,z3, so that one would have w1= f1(z1, z2,z3), where the complex parameters a,b,c,d (ad-bc=1) of the Möbius transformation depend on z2 and z3 holomorphically? (An explicit form is written out after this list.)

    What conditions can one pose on the dependence of the parameters a,b,c,d of the Möbius transformation on (z2,z3)? The spheres CP1 defined by the conditions w1= f(z1, z2,z3) and z1= g(w1, w2,w3) must be identical. Inverting the first condition one obtains z1= f-1(w1, z2,z3), and this must allow an expression as z1= g(w1,w2,w3). This is true if z2 and z3 can be expressed as holomorphic functions of (w2,w3): zi= fi(wk), i=2,3, k=2,3. A non-holomorphic correspondence cannot be excluded.

  4. Further conditions are obtained by demanding that the known extremals - at least the non-vacuum extremals - are allowed. The known extremals can be classified into CP2 type vacuum extremals with a 1-D light-like curve as M4 projection; vacuum extremals with a CP2 projection which is a Lagrangian sub-manifold and thus at most 2-dimensional; string-like objects with a string world sheet as M4 projection (minimal surface) and a 2-D complex sub-manifold of CP2 as CP2 projection; and massless extremals with a 2-D CP2 projection such that the CP2 coordinates depend in an arbitrary manner on a light-like coordinate defining the local propagation direction and on a space-like coordinate defining a local polarization direction. There are certainly also other extremals, such as magnetic flux tubes resulting as deformations of string-like objects. The number theoretic vision relying on classical number fields suggests a very general construction based on the notion of associativity of the tangent space or co-tangent space.

  5. The conditions coming from these extremals reduce to 4 conditions expressible in the holomorphic case in terms of the base space coordinates (z2,z3) and (w2,w3), and in the more general case in terms of the corresponding real coordinates. It seems that the holomorphic ansatz is not consistent with the existence of vacuum extremals, which however give a vanishing contribution to transition amplitudes since the WCW ("world of classical worlds") metric is completely degenerate for them.

    The mere condition that one has a CP1 fiber bundle structure does not force the field equations, since it leaves the dependence between the real coordinates of the base spaces free. On the other hand, a CP1 bundle structure alone need not of course guarantee the twistor space structure. One can ask whether non-vacuum extremals could correspond to holomorphic constraints between (z2,z3) and (w2,w3).

  6. A pessimist could of course argue that the field equations are additional conditions completely independent of the conditions realizing the bundle structure! One cannot exclude this possibility. A mathematician could easily answer the question of whether the proposed CP1 bundle structure is enough to produce a twistor space and whether the field equations could be the additional condition, realized using the holomorphic ansatz.
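
Writing out the ansatz of items 2 and 3 explicitly - this is just a transcription of what is said above, nothing more - the fiber identification would read

$$ w_1 = \frac{a(z_2,z_3)\, z_1 + b(z_2,z_3)}{c(z_2,z_3)\, z_1 + d(z_2,z_3)}\,, \qquad a d - b c = 1\,, $$

with a, b, c, d holomorphic in (z2,z3). Together with two further conditions zi= fi(w2,w3), i=2,3 (or their non-holomorphic generalizations), this gives the 6 real conditions of item 1: the Möbius condition identifies the two fibers, and the remaining conditions project to the 4 conditions defining a 4-D surface in the product of the base spaces.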

To sum up, the construction of space-times as surfaces of H, lifted to that of (almost) complex sub-manifolds in CP3× F3 with induced twistor structure, shares the spirit of the vision that the induction procedure is the key element of classical and quantum TGD.

For background see the new chapter Classical part of twistor story of "Towards M-matrix". See also the article Classical part of twistor story.

Thursday, December 11, 2014

How could the transition from dark life to biochemical life have taken place?

Ulla gave in the comment section of a previous posting a link to the article Hydrogen cyanide polymers, comets and the origin of life, helping me to discover a new big gap in my knowledge about biology. HCN is everywhere, and Miller demonstrated in his classic experiments that 11 out of 20 amino-acids emerge in the presence of HCN. It has later been found that well over 20 amino-acids were produced. In my own belief system amino-acids could have appeared first as something concrete and "real", and DNA as a symbolic representation of this something "real" - first at the dark matter level and then biochemically.

In the TGD Universe one can imagine - with inspiration coming partially from Pollack's experiments - that dark variants of DNA, RNA and amino-acids were realized first as dark proton sequences at flux tubes - dark nuclei. I call them just dark DNA, RNA and amino-acids although dark proton sequences are in question. The genetic machinery involving translation and transcription was realized as a dark variant, and dark DNA was a symbolic representation of dark amino-acids.

How did this dark life give rise to bio-chemical life as its image? This is the question! I can only imagine some further questions.

  1. Was this process like a master teaching a skill to a student? The master does it first, and then the student mimics. If so, the emergence of amino-acids, mRNA and DNA polymers would not have been a purely chemical process. Dark variants of these polymers would have served as templates for the formation of ordinary basic biopolymers, for transcription, and for translation. These templates might have been necessary in order to generate long RNA and DNA sequences: mere chemistry might not have been able to achieve this. Without dark polymers one obtains only bio-monomers; with dark polymers as templates one obtains also bio-polymers. Dark polymers would have been the plan, biopolymers the stuff used to build.

  2. Are dark DNA, RNA, amino-acids, etc. indeed still there, forming binary structures with their biochemical variants as I have indeed proposed?

  3. Are dark translation and transcription processes still an essential part of ordinary translation and transcription? The master-student metaphor suggests that these dark processes actually induce them, just like the replication of the magnetic body could induce the replication of DNA or of the cell. Visible chemistry would only make visible the deeper "dark chemistry". Apologies to all biochemists who have done heroic work in revealing chemical reaction paths!;-)

How could the process assigning biochemical life to dark life have proceeded? The minimalistic guess is that the only thing that happened was that dark life made itself gradually visible! As a consciousness theoretician I have a temptation to see religious statements as hidden metaphors; at least they provide an excellent way to irritate skeptics: dark matter - "the God" - made us - the biological life - in its own image;-).

  1. First, dark amino-acid sequences were accompanied by ordinary amino-acid sequences so that the dark translation process now had a visible outcome. At this step the presence of HCN was crucial and made the step unavoidable. Also the presence of the template was necessary.

  2. Dark mRNA got a visible counterpart in the same manner: the presence of the template made long RNA polymers possible. Translation remained basically a dark process but was made visible by mRNA.

  3. Dark DNA got a visible companion: again the presence of the template was and still is crucial.

What about the generation of DNA and RNA? It is known that in a reducing atmosphere DNA and RNA nucleobases are obtained in an environment believed to mimic the prebiotic situation: the presence of HCN and ammonia is necessary (see this). A reducing atmosphere does not oxidize, in other words it does not contain oxygen or other oxidizing agents, and it can also contain actively reducing agents such as hydrogen and carbon monoxide. There are however some problems.

  1. There is evidence that the early Earth atmosphere contained fewer reducing molecules than was thought in Miller's time. If life emerged in underground water reservoirs, as TGD strongly suggests, the usual atmosphere was absent and there are good hopes for a reducing environment.

  2. The experiments using reducing gases besides those used in Miller's experiments produce both left- and right-handed polymers, so that chiral selection is missing. This is not a surprise since weak interactions generate extremely small parity breaking for visible matter. If dark proton strings or even dark nuclei are involved, the Compton length of weak gauge bosons can be of the order of the atomic length scale or even longer, and weak interactions would be as strong as electromagnetic interactions. Therefore chiral selection becomes possible. The simplest option is that chirality selection occurred already for the helical magnetic flux tubes and induced that of the biopolymers.

To sum up, it seems that the pieces fit nicely!

For details and references see the new chapter Criticality and dark matter of "Hyper-finite factors and hierarchy of Planck constants" or the article Criticality and dark matter.