Sunday, October 21, 2012

Do blackholes and blackhole evaporation have TGD counterparts?

The blackhole information paradox is often believed to have a solution in terms of holography, stating in the case of blackholes that the blackhole horizon can serve as a holographic screen representing the information about the surrounding space as a hologram. The situation is however far from settled. The newest challenge is the so-called firewall paradox proposed by Polchinski et al. Lubos Motl has written several postings about the firewall paradox and they inspired me to look at the situation in the TGD framework.

These paradoxes strengthen the overall impression that blackhole physics indeed represents the limit at which GRT fails, and the outcome is a recycling of old arguments leading nowhere. Something very important is lacking. On the other hand, some authors like Susskind claim that the physics of this century more or less reduces to that of blackholes. I however see this endless tinkering with blackholes as a decline of physics. If superstring theory had been a success as a physical theory, we would have got rid of blackholes.

If TGD is to replace GRT, it must also provide new insights into blackholes, blackhole evaporation, the information paradox and the firewall paradox. This inspired me to look for what blackholes and blackhole evaporation could mean in the TGD framework and whether TGD can avoid the paradoxes. This kind of exercise also allows one to sharpen the TGD based view about space-time and quantum theory and to build connections to the mainstream views.

I do not bother to type the text here as html but give a link to the first draft of a little article "Do blackholes and blackhole evaporation have TGD counterparts?".

Sunday, October 14, 2012

Could Higgs mechanism provide a description of p-adic particle massivation at QFT limit?

The most recent TGD based explanation of the observed Higgs like state with 125 GeV mass is as a "half-Higgs" identified as an "Euclidian pion". The Euclidian pion would give the dominating contribution to the masses of gauge bosons but the contribution to fermion masses would be negligible and come from p-adic thermodynamics. This scenario saves one from the hierarchy problem resulting from fermionic loops, which give a large contribution from heavy fermion masses and destabilize the Higgs mechanism. It is also known that for the observed mass of the Higgs like state the Higgs vacuum is unstable.

What if the Higgs like state decays to fermion pairs with the rate predicted by the standard form of the Higgs mechanism? This is an unpleasant question from the TGD point of view. Could it mean that TGD is deadly wrong? Unpleasant questions are often the most useful ones, so that it is perhaps time to boldly articulate also this question.

Is the recent TGD based view about the Higgs like state as an "Euclidian pion" serving as the source of gauge boson masses and p-adic thermodynamics as the source of fermion masses exactly correct? p-Adic thermodynamics is based on very general assumptions such as super-conformal invariance, the existence of string like objects of length of order CP2 length predicted by the modified Dirac equation, and the powerful number theoretic constraints coming from p-adic thermodynamics and the p-adic length scale hypothesis. Could the Higgs mechanism be only a QFT approximation to a microscopic description of massivation based on p-adic thermodynamics - as suggested more than fifteen years ago - so that the two approaches would not actually be competitors?

Microscopic description of massivation

Consider first the microscopic description in more detail (see this).

  1. In the recent TGD description elementary particles correspond to loops carrying Kähler magnetic monopole flux and having as their ends two wormhole contacts with Euclidian signature of the induced metric, at which the magnetic flux flows between opposite light-like 3-D wormhole throats at different space-time sheets. In the case of fermions the fermion number resides at a wormhole throat at either end of the loop. In the case of bosons fermion and antifermion numbers reside at the opposite throats of either wormhole contact. One can imagine variants of this picture since two wormhole contacts are involved: fermion number could be delocalized to both wormhole contacts and both wormhole throats, and bosons could have fermion and antifermion at the throats of different wormhole contacts. p-Adic mass calculations do not allow one to distinguish between these options. The solutions of the modified Dirac equation assign a closed string to the flux loop, and this leads to a rich spectrum of topological quantum numbers and implies that elementary particles are also knots: unfortunately the predicted effects are extremely small.

  2. What does one actually mean by the expectation value of mass squared in p-adic thermodynamics? (As a matter of fact, ZEO suggests that p-adic thermodynamics is replaced with its "complex square root": this has some non-trivial number theoretical implications in the case of fermions (see this).) There are two options.

    1. Genuine mass squared is in question. The simplest possibility is that both throats of the wormhole contact carry light-like momentum. If the momenta are not parallel, this can give rise to a stringy mass squared spectrum with string tension determined by the CP2 length. The role of the string is to connect the opposite throats of the wormhole contact. In the case of bosons the ends of the short string connecting the throats would carry fermion and antifermion. In the case of fermions the second throat would carry purely bosonic excitations generated by the symplectic algebra of δ M4+/-× CP2. One could also assign mass squared to the string, but holography suggests that this mass squared is identifiable as the total mass squared assignable to its ends.

    2. Longitudinal mass squared is in question in the case of the wormhole throat. The option favored by ZEO and number theoretical arguments is that p-adic thermodynamics gives only the longitudinal mass squared, that is, the square of the M2-projection of the light-like fermion momentum, where M2⊂ M4 characterizes a given CD.

      What happens in the case of the wormhole contact? Could the transversal momenta of the throats cancel and give rise to a purely longitudinal contribution equal to the entire momentum, so that the longitudinal option would be equivalent to the first one?

      Or could it be that the second wormhole throat does not contribute to the mass squared? The physical mass squared would be the average of the longitudinal mass squared over various choices of M2⊂ M4, so that Lorentz invariance would be achieved. The propagators associated with massless twistor lines would be defined by the M2 projections of fermionic or bosonic momenta and would therefore be finite. Also the gauge conditions would involve only the longitudinal projection. I have not been able to develop any killer argument against this option.

  3. According to the most recent view (see this), the dominating contribution to gauge boson masses would be due to their coupling to the Higgs like Euclidian pion developing a vacuum expectation associated with a coherent state. But is this a fundamental description or only an effective description obtained at the 8-D QFT limit? An alternative view discussed a year or two ago is that gauge boson masses correspond to a "stringy" contribution from the long portion of the closed flux tube pair connecting the two wormhole contacts, with a distance of the order of the weak length scale associated with the gauge boson - the analog of a Minkowskian meson. The contribution from the short part of the closed flux tube - the wormhole contact defining the Euclidian pion - would dominate fermionic masses. In this case the Higgs vacuum expectation could provide only a convenient effective description at the QFT limit.

  4. This picture leads naturally to generalized Feynman diagrams, strongly suggesting a twistor Grassmannian description since even the virtual wormhole throats are light-like. This description in turn would lead to a QFT description when wormhole contacts are approximated by points of M4× CP2 or even M4.
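
The averaging over choices of M2⊂ M4 mentioned in point 2 can be made concrete: for a light-like momentum p=(E, E n) and an M2 spanned by the time axis and a spatial unit vector m, the longitudinal mass squared is mL2 = E2 - (E n·m)2, and its average over uniformly distributed directions m is (2/3)E2, independently of the direction of n. A quick Monte Carlo check (illustrative only; the identification of the physical mass squared with this average is the speculative step in the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unit_vectors(n):
    """Uniformly distributed unit vectors on the 2-sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

E = 1.0
n_hat = np.array([0.0, 0.0, 1.0])      # direction of the light-like momentum (arbitrary)
p_spatial = E * n_hat                  # light-like: p^0 = |p| = E

m_hats = random_unit_vectors(200_000)  # spatial directions defining the M2 planes
m_L2 = E**2 - (m_hats @ p_spatial)**2  # longitudinal mass squared for each choice of M2

print(m_L2.mean())                     # close to (2/3) E^2 = 0.666...
```

Since <(n·m)2> = 1/3 on the sphere, the result does not depend on which light-like direction one starts from, which is the rotation-invariance the averaging argument relies on.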

p-Adic mass calculations give universal results but the drawback clearly is that they cannot fix the details of the model of elementary particles.
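
The universality stems from the structure of the p-adic thermal expectation itself. At p-adic temperature Tp=1 it has the schematic form (my shorthand; g(n) denotes the degeneracy of states with conformal weight n):

```latex
\langle M^2 \rangle_p \;=\;
\frac{\sum_n n\, g(n)\, p^{\,n}}{\sum_n g(n)\, p^{\,n}}
\quad \text{(p-adically)},
\qquad
\langle M^2 \rangle_{\mathbb{R}} \;=\; I\!\left(\langle M^2 \rangle_p\right),
\quad
I:\ \sum_n x_n p^{\,n} \;\mapsto\; \sum_n x_n p^{-n}.
```

Since p^n is p-adically small for large n, the lowest conformal weights dominate once the canonical identification I maps the result to real numbers, and corrections come in powers of 1/p - which is why the predictions are nearly insensitive to model details for large primes such as M127 = 2^127-1 assigned to the electron.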

Does Higgs mechanism emerge at the QFT limit as effective description?

The above description is definitely not a QFT description. Does a QFT description exist at all - say as a limit of the twistorial description? If the QFT limit exists in some sense, what can one conclude about it? In the possibly existing QFT description one must idealize flux loops as point-like particles. Even if p-adic thermodynamics predicts fermion masses by assigning them to short Euclidian strings and gauge boson masses by assigning them to long Minkowskian strings in the closed flux tube, the only manner to describe this at the QFT limit might be based on the use of a vacuum expectation of a Higgs like field and a coupling to the Higgs field. One could even argue that p-adic thermodynamics is equivalent to the Higgs mechanism at the QFT limit.

Even if both fermionic and bosonic particle massivation were due to p-adic thermodynamics at the fundamental level, one is forced to describe it at the QFT limit by taking the mass as given thermal mass. This could be achieved by using a coupling to the vacuum expectation of the Higgs like state (Euclidian pion) and by choosing the dimensionless coupling so that the correct value of mass results.

If the fermions couple also to the quantum part of the Higgs field, as bosons would certainly do, one obtains the standard model prediction but encounters the hierarchy problem and the vacuum stability problem. M89 hadron physics, for which the bump at mass about 130 GeV suggested by the results of the Fermi laboratory serves as evidence, might solve these problems. A more plausible option is that the badly broken supersymmetry generated by the second quantized modes of the induced spinor field labelled by conformal weight - essentially conformal supersymmetry - guarantees the cancellation of loop contributions at high energies. Note that the modes associated with the right-handed neutrino are delocalized to the entire 4-surface, and do not seem plausible candidates for the needed SUSY (see this). One would end up with a description in which Higgs effectively gives rise to the masses of fermions. The outcome would however be an artifact of the QFT approximation.

One can consider QFT limits in H=M4× CP2 and M4 respectively.

  1. The 8-dimensional QFT limit treats fermions using H-spinors with quarks and leptons having different H-chiralities. In this case it is impossible to describe mass as in M4, since a scalar mass would give rise to a coupling between quarks and leptons and break the separate conservation of baryon and lepton numbers. One must introduce, instead of a scalar mass, a vector in CP2 tangent space analogous to a polarization vector.

    1. At the quantum level the Higgs vacuum expectation value defines a vector in CP2 tangent space expressible in terms of complexified gamma matrices and having dimension 1/length, so that it is natural for the phenomenological description of mass generated by p-adic thermodynamics. In this description the counterpart of the Higgs vacuum expectation would be the quantity Hkγk=HAγA, where HA is a vector in CP2 tangent space assignable to the braid end at the partonic 2-surface (the end of a wormhole throat orbit at the boundary of CD). HA has dimensions of 1/length just like the Higgs like field. The length squared of this vector would define the mass squared.

    2. Can one identify HA in terms of the induced geometry? The CP2 part of the second fundamental form, which vanishes for minimal surfaces (analogous to massless particles), is such a vector field. Only the value of HA at the braid end is needed, so that HA would be effectively constant. Quantum classical correspondence suggests that HA corresponds to the vacuum expectation of the Higgs field.

  2. The M4 QFT limit would define an even stronger approximation, which must however be consistent with the 8-D QFT limit. Now one must use 4-D spinors and describe the coupling in terms of a scalar mass coupling different M4 chiralities. There are two options for the coupling: gΨbarΨΦ and (g/m0)Ψbar γμΨ DμΦ. The latter option gives automatically an effective coupling gm/m0, and Higgs couplings are therefore proportional to fermion masses. Fermion masses can be reproduced by the standard form of the Higgs mechanism and also now the illusion that Higgs gives rise to fermion masses is created.
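
As a reminder of why the first coupling makes Higgs couplings proportional to fermion masses (standard Yukawa bookkeeping, independent of whether the mass is generated by the vacuum expectation or merely parametrized by it): writing Φ = v + h one has

```latex
g\,\bar{\Psi}\Psi\,\Phi
\;=\; g v\,\bar{\Psi}\Psi \;+\; g\,\bar{\Psi}\Psi\, h
\;=\; m\,\bar{\Psi}\Psi \;+\; \frac{m}{v}\,\bar{\Psi}\Psi\, h,
\qquad m \equiv g v ,
```

so the coupling of the physical Higgs h to a fermion of mass m is m/v whenever the dimensionless coupling g is chosen to reproduce m - which is exactly the tuning referred to above.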

To sum up, it is possible that p-adic thermodynamics, giving a dominating contribution to fermion and perhaps even boson masses from short/long flux tubes, could have the Higgs mechanism as its unique description at the QFT limit, so that the hopes of killing TGD or the Higgs mechanism at one blow of an experimentalist might be too optimistic. This could give a lesson in the art of ontology: a wrong ontology can demand the existence of something that does not exist in a more advanced ontology.

For a summary of the evolution of TGD inspired ideas about particle massivation see the chapter Higgs or something else? of "p-Adic length scale hierarchy and hierarchy of Planck constants". See also the short article Is it really Higgs?.

Wednesday, October 10, 2012

Could hyperbolic 3-manifolds and hyperbolic lattices be relevant in zero energy ontology?

In zero energy ontology (ZEO) lattices in the 3-D hyperbolic manifold defined by H3 (t2-x2-y2-z2=a2, and known as hyperbolic space to distinguish it from other hyperbolic manifolds) emerge naturally. The interpretation of H3 as a cosmic time=constant slice of the space-time of sub-critical Robertson-Walker cosmology (giving the future light-cone of M4 at the limit of vanishing mass density) is relevant now. ZEO leads to an argument stating that once the position of the "lower" tip of the causal diamond (CD) is fixed and defined as the origin, the position of the "upper" tip located at H3 is quantized so that it corresponds to a point of a lattice H3/G, where G is a discrete subgroup of SL(2,C) (a so-called Kleinian group). There is evidence for the quantization of cosmic redshifts: a possible interpretation is in terms of hyperbolic lattice structures assignable to dark matter and energy. Quantum coherence in cosmological scales could be in question. This inspires several questions. How does crystallography in H3 relate to the standard crystallography in Euclidian 3-space E3? Are there general results about tessellations of H3? What about hyperbolic counterparts of quasicrystals? In this article standard facts are summarized and some of these questions are briefly discussed.
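
As a concrete check of the statement that H3 is preserved by the Lorentz group (of which the Kleinian groups are discrete subgroups), the following sketch boosts a point of the unit hyperboloid and verifies that it stays on H3. The rapidity values are arbitrary illustrative choices.

```python
import numpy as np

# Minkowski metric with signature (+,-,-,-)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def minkowski_norm2(p):
    """t^2 - x^2 - y^2 - z^2 for a 4-vector p = (t, x, y, z)."""
    return p @ eta @ p

# A point on the hyperboloid H3: t^2 - x^2 - y^2 - z^2 = a^2 (take a = 1)
chi = 0.7
p = np.array([np.cosh(chi), np.sinh(chi), 0.0, 0.0])

# A Lorentz boost along y with rapidity xi -- in the image of SL(2,C) in SO(3,1)
xi = 1.3
L = np.array([[np.cosh(xi), 0.0, np.sinh(xi), 0.0],
              [0.0,         1.0, 0.0,         0.0],
              [np.sinh(xi), 0.0, np.cosh(xi), 0.0],
              [0.0,         0.0, 0.0,         1.0]])

q = L @ p
print(minkowski_norm2(p), minkowski_norm2(q))  # both equal 1: q stays on H3
```

A lattice H3/G is then obtained by picking a discrete subgroup G of such transformations and identifying points on the same G-orbit.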

For details see the article Could hyperbolic 3-manifolds and hyperbolic lattices be relevant in zero energy ontology? or the chapter TGD and Cosmology of "Physics in Many-Sheeted Space-time".

Sunday, October 07, 2012

How could the effective hierarchy of Planck constants reveal itself in condensed matter physics?

Phil Anderson - one of the gurus of condensed matter physics - has stated that there exists no theory of condensed matter: experiments produce repeatedly surprises and theoreticians do their best to explain them in the framework of existing quantum theory.

This suggests that condensed matter physics might allow room even for new physics. Indeed, the model for the fractional quantum Hall effect (FQHE) strengthened the feeling that the many-sheeted physics of TGD could play a key role in condensed matter physics, often thought to be a closed chapter in physics. One implication would be that space-time regions with Euclidian signature of the induced metric would represent the space-time sheet assignable to a condensed matter object as a whole, as an analog of a line of a generalized Feynman diagram. Also the hierarchy of effective Planck constants hbareff =n×hbar appears in the model of FQHE.

The recent TGD inspired discussion of the possibility of a quantum description of psychokinesis boils down to a model for intentional action based on the notion of a magnetic flux tube carrying dark matter and dark photons and inducing macroscopic quantum superpositions of magnetic bubbles of a ferromagnet with opposite magnetizations. As a by-product the model leads to the proposal that the conduction electrons responsible for ferromagnetism are actually dark (in the sense of having a large value of effective Planck constant) and assignable to a multi-sheeted singular covering of the space-time sheet resulting from a multifurcation of the preferred extremal of Kähler action made possible by its huge vacuum degeneracy.

What might be the signatures of hbareff=n×hbar states in condensed matter physics, and could one interpret some exotic phenomena of condensed matter physics in terms of these states for electrons?

  1. The basic signature of the many-electron states associated with the multi-sheeted covering is a sharp peak in the density of states due to the presence of new degrees of freedom. In ferromagnets this kind of sharp peak is indeed observed at the Fermi energy.

  2. In the theory of super-conductivity Cooper pairs are identified as bosons. In the TGD framework all bosons - also photons - emerge as wormhole contacts with throats carrying fermion and antifermion. I have always felt uneasy with the assumption that two-fermion states obey exact Bose-Einstein statistics at the level of oscillator operators: they are after all two-fermion states. The sheets of the multi-sheeted covering resulting in a multifurcation could however carry both photons identified as fermion-antifermion pairs and Cooper pairs, and this could naturally give rise to Bose-Einstein statistics in a strong sense and also be involved with Bose-Einstein condensates. The maximum number of photons/Cooper pairs in the Bose-Einstein condensate would be given by the number of sheets. Note that in zero energy ontology also the counterparts of coherent states of Cooper pairs are possible: in positive energy ontology they have ill-defined fermion number, and this too has made me feel uneasy.

  3. Majorana fermions have become one of the hot topics of condensed matter physics recently.

    1. Majorana particles are actually quasiparticles which can be said to be half-electrons and half-holes. In the language of anyons one would have charge fractionization e→ e/2. The oscillator operator a†(E) creating the quasiparticle with energy E, defined as the difference of the real energy and the Fermi energy, equals the annihilation operator a(-E): a†(E)=a(-E). If the energy of the excitation is E=0 one obtains a†(0)=a(0).

      Since oscillator operators generate a Clifford algebra just like gamma matrices do, one can argue that one has Majorana fermions at the level of Fock space rather than at the level of spinors. Note that one cannot define the Fock vacuum as a state annihilated by a(0). Since the creation of a particle generates a hole equal to the particle for E=0, Majorana particles always come in pairs. A fusion of two Majorana particles produces an ordinary fermion.

    2. Purely mathematically a Majorana fermion as a quasiparticle would be highly analogous to the covariantly constant right-handed neutrino spinor in TGD with vanishing four-momentum. Note that the right-handed neutrino allows 4-dimensional modes as solutions of the modified Dirac equation whereas the other spinor modes are localized at partonic 2-surfaces and string world sheets. The recent view is however that the covariantly constant right-handed neutrino cannot give rise to the TGD counterpart of standard space-time SUSY.

    3. In the TGD framework the description that suggests itself is in terms of a bifurcation of the space-time sheet. Charge -e/2 states would be electrons delocalized to two sheets. Charge fractionization would occur in the sense that both sheets would carry charge -e/2. The bifurcation could also carry two electrons, giving charge -e at both sheets. A two-sheeted analog of a Cooper pair would be in question. An ordinary Cooper pair would in turn be localized at a single sheet of a multifurcation. The two-sheeted analog of a Cooper pair could be regarded as a pair of Majorana particles if the measured charge of the electron corresponds to its charge at a single sheet of the bifurcation (this assumption, made also in the case of FQHE, is crucial!). Whether this is the case remains unclear to me.

    4. The fractional Josephson effect, in which the current carriers of the Josephson current become electrons or quasiparticles with the quantum numbers of the electron, has been suggested to serve as a signature of Majorana quasiparticles. An explanation consistent with the above assumption is as a two-sheeted analog of a Cooper pair associated with bifurcated space-time sheets.

      If the measurement of the Josephson current measures the current associated with a single branch of the bifurcation, the charge unit of the Josephson current is indeed halved from -2e to -e. These 2-sheeted Cooper pairs behave like dark matter with respect to ordinary matter, so that dissipation free current flow would become possible.

      Note that an ordinary Cooper pair Bose-Einstein condensate would correspond to an N-furcation with N identified as the number of Cooper pairs in the condensate, if the above speculation is correct. The fractional Josephson effect generated in an external field would correspond to the formation of mini Bose-Einstein condensates in this framework, and also smaller fractional charges are expected. In this case the interpretation as a Majorana fermion does not seem to make sense.
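
The Fock-space statement in point 3 above - Majorana operators arising from ordinary fermionic oscillator operators, with two of them fusing back into an ordinary fermion - can be checked with 2×2 matrices for a single fermionic mode. This is the standard textbook construction, not anything specific to the TGD interpretation:

```python
import numpy as np

# Single fermionic mode in the basis {|0>, |1>}
a = np.array([[0.0, 1.0],
              [0.0, 0.0]])  # annihilation operator: a|1> = |0>
ad = a.conj().T             # creation operator

# Majorana combinations: Hermitian "halves" of the fermion
g1 = a + ad                 # gamma_1 = a + a^dagger
g2 = 1j * (ad - a)          # gamma_2 = i(a^dagger - a)

I2 = np.eye(2)
assert np.allclose(g1, g1.conj().T) and np.allclose(g2, g2.conj().T)  # Hermitian
assert np.allclose(g1 @ g1, I2) and np.allclose(g2 @ g2, I2)          # gamma^2 = 1
assert np.allclose(g1 @ g2 + g2 @ g1, 0 * I2)                         # they anticommute

# Fusing the two Majorana operators recovers the ordinary fermion
assert np.allclose((g1 + 1j * g2) / 2, a)
print("one fermionic mode = two Majorana operators: all checks pass")
```

The gamma_i indeed generate a Clifford algebra, and neither of them annihilates any state, which is the Fock-space version of "no vacuum annihilated by a(0)".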

For more details see the chapter Does TGD Predict Spectrum of Planck Constants? of "Towards M-Matrix".

Thursday, October 04, 2012

Two attempts to understand PK

In the quantum theory context one can try to explain retro PK (psychokinesis) and perhaps even PK using quantum measurement theory. It seems however that quantum theory alone is not enough: a feedback loop to the past allowing the observer to affect the quantum system generating the random numbers is needed. In the TGD framework intentional action based on a negative energy signal to the geometric past would be a rough manner to state what this feedback to the geometric past is.
For instance, intentional generation of motor action would involve a negative energy signal - say in EEG frequency range - from the "personal" magnetic body to the brain of geometric past, where it would initiate neural activity leading to motor action.

My attempt to concretize this picture in the TGD framework - inspired by an unpublished article by Brian Millar - led to the following two options, restricting the consideration to PK in which the operator tries to increase or decrease the number of 1:s or 0:s in a random sequence of bits generated by transitions of a microscopic quantum system to two alternative final states labelled by a bit.

  1. For the first option the observer (operator or experimenter) performs state function reduction for the quantum superpositions of two states resulting from a quantal microscopic process and entangled with bits in a data file: one can say that before the reading of the file it contains qubits. This requires further entanglement with the observer's quantum states.

    Standard quantum measurement theory alone does not suggest any PK effect, since the entanglement with the observer does not affect the probabilities of the outcomes of the microscopic quantum process. To achieve a non-trivial effect, the measurement interaction generating entanglement with the observer must be able to modify the probabilities of the outcomes. This interaction could be called a feedback loop in time. This picture seems to me more or less equivalent to that of Brian Millar.

  2. The "too-good-to-be-true" option would be that the observer's intent transferred backwards in geometric time (a feedback loop, using the terminology of Brian Millar) can affect directly also the bits in the data file so that they become superpositions of the originally quantum measured (read) bits, and then the quantum measurement is performed as above. In this case the PK effect could be observed directly by comparing the file subject to PK with its unaffected copy. The size of the effect would be characterized by the induced mixing. Of course, this kind of idea would have looked completely crazy a few years ago and perhaps does even now.

    The fact however is that quantum entanglement and quantum superposition have now been demonstrated for increasingly large systems. Of course, the observer-bit interaction might be extremely weak due to the large energy needed to change the direction of a bit classically.

I am a dilettante as a parapsychologist, and in order to compare the two options in more detail I have used as background the article "Correlations of Random Binary Sequences with Pre-Stated Operator Intention: A Review of a 12-Year Program", which tells about experiments of Jahn and others in which the operator tried to affect the RNG output by intentional action: a single cycle consists of an attempt to increase the number of 1s, an attempt to decrease it, and no intention in either direction. Retro PK experiments have also been done: see the articles PK Effect on Pre-Recorded Targets and Addition effect for PK on pre-recorded targets of Schmidt. In these experiments the background philosophy seems to conform with the first option.

There is also a report about an experiment in which a chicken was imprinted on a robot, preprogrammed months earlier to wander randomly around the room: the path of the robot was claimed to change so that it stayed near the chicken. Also Libet's experiments support the propagation of intent backwards in geometric time in a time scale of about .1 seconds.

Quantum measurement theory option

PK selects the outcome of a quantal microscopic process, such as radioactive decay, producing a superposition of two states mapped to a superposition of bits by entanglement - a qubit in fact - and later to a bit by state function reduction. The data file can be said to contain quantum superpositions of bits corresponding to the two outcomes of the quantum process; the observer (experimenter or operator in the PK experiment) is entangled with these state pairs and observes/state function reduces the state, with a click telling whether the outcome was the desired one.

This stage should bring in the effect of intention and change the probabilities of the outcomes. Standard quantum measurement theory does not allow this: the experimenter acts as a passive selector of the outcome. Therefore some kind of feedback interaction propagating to the geometric past and affecting the probabilities of the outcomes in the quantum superposition is needed.

If the data are read before the experiment, state function reduction takes place and qubits become bits. One can also copy the file to a second one and check that the two files are identical. In this case standard measurement theory tells that the effect of the observer cannot change the situation and a null effect is obtained. This can of course be tested experimentally. Maybe this has been done.

If this option explains the experiment with the chicken and the robot, the reading of the random number sequence determining the path of the robot before the experiment implies that the imprinting of the chicken on the robot would have no effect on the robot. The interpretation of Libet's findings about neural activity beginning before the conscious decision could be that a quantum superposition of neural states corresponding to "I do it" and "I don't do it" is generated a fraction of a second before the conscious decision which selects either of these options. Does the intentional action propagating to the geometric past generate this superposition? The conscious decision "I shall do this or that" would be followed by the choice between "this" and "that".

"Too-good-to-be-true" option

Suppose that the data file is copied and the second copy is read by a human observer to guarantee state function reduction (according to the standard quantum measurement theory: in the TGD framework state function reduction does not require a human observer).

In this case the feedback loop of the observer (operator or experimenter), realized as negative energy signals to the geometric past, must be able to modify the states of the binary digits directly and induce a superposition of binary digits, presumably containing a very small contribution of the opposite binary digit for a given original digit. After this, state function reduction could take place just as in the experiment above. Now the test would be direct: compare the data file with its copy not subject to the action of the observer. Statistical procedures would not be necessary and a direct demonstration of PK would become possible.
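
The expected signature for this option is simple to quantify: if the backwards-in-time interaction flipped each bit independently with a small probability eps, the Hamming distance between the affected file and its untouched copy would be about N×eps, with no statistical inference needed. A toy simulation (the values of N and eps are arbitrary illustrative assumptions):

```python
import random

def perturb(bits, eps, rng):
    """Flip each bit independently with probability eps (toy model of induced mixing)."""
    return [b ^ (rng.random() < eps) for b in bits]

rng = random.Random(42)
N, eps = 100_000, 0.01
original = [rng.randint(0, 1) for _ in range(N)]
copy = list(original)                  # the untouched reference copy
affected = perturb(original, eps, rng)

hamming = sum(b1 != b2 for b1, b2 in zip(copy, affected))
print(hamming, "differing bits; expected about", int(N * eps))
```

Comparing the two files bit by bit thus measures eps directly, in contrast to the first option where only a shift in the mean of the outcomes could be detected statistically.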

In the chicken and robot experiment the chicken could affect the path of the robot even if the file or its copy had been read by a human observer. In Libet's experiment the decision "I do it" would first generate a quantum superposition of the options "I do it" and "I don't do it" ("Should I do it?") and then select "I do it".

I do not know whether this option even deserves to be killed. Certainly this should be very easy.

How could the intention to increase/decrease the number of 1:s or 0:s be realized?

Can one imagine in the TGD framework any mechanism allowing one to increase the number of 1:s or 0:s? The basic vision is the following.

  1. One can consider magnetic fields or their wormhole counterparts necessarily accompanying elementary particles. Ordinary magnetic fields would correspond to single sheeted magnetic flux tubes carrying conserved magnetic flux. Wormhole magnetic fields consist of a pair of flux tubes carrying opposite monopole fluxes at different space-time sheets and have wormhole contacts at their ends transferring the monopole flux between the sheets. Flux tubes or pairs of wormhole magnetic flux tubes play a key role in TGD inspired quantum biology and are also proposed to be a basic space-time correlate of intentional action. In the recent case flux tubes would connect the observer (operator or experimenter) to the device storing the bits. For wormhole flux tubes the flux tubes at the two sheets could have M4 projections which do not overlap at all, so that bits could interact with either flux tube but not with both simultaneously.

  2. If bits are realized as magnetized regions, the magnetic interaction between the bits and the magnetic field carried by the flux tube (or either of the opposite fluxes associated with the wormhole magnetic field) is a natural candidate for the interaction defining the quantization axis of spin, and also for the interaction inducing a small mixing of the bits by Larmor precession caused by a small perturbation of the flux tube magnetic field. This perturbation could be the TGD counterpart of an Alfvén wave inducing a geometrical oscillation of the flux tube and therefore of the direction of the magnetic field. State function reduction after the perturbation has ceased would produce either value of the bit. The strength and duration of the perturbation determine how large the probability of bit reversal is.

If one assumes that a magnetic interaction is in question, the most natural choice for the representation of a bit is as a magnetized region of a data tape with the direction of magnetization determining the value of the bit. This restriction can be criticized but will be made in the following.
  1. The energy needed to turn the bit must be above the thermal energy, but the minimization of energy costs requires that this energy is not much above it: say of the order of 5×10^-2 eV, which by the way is also the order of magnitude for the energy gained by an elementary charge in the electric field of the cell membrane. This energy is considerably smaller than the metabolic energy quantum with nominal value .5 eV. Therefore the metabolic energy of the observer could provide the energy needed to turn the bit. Note that the p-adic length scale hypothesis strongly suggests a hierarchy of metabolic energy quanta coming as octaves.

  2. Classically the effect of a small perturbation of the external magnetic field on spin is Larmor precession due to the torque τ=μ× B. A simple model is obtained by assuming that the magnetic moment of the magnetized region is simply the sum of the elementary magnetic moments of (say) electrons, which in the magnetized state are parallel: μ= Neμe, where Ne is the number of electrons in the magnetized region defining the bit. The mutual interaction of the spins forces them to have the same direction, so that they are not free.

    Classical torque is the time derivative of angular momentum, and the total angular momentum is J = (2m/ge)μ, where g is the so-called g-factor (g ≈ 2 for the electron). This gives dμe/dt = (ge/2m) μe×B, μe = (ge/2m)s, where s is the spin of the electron. The situation reduces to the single electron level, and the oscillation of the magnetized region takes place with the Larmor frequency ω = geB/2m of the electron.

    This model is of course highly oversimplified but gives a good idea about what happens. The Larmor frequency of the electron is ω = geB/2m, and in the "endogenous" magnetic field of .2 Gauss (2/5 of the nominal value of the Earth's magnetic field) it corresponds to f = 6×10^5 Hz. One expects the flux tube magnetic fields and their perturbations to be considerably weaker, so that classically the perturbation gives rise to a rather slow change in the direction of the magnetic moment.

    At the quantum level the evolution of the magnetic moment reduces to a unitary evolution of the electron's spin generated by the standard Hamiltonian defined by the magnetic interaction energy E = -μ•B; if the perturbation acts only for a finite time, the final state contains a small contribution from the opposite value of spin.

  3. If all magnetized regions representing bits interact with the flux tube simultaneously, the net effect on the spin/bit average is zero, since the probabilities for the inversion of the magnetic moment are the same for both values of the bit. Therefore it is not possible to realize the intention to increase or reduce the total number of 1s/0s in this manner.

  4. Wormhole magnetic fields provide a possible solution to the problem. If the M^4 projections of the two flux tubes involved do not overlap, energy minimization favors the attachment of the magnetized region to the flux tube for which the energy E = -μ•B is smaller - that is, negative. Since the fields of the flux tubes are in roughly opposite directions, bits 1 and 0 tend to condense at different flux tubes. Hence a small, short-lasting perturbation associated with either flux tube can only reduce the number of 1s or of 0s but not both, and it would be possible to realize the intention "reduce the total number of 1s or 0s", equivalent to the intention "increase the total number of 0s or 1s". This would be the case if the observer's intention boils down to a selection of the wormhole flux tube carrying the perturbation, so that wormhole flux tubes would represent bits at the fundamental level.
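The numbers appearing in the model above are easy to check. The following sketch is my own numerical illustration, not part of the original argument: it computes the electron Larmor frequency ω = geB/2m for the "endogenous" field of .2 Gauss, and the spin-flip probability of a two-level spin under a weak resonant transverse perturbation of duration t (the standard Rabi formula); the 1 nT field strength and 1 ms duration of the perturbation are assumptions chosen for illustration.

```python
import math

# Physical constants (SI units)
e = 1.602176634e-19      # elementary charge, C
m_e = 9.1093837015e-31   # electron mass, kg
g = 2.0                  # electron g-factor (approximately)

def larmor_frequency(B):
    """Larmor frequency f = omega/(2*pi) with omega = g*e*B/(2*m_e)."""
    return g * e * B / (2 * m_e) / (2 * math.pi)

def flip_probability(B1, t):
    """Spin-flip probability for a weak resonant transverse field B1
    acting for time t: P = sin^2(omega_1 * t / 2), omega_1 = g*e*B1/(2*m_e)."""
    omega_1 = g * e * B1 / (2 * m_e)
    return math.sin(omega_1 * t / 2) ** 2

# "Endogenous" magnetic field of .2 Gauss = 2e-5 Tesla
f = larmor_frequency(2e-5)       # about 6e5 Hz, as stated in the text

# A much weaker flux tube perturbation (1 nT, acting for 1 ms) flips
# the spin only with small probability, as the model requires.
p = flip_probability(1e-9, 1e-3)

print(f"Larmor frequency in .2 Gauss: {f:.2e} Hz")
print(f"Flip probability (1 nT, 1 ms): {p:.2e}")
```

The check confirms the ~6×10^5 Hz figure and shows quantitatively why a short, weak perturbation induces only a small admixture of the opposite spin value.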

The consideration of the energetics of the flip of the magnetization direction naturally brings in the hierarchy of effective Planck constants hbareff = n×hbar implied by the vacuum degeneracy of Kähler action.

  1. For ferromagnets the Weiss mean field theory predicts that in the absence of an external magnetic field both magnetization directions have the same energy, and an external magnetic field splits the degeneracy. One could say that if one regards the magnetized region as a big spin, both spin directions have the same energy, and the external field - now emerging from the observer as flux tubes - removes the degeneracy and defines the direction for the quantization of spin. In the Weiss mean field theory the free energy, expressed as a function of the magnetization as F = aM^2 + bM^4 - HM, is minimized and gives M as a function of H = B/μ representing the external magnetic field. For H = 0 one obtains remanent magnetization, and clearly both signs of the remanent magnetization correspond to the same free energy. This is of course a thermodynamical theory, and it is not clear whether it applies to the present situation (in zero energy ontology quantum theory is at least formally a "square root" of thermodynamics).

  2. The energy needed to turn the spin of a single free electron (in a ferromagnet the electrons have a strong exchange interaction and are not free) must be above the thermal energy, but the minimization of energy costs requires that it is not much above it and is therefore somewhat larger than 5×10^-2 eV, which by the way is also the order of magnitude of the energy gained by an elementary charge in the electric field of the cell membrane. A Curie temperature of 843 K corresponds to a thermal energy E ∼ 6×10^-2 eV. This energy is considerably smaller than the metabolic energy quantum with nominal value .5 eV. Therefore the metabolic energy of the observer could provide the energy needed to turn the spin direction of a single electron (note that there is a strong exchange interaction with the other electrons). The p-adic length scale hypothesis suggests a hierarchy of metabolic energy quanta coming as octaves.

  3. Suppose that the magnetized region behaves like a single big spin, so that the magnetic field of the flux tube manages to change the directions of all spins simultaneously: the contribution of the exchange interactions is then not affected, and the change in the energy of the system in the external field is due to the change of single electron energies only. The large number Ne of electrons gives for the total energy needed to turn the bit Etot = Ne×g(e×hbar/2m)B. For a micrometer-sized region one has Ne of order Ne = 10^12 for one conduction electron per atom. The magnetic field associated with the flux tube is expected to be much weaker than the remanent magnetization, which is of order 1 Tesla. For B = 1 nT one would have Etot ∼ .1 eV, which is of the order of the metabolic energy quantum. The electron cyclotron frequency in this field is about 30 Hz and thus in the EEG range.

  4. Magnetic flux tubes are identified as carriers of dark matter and dark photons. This suggests that dark photons representing metabolic energy quanta are involved, with the effective value of Planck constant hbareff = Ne×hbar, and that the transition can be regarded as an absorption of a single dark photon turning the entire magnetized region. In terms of the singular covering of the imbedding space, the dark photon can be regarded as a pile of sheets of the covering of the space-time sheet, each containing a single ordinary photon. These photon space-time sheets should be somehow attached to the electrons of the magnetized region.

  5. The attempt to imagine how the multi-sheeted photon/magnetic flux tube interacts with the conduction electrons responsible for ferromagnetism forces one to ask whether also the electrons are dark with the same value of effective Planck constant and reside at the various sheets of the singular covering having the size of the magnetized region. Only the first sheet of the double-sheeted structure describing the electron would multifurcate; the second sheet would carry the external magnetic field, and perhaps also the TGD counterpart of the Weiss mean field, interpreted as an effective description of the quantum mechanical exchange forces and having order of magnitude 100 Tesla. The Weiss mean field could allow an identification as the return flux of the magnetic field generated by the multi-sheeted electron state. If so, the multifurcations of space-time sheets predicted by the vacuum degeneracy of Kähler action, predicting the hierarchy of effective Planck constants coming as multiples of hbar, would play a crucial role in condensed matter physics. Also the TGD inspired model of the fractional quantum Hall effect encourages one to consider this possibility seriously.

    The signature of the many-electron states associated with the multi-sheeted covering is a sharp peak in the density of states due to the presence of new degrees of freedom. In ferromagnets this kind of sharp peak is indeed observed at the Fermi energy. The sheets of the multi-sheeted covering could also carry Cooper pairs, and this could give rise to an effective Bose-Einstein statistics of Cooper pairs. In TGD photons emerge from fermions as wormhole contacts with throats carrying fermion and antifermion. This raises the question about the realizability of Bose-Einstein statistics in Bose-Einstein condensation. If the Bose-Einstein condensate corresponds to a multifurcation of the space-time sheet, one obtains Bose-Einstein statistics effectively.
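The order-of-magnitude estimates of item 3 can be verified numerically. The sketch below is my own check, with the convention that flipping one spin in field B costs ΔE = gμ_B×B (from -μ•B to +μ•B) taken as an assumption; it computes the total flip energy for Ne = 10^12 electrons in a 1 nT field and the corresponding electron cyclotron frequency.

```python
import math

# Constants (SI)
e = 1.602176634e-19          # elementary charge, C
m_e = 9.1093837015e-31       # electron mass, kg
hbar = 1.054571817e-34       # reduced Planck constant, J*s
g = 2.0                      # electron g-factor
mu_B = e * hbar / (2 * m_e)  # Bohr magneton, J/T
eV = 1.602176634e-19         # J per eV

def flip_energy_eV(N_e, B):
    """Total energy (in eV) to flip N_e parallel electron spins in field B;
    per electron the flip costs Delta E = g * mu_B * B."""
    return N_e * g * mu_B * B / eV

def cyclotron_frequency(B):
    """Electron cyclotron frequency f = e*B/(2*pi*m_e)."""
    return e * B / (2 * math.pi * m_e)

B = 1e-9      # 1 nT flux tube field
N_e = 1e12    # conduction electrons in a micrometer-sized region

E_tot = flip_energy_eV(N_e, B)   # about 0.1 eV, order of the metabolic quantum
f_c = cyclotron_frequency(B)     # about 28 Hz, i.e. in the EEG range

print(f"Total flip energy: {E_tot:.2f} eV")
print(f"Cyclotron frequency: {f_c:.1f} Hz")
```

Both figures come out as the text states: the collective flip energy lands in the metabolic range and the cyclotron frequency in the EEG range.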

As such this model says nothing specific about the temporal direction of the intentional action although it is clear that the situation is four-dimensional in accordance with basic assumptions of TGD inspired theory of consciousness and with zero energy ontology. Most naturally, the negative energy signal to the geometric past could induce a magnetic perturbation propagating along either flux tube.


Both models are consistent with the general vision discussed by Brian Millar and thus leave open the question whether it is the experimenter or the operator who is responsible for the PK effect. Experimenters are not completely objective robots, and successful experimenters could have a dream of demonstrating PK convincingly, whereas the operators are chosen randomly in the "Big Pot" approach. Skeptic experimenters trying to replicate the experiment would tend to produce a null result. Experimenters could prove PK by producing it themselves (not my original suggestion)! Taking this seriously, one faces the question whether a similar situation could prevail also in experiments outside parapsychology.

See the chapter TGD Inspired View About Remote Mental Interactions and Paranormal of "TGD Based Vision about Living Matter and Remote Mental Interactions" or the little article Two attempts to understand PK.

Monday, October 01, 2012

Does the square root of p-adic thermodynamics make sense?

In zero energy ontology the M-matrix is in a well-defined sense a "complex" square root of the density matrix, reducing to a product of a Hermitian square root of the density matrix multiplied by a unitary S-matrix. A natural guess is that p-adic thermodynamics possesses this kind of square root - or, better said, is the modulus squared of it.
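To make the statement concrete, here is a minimal numerical illustration (my own, not from the text): for a diagonal density matrix ρ = diag(p1, p2) and any unitary S, the matrix M = √ρ·S satisfies MM† = ρ, so the "complex square root" reproduces the original probabilities as its modulus squared.

```python
import cmath, math

def matmul(A, B):
    """Multiply two 2x2 complex matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    """Hermitian conjugate of a 2x2 matrix."""
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

# Diagonal density matrix with probabilities p1, p2
p1, p2 = 0.7, 0.3
sqrt_rho = [[math.sqrt(p1), 0], [0, math.sqrt(p2)]]

# An arbitrary unitary S-matrix (a rotation combined with phases)
theta, phi = 0.4, 1.1
S = [[cmath.exp(1j * phi) * math.cos(theta), -math.sin(theta)],
     [math.sin(theta), cmath.exp(-1j * phi) * math.cos(theta)]]

# M-matrix as "complex square root" of the density matrix
M = matmul(sqrt_rho, S)

# M M^dagger recovers the density matrix, i.e. the original probabilities
rho = matmul(M, dagger(M))
print(rho[0][0].real, rho[1][1].real)  # 0.7 and 0.3 (up to rounding)
```

The unitary S drops out of MM† = √ρ S S† √ρ = ρ, which is exactly the sense in which M is a square root of the thermodynamical probabilities.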

For fermions the value of the p-adic temperature is however T = 1 and thus minimal. It is not possible to construct a real square root by simply taking the square root of the thermodynamical probabilities for various conformal weights. One manner to solve the problem is to assume a quadratic algebraic extension of p-adic numbers in which the p-adic prime splits as p = ππ*, π = m + (-k)^(1/2)n. For k = 1 the primes p mod 4 = 1 allow a representation as a product of a Gaussian prime and its conjugate.

For primes p mod 4 = 3 Gaussian primes do not help. Mersenne primes represent important examples of such primes. Eisenstein primes provide the simplest extension of rationals splitting Mersenne primes. For Eisenstein primes one has k = 3, and all ordinary primes satisfying either p = 3 or p mod 3 = 1 (true for Mersenne primes) allow this splitting. For the square root of p-adic thermodynamics the complex square roots of probabilities would be given by π^(L0/T)/Z^(1/2), and the moduli squared would give the thermodynamical probabilities as p^(L0/T)/Z. Here Z is the partition function.
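The splittings are easy to exhibit explicitly. The following brute-force sketch (my own check, not part of the original argument) finds, for k = 1, the representation p = m^2 + n^2 (π = m + i·n) of primes p mod 4 = 1, and for k = 3 the representation p = m^2 + 3n^2 (π = m + (-3)^(1/2)·n) of primes p mod 3 = 1, including the Mersenne primes 7, 31, 127 and 8191.

```python
def split(p, k):
    """Find (m, n) with p = m^2 + k*n^2, i.e. p = pi * pi-conjugate with
    pi = m + sqrt(-k)*n, or return None if no such splitting exists."""
    n = 1
    while k * n * n < p:
        r = p - k * n * n
        m = int(r ** 0.5)
        if m * m == r:
            return (m, n)
        n += 1
    return None

# Gaussian case k=1: primes p = 1 mod 4 split, primes p = 3 mod 4 do not
print(split(5, 1), split(13, 1), split(7, 1))    # (2, 1) (3, 2) None

# Eisenstein-type case k=3: Mersenne primes are 1 mod 3 and all split
for p in (7, 31, 127, 8191):
    assert p % 3 == 1
    m, n = split(p, 3)
    assert m * m + 3 * n * n == p
    print(f"{p} = {m}^2 + 3*{n}^2")
```

For instance 7 = 2^2 + 3·1^2 and 127 = 10^2 + 3·3^2, so every Mersenne prime in the list indeed factors in the k = 3 extension while resisting the Gaussian one.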

An interesting question is whether T = 1 for fermions means that the complex square root of the thermodynamics is genuinely complex, and whether T = 2 for bosons means that the square root is actually real.

For background see the chapter Physics as Generalized Number Theory: p-Adicization Program of "Physics as Generalized Number Theory".