Sunday, February 27, 2011

A concise view about SUSY phenomenology in TGD inspired Universe

The recent SUSY results from LHC have been very inspiring and forced me to learn the basics of MSSM phenomenology and to abstract what it shares with the TGD view. As always this kind of process has been very useful. In the following I summarize the big ideas which distinguish TGD SUSY from MSSM and try to formulate the resulting general phenomenological picture. Of course, one can argue that this kind of approach, assuming that QFT calculations can be interpreted in the TGD framework, is ad hoc, since the twistor approach applied in TGD means a deviation from QFT at the fundamental level. I just assume that the QFT approach is a good approximation to TGD and look for consequences. Also the experimental constraints on SUSY parameters deduced in the framework of the MSSM model with R-parity are discussed in the TGD framework and shown to be relaxed considerably, so that it is possible to avoid the problems plaguing the MSSM approach: to mention only the little hierarchy problem and the conflicting demands from the g-2 anomaly of the muon and from the squark mass limits.

1. Super-conformal invariance and generalized space-time supersymmetry

Super-conformal symmetry is behind the space-time supersymmetry in the TGD framework. It took a long time to get convinced that one obtains space-time supersymmetry in some sense.

  1. The basic new idea is that the fermionic oscillator operators assignable to partonic 2-surfaces define a SUSY algebra analogous to the space-time SUSY algebra. Without a length scale cutoff the number of oscillator operators is infinite and one would have N=∞ SUSY. Finite measurement resolution, realized automatically by the dynamics of the modified Dirac action, however effectively replaces the partonic 2-surface with a discrete set of points, which is expected to be finite, so that one obtains a reduction to a finite value of N. Braids become the basic objects in finite measurement resolution at light-like 3-D wormhole throats. String world sheets become the basic objects at the level of 4-D space-time surfaces: it is now possible to identify them uniquely, and a connection with the theory of 1-knots, their cobordisms, and 2-knots emerges.

    The supersymmetry involves several algebras for which the fermionic oscillator operator algebra serves as a building block. Also the gamma matrices of the world of classical worlds (WCW) are expressible in terms of the fermionic oscillator operators, so that the fermionic anticommutations have a purely geometric interpretation. The presence of SUSY in the sense of conservation of fermionic supra currents requires a consistency condition. For induced gamma matrices the surfaces must be minimal surfaces (extremals of the volume action). For Kähler action and Chern-Simons action one must replace the induced gammas by the modified ones, defined by the contractions of the canonical momentum densities with the imbedding space gamma matrices.

    The supersymmetry in the TGD framework differs from that in the standard approach in that Majorana spinors are not involved. One has 8-D imbedding space spinors with an interpretation as quark and lepton spinors. This makes sense because color corresponds to color partial waves: quarks move in triality t = ±1 and leptons in t = 0 partial waves. Baryon and lepton number are conserved exactly.

  2. A highly non-trivial aspect of TGD based SUSY is bosonic emergence, meaning that bosons can be constructed from fermions. Zero energy ontology makes this construction extremely elegant since both massive states and virtual states are composites of massless states. General arguments support the existence of a pseudoscalar Higgs, but it is not quite clear whether its existence is somehow forbidden by symmetries. If it is not, scalar and pseudoscalar Higgs transforming according to the 3+1 decomposition under weak SU(2) replace the two complex doublets of MSSM. This difference is essentially due to the fact that the spinors are not M^4 spinors but M^4 × CP_2 spinors. A proper notation would be B, h_B for the gauge bosons and the corresponding Higgs particles, and one expects that the electroweak mixing characterized by the Weinberg angle takes place for the neutral Higgs particles and also for their super-counterparts.

    The sfermions associated with left and right handed fermions should couple to fermions via P_± = 1 ± γ_5, so that one can speak about left- and right-handed scalars. Maximal mixing between them leads to a scalar and a pseudoscalar. This observation raises a question about Higgs and its pseudoscalar variant. Could one assume that the initial states are right and left handed Higgs and that maximal mixing leads to a scalar and a pseudoscalar, with the scalar eaten by gauge bosons?

  3. Also spin one particles regarded usually as massless must have a small mass, and this means that the Higgs scalar is completely eaten by gauge bosons. Also scalar gluons are predicted and would be eaten by gluons to develop a small mass. This resolves the IR difficulties of massless gauge theories and is conjectured to make possible exact Yangian symmetry in the twistor approach to TGD. The disappearance of Higgs means that the corresponding limits on the parameters of SUSY are lost. For instance, the limits in the (tan(β), M_SUSY) plane coming from the Higgs mass do not hold anymore. The disappearance of Higgs means also the disappearance of the little hierarchy problem, which is one of the worst headaches of MSSM SUSY: no Higgs, no Higgs mass to be stabilized.

2. Induced spinor structure and purely geometric breaking of SUSY

Particle massivation and the breaking of SUSY and R-parity are basic problems of both the QFT and the stringy approach to SUSY. Just looking at the arguments about how MSSM could emerge from string models makes clear how hopelessly ad hoc the constructions are. The notion of modified gamma matrix provides a purely geometric approach to these symmetry breakings involving no free parameters. To my view the failure to realize this partially explains the recent situation at the forefront of theoretical physics, and the LHC findings are now making clear that something is badly wrong.

  1. The space-time supersymmetry is broken. The reason is that the modified gamma matrices are superpositions of M^4 and CP_2 gamma matrices. This implies a mixing of M^4 chiralities, which is a direct symptom of massivation and is responsible for the Higgs like aspects of massivation: p-adic thermodynamics is a second and completely new aspect. There is a hierarchy of supersymmetries ordered by the strength of the breaking. The right handed neutrino generates a supersymmetry which is broken only by the mixing of the right handed neutrino with the left handed one, induced by the mixing of the gamma matrices. This corresponds to the supersymmetry analogous to that of MSSM. The supersymmetries generated by the other fermionic oscillator operators, carrying electroweak quantum numbers, break the supersymmetry in a much more dramatic manner, but the basic algebra remains and should allow an elegant formulation of TGD in terms of generalized super fields (see this).

  2. One important implication is R-parity breaking due to the transformation of the right handed neutrino of the superpartner to a left-handed neutrino. If this takes place fast enough, the process sP → P + ν becomes possible. The universal decay signature would be a lonely neutrino representing missing energy without an accompanying charged lepton. This means that the experimental limits on sparticle masses deduced assuming R-parity conservation do not hold anymore. For instance, the masses of charginos and neutralinos can be considerably lower than the weak mass scale, as suggested by the strange 1995 event (see the earlier posting). The recent very high lower bounds on squark masses, putting them above 800 GeV, also assume R-parity conservation and therefore need not hold true if the decays sq → q + ν and sg → g + ν take place fast enough. A further implication is that the scale of SUSY can be the weak mass scale, say the p-adic mass scale of 105 GeV corresponding to the Mersenne prime M_89.

3. p-Adic length scale hypothesis and breaking of SUSY by a selection of p-adic length scale

The p-adic length scale hypothesis leads to a completely new view about SUSY breaking with extremely strong predictions. The basic conjecture is that if the p-adic length scales associated with particle and sparticle are the same, their masses are identical. The basic aspect of SUSY breaking is therefore the different p-adic mass scales of particle and sparticle. By the p-adic length scale hypothesis the masses of particle and superpartner then differ by a power of 2^(1/2). This is an extremely powerful prediction: given only minimal kinematic constraints on an event suggesting supersymmetry, it allows one to deduce the mass of the superpartner. Some examples give an idea about what is involved.
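To make the scaling rule concrete, here is a minimal sketch of the arithmetic (Python; it assumes only the rule m ∝ 2^(-k/2) stated above together with the standard TGD identification of the electron with k = 127):

```python
# Minimal sketch of the p-adic scaling rule: if two states are identical
# apart from the p-adic prime p ≈ 2^k, their masses differ by a
# half-integer power of 2: m(k2) = m(k1) * 2^((k1 - k2)/2).
def scaled_mass(mass_at_k1, k1, k2):
    """Mass at p-adic scale k2, given the mass at scale k1."""
    return mass_at_k1 * 2 ** ((k1 - k2) / 2)

m_e = 0.000511  # electron mass in GeV; the electron corresponds to k = 127

print(scaled_mass(m_e, 127, 89))  # ~268 GeV, cf. the 262 GeV selectron entry below
print(scaled_mass(m_e, 127, 91))  # ~134 GeV, cf. the 131 GeV selectron estimate below
```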

  1. The idea about M_89 as the prime characterizing both the electroweak scale and the SUSY mass scale leads to the proposal that all superpartners correspond to the same p-adic mass scale characterized by k=89. For instance, the masses of sfermions would be given by a first order p-adic thermodynamics calculation as

    m_sL/GeV = (262, 439, 945) ,

    m_sν/GeV = (235, 423, 938) ,

    m_sU/GeV = (262, 287, 893) ,

    m_sD/GeV = (235, 287, 900) .

  2. A good example is the already mentioned 1995 event suggesting a decay cascade involving selectron, chargino, and neutralino. One can deduce a mass estimate for all these sparticles just from loose constraints on the mass intervals: for the mass of the selectron one obtains the estimate 131 GeV, which corresponds to k=91 instead of k=89. This event allowed one also to estimate the masses of the zino and the corresponding higgsino. The results are summarized in the following table:

    m(se) = 131 GeV , m(sZ^0) = 91.2 GeV , m(sh) = 45.6 GeV .

  3. In the case of the mixing of a gaugino and the corresponding Higgsino the hypothesis means that the mass matrix is of such a form that its eigenvalues have the same magnitude but opposite sign. For instance, for the mixing of wino and h_W one would have

    M_11 = M_2 = -μ ,

    M_12 = M_21 = 2^(1/2) M_W cos(β) .

    The masses of the resulting two states would be the same, but they could correspond to different p-adic primes so that the mass scales would differ by a power of 2^(1/2) (the ± eigenvalue structure is verified in the small numerical sketch below). This formula applies also to zino and sh_Z and to photino and sh_γ. One possibility is that the heavier weak gaugino corresponds to the intermediate gauge boson mass scale and the light gaugino to one half of this scale: lighter mass scales are forbidden by the decay widths of the weak gauge bosons. The exotic event of 1995 suggests that the heavier zino has very nearly the same mass as Z^0 and the lighter one a mass equal to one half of the Z^0 mass. This would mean M_2 << M_Z.
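    A traceless symmetric 2×2 matrix necessarily has eigenvalues of equal magnitude and opposite sign, which is easy to check numerically; below is a small sketch (numpy; the inputs M_2 = 10 GeV with M_2 << M_Z, M_W = 80.4 GeV, and cos(β) = 1/2^(1/2) are illustrative, not fits):

```python
import numpy as np

# Mass matrix for wino-higgsino mixing with M_11 = M_2 = -mu, so that
# M_22 = mu = -M_2 and the matrix is traceless.
M2, MW = 10.0, 80.4          # GeV, illustrative inputs
cos_beta = 1 / np.sqrt(2)    # tan(beta) = 1, see the next point

M = np.array([[M2, np.sqrt(2) * MW * cos_beta],
              [np.sqrt(2) * MW * cos_beta, -M2]])

print(np.linalg.eigvalsh(M))  # [-81.02, 81.02]: same magnitude, opposite sign
```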

  4. If one accepts the MSSM formula (see the reference)

    m_sν² = m_sL² + M_Z² cos(2β)/2 ,

    relating sneutrino and charged slepton masses, one can conclude that cos(β) = 1/2^(1/2) is the only possible option, so that tan(β) = 1 is obtained. This value is excluded by R-parity conserving MSSM but could be consistent with the explanation of the g-2 anomaly of the muon in terms of loops involving weak gauginos and the corresponding higgsinos.
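    Spelled out, the step to tan(β) = 1 is the following (assuming, as the p-adic degeneracy hypothesis above suggests, that sneutrino and charged slepton masses are identical for the same p-adic prime):

    m_sν = m_sL ⟹ M_Z² cos(2β)/2 = 0 ⟹ 2β = π/2 , i.e. cos(β) = 1/2^(1/2) and tan(β) = 1 .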

  5. One must distinguish between right- and left handed sfermions sF_R and sF_L. These states couple to fermions via 1 ± γ_5 and are therefore neither pure scalars nor pure pseudoscalars. One expects that a maximal mixing of left and right handed sfermions occurs and leads to a scalar and a pseudoscalar. The mass formula is naturally the same for these states for the same value of the p-adic prime, and also the same value of the p-adic prime is suggestive. It might however happen that the p-adic mass scales are different for scalars and pseudoscalars. This would allow light and heavy variants of squarks and sleptons, with the scalars probably being the lighter ones.

    Around 1996 there was a lot of talk about the R_b anomaly and the Aleph anomaly, and some SUSY models based on R-parity breaking were proposed for them. No one talks about these anomalies today, which suggests that they were statistical fluctuations. One could however spend a few minutes pretending to take them seriously. If the R_b anomaly were real, the decay rate of Z^0 to a b-bbar pair would be slightly higher than predicted. This might be understood if sb_R and st_R are light, with masses about 55 GeV, not too far above m_Z/2. This would make possible decays via loops involving a decay to a virtual sb_R or st_R pair decaying to a b pair by an exchange of a light chargino or neutralino.

    A similar mechanism could apply to the claimed Aleph anomaly, in which a pair of dijets is produced in e+e- annihilation. The virtual photon would first decay to an st_S st_S^c or sb_S sb_S^c pair. Here the subscript "S" refers to the scalar associated with the fermion and "PS" would refer to the pseudoscalar. Both final states would in turn decay to a b quark and a chargino or neutralino. If st_S or sb_S has mass 55 GeV it could explain the anomaly. k=97 would give m_st = 55.8 GeV and m_sb = 56.25 GeV, so that quantitatively the idea seems to work. Chargino and neutralino would have masses m ≥ 45 GeV from the Z^0 decay width and should satisfy m < 55 GeV to make the decay of the squark to a chargino or neutralino possible.

To sum up, TGD leads to a very predictive model for SUSY and its breaking.

  1. Since there is no Higgs, there are no bounds on the parameters from the Higgs mass and the little hierarchy problem is avoided.

  2. The basic element is R-parity breaking, reflected in the possibility of decays of sparticles to a particle and a lonely neutrino not balanced by a charged lepton. This together with the absence of Higgs would allow one to circumvent various mass limits deduced from LHC and its predecessors; only the limits on charginos and neutralinos from the decay widths of the intermediate gauge bosons would remain.

  3. The assumption that the masses of particle and sparticle are the same for the same p-adic length scale, and that the choice of the p-adic length scale breaks SUSY, means that sparticle masses can be deduced from those of particles apart from a scaling by a power of 2^(1/2). This is a powerful and directly testable prediction, and it fixes the 2×2 mixing matrices for charginos and neutralinos completely apart from the parameter M_2 = -μ. An attractive idea is that m_SUSY corresponds to the p-adic mass scale associated with the Mersenne prime M_89 characterizing the electroweak length scale, except perhaps for the light chargino and neutralino. For M_89 the sfermions would have, for this option, masses in the range 200-1000 GeV.

  4. The muon g-2 anomaly could be understood from the predictions tan(β) = 1, m_SUSY ≈ 105 GeV. Scalar and pseudoscalar sfermions could have different mass scales, but there seem to be no currently accepted anomalies requiring this.

For more details see the chapter p-Adic Particle Massivation: New Physics of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

Friday, February 25, 2011

About TGD inspired SUSY again

The data from LHC are coming in and the TGD based view about SUSY is becoming more detailed. The details can be found in the earlier posting updated today. See also the chapter p-Adic Particle Massivation: New Physics of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy". The following response to a comment by Lubos summarizes the situation now.

Says Lubos:

Hi Matti,

again, this article of yours is complete nonsense, like others.

MSSM hasn't been killed in any way. Constraints don't mean "good bye". Quite on the contrary, the analysis of newer results has sharpened the prediction. The probability that squarks are between 800 and 1000 GeV and will be found soon this year has increased, and the most likely squark mass has actually decreased by 100 GeV or so. See the paper by Allanach and others I discussed on my blog.

If you think that SUSY won't be found by the end of 2012 and you're sure about it, I offer you a 1:100 rate for a bet. I would pay $100, you would pay $10,000. Is that OK?

Best wishes

Lubos

Says Matti:

Dear Lubos,

your claim, like some other claims by you, is complete nonsense. MSSM is dead, but SUSY a la TGD is more alive than ever and developing rapidly as data emerge from LHC. Read the posting again to see the outcome of this day.

p-Adic mass calculations assuming the Mersenne prime M_89 for all sparticles, so that the electroweak scale is the SUSY scale, allow an exact prediction of sfermion masses, and they range from 460 GeV to a TeV. Not too bad.

The SUSY mass scale is the intermediate boson mass scale, and the anomalous magnetic moment of the muon comes out correctly with very reasonable assumptions about the remaining parameters.

The high lower bounds on squark masses could be a problem, but also this problem can be solved by the breaking of R-parity. The new element is the decay of a sparticle to particle plus neutrino, since the right handed neutrino defining the supersymmetry can transform to a left-handed one. If this takes place rapidly enough, the bounds on squark masses are loosened. The unique signature of a spartner is a jet with a lonely neutrino instead of neutrinos accompanied by charged leptons.

SUSY will certainly be found, but in the TGD sense. Your bet is unfair. As an eternally unemployed I cannot afford gambling, but I am ready to gamble for TGD SUSY with $100 if you put $10,000 for MSSM into the game;-). Isn't this fair;-)?

Tuesday, February 22, 2011

Gravitational waves remain still undetected

Both Sean Carroll and Lubos report that LIGO has not detected gravitational waves from black hole binaries with masses in the range 25-100 solar masses. This conforms with theoretical predictions. Earlier searches for gravitational waves from supernovae have also given a null result: in that case the searches are already at the boundaries of resolution, so that one can start to worry.

The reduction of the orbital period of the Hulse-Taylor binary is consistent with the emission of gravitational waves at the predicted rate, so it seems that gravitons are emitted. One can however ask whether gravitational waves might remain undetected for some reason.

Massive gravitons are the first possibility. For a nice discussion see the article of Goldhaber and Nieto, giving in its conclusions a table summarizing the upper bounds on the graviton mass coming from various arguments involving model dependent assumptions. The problem is that it is not at all clear what a massive graviton means and whether a simple Yukawa like behavior (exponential damping) for the Newtonian gravitational potential is consistent with general coordinate invariance. In the case of massive photons one has a similar problem with gauge invariance. One can of course naively assume Yukawa like behavior for the Newtonian gravitational potential and derive lower bounds for the Compton wavelength of the graviton. The bound is given by λ_c > 100 Mpc (a parsec (pc) is about 3.26 light years).

A second bound comes from pulsar timing measurements. The photons emitted by the pulsar are assumed to surf in the sea of gravitational waves created by the pulsar. If gravitons are massive in the Yukawa sense they move with velocities below the light velocity, and a dispersion of graviton and photon arrival times is predicted. This gives a much weaker lower bound λ_c > 1 pc. Note that the distance of the Hulse-Taylor binary is 6400 pc, so this upper bound for the graviton mass could explain the possible absence of gravitational waves from the Hulse-Taylor binary. There are also other bounds on the graviton mass, but all are plagued by model dependent assumptions.
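To get a feeling for the numbers, the two Compton wavelength bounds translate into graviton mass bounds as follows (a back-of-the-envelope sketch using only m·c² = ħc/λ_c; the constants are standard):

```python
# Convert lower bounds on the graviton Compton wavelength into upper
# bounds on the graviton mass: m*c^2 = hbar*c / lambda_c.
HBAR_C_EV_M = 1.9733e-7    # hbar*c in eV*m
PC_M = 3.0857e16           # one parsec in meters

bounds_pc = {"Yukawa screening of the potential": 100e6,  # 100 Mpc
             "pulsar timing dispersion": 1.0}             # 1 pc

for name, lam in bounds_pc.items():
    m_ev = HBAR_C_EV_M / (lam * PC_M)
    print(f"{name}: lambda_c > {lam:g} pc  =>  m < {m_ev:.1e} eV")
# -> m < 6.4e-32 eV and m < 6.4e-24 eV respectively
```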

Also in the TGD framework one can imagine explanations for the possible absence of gravitational waves. I have discussed the possibility that gravitons are emitted as dark gravitons with a gigantic value of hbar, which decay eventually to bunches of ordinary gravitons, meaning that a continuous stream of gravitons is replaced with bursts which would not be interpreted in terms of gravitons but as noise (see this).

One of the breakthroughs of the last year was related to the twistor approach to TGD in zero energy ontology (ZEO).

  1. This approach leads to the vision that all building blocks (light-like wormhole throats) of physical particles, including virtual particles and also string like objects, are massless. On mass shell particles are bound states of massless particles, but virtual states do not satisfy the bound state constraint, and because negative energies are possible, also space-like virtual momenta are possible.

  2. Massive physical particles are identified as bound states of massless wormhole throats: since the three-momenta can have different (as a special case opposite) directions, the bound states of light-like wormhole throats can indeed be massive.

  3. Masslessness of the fundamental objects saves one from problems with gauge invariance and general coordinate invariance. It also makes it possible to apply the twistor formalism, implies the absence of UV divergences, and yields an enormous simplification of generalized Feynman diagrammatics, since mass shell constraints are satisfied at lines besides momentum conservation at vertices.

  4. A simple argument forces one to conclude that all spin one and spin two particles (in particular the graviton) identified in terms of multi-wormhole throat states must have an arbitrarily small but non-vanishing mass. The resulting physical IR cutoff guarantees the absence of IR divergences. This allows one to preserve the exact Yangian symmetry of the M-matrix. One implication is that the photon eats the TGD counterpart of the neutral Higgs and that only the pseudoscalar counterpart of Higgs survives. The scalar counterparts of gluons suffer the same fate, whereas their pseudoscalar partners would survive.

Is the massivation of gauge bosons and gravitons in this sense consistent with the Yukawa type behavior?

  1. The first thing to notice is that this massivation would be essentially a non-local quantal effect, since emitter and receiver both emit and receive light-like momenta. Therefore the description of the massivation in terms of a Yukawa potential in ordinary QFT might well be impossible, or be a good approximation at best.

  2. If the massive gauge bosons (gravitons) correspond to a wormhole throat pair (a pair of such pairs) such that the three-momenta are light-like but in exactly opposite directions, no Yukawa type screening and velocity dispersion should take place.

  3. If the three-momenta are not exactly opposite, as is possible in quantum theory, Yukawa screening could take place, since the classical cm velocity calculated from the total momentum of a massive particle is smaller than the maximal signal velocity. The massivation of the intermediate gauge bosons and the fact that the Yukawa potential description works satisfactorily for them supports this interpretation.

  4. If the space-time sheets mediating the gravitational interaction have gigantic values of the gravitational Planck constant, the Compton length of the graviton is scaled up dramatically, so that screening would be absent but velocity dispersion would remain. This leaves open the possibility that gravitons from the Hulse-Taylor binary could reveal the velocity dispersion if they are detected some day.

For details about large hbar gravitons see the chapter Quantum Astro-Physics of "Physics in Many-Sheeted Space-time". For the twistor approach to TGD see the chapter Yangian Symmetry, Twistors, and TGD of "Towards M-Matrix".

Sunday, February 20, 2011

Finding the roots of polynomials defined by infinite primes

Infinite primes, identifiable as analogs of free single particle states and bound many-particle states of a repeatedly second quantized supersymmetric arithmetic quantum field theory, correspond at the n:th level of the hierarchy to irreducible polynomials in the variable X_n, which corresponds to the product of all primes at the previous level of the hierarchy. At the first level of the hierarchy the roots of this polynomial are ordinary algebraic numbers, but at higher levels they correspond to infinite algebraic numbers, which are somewhat weird looking creatures. These numbers however exist p-adically for all primes at the previous levels, because one can develop the roots of the polynomial in question as a power series in X_{n-1} and this series converges p-adically. This of course requires that infinite-p p-adicity makes sense. Note that all higher terms in the series are p-adically infinitesimal at higher levels of the hierarchy. The roots are also infinitesimal in the scale defined by X_n. The power series expansion allows one to construct the roots explicitly at a given level of the hierarchy, as the following induction argument demonstrates.

  1. At the first level of the hierarchy the roots of a polynomial in X_1 are ordinary algebraic numbers and irreducible polynomials correspond to infinite primes. The induction hypothesis states that the roots can be solved at the n:th level of the hierarchy.

  2. At the n+1:th level of the hierarchy infinite primes correspond to irreducible polynomials

    P_m(X_{n+1}) = ∑_{s=0,...,m} p_s X_{n+1}^s .

    The roots R are given by the condition

    P_m(R) = 0 .

    The ansatz for a given root R of the polynomial is a Taylor series in X_n:

    R = ∑_k r_k X_n^k ,

    which indeed converges p-adically for all primes of the previous level. Note that R is infinitesimal at the n+1:th level. This gives

    P_m(R) = ∑_{s=0,...,m} p_s (∑_k r_k X_n^k)^s = 0 .

    1. The polynomial contains a constant term (zeroth power of X_n) given by

      P_m(r_0) = ∑_{s=0,...,m} p_s r_0^s .

      The vanishing of this term determines the value of r_0. Although r_0 is an infinite number, the condition makes sense by the induction hypothesis.

      One can indeed interpret the vanishing condition

      P_{m×m_1}(r_0) = 0

      as the vanishing of a polynomial at the n:th level of the hierarchy having coefficients at the n-1:th level. Here m_1 is determined by the dependence on the infinite primes of the lower level, expressible in terms of rational functions. One can continue the process down to the lowest level of the hierarchy, obtaining an m×m_1×...×m_k:th order polynomial at the k:th step. At the lowest level of the hierarchy one obtains just an ordinary polynomial equation having ordinary algebraic numbers as its roots.

      One can expand the infinite primes as a Taylor expansion in the variables X_i, and the resulting number differs from an ordinary algebraic number by an infinitesimal in the multi-P infinite-P p-adic topology defined by any choice of an n-plet of infinite-P p-adic primes (P_1,...,P_n) from subsequent levels of the hierarchy appearing in the expansion. In this sense the resulting number is infinitely near to an ordinary algebraic number, and the structure is analogous to the completion of the algebraic numbers to the reals. Whether one could regard this structure as a possible alternative view about the reals remains an open question. If so, then also the reals could be said to have number theoretic anatomy.

    2. Once one has found the values of r_0, one can solve the coefficients r_s, s>0, as linear expressions in the coefficients r_t, t<s, and thus in terms of r_0.

    3. The naive expectation is that the fundamental theorem of algebra generalizes, so that the number of different roots r_0 equals m in the irreducible case. This seems to be the case. Suppose that one has constructed a root R of P_m. One can write P_m(X_{n+1}) in the form

      P_m(X_{n+1}) = (X_{n+1} - R) × P_{m-1}(X_{n+1}) ,

      and solve P_{m-1} by expanding P_m as a Taylor polynomial with respect to X_{n+1} - R. This is achieved by calculating the derivatives of both sides with respect to X_{n+1}. The derivatives are completely well-defined since purely algebraic operations are in question. For instance, at the first step one obtains P_{m-1}(R) = (dP_m/dX_{n+1})(R). The process stops at the m:th step, so that m roots are obtained. At lower levels a similar branching occurs, just as it occurs for polynomials of several variables. The sketch below illustrates the order-by-order procedure in an ordinary-number analog.
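The induction step is structurally the same as lifting a root of an ordinary polynomial, whose coefficients are themselves polynomials in a parameter t, to a power series root. The following sympy sketch (an ordinary-number analog of the X_n expansion; the function name and the example polynomial are mine) makes the procedure explicit:

```python
import sympy as sp

t, y = sp.symbols('t y')

def series_root(P, r0, order=4):
    """Lift a root r0 of P(y, t=0) to a power series root
    R = r0 + r1*t + ... + r_order*t**order. At step k the coefficient
    of t**k in P(R + r_k*t**k, t) is linear in r_k, exactly as in the
    induction argument above."""
    R = sp.sympify(r0)
    for k in range(1, order + 1):
        rk = sp.Symbol(f'r{k}')
        coeff = sp.expand(P.subs(y, R + rk * t**k)).coeff(t, k)
        R += sp.solve(coeff, rk)[0] * t**k
    return sp.expand(R)

# Example: P(y, t) = y**2 - (1 + t); the root r0 = 1 of P(y, 0) lifts to
# the series of sqrt(1 + t):
print(series_root(y**2 - (1 + t), 1))
# -> 1 + t/2 - t**2/8 + t**3/16 - 5*t**4/128
```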

What is remarkable is that the construction of the roots at the first level of the hierarchy forces the introduction of p-adic number fields, and that at higher levels also infinite-p p-adic number fields must be introduced. Therefore infinite primes provide a higher level concept implying real and p-adic number fields. If one allows all levels of the hierarchy, a new number X_n must be introduced at each level of the hierarchy. About this number one knows all of its lower level p-adic norms and its infinite real norm, but one cannot say anything more about it. The conjectured correspondence between real units built as ratios of infinite integers and zero energy states however means that these infinite primes would be represented as building blocks of quantum states, and that the points of the imbedding space would have an infinitely complex number theoretical anatomy able to represent zero energy states and perhaps even the world of classical worlds associated with a given causal diamond.

For background see the chapter TGD as a Generalized Number Theory III: Infinite Primes and for the pdf version of the argument the chapter Non-Standard Numbers and TGD of "Physics as a Generalized Number Theory".

Free will and quantum

There is a rather interesting discussion about physics and free will at Hammock Physicist. I paste below my own comment, which begins as an objection against the strange claim that the experience of free will in some manner corresponds to an inability to predict.

The basic flaw of the argument claiming consistency of determinism and the experience of free will is the identification of the experience of free will with the inability to predict. I cannot calculate what will happen to me or to the world tomorrow, but I do not experience this inability as free will. Free will is something much more active. It involves selection between options and also intentionality, which is much more than a mere choice between a finite number of alternatives.

State function reduction is the obvious starting point when one tries to understand free will in terms of quantum theory or its generalization. The basic problems are well-known and the attempt to resolve them leads to the following big picture.

  1. The non-determinism of state function reduction is inconsistent with the determinism of the Schroedinger time evolution. This problem is resolved by replacing quantum states with counterparts of entire time evolutions of the Schroedinger equation. Also the geometric past changes in the quantum jump: this conforms with the classic findings of Libet that a conscious decision is preceded by neural activity.

  2. One must replace state function reduction with a quantum jump involving a unitary process (something more general than unitary time evolution) followed by state function reduction. The unitary process would create a superposition of quantum states and would be the genuinely creative aspect of free will. The first guess is that state function reduction chooses one state among the eigenstates of the measured observables.

  3. The chronon of subjective time is identifiable in terms of the quantum jump, but one has to understand the relationship of subjective time to the geometric time of the physicist. These times are not the same (irreversibility of subjective time versus reversibility of geometric time). This leads from the usual positive energy ontology to zero energy ontology (ZEO), in which quantum states are pairs of positive and negative energy states with opposite conserved quantum numbers, assignable to the future and past boundaries of the causal diamond CD (an analog of the Penrose diagram), defined as the intersection of future and past directed light-cones.

    The identification conforms with the crossing symmetry of QFT but predicts deviations from positive energy ontology. In ZEO one can understand how the arrow of subjective time is mapped to that of geometric time, and also the localization of the contents of sensory mental images to a finite and rather short (about .1 seconds) time interval: memories are about the region of the entire CD, and this means a totally new view about how memories are realized. ZEO allows maximal free will, since any zero energy state can in principle be reached from a given one by quantum jumps.

  4. The natural variational principle for consciousness is what I call the negentropy maximization principle (NMP). The fundamental observable would be the density matrix characterizing the entanglement between a system and its complement, characterized by entanglement entropy. NMP states that the reduction of entanglement entropy in state function reduction is maximal. This is consistent with standard quantum measurement theory, but in standard QM state function reduction would always lead to a state with vanishing entanglement negentropy. The optimal situation would be no information at all. Something more is needed.

  5. A skeptic can also argue that the outcome of state function reduction is random, so that no genuine free will can be assigned to it. I believe that this is the case in standard quantum theory. The existence of number theoretic variants of Shannon entropy, making sense for rational and even algebraic entanglement probabilities, saves the situation: the replacement of the probabilities appearing as the arguments of logarithms in Shannon entropy with their p-adic norms, for any prime p, leads to a modification of Shannon entropy satisfying the same defining conditions. The negentropy is maximal for a unique prime (see the sketch after this list).

    This entropy can be negative, and the interpretation is in terms of information carried by entanglement: the information is not about whether the cat is dead or alive but tells that it is better not to open the bottle;-). In this framework the outcome is not anymore completely random, and an entangled state can be stable against state function reduction. p-Adic physics is an unavoidable outcome and has an interpretation in terms of correlates of intention and cognition. p-Adic space-time regions would be the mind stuff of Descartes. The natural hypothesis is that life corresponds to negentropic entanglement possible in the rational intersection of the real and p-adic worlds (matter and cognition).

    The second law generalizes: quantum jumps can create genuinely negentropic subsystems, but the price paid is the creation of subsystems with compensating entropy: by looking around and seeing what we have done one becomes convinced about the plausibility of the generalization;-). The next unitary process however means a moment of mercy cleaning all the dirt;-).
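The number-theoretic entropy of point 5 is concrete enough to compute. A minimal sketch (Python; the helper functions are my own, and the probabilities form a toy example of rational entanglement probabilities):

```python
from fractions import Fraction
from math import log
from sympy import multiplicity, primerange

def padic_norm(q: Fraction, p: int) -> float:
    """|q|_p = p**(-v), where v is the power of the prime p in q."""
    v = multiplicity(p, q.numerator) - multiplicity(p, q.denominator)
    return float(p) ** (-v)

def padic_entropy(probs, p):
    """Shannon formula with log|q|_p replacing log q; it can be negative,
    in which case the entanglement carries genuine information."""
    return -sum(float(q) * log(padic_norm(q, p)) for q in probs)

# Four-fold entanglement with probabilities 1/4: ordinary Shannon entropy
# is +log 4, but the 2-adic entropy is -log 4 < 0, and p = 2 is the unique
# prime maximizing the negentropy.
probs = [Fraction(1, 4)] * 4
for p in primerange(2, 12):
    print(p, round(padic_entropy(probs, p), 3))   # -1.386 for p = 2, else 0.0
```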

This is just the basic vision. Working out the details requires considerably more lines of text: see my homepage.

Weak form of electric magnetic duality and duality between Minkowskian and Euclidian space-time regions

The reduction of the Kähler action for space-time sheets with Minkowskian signature of the induced metric follows from the assumption that the Kähler current is proportional to the instanton current and from the weak form of electric-magnetic duality. The first property implies a reduction to a 3-D term associated with the wormhole throats, and the latter property reduces this term to an Abelian Chern-Simons term. I have not explicitly considered whether the same happens in the 4-D regions of Euclidian signature representing wormhole contacts.

If these assumptions are made also in the Euclidian region, the outcome is that one obtains a difference of two Chern-Simons terms, coming from the Minkowskian and Euclidian regions, at the light-like wormhole throats. This difference can be non-trivial since the Kähler form of CP_2 defines a non-trivial U(1) bundle. This however suggests that the total Kähler action is quantized in integer multiples of the Kähler action of the CP_2 type vacuum extremal, so that one would effectively have a sum over n-instanton configurations.

If the Kähler function of the "world of classical worlds" (WCW) is identified as the total Kähler action, this implies the vanishing of the Kähler metric of WCW, which is a catastrophe. Should one modify the definition of the Kähler function by considering only the contribution from either the Minkowskian or the Euclidian regions? And what about the vacuum functional: should one identify it as the exponent of the Kähler function or of the Kähler action in this case?

  1. The Kähler metric of WCW must be non-trivial. If the Kähler function is piecewise constant in WCW, and if its second functional derivatives with respect to complex WCW coordinates indeed define the WCW Kähler metric, then this metric vanishes identically almost everywhere. This is a catastrophe.

  2. To understand how the problem can be cured, notice that the WCW metric receives a non-trivial contribution from the two Lagrange multiplier terms stating the weak form of electric-magnetic duality at the Minkowskian and Euclidian sides. Neither Lagrange multiplier term contributes to the Kähler action. Either of them would guarantee that the theory does not reduce to a mere topological QFT and would give rise to a non-trivial Kähler metric. If both are taken into account their contributions cancel each other and the metric is trivial. The conjectured duality between the descriptions based on space-time regions of Minkowskian and Euclidian signature suggests that one should define the Kähler function and Kähler metric using only one of the two contributions at the wormhole throats.

    This duality could be very useful practically, since the two expansions could correspond to weakly and strongly interacting phases analogous to those encountered in the case of electric-magnetic duality. On the Euclidian side of the duality one would have a power series in the powers exp(-n·8π²/g_K²), multiplied by the exponential of the Minkowskian contribution with a negative sign. On the Minkowskian side of the duality one would have the exponent of the Minkowskian contribution with a positive sign. p-Adicization suggests strongly that the exponent exp(-8π²/g_K²) defining the Kähler action of CP_2 is a rational number.

  3. Usually the vacuum functional is identified as the exponent of the Kähler function, but could one identify the vacuum functional as the exponent of the total Kähler action, giving a discrete spectrum for its values? The answer seems to be negative. There are excellent reasons for the identification of the vacuum functional as the exponent of the Kähler function. For instance, Gaussian and metric determinants cancel each other, and the constant curvature space property with vanishing Ricci scalar implies that the curvature scalar, giving otherwise divergent loop contributions, vanishes. If one modifies the vacuum functional from the Kähler function to the total Kähler action, there is no kinetic term in the exponent of the vacuum functional, and one must give up the idea about a perturbative definition of the WCW functional integral using the (1,1) part of the contravariant WCW metric as a propagator.

  4. The symmetric space property of WCW is what gives hopes about a practical definition of the functional integral which is number theoretically universal, making sense therefore also in the p-adic context. The reduction of the functional integral to harmonic analysis in infinite-dimensional symmetric spaces, allowing one to define integrals group theoretically, would allow one to define functional integrals non-perturbatively without a propagator expansion. However, if the functional integral fails perturbatively, the hopes that it makes sense physically are meager.

The overall conclusion is that the only reasonable definitions of the Kähler function of WCW and of the vacuum functional realize the conjectured duality between the Minkowskian and Euclidian regions of space-time surfaces. This duality would have also a number theoretical interpretation. Minkowskian regions of the space-time surface would correspond to hyper-quaternionic and Euclidian regions to quaternionic regions. In hyper-quaternionic regions the modified gamma matrices would span a hyper-quaternionic plane of complexified octonions (imaginary units multiplied by a commutative imaginary unit). In quaternionic regions the modified gamma matrices, multiplied by a product of a fixed octonionic imaginary unit and the commutative imaginary unit, would span a quaternionic plane of complexified octonions (see this).

For background see the chapter Does the Modified Dirac Equation Define the Fundamental Variational Principle of "TGD: Physics as Infinite-Dimensional Geometry".

Saturday, February 19, 2011

Is the effective metric defined by modified gamma matrices effectively one- or two-dimensional?

The following argument suggests that the effective metric defined by the anti-commutators of the modified gamma matrices is effectively one- or two-dimensional. Effective one-dimensionality would conform with the observation that the solutions of the modified Dirac equation can be localized to one-dimensional world lines, in accordance with the vision that finite measurement resolution implies a discretization reducing partonic many-particle states to quantum superpositions of braids. This localization to 1-D curves occurs always at the 3-D orbits of the partonic 2-surfaces.

The argument is based on the following assumptions.

  1. The modified gamma matrices for Kähler action are contractions of the canonical momentum densities T^{αk} with the gamma matrices of H.

  2. The strongest assumption is that the isometry currents

    J^{Aα} = T^{αk} j^A_k

    for the preferred extremals of Kähler action are of the form

    J^{Aα} = Ψ^A (∇Φ)^α

    with a common function Φ guaranteeing that the flow lines of the currents integrate to coordinate lines of a single global coordinate variable (Beltrami property). Index raising is carried out by using the ordinary induced metric.

  3. A weaker assumption is that one has two functions, Φ_1 and Φ_2, assignable to the isometry currents of M^4 and CP_2 respectively:

    J^{Aα}_1 = Ψ^A_1 (∇Φ_1)^α ,

    J^{Aα}_2 = Ψ^A_2 (∇Φ_2)^α .

    The two functions Φ_1 and Φ_2 could define dual light-like curves spanning a string world sheet. In this case one would have effective 2-dimensionality and a decomposition to string world sheets (for the concrete realization see this). Isometry invariance does not allow more than two independent scalar functions Φ_i.

Consider now the argument.

  1. One can multiply both sides of this equation with j^A_k and sum over the index A labeling the isometry currents: translation currents for M^4 and SU(3) currents for CP_2. The tensor quantity ∑_{A,B} η_{AB} j^{Ak} j^{Bl} is invariant under isometries and must therefore satisfy

    ∑_{A,B} η_{AB} j^{Ak} j^{Bl} = h^{kl} ,

    where η_{AB} denotes the flat tangent space metric of H. In M^4 degrees of freedom this statement becomes obvious by using linear Minkowski coordinates.

    In the case of CP_2 one can first consider the simpler case S² = CP_1 = SU(2)/U(1). The coset space property implies, in a standard complex coordinate transforming linearly under U(1), that only the isometry currents belonging to the complement of U(1) contribute to the sum at the origin, so the identity holds true at the origin, and by the symmetric space property it holds everywhere. The identity can be verified also directly in standard spherical coordinates. The argument generalizes to the case of CP_2 = SU(3)/U(2) in an obvious manner.

  2. In the most general case one obtains

    T^{αk}_1 = ∑_A Ψ^A_1 j^{Ak} × (∇Φ_1)^α ≡ f^k_1 (∇Φ_1)^α ,

    T^{αk}_2 = ∑_A Ψ^A_2 j^{Ak} × (∇Φ_2)^α ≡ f^k_2 (∇Φ_2)^α .

    Here i=1 refers to the M^4 part of the energy momentum tensor and i=2 to its CP_2 part.

  3. The effective metric given by the anti-commutator of the modified gamma matrices is in turn given by

    G^{αβ} = m_{kl} f^k_1 f^l_1 (∇Φ_1)^α (∇Φ_1)^β + s_{kl} f^k_2 f^l_2 (∇Φ_2)^α (∇Φ_2)^β .

    The effective metric is effectively 1-dimensional for Φ_1 = Φ_2 in the sense that the only non-vanishing component of G^{αβ} is the diagonal component along the coordinate line defined by Φ ≡ Φ_1 = Φ_2. The same holds for the covariant form of the metric, since index lowering does not affect the rank of the tensor. This would correspond to an effective reduction to a dynamics of point-like particles for a given selection of braid points. For Φ_1 ≠ Φ_2 the metric is effectively 2-dimensional and would correspond to stringy dynamics, as the numerical sketch below illustrates.
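The rank statement can be checked directly: a metric built as a sum of two terms of the form f f (∇Φ)(∇Φ) is a sum of two rank-1 matrices, hence of rank 2 in general and of rank 1 when the two gradients coincide. A small numpy sketch (random vectors stand in for the contracted factors):

```python
import numpy as np

rng = np.random.default_rng(0)

def effective_metric(grad1, grad2, c1=1.0, c2=1.0):
    """Sum of two rank-1 terms, G = c1*g1*g1^T + c2*g2*g2^T."""
    return c1 * np.outer(grad1, grad1) + c2 * np.outer(grad2, grad2)

g1, g2 = rng.normal(size=4), rng.normal(size=4)
print(np.linalg.matrix_rank(effective_metric(g1, g2)))  # 2: stringy case
print(np.linalg.matrix_rank(effective_metric(g1, g1)))  # 1: point-like case
```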

For background see the chapter Does the Modified Dirac Equation Define the Fundamental Variational Principle of "TGD: Physics as Infinite-Dimensional Geometry".

Thursday, February 17, 2011

A comment about topological explanation of family replication phenomenon

The topological explanation of the family replication phenomenon of fermions in terms of the genus g, defined as the number of handles added to a sphere to obtain the quantum number carrying partonic 2-surface, distinguishes TGD from GUTs and string models. The orbit of the partonic 2-surface defines a 3-D light-like orbit identified as a wormhole throat at which the induced metric changes its signature. The original model of elementary particle involved only a single boundary component, replaced later by a wormhole throat. The generalization to the recent situation, in which elementary particles correspond to wormhole flux tubes of length of order weak length scale with pairs of wormhole throats at their ends, is straightforward.

The basic objection against the proposal is that it predicts an infinite number of particle families unless the g < 3 topologies are preferred for some reason. Conformal and modular symmetries are basic symmetries of the theory, and global conformal symmetries provide an excellent candidate for the sought-for reason.

  1. For g < 3 the 2-surfaces are always hyper-elliptic, which means that they always have Z_2 as global conformal symmetries. For g > 2 these symmetries are absent in the generic case. Moreover, the modular invariant elementary particle vacuum functionals vanish for hyper-elliptic surfaces with g > 2. This leaves several options to consider. The basic idea is however that ground states are usually highly symmetric and that elementary particles correspond to ground states.

  2. The simplest guess is that g > 2 surfaces correspond to very massive states decaying rapidly to states with smaller genus. Due to the conformal symmetry, g < 3 surfaces would be analogous to ground states and would have small masses.

  3. The possibility of partonic 2-surfaces of macroscopic and even astrophysical size, identifiable as seats of anyonic macroscopic quantum phases (see this), suggests an alternative interpretation consistent with global conformal symmetries. For partonic 2-surfaces of macroscopic size it seems natural to consider handles as particles glued to a much larger partonic 2-surface by the topological sum operation (topological condensation).

    All orientable manifolds can be obtained by the topological sum operation from what can be called prime manifolds. In the 2-D orientable case the prime manifolds are the sphere and the torus, representing in a well-defined sense 0 and 1, so that topological sum corresponds arithmetically to the addition of positive integers. This would suggest that only sphere and torus appear as single particle states. Particle interpretation however requires that also g=0 and g=2 surfaces topologically condensed on a larger anyonic 2-surface have a similar interpretation, at least if they have small enough size. What kind of argument could justify this kind of interpretation?

  4. An argument based on symmetries suggests itself. The reduction of degrees of freedom is the generic signature of a bound state. Bound state property implies also the reduction of approximate single particle symmetries to an exact overall symmetry. The rotational symmetries of the hydrogen atom represent a good example of this. For free many-particle states each particle transforms according to a representation of the rotation group, with total angular momentum defined as the sum of its spin and angular momentum. For bound states the rotational degrees of freedom are strongly correlated and only overall rotations of the state define rotational symmetries.

    In this spirit one could interpret the sphere as vacuum, the torus as a single handle state, and the torus with a handle as a bound state of two handles in conformal degrees of freedom, meaning that the Z_2 symmetries of the vacuum and the handles are frozen in topological condensation (topological sum) to a single overall Z_2. If this interpretation is correct, g > 2 2-surfaces would always have a decomposition to many-particle states consisting of spheres, tori, and tori with a single handle, glued to a larger sphere by topological sum. Each of these topologically condensed composites would possess Z_2 as an approximate single particle symmetry.

For more details see the chapter Elementary Particle Vacuum Functionals of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

Tuesday, February 15, 2011

Isotope effect of olfaction

Flies can smell the difference between normal hydrogen and deuterium. This is not in accordance with the standard theory of olfaction, which says that olfaction relies on the shape of the molecule, but it conforms with the theory of Luca Turin, who is one of the co-authors of the article

Franco, M. I., Turin, L., Mershin, A. & Skoulakis, E. M. C. Proc. Natl Acad. Sci. USA doi:10.1073/pnas.1012293108 (2011)

reporting the discovery. The theory assumes that olfaction relies on molecular vibrational frequencies, which depend on the mass of the isotope. You can make your day perfect by enjoying the summary of the vibrational theory of scents by Luca Turin himself (I am grateful to Fischer Gabor for the link). There is also a Wikipedia article about the Vibration Theory of Olfaction.

1. Turin's theory

From Turin's lecture and the Wikipedia article one learns why reductionism is so nice when it can be applied.

  1. If the molecular vibrations in a reasonable approximation reduce to independent vibrations assignable to the various chemical bonds, the problem of predicting the odor of the molecule reduces to the calculation or measurement of the oscillation frequencies associated with the chemical bonds between two atoms or between two molecules forming a bigger molecule as a composite. The near IR frequencies in the .8-2.5 μm wavelength range associated with the vibrational spectrum are inversely proportional to the square root of the reduced mass of the pair of atoms or molecules connected by the chemical bond, and the IR frequencies related to rotational-vibrational transitions, which depend in a more complex manner on the molecular mass, are good candidates for inducing the olfactory qualia at least in the case of insects.

  2. The situation is also simplified by the fact that only a finite range of frequencies is expected to induce an odor sensation, just as only a finite range of frequencies induces a visual percept. Hence the engineering of odors becomes possible by considering only some basic bonds. One can test the model by replacing hydrogen with deuterium in some constituent of the molecule, and this was done in the article referred to above.

  3. The odor of the molecule should be a superposition of the basic odors assignable to the basic chemical bonds, just like visual color is a superposition of primary colors. One must however remember that the quantum phase transition inducing the odor sensation itself need not have anything to do with the IR photons, and many frequencies could induce the same quantum phase transition. The innocent novice is also allowed to ask whether the harmonics of the fundamental oscillation frequency could give rise to an olfactory analog of timbre, distinguishing between different musical instruments, and whether octaves correspond to more or less similar odor sensations. The following considerations suggest that the answer to these questions is negative.
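The size of the isotope shift follows from the harmonic oscillator relation ν = (1/2π)(k/μ)^(1/2): deuteration changes the reduced mass μ but not the force constant k. A quick sketch (standard physics; the 3000 cm⁻¹ C-H stretch is a typical textbook value, not taken from the article):

```python
def reduced_mass(m1, m2):
    """Reduced mass of a two-body oscillator, in atomic mass units."""
    return m1 * m2 / (m1 + m2)

# The frequency scales as 1/sqrt(reduced mass) at fixed force constant,
# so swapping H for D in a C-H bond shifts the stretch down by ~27%.
mu_CH = reduced_mass(12.0, 1.0)
mu_CD = reduced_mass(12.0, 2.0)

ratio = (mu_CH / mu_CD) ** 0.5
print(ratio)          # ~0.73
print(3000 * ratio)   # a 3000 cm^-1 C-H stretch moves to ~2200 cm^-1 for C-D
```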

In Turin's theory the vibrational frequencies are interpreted in terms of a model of the receptor based on the idea that electron tunneling occurs between the odor molecule and the receptor and generates the odor sensation if the energies of the electron states on both sides are the same. In general the ground state energies of the electron on the two sides are different, but it can happen that the condition is satisfied for some excited electron state of the acceptor, so that odor perception is due to tunneling to an excited state. The model requires the binding of the odorant molecule to the receptor, so that there is a close relationship with the standard theory assuming a lock-and-key mechanism.

2. Callahan's theory

The finding conforms also with the old discovery of Callahan that the olfaction of insects is analogous to seeing at IR frequencies. This hypothesis explains among other things the finding that insects seem to love candles. See:

Callahan, P. S. (1977). Moth and Candle: the Candle Flame as a Sexual Mimic of the Coded Infrared Wavelengths from a Moth Sex Scent. Applied Optics. 16(12) 3089-3097.

If I have understood Callahan's theory correctly, the IR photons emitted by the odorant would induce transitions of electrons or Cooper pairs in the odor receptor. This would allow "radiative smelling" without a direct contact between odor molecules and olfactory receptors, and at first glance this seems like an unrealistic prediction. However, since the average power of the radiation is proportional to 1/r², where r is the distance between the receptor and the molecule, radiative smelling would in practice be limited to rather short distances unless the radiation is guided. Maybe this could be tested experimentally by using a coherent beam of IR light as a candidate for an artificial odorant.

3. TGD based theory

In TGD inspired theory of qualia one must distinguish between the sensory input inducing the quale and its secondary representation in terms of Josephson and cyclotron frequencies.

  1. All qualia are coded (but not necessarily induced!) by various frequencies, and communication using dark photons with various values of Planck constant, meaning a scaling down of the basic frequencies, is an essential element of the communications at the level of the biological body and between the magnetic body and the biological body. Josephson frequencies and cyclotron frequencies with so large a Planck constant that the energies are above thermal energy play a key role in these communications. Note that cyclotron frequencies are inversely proportional to the mass of the ion, so that an isotope effect is predicted also at this level.

    Josephson frequencies are assignable to the cell membrane, and one ends up with a nice model for the visual qualia assuming some new physics predicted by TGD. Josephson frequencies and their modulation (as in the case of hearing) should be highly relevant for all qualia.

  2. The capacitor model for sensory qualia assumes that all qualia are generated via the quantum analog of dielectric breakdown, in which particles with the quantum numbers characterizing the quale flow between the plates of the capacitor. For sensory receptors the capacitor is a multi-layered structure obtained by a multiple folding of the cell membrane, so that the efficiency of the sensory receptor increases.

  3. In Turin's model the second plate of the capacitor would correspond to the odorant molecule. This does not however allow anything resembling dielectric breakdown. It is difficult to imagine how to achieve a quantum phase transition involving a simultaneous tunneling of a large number of electrons unless the receptor binds a large number of odorant molecules. The odor molecules should also form a quantum coherent state: a molecular analog of an atomic Bose-Einstein condensate would be required. This would mean that only very special odor molecules could be smelled.

  4. In Callahan's variant of the theory the IR photons could excite the Cooper pairs of the other plate of the capacitor, so that the tunneling becomes possible and a quantum variant of dielectric breakdown can take place. This model is consistent also with the assumption that the cell membrane acts as a Josephson junction and as the fundamental sensory capacitor. The energy gained by an electron in the electric field of the cell membrane is in the range .04-.08 eV, which indeed corresponds to IR frequencies (see the conversion sketched after this list). The variation of the membrane potential would give rise to the spectrum of basic odors. Roughly one octave of frequencies could be smelled if the cell membrane defines the fundamental nose smelling the energy of the electron.

    This option allows also the coding of odors by the IR frequencies themselves, so that the brain could generate virtual odors by sending quantum coherent IR light to the odor receptors. This would explain odor hallucinations (and also other sensory hallucinations) as virtual percepts generated by the brain itself. This sensory feedback would be absolutely essential for the building up of standardized sensory percepts.

  5. The difference between visual and odor receptors would be that the ground states of the cell membrane would correspond to nearly vacuum extremals resp. far from vacuum extremals, so that the Josephson frequencies would be in the visible resp. IR range.
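The energy window quoted in point 4 indeed lands in the IR, and it spans exactly one octave, as a one-line conversion shows (a sketch; λ = hc/E with standard constants):

```python
# Convert the membrane-potential energy window 0.04-0.08 eV into photon
# wavelengths, lambda = h*c / E.
HC_EV_NM = 1239.84   # h*c in eV*nm

for E in (0.04, 0.06, 0.08):   # eV, energy gained by an electron
    print(f"E = {E:.2f} eV  ->  lambda = {HC_EV_NM / E / 1000:.1f} um")
# -> 31.0, 20.7, 15.5 micrometers: mid-infrared, one octave from 0.04 to 0.08 eV
```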

For the general theory of qualia including also a model of odor perception see the chapter General Theory of Qualia of "TGD Universe as Conscious Hologram".

Friday, February 04, 2011

Good bye large extra dimensions and MSSM

New results giving strong constraints on large extra dimensions and on the parameters of the minimally supersymmetric standard model (MSSM) have come from LHC, and one might say that both large extra dimensions and MSSM are experimentally excluded.

The problems of MSSM

According to the article The fine-tuning price of the early LHC by A. Strumia, the results from LHC reduce the parameter space of MSSM dramatically. Recall that the key idea of MSSM is that the presence of superpartners tends to cancel the loop corrections from ordinary particles, which give the Higgs mass a correction much larger than the mass itself. Unfortunately, the experimental lower bounds on the masses of superpartners are so high and the upper bound on the Higgs mass so low that the superpartners cannot give rise to large enough compensating corrections. This means that fine-tuning is needed even in MSSM; this is known as the little hierarchy problem.

Also the article Search for supersymmetry using final states with one lepton, jets, and missing transverse momentum with the ATLAS detector in s^(1/2) = 7 TeV pp collisions by the ATLAS collaboration at LHC poses strong limits on the parameters of MSSM, implying that the mass of the gluino is above 700 GeV in the case that the gluino mass is the same as that of the squarks. The essential assumption is that R-parity is an exact symmetry, so that the lightest superpartner is stable. The signature of SUSY is indeed missing energy resulting in the decay chain beginning with the decay of the gluino to a chargino and quark pair, followed by the decay of the chargino to a W boson and a neutralino representing missing energy.

A theorist with a modest amount of aesthetic sense would have made the unavoidable conclusions a long time ago, but the fight of theory against facts has continued. Maybe some corner of the parameter space might give what one wants?! This has been the hope. The results from LHC however do not leave much of this dream. One must try something else.

The difficulties of large extra dimensions

One example of this something else is large extra dimensions implying a massive graviton, which could provide a new mechanism for massivation based on the idea that massive particles in Minkowski space are massless particles in a higher-dimensional space (also an essential element of TGD). This could perhaps solve the little hierarchy problem if the mass of the Kaluza-Klein graviton is in the TeV range.

The article LHC bounds on large extra dimensions by A. Strumia and collaborators poses very strong constraints on large extra dimensions and on the mass and effective coupling constant parameters of the massive graviton. The Kaluza-Klein graviton would appear in exchange diagrams and loop diagrams for 2-jet production and could become visible in higher energy proton-proton collisions at LHC. The KK graviton would also be produced directly, showing up as invisible missing energy in proton-proton collisions. The general conclusion from the data gathered hitherto shrinks dramatically the allowed parameter space for the KK graviton. I must say that for me the idea of large extra dimensions is so ugly that the results are not astonishing. The aether hypothesis was a beauty compared with this beast.

Could TGD approach save super-symmetry?

What is left? Should we follow the example of landscapeologists and just accept the anthropic principle, giving up all attempts to understand electroweak symmetry breaking?

Not at all! In TGD framework - which of course represents bad theoretical physics from the point of view of the hegemony - the situation is not at all so desolate. Due to the differences between the induced spinor structure and ordinary spinors, Higgs corresponds to an SU(2) triplet and singlet in TGD framework rather than a complex doublet. The recent view about particles as bound states of massless wormhole throats forced by twistorial considerations, and the emergence of physical particles as bound states of wormhole contacts carrying fermion number and vibrational degrees of freedom, strongly suggests (I do not quite dare to say "implies") that also the photon and gluons become massive and eat their Higgs partners to get the longitudinal polarization they need. No Higgs, no fine tuning of Higgs mass, no hierarchy problems.

Note that super-symmetry is not given up in TGD but differs in many essential respects from that of MSSM. In particular, super-symmetry breaking and the breaking of R-parity are automatically present from the beginning and relate very closely to the massivation. Therefore the neutralino is unstable against decay to a neutrino-antineutrino pair if the mass of the neutralino is larger than twice the mass of the neutrino. The decay of the neutralino must also take place at a high enough rate. The rate is determined by the rate at which the right-handed neutrino transforms to a left-handed one, which in turn is dictated by the mixing of M4 and CP2 gamma matrices.

  1. If the gamma matrices were induced gamma matrices, the mixing would be large due to the light-likeness of the wormhole throats carrying the quantum numbers. Induced gamma matrices are however excluded by internal consistency, which requires modified gamma matrices obtained as contractions of canonical momentum densities with imbedding space gamma matrices. Induced gamma matrices would require the replacement of Kähler action with 4-volume, and this is an unphysical option.

  2. In the interior Kähler action defines the canonical momentum densities, and near the wormhole throats the mixing is large. One should note the condition that the modified gamma matrices multiplied by the square root of the metric determinant must remain finite. One should show that the weak form of electric-magnetic duality guarantees this: it could even imply the vanishing of the limiting values of these quantities, with the interpretation that the space-time surface becomes the analog of an Abelian instanton with Minkowski signature having vanishing energy momentum tensor near the wormhole throats. If this is the case, the Euclidian and Minkowskian regions of the space-time surface could provide dual descriptions of physics in terms of generalized Feynman diagrams and fields.

  3. At wormhole throats the Abelian Chern-Simons-Kähler action with the constraint term guaranteeing the weak form of electric-magnetic duality defines the modified gamma matrices. Without the constraint term the Chern-Simons gammas would involve only CP2 gamma matrices and no mixing of M4 chiralities would occur. The constraint term, which transforms TGD from a topological QFT to an almost topological QFT by bringing an M4 part into the modified gamma matrices, however induces a mixing proportional to the Lagrange multiplier. It is difficult to say anything precise about the strength of the constraint force density, but one expects that the mixing is large since it is also large in the nearby interior.

If the mixing of the modified gamma matrices is indeed large, the transformation of the right-handed neutrino to its left-handed companion should take place rapidly. If this is the case, the decay signatures of sparticles are dramatically changed, as will be found, and the bounds on the masses of squarks and gluinos derived for MSSM do not apply in TGD framework.

In TGD framework p-adic length scale hypothesis (see this and this) makes it possible to predict the masses of sleptons and squarks modulo scaling by powers of 2^(1/2) determined by the p-adic length scale, by using information coming from the CKM mixing induced by the topological mixing of particle families in TGD framework.

  1. If one assumes that the mass scale of SUSY corresponds to the Mersenne prime M89 assigned with intermediate gauge bosons, one obtains unique predictions for the various masses apart from uncertainties due to the mixing of quarks and neutrinos.

  2. In first order the p-adic mass formula reads as

    m_F = (n_F/5)^(1/2) × 2^((127-k_F)/2) × m_e ,

    n_L = (5,14,65), n_ν = (4,24,64), n_U = (5,6,58), n_D = (4,6,59).

    Here k_F is the integer characterizing the p-adic mass scale of the fermion via p ≈ 2^(k_F). The values of k_F are not listed here since they are not needed now. Note that the electroweak symmetry breaking distinguishing U and D type fermions is very small when one uses the p-adic length scale as unit.

    By assuming k_F = 89 for the superpartners one obtains in good approximation (the first calculation contained an erroneous scaling factor; a numeric cross-check is sketched after this list)

    m_sL/GeV = (262, 439, 945) ,

    m_sν/GeV = (235, 423, 938) ,

    m_sU/GeV = (262, 287, 893) ,

    m_sD/GeV = (235, 287, 900) .

  3. The simplest possibility is that also the electroweak gauginos are characterized by k=89 and have the same masses as W and Z in good approximation. Therefore sW could be the lightest supersymmetric particle and could be observed directly if the neutrino mixing is not too fast, allowing the R-parity breaking decay sW → W + ν. Also gluinos could be characterized by M89 and have a mass of the order of the intermediate gauge boson mass. For this option, to be discussed below, the decay scenario of MSSM changes considerably. Also the Higgsino (note that the entire Higgs would be eaten by the massivation of all electroweak gauge bosons in the simplest scenario) could be produced in the decays and would naturally have the electroweak mass scale.

  4. It should be noticed that the single strange event reported in 1995 (see the previous posting) gave for the mass of the selectron the estimate 131 GeV, which corresponds to M91 instead of M89. This event also made it possible to estimate the masses of the zino and the corresponding Higgsino. The results are summarized by the following table:

    m(se)=131 GeV , m(sZ0)=91.2 GeV , m(sh)=45.6 GeV .

    If one takes these results at face value, one must conclude either that the M89 hypothesis is too strong, or that M_SUSY corresponds to M91, or that M89 is the correct identification but sfermions can appear in several p-adic mass scales.
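
The following is a minimal numeric sketch of the mass formula above, my own cross-check rather than part of the original calculation. It assumes m_e = 0.5 MeV, which reproduces the quoted 262 GeV base mass; the physical value 0.511 MeV would shift all entries by about 2 per cent.

```python
# First-order p-adic mass formula with k_F = 89 for all superpartners.
M_E = 0.0005  # GeV; convention reproducing the quoted 262 GeV base mass

def padic_mass(n_f, k_f=89):
    return (n_f / 5) ** 0.5 * 2 ** ((127 - k_f) / 2) * M_E

N = {"sL": (5, 14, 65), "snu": (4, 24, 64), "sU": (5, 6, 58), "sD": (4, 6, 59)}
for name, ns in N.items():
    print(name, [f"{padic_mass(n):.0f}" for n in ns])
# sL -> 262, 439, 945; sU -> 262, 287, 893; sD -> 234, 287, 900 GeV, close to
# the table above. The sneutrino row comes out as 234, 574, 938, so the middle
# entry differs from the quoted 423, suggesting a different n or k was used.
```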

The decay cascades searched for at LHC are initiated by the decays q → sq + sg and g → sq + sqc. Consider first R-parity conserving decays. The gluino could decay in an R-parity conserving manner via sg → sq + q. The squark in turn could decay via sq → q' + sW or via sq → q + sZ0. Also the Higgsino could occur in the final states. For the proposed first guess about the masses, the decays sW → νe + se and sZ0 → νe + sνe would not be possible on mass shell.

If the mixing of right-handed and left-handed neutrinos is fast enough, R-parity is not conserved and the decays sg → g + ν and sq → q + ν could take place via the mixing νR → νL followed by an electroweak interaction between νL and the quark or antiquark appearing as a composite of the gluon. The decay signature in this case would be a pair of jets (quark and antiquark jets, or two gluon jets), both containing a lonely neutrino not accompanied by the charged lepton required by ordinary electroweak decays. Also the decays of electroweak gauginos and sleptons could produce similar lonely neutrinos.

The lower bound on squark masses from LHC is about 600 GeV, and that on gluino masses about 800 GeV, assuming a light neutralino; this is slightly above the proposed masses of the lightest squarks (see this). These masses are allowed for the R-parity conserving option: if the decay rate producing the chargino is reduced by the large mass of the chargino, the bounds become weaker. If the decay via R-parity breaking is fast enough, no bounds on the masses of squarks and gluinos are obtained in TGD framework, but jets with a neutrino unbalanced by a charged lepton should be observed.

The anomalous magnetic moment of muon as a constraint on SUSY

The anomalous magnetic moment a_μ = (g-2)/2 of the muon has been used as a further constraint on SUSY. The measured value is a_μ(exp) = 11659208.0(6.3)×10^(-10). The theoretical prediction decomposes into a sum of reliably calculable contributions and a hadronic contribution, for which the low energy photon appearing as a vertex correction decays to virtual hadrons. This contribution is not easy to calculate since the non-perturbative regime of QCD is involved. The deviation between prediction and experimental value is Δa_μ(exp-SM) = 23.9(9.9)×10^(-10), giving Δa_μ(exp-SM)/a_μ = 2×10^(-6). The hadronic contribution is estimated to be 692.3×10^(-10), so that the anomaly is about 3 per cent of the hadronic contribution. This suggests that the uncertainties due to the non-perturbative effects could explain the anomaly.
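
A quick arithmetic check of the quoted numbers (my own verification, not part of the original text):

```python
a_mu_exp = 11659208.0e-10   # measured value
delta_a_mu = 23.9e-10       # exp - SM deviation
hadronic = 692.3e-10        # estimated hadronic contribution

print(f"{delta_a_mu / a_mu_exp:.1e}")   # 2.0e-06, the quoted relative deviation
print(f"{delta_a_mu / hadronic:.1%}")   # 3.5%, i.e. roughly 3 per cent of hadronic
```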

It has been proposed that the loops involving superpartners could explain the anomaly. In order to get some idea of the situation, one can simply assume that the QFT calculation makes sense as an approximation also in TGD framework and try to identify the TGD counterparts and the values of the parameters appearing in the MSSM calculation.

At one-loop order one would have the processes μ → sμ + sZ0 and μ → sνμ + sW. The situation is complicated by the possible mixing of the gauginos and Higgsinos, and in MSSM this mixing is described by the mixing matrices called X and Y.

  1. The basic outcome is that the mixing is proportional to the factor m_μ^2/m_SUSY^2. In the present situation m_SUSY = m_W is a reasonable first guess, so that the mixing is large and could explain the anomaly. A second guess is the M89 p-adic mass scale.

  2. In MSSM the mixing is also proportional to a tan(β) factor, where the angle β characterizes the ratio of the mass scales of U and D type fermions fixed by the ratio of the Higgs expectations for the two complex Higgs doublets (see the reference). The parameter tan(β) characterizes in MSSM the ratio of the vacuum expectation values of the two Higgses and cannot be fixed from this criterion, since in TGD framework one has one scalar Higgs and a pseudoscalar Higgs decomposing to a triplet and singlet under SU(2). One can however use the fact that β also characterizes the mixing of sW and the charged Higgsino, parametrized by a matrix whose rows are given by

    X_1 = (M_2, 2^(1/2) M_W cos(β)) ,

    X_2 = (2^(1/2) M_W cos(β), μ) .

    This parameterization makes sense in TGD framework with M_2 and μ identified as the masses of the wino and the charged Higgsino before the mixing giving rise to their physical masses (note that the sign of μ can be negative). The first guess is that, apart from the p-adic mass scale, one has M_2 = -μ = m: this guarantees identical masses for the mixed states, in accordance with the idea that different masses for particles and sparticles result from different p-adic length scales. For cos(β) = 1/2^(1/2) this would give a mass matrix with eigenvalues

    (M, -M) , M = (m^2 + m_W^2)^(1/2) ,

    so that the masses of the mixed states would be identical and above the W mass for p = M89 (a numeric check is sketched after this list). Symmetry breaking by an increase of the p-adic length scale could however reduce the mass of one of the states by a power of 2^(1/2).

  3. In MSSM a 4×4 matrix is needed to describe the mixing of the neutral gauginos and the two kinds of neutral Higgsinos. In TGD framework the second Higgs (if it exists at all) is a pseudoscalar and does not contribute, so 2×2 matrices describe the mixing also now.

    1. Since the Higgs and Higgsino have representation content 3+1 with respect to electroweak SU(2) in TGD framework, one can speak about sh_B, B = W, Z, γ. An attractive assumption is that the Weinberg angle characterizes also the mixing giving rise to sZ and sγ on one hand and sh_γ and sh_Z on the other hand. This would reduce the mixing matrix to two 2×2 matrices: the first one for sγ and sh_γ and the second one for sZ and sh_Z.

    2. A further attractive assumption is that the mass matrices describing the mixing of the gauginos and the corresponding Higgsinos are in some sense universal with respect to electroweak interactions: the form of the mixing matrix would be essentially the same in all cases. This suggests that M_W is replaced in the above formula with the mass of Z0 or the photon in these matrices (recall that it is assumed that the photon gets a small mass by eating the neutral Higgs). Note that for the photino and the corresponding Higgsino the mixing would be small. The guess is M_2 = -μ = m_Z. For the photino one can guess that M_2 corresponds to the M89 mass scale.

    These assumptions of course define only the first maximally symmetric guess, and the simplest modification one can imagine is due to different p-adic mass scales. If the above discussed values for the zino and neutralino masses deduced from the 1995 event are taken at face value, the eigenvalues would be ±(M_Z^2 + m^2)^(1/2) with m = M_2 = -μ for sZ-sh_Z mixing, and the other state would have the p-adic length scale k=91 rather than k=89. M_2 and μ would have opposite signs, as required by the correct sign of the g-2 anomaly of the muon assuming that smuons correspond to k=89 (M89), as will be found.

  4. If one accepts the MSSM formula (see the reference)

    m_sν^2 = m_sL^2 + M_Z^2 cos(2β)/2 ,

    and uses the fact that the masses of the sneutrinos and sleptons are very near to each other, the natural guess is β = ±π/4, so that one would have tan(β) = 1. This corresponds to the lower bound of the allowed values of tan(β) and a small mass scale for the weak gauginos. In MSSM tan(β) > 2 is required, and this is due to the large value of m_SUSY.
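
A numeric check of the eigenvalue claim in item 2, under the stated assumptions M_2 = -μ = m and cos(β) = 1/2^(1/2); the value m = 104.9 GeV (the M89 mass scale quoted below) is used only for illustration.

```python
import numpy as np

m_W = 80.4    # GeV
m = 104.9     # GeV; M89 p-adic mass scale, illustrative

# With cos(beta) = 1/2**0.5 the off-diagonal entries 2**0.5 * m_W * cos(beta)
# reduce to m_W, and M_2 = -mu = m puts m and -m on the diagonal.
X = np.array([[m, m_W],
              [m_W, -m]])

print(np.linalg.eigvalsh(X))        # approximately [-132.2, 132.2]
print((m**2 + m_W**2) ** 0.5)       # 132.2 = (m^2 + m_W^2)^(1/2), as claimed
```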

By using the formulas 56-58 of the reference one obtains for the charged loop the expression

Δa_μ^± = -(21 g_2^2/32π^2) × (m_μ/m_W)^2 × sign(μ M_2) .

For the neutral contribution the expression is more difficult to deduce. As physical intuition suggests, the expression is proportional to 1/m_W^2 since m_W now corresponds to m_SUSY, although this is not obvious on the basis of the general formulas, which suggest proportionality to 1/m_μ^2. The p-adic mass scale corresponding to M89 is the natural guess for m_SUSY and would give m_SUSY = 104.9 GeV. The correction has a positive sign, which requires that μ and M_2 have opposite signs, unlike in MSSM. The sign factor is opposite to that in MSSM because the sfermion mass scales are there assumed to be much higher than the weak gaugino mass scale.

The ratio of the correction to the lowest order QED estimate a_μ,0 = α/2π can be written as

Δa_μ^+/a_μ,0 = (21/4) × (1/sin^2(θ_W)) × (m_μ/m_SUSY)^2 ≈ 2.73×10^(-5) ,

which is roughly 10 times larger than the observed correction (the first calculation contained an error; see also the numeric estimate below). The neutral contribution Δa_μ^0 should reduce this contribution and certainly does. At this moment I am however not yet able to transform the formula for it to TGD context. Also the scaling up of m_SUSY = m_W by a factor of order 2^(3/2) could reduce the correction.
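
A small numeric sketch of this estimate (my own arithmetic; the value of sin^2(θ_W) is an assumption, since the text does not quote one):

```python
from math import pi

m_mu = 0.10566          # GeV
m_susy = 104.9          # GeV, M89 p-adic mass scale
sin2_theta_w = 0.231    # assumed

ratio = (21 / 4) / sin2_theta_w * (m_mu / m_susy) ** 2
print(f"{ratio:.2e}")   # ~2.3e-5, of the order of the quoted 2.73e-5

# The observed anomaly in the same units, for comparison:
a_mu0 = (1 / 137.036) / (2 * pi)    # alpha/2pi
print(f"{23.9e-10 / a_mu0:.2e}")    # ~2.1e-6, roughly 10 times smaller
```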

(tan(β) = 1, m_SUSY = 100 GeV) corresponds to the boundary of the region allowed by the LHC data, and the g-2 anomaly is marginally consistent with these parameter values (see figure 16 of this). The reason is that in the present case the mass of the lightest Higgs particle does not pose any restrictions (the brown region in the figure). Due to the different mixing pattern of gauginos and Higgsinos in the neutral sector, the TGD prediction need not be identical with the MSSM prediction.

The proposed estimate is certainly a poor man's estimate since it is not clear how near the proposed twistorial approach relying on zero energy ontology is to the QFT approach. It is however encouraging that the simplest possible scenario might work and that this is essentially due to the p-adic length scale hypothesis.

Also M-theorists admit that there are reasons for skepticism

Lubos gives a link to the talk Supersymmetry From the Top Down by Michael Dine, who admits that there are strong reasons for skepticism. Dine emphasizes that the hierarchy problem related to the instability of the Higgs mass due to radiative corrections is the main experimental motivation for SUSY, but that the little hierarchy problem remains the greatest challenge of the approach. As noticed, in TGD this problem is absent. The same basic vision based on zero energy ontology and twistors predicts among other things

  • the cancellation of UV and IR infinities in generalized Feynman (or rather twistor-) diagrammatics,
  • that in the electroweak scale the stringy character of particles identifiable as magnetically charged wormhole flux tubes should begin to make itself manifest,
  • that particles usually regarded as massless eat all the Higgs-like particles accompanying them (here "predict" is perhaps too strong a statement),
  • pseudo-scalar counterparts of Higgs-like particles, which avoid the fate of their scalar variants (there already exist indications for pseudo-scalar gluons).
Combined with the powerful predictions of p-adic thermodynamics for particle masses, these qualitative successes make TGD a respectable candidate for the successor of string theory.

For more details see the chapter p-Adic Particle Massivation: New Physics of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

A closely packed system of low-mass, low-density planets transiting Kepler-11

NASA has published the first list of exoplanets found by the Kepler satellite. In particular, the NASA team led by Jack Lissauer reports the discovery of a system of six closely packed planets (see the article in Nature) around a Sun-like star christened Kepler-11a, located in the direction of the constellation Cygnus at a distance of about 2000 light years. The basic data about the six planets Kepler-11i, i = b,c,d,e,f,g and the star Kepler-11a can be found in Wikipedia. Below I will refer to the star as Kepler-11 and to the planets with the labels i = b,c,d,e,f,g.

Lissauer regards it as quite possible that there are further planets at larger distances. The fact that the orbital radius of planet g is only .462 AU, together with what we know about the solar system, suggests that this could be the case. This leaves the door open for an Earth-like planet.

The conclusions from the basic data

Let us list the basic data.

  1. The radius, mass, and surface temperature of Kepler-11 are very near to those of the Sun.

  2. The orbital radii using AU as unit are given by
    (.091, .106, .159, .194, .250, .462).
    The orbital radii can be deduced quite accurately from the orbital periods by using Kepler's third law, which states that the squares of the periods are proportional to the cubes of the orbital radii (see the sketch after this list). The orbital periods of the five inner planets are between 10 and 47 days, whereas g has a longer period of 118.37774 days (note the amazing accuracy). The orbital radii of e and f are .194 AU and .250 AU, so the temperature is expected to be much higher than at Earth and life as we know it is not expected to be there. The radiation flux from Kepler-11, scaling as 1/r^2, would be 16-26 times that at Earth, corresponding to an effective temperature roughly twice that at Earth. The fact that gas forms a considerable fraction of the planets' masses could however mean that this does not give a good estimate of the temperatures of the planets.

  3. The mass estimates using the Earth mass as unit are
    (4.3, 13.5, 6.1, 8.4, 2.3, <300).
    There are considerable uncertainties involved here, up to about 50 per cent.

  4. The estimates for the radii of the planets using the radius of Earth as unit are
    (1.97, 3.15, 3.43, 4.52, 2.61, 3.66).
    The uncertainties are about 20 per cent.

  5. From the estimates for the radii and masses one can conclude that the densities of the planets are considerably lower than that of Earth. The density of (e, f) is about (1/8, 1/4) of that of Earth. The surface gravitation for e and f is roughly 1/2 of that at Earth. For g it is the same as for Earth if g has a mass of roughly 15 Earth masses. For planet g only the upper bound of 300 Earth masses exists, so one can only say that its surface gravity is weaker than about 20g.
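
A minimal sketch (my own cross-check, not from the original text) of the Kepler's-law step in item 2. The stellar mass of 0.95 solar masses is an assumed round value consistent with item 1:

```python
M_STAR = 0.95    # solar masses, assumed (the star is stated to be Sun-like)
YEAR = 365.25    # days

def orbital_radius_au(period_days):
    # Kepler's third law in solar units: a^3 = M * T^2.
    return (M_STAR * (period_days / YEAR) ** 2) ** (1 / 3)

print(f"{orbital_radius_au(118.37774):.3f} AU")  # ~0.464 AU vs the quoted .462 AU for g
```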

The basic conclusions are the following. One cannot exclude the possibility that the planetary system contains Earth-like planets. Furthermore, the distribution of the orbital radii of the planets differs dramatically from that in the solar system.

How to understand the tight packing of the inner planets?

The striking aspect of the planetary system is how tightly packed it is: the ratio of the orbital radii of g and b is only about 5. This is a real puzzle for model builders, me included. TGD suggests three phenomenological approaches.

  1. Titius-Bode law
    r(n) = r_0 + r_1 × 2^n
    is supported by the p-adic length scale hypothesis. Stars would have an onion-like structure consisting of spherical shells, with the inner and outer radii of a shell differing by a factor of two. The formation of the planetary system involves the condensation of matter to planets at these spherical shells. The preferred extremals of Kähler action describing a stationary axially symmetric system correspond to spherical shells containing most of the matter. A rough model for a star would be in terms of this kind of spherical shells defining an onion-like structure, a hierarchy of space-time sheets topologically condensed on each other. The value of the parameter r_0 could be vanishing in the initial situation, but subsequent gravitational dynamics could make it positive, reducing the ratio r(n)/r(n-1) from its value 2.

  2. Bohr orbitology is suggested by the proposal that the gravitonic space-time sheets assigned with a given planet-star pair correspond to a gigantic value of the gravitational Planck constant given by
    hbar_gr = GMm/v_0 ,
    where v_0 has dimensions of velocity and is actually equal to the orbital velocity for the lowest Bohr orbit. For the inner planets in the solar system one has v_0/c ≈ 2^(-11).

    The physical picture is that visible matter concentrates around dark matter and in this manner makes its astroscopic quantum behavior visible. The model is extremely predictive since the spectrum of orbital radii would depend only on the mass of the star, and planetary systems would be much like atoms, with obvious implications for the probability of Earth-like systems supporting life. This model is consistent with the Titius-Bode model only if the Bohr orbitology is a late-comer in the planetary evolution.

  3. The third model is based on the same general assumptions as the second one but assumes only that dark matter in astrophysical length scales is associated with anyonic 2-surfaces (with light-like orbits in the induced metric, in accordance with holography) characterized by the value of the gravitational Planck constant. In this case the hydrogen atom inspired Bohr orbitology is just the first guess and cannot be taken too seriously. What would be important would be the genuinely quantal dynamics for the formation of the planetary system.

Can one interpret the radii in this framework in any reasonable manner?

  1. Titius-Bode predicts
    [r(n) - r(n-1)]/[r(n-1) - r(n-2)] = 2
    and works excellently for c, f, and g, whereas for b, d, and e the law fails (the difference ratios are computed in the sketch after this list). This suggests that the four inner planets b, c, d, and e, whose radii span a single 2-adic octave in good approximation (!), correspond to a single system which has split from a single planet or will fuse to a single planet in the distant future.

  2. Hydrogenic Bohr orbitology works only if g corresponds to the n=2 orbit; the n=1 orbit would then have radius .116 AU. From the proportionality r ∝ hbar_gr^2 ∝ 1/v_0^2 one obtains that one must have

    R == v_0^2(Kepler)/v_0^2(Sun) = 3.04 .

    This would result in a reasonable approximation from v_0(Kepler)/v_0(Sun) = 7/4 (note that the values of Planck constant are predicted to come as integer multiples of the standard value), giving R = (7/4)^2 ≈ 3.06.

    Note that the planets would correspond to those missing in the Earth-Sun system, for which one has n = 3, 4, 5 for the inner planets Mercury, Venus, and Earth.

    One could argue that Bohr orbits result as the planets fuse to two planets at these radii. This picture is not consistent with the Titius-Bode law, which predicts three planets in the final situation, unless the n=2 planet remains unrealized. By looking at the graphical representation of the orbital radii of the planetary system one has a tendency to say that b, c, d, e, and f form a single subsystem and could eventually collapse to a single planet. The gravitational force between g and f is larger than that between f and e for m(g) > 6m_E, so that one can ask whether f could eventually be caught by g in this case. Also the fact that one has r(g)/r(f) < 2 mildly suggests this.
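
A small sketch (my own check, not from the original text) of the two numeric claims above: the Titius-Bode difference ratios of item 1 and the n=1 Bohr radius of item 2.

```python
radii = {"b": .091, "c": .106, "d": .159, "e": .194, "f": .250, "g": .462}

# Titius-Bode test: successive difference ratios, which the law predicts to be 2.
values = list(radii.values())
diffs = [b - a for a, b in zip(values, values[1:])]
for name, d1, d2 in zip(list(radii)[2:], diffs, diffs[1:]):
    print(f"{name}: {d2 / d1:.2f}")

# Bohr orbit check: radii scale as n**2, so if g sits at n=2 the n=1 radius
# is r(g)/4.
print(f"n=1 radius: {radii['g'] / 4:.3f} AU")   # ~0.116 AU, as stated
```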

Tuesday, February 01, 2011

Infinite primes as an answer to the question of Manin

Kea quotes Manin in her blog: If numbers are similar to polynomials in one variable over a finite field, what is the analogue of polynomials in several variables? Or, in more geometric terms, does there exist a category in which one can define absolute Descartes powers SpecZ×⋯×SpecZ?

Well, I answered this question about 15 years ago. I of course did not know that this question had been posed by someone (and still do not know when it was posed for the first time). The answer came as I introduced the notion of infinite prime with motivations which were purely physical. I started from the following observation.

If evolution means a gradual increase of the p-adic prime P assumed to characterize the entire Universe, and if there has been no first quantum jump, P should be literally infinite! Puzzling! Could infinite primes exist, and what could they be? The answer came in a few minutes from the observation that by taking the product of all finite primes - call it simply X to avoid too long a posting;-) - and adding +/- 1 to it, one gets the simplest possible infinite primes P+ = X+1 and P- = X-1. As a matter of fact, infinite primes always come in pairs differing by 2 units. (A finite toy analog of this observation is sketched below.)
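
The following is a finite toy analog of the observation, my own illustration rather than anything from the original argument: for a product of finitely many primes, X ± 1 is never divisible by any of those primes, though it may still have larger prime factors; for the product of all finite primes no finite prime can divide X ± 1.

```python
from sympy import isprime, prime

# X_k = product of the first k primes; X_k +/- 1 is coprime to all of them.
x = 1
for k in range(1, 8):
    x *= prime(k)
    print(f"k={k}: X={x}, X-1 prime: {isprime(x - 1)}, X+1 prime: {isprime(x + 1)}")
# For example k=4: X = 210, X-1 = 209 = 11*19 is composite while X+1 = 211 is
# prime; only for a product over ALL primes is divisibility excluded entirely.
```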

This led to a series of amazing observations.

  1. The simplest infinite primes at the lowest level correspond to quantum states of a second quantized super-symmetric arithmetic QFT with bosons and fermions labelled by finite primes. Their products define infinite integers having an interpretation as free many-particle Fock states.

  2. One can map infinite primes to polynomials, and this leads to surprising revelations. One obtains also the analogs of bound states! This is quite an emotional kick for anyone knowing how hard it is to understand non-perturbative effects - in particular bound states - in quantum field theories. Single particle states correspond to first order polynomials of a single variable, and many-particle states to irreducible higher order polynomials with algebraic roots!

  3. Second quantization can be repeated again and again by taking the many-particle states of the previous level as the single-particle states of the next level in the hierarchy. Every step brings in an additional variable to the polynomials (essentially the product of the infinite primes of the previous level), and at the higher levels one obtains infinite algebraic numbers as roots of the polynomials. In TGD framework this hierarchy corresponds to the hierarchy of space-time sheets in many-sheeted space-time.

  4. This construction generalizes to other classical number fields, and I have proposed a concrete identification of pairs of what I call hyper-octonionic infinite primes in terms of standard model quantum numbers. The motivation comes from the observation that one can understand standard model symmetries in terms of octonions and quaternions: M4×CP2 has a number theoretic interpretation, and the preferred extremals of Kähler action could be hyper-quaternionic space-time surfaces with the additional property that at each point the tangent space contains a preferred hyper-octonionic imaginary unit.

This procedure gives a generalization of the notion of real number different from Robinson's non-standard numbers.

  1. Very roughly, the infinitesimal and infinite numbers of Robinson are replaced with their exponentials, which define an infinite number of real units, so that each real number is replaced with an infinite number of its copies with arbitrarily complex number theoretic anatomy. There are neither infinitesimals nor infinities, but there is an infinite number of copies of each real number, equivalent as far as the magnitude of the number is considered.

  2. This number theoretical anatomy is so incredibly complex that one can quite seriously ask whether the quantum states of the Universe and the world of classical worlds (WCW) could allow a concrete representation in terms of this anatomy. These infinite-dimensional spaces would be absolutely real - not fictions of a quantum theorist! One can think that the evolution at the level of WCW is realized concretely as evolution at the level of space-time and imbedding space, making the number theoretical anatomy of space-time points more and more complex.

  3. An intriguing possibility is that sums of the units, normalized so that unity comes out, could be interpreted as representing quantum superpositions of zero energy states, so that quantum superposition would not be needed separately. Physical existence would reduce to generalized octonions and subjective existence to quantum jumps!

  4. In zero energy ontology zero energy quantum states are assignable to a given causal diamond analogous to a Penrose diagram. The infinite integers appearing as the numerator and denominator of the rational reducing to real unity would represent the positive and negative energy parts of the zero energy state, and the real unity property would code for the conservation of number theoretic momentum expressible as the sum ∑ n_i log(p_i): each n_i is separately conserved since the numbers log(p_i) are algebraically independent (a toy illustration follows after this list). Number theoretic Brahman = Atman identity or algebraic holography would perhaps be the proper term here.
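
A toy illustration (my own, under obvious simplifications) of why the n_i are separately conserved: the sum ∑ n_i log(p_i) determines the integer ∏ p_i^(n_i) uniquely, so equal sums force equal occupation numbers.

```python
from math import log
from sympy import factorint

def momentum(state):   # state: dict mapping prime -> occupation number n_i
    return sum(n * log(p) for p, n in state.items())

a = {2: 3, 5: 1}       # the integer 2**3 * 5 = 40
b = {2: 1, 5: 2}       # the integer 2 * 5**2 = 50
print(momentum(a), momentum(b))   # different sums: different occupation numbers

# Unique factorization recovers the occupation numbers from the integer,
# which is why the n_i cannot change without changing the sum:
print(factorint(40))   # {2: 3, 5: 1}
```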

I have compared the notion of real number inspired by the notion of infinite prime to non-standard numbers in a previous posting and in the new chapter Non-Standard Numbers and TGD of "Physics as Generalized Number Theory".