Wednesday, July 27, 2011

SUSY according to TGD

Standard SUSY seems to be dead. TGD predicts a different variant of SUSY (for the basic phenomenology see this). Super-conformal invariance for light-like 3-surfaces is the basic symmetry. In D=8 one does not need Majorana spinors, so that quark and lepton numbers assigned to different chiralities of imbedding space spinors are conserved separately. The many-fermion states assignable to partonic 2-surfaces give representations of SUSY with a large value of N, but this SUSY is badly broken by the geometry of CP2. The right-handed neutrino generates the least broken SUSY. R-parity associated with νR is broken since right- and left-handed neutrinos mix.

Although SUSY is not needed to stabilize the Higgs mass in TGD, the muon g-2 anomaly requires SUSY. The following strongly symmetry-inspired picture allows rather precise predictions for sfermion masses.

  1. In TGD based SUSY (p-adic thermodynamics) the mass formulas are the same for particles and sparticles; only the p-adic length scale is different. This resolves the extremely problematic massivation issue of supersymmetric QFTs.

  2. Ordinary charged leptons are characterized by Mersennes or Gaussian Mersennes: (M127, MG,113, M107) for (e, μ, τ). If also sleptons correspond to Mersennes or Gaussian Mersennes, then (selectron, smuon, stau) should correspond to (M89, MG,79, M61) if one assumes that the selectron corresponds to M89. This is of course a prediction assuming an additional number-theoretic symmetry.

    Selectron mass would be 250 GeV and smuon mass 13.9 TeV (see the scaling sketch after this list). The g-2 anomaly for the muon suggests that the mass of the selectron should not be much above .1 TeV, and M89 indeed fits the bill. Valence quarks correspond to primes p ≈ 2^k with k ≤ 113 (the Gaussian Mersenne), which suggests that squarks have k ≤ 79 so that squark masses should be above 13 TeV. If sneutrinos correspond to the Gaussian Mersenne k=167, then sneutrinos could have mass below the electron mass scale. Selectron would remain the only experimental signature of TGD SUSY at this moment.

  3. One decay channel for the selectron would be electron + sZ or neutrino + sW. sZ/sW (the spartner of the weak boson) would eventually decay to a possibly virtual Z plus neutrino or W plus neutrino: that is, a weak gauge boson plus missing energy. Neutralino and chargino need not decay in the detection volume. The lower bound of 46 GeV for the neutralino mass comes from the intermediate gauge boson decay widths. Hence this option is not excluded by experimental facts.
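As a concrete illustration of the scaling arithmetic used in item 2 above, here is a minimal Python sketch (my own illustration, not part of the original argument) of the p-adic rule: moving a particle from prime k1 to prime k2 multiplies its mass by 2^((k1-k2)/2). The inputs are the physical lepton masses and the k-assignments quoted above; the naive scaling lands in the same ballpark as the quoted 250 GeV and 13.9 TeV figures.

```python
# Minimal sketch of the p-adic scaling: the mass scale goes like 2^(-k/2), so
# moving a particle from prime k1 to prime k2 multiplies its mass by 2^((k1-k2)/2).
# This only illustrates the arithmetic quoted in the text; it is not a derivation.

def scaled_mass(mass_gev: float, k_from: int, k_to: int) -> float:
    """Scale a mass from p-adic prime k_from to prime k_to."""
    return mass_gev * 2 ** ((k_from - k_to) / 2)

m_e, m_mu = 0.511e-3, 0.1057   # electron and muon masses in GeV
print("selectron (k: 127 -> 89):", round(scaled_mass(m_e, 127, 89), 1), "GeV")        # ~268 GeV
print("smuon     (k: 113 -> 79):", round(scaled_mass(m_mu, 113, 79) / 1e3, 1), "TeV")  # ~13.9 TeV
```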

The muon g-2 anomaly is an excellent test for this vision. The poor man's calculation (see this), obtained by suitably modifying the MSSM calculation, gives a value consistent with data if the mass of the W gaugino is twice the mass of the W boson and sneutrinos are light in the W boson mass scale. In lowest order the result does not depend appreciably on the mass of the muonic sneutrino. A 250 GeV selectron remains the prediction testable at LHC.

The basic differences between TGD and MSSM and related approaches deserve to be noticed (for the experimental aspects of MSSM see this). If Higgses and Higgsinos are absent from the spectrum, SUSY in the TGD sense does not introduce flavor-changing neutral currents (the FCNC problem plaguing MSSM-type approaches). In the MSSM approach the mass spectrum of superpartners can only be guessed using various constraints, and in a typical scenario the masses of sfermions are assumed to be the same at the GUT unification scale, so that at long length scales the mass spectrum of sfermions is inverted from that of fermions, with stop and stau being the lightest superpartners. In the TGD framework p-adic thermodynamics and the topological explanation of the family replication phenomenon change the situation completely, and the spectrum of sfermions is very naturally qualitatively similar to that of fermions (genus-generation correspondence is the SUSY-invariant answer to Rabi's famous question "Who ordered them?"!). This is essential for the explanation of the g-2 anomaly, for instance. Note that experimental searches concentrating on finding the production of stop or stau pairs are bound to fail in the TGD Universe.

Another key difference is that in TGD the huge number of parameters of MSSM is replaced with a single parameter - the universal coupling characterizing the decay

sparticle→ particle+right handed neutrino,

which by its universality is very "gravitational". The gravitational character suggests that it is small, so that SUSY would not be badly broken, meaning for instance that sparticles are rather long-lived and R-parity is a rather good symmetry.

One can try to fix the coupling by requiring that the decay rate of the sfermion is proportional to the gravitational constant G or, equivalently, to the square of the CP2 radius

R ≈ 10^(7/2) (G/hbar0)^(1/2), that is R^2 ≈ 10^7 G/hbar0.

The sfermion-fermion-neutrino vertex, coupling to each other the same M4 chiralities of the fermion, involves the gradient of the sfermion field. The Yukawa coupling - call it L - would have dimension of length. For massive fermions in M4 it would reduce to a dimensionless coupling g between different M4 chiralities. In the equal mass case g would be proportional to L(m1+m2)/hbar, where m1 and m2 are the masses of the fermions.

  1. For the simplest option L is expressible in terms of CP2 geometry alone and corresponds to

    L= kR .

    k is a numerical constant of order unity. hbar0 denotes the standard value of Planck constant, whose integer multiple the effective value of Planck constant is in the dark matter sectors of the TGD Universe. The decay rate of the sfermion would be proportional to k^2 R^2 (M/hbar)^3 ≈ k^2 × 10^7 × (G/hbar0) × (M/hbar)^3,

    where M is the mass scale characterizing the phase space volume for the decays of the sfermion: M is the mass of the sfermion multiplied by a dimensionless factor depending on mass ratios. The decay rate is extremely low, so that R-parity conservation would be an excellent approximate symmetry. In cosmology this could mean that zinos and photinos would decay by an exchange of sfermions rather than directly, and could give rise to a dark-matter-like phase as in MSSM.

  2. The second option carries also information about the Kähler action: apart from a numerical constant of order unity one would have k = αK. The Kähler coupling strength αK = gK^2/(4π×hbar0) ≈ 1/137 is the fundamental dimensionless coupling of TGD, analogous to a critical temperature.

  3. For the option which "knows" nothing about CP2 geometry the length scale would be proportional to the Schwarzschild radius

    L= kGM

    In this case the decay rate would be proportional to k^2 G^2 M^2 (M/hbar)^3 and extremely low.

  4. The purely kinematic option, which one cannot call "gravitational", "knows" only about the sfermion mass and the Planck constant, and one would have

    L= k× hbar/M.

    The decay rate would be proportional to the naive order-of-magnitude guess k^2 (M/hbar) and fast, unlike in all the "gravitational" cases. R-parity would be badly broken. Again the k ∝ αK option can be considered. (A rough numerical comparison of the four options follows below.)
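To make the qualitative statements about the four options concrete, the following back-of-envelope Python sketch (my own, with k set to 1 and the crude dimensional estimate Gamma ~ g^2 M with g ~ L M, in natural units) compares the resulting lifetimes for an assumed 250 GeV sfermion. The CP2 radius is taken as 10^3.5 Planck lengths as in the text; the numbers are order-of-magnitude only.

```python
# Rough comparison of the four options for the length scale L and the resulting
# sfermion lifetime, in natural units hbar = c = 1 with masses in GeV.
# Gamma ~ g^2 * M with g ~ L*M is the crude dimensional estimate used in the text.
M_sf   = 250.0                 # assumed sfermion mass in GeV (selectron estimate)
M_Pl   = 1.22e19               # Planck mass in GeV
R_CP2  = 10 ** 3.5 / M_Pl      # CP2 radius ~ 10^3.5 Planck lengths, in GeV^-1
G      = 1.0 / M_Pl ** 2       # Newton's constant in GeV^-2
alphaK = 1.0 / 137.0           # Kahler coupling strength
hbar_s = 6.58e-25              # hbar in GeV*s, to convert widths to lifetimes

options = {
    "L = R (CP2 radius)":      R_CP2,
    "L = alpha_K * R":         alphaK * R_CP2,
    "L = G*M (Schwarzschild)": G * M_sf,
    "L = 1/M (kinematic)":     1.0 / M_sf,
}
for name, L in options.items():
    g     = L * M_sf           # dimensionless coupling
    gamma = g ** 2 * M_sf      # rough decay width in GeV
    print(f"{name:26s} g ~ {g:.1e}   lifetime ~ {hbar_s / gamma:.1e} s")
```

The "gravitational" options give lifetimes that are long on collider time scales, while the purely kinematic option gives an essentially instantaneous decay, in line with the statements above.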

Note that also in mSUGRA the gravitational sector at short length scales determines the MSSM parameters via flavor-blind interactions, and also the breaking of SUSY via the breaking of local SUSY at short scales.

In my opinion the success of TGD using only simple scaling and symmetry arguments instead of a heavy computational approach demonstrates how important it is to base models on a genuine theory. Blind phenomenology can be completely misleading when basic principles are replaced with ad hoc assumptions. Here of course the problem is that superstrings and M-theory can provide no principles helping the phenomenologist.

Addition: Tommaso Dorigo mentions the eprint Supersymmetry and Dark Matter in Light of LHC 2010 and XENON 100 Data as one of the first reactions of SUSY specialists to the 2010 LHC data presented at Europhysics 2011. The message is that one can find simple enough variants of MSSM which can cope with the experimental constraints (the number of parameters of MSSM is more than a hundred). The authors are however cheating a little bit: they have dropped from their fit the really painful muonic g-2 anomaly, which requires a light wino or zino mass scale and/or some light sleptons, say a light sneutrino. Taking this constraint into the fit *very* probably kills it. If not, the authors would not have forgotten to mention it!;-)

For TGD SUSY see appropriate section in the chapter New Particle Physics Predicted by TGD: Part I of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

Sunday, July 24, 2011

Victory after all!

Without exaggeration I can tell that these days from Saturday to Monday have been the most exciting days of my life (maybe this over-excitement explains why I only a few minutes ago realized that this wonderful day is Monday rather than Sunday;-).

  • Does Higgs exist and with what mass if so? This was the question of standard model people.

  • Does M89 hadron physics exist: in particular, do the predicted 145 GeV M89 pion and 325 GeV M89 ρ and ω exist? This was the question of TGD people;-).
On Thursday strong new evidence from CDF for the 325 GeV ρ and ω of M89 hadron physics emerged. The 325 GeV state is a complete mystery from the point of view of the standard model, so that already this announcement favored TGD, taking into account the earlier indications (for a summary see this article).

On Friday ATLAS reported more than 2 sigma evidence for Higgs in the range 140-150 GeV in what was interpreted as Higgs to WW to leptons decays. Already months ago CDF had reported 4 sigma evidence for a 145 GeV bump, but it was soon forgotten in blogs: the memory span of bloggers is quite literally of the same order of magnitude as the political memory of the average citizen.

After this the situation was extremely nerve-racking from the TGD point of view. How soon could confirmation of the TGD picture emerge? Or could it be that these 33 years had been a great delusion despite all the amazing successes, such as the successful calculation of elementary particle masses already 15 years ago, a spectrum of applications to all branches of physics, and the creation of foundations of quantum biology and consciousness theory, to say nothing about new mathematics inspired by TGD?

The Higgs story had taken a new twist on Saturday. I did not however have an opportunity to follow the developments on Sunday since I had a marvelous time with the families of my children, but it is never too late to receive good news. So this morning I learned from Lubos that also D0 and CDF report similar signs, so that both Tevatron and LHC seem to end up with similar conclusions: Higgs or something analogous exists around 145 GeV.

In the TGD framework this Higgs-like state would of course be the M89 pion with mass 145 GeV, detected already by CDF with 4 sigma significance a few months ago but not reported by D0 and ATLAS and then forgotten by most bloggers: a very common reaction in this highly emotional era of quarter economy;-)! TGD predicts also the 325 GeV state (actually two of them: M89 ρ and ω with very nearly degenerate masses). This state is a complete mystery in the standard model. On the other hand, TGD has a firm grasp of reality and can provide a long list of predictions for the masses of M89 hadrons, so that experimentalists can begin a systematic search for other M89 hadrons at any minute. Since also quark masses are predicted, quark jets with these masses can be searched for. These signatures will of course be rare since the formation of M89 hadrons is like the formation of bubbles of vapour in a liquid at criticality.

Maybe the situation is now settled. TGD is the theory! After these long lonely years Nature simply forces us to accept TGD because it is the only theory that works and predicts! Isn't this marvelous! But: still every-one "has never heard" about TGD. Big Science is also great comedy and the players are masters of their art form: let's enjoy also this aspect of Big Science;-).

P.S. String- and M-theorists have been remarkably silent about the results presented at Europhysics 2011. No wonder: all string model inspired predictions turned out to be wrong, and the failure of standard SUSY and the probable failure of the standard model in the Higgs sector destroy all hopes about some connection of string models and M-theory with low energy phenomenology (that is, physics). The non-existence of Higgs is also a catastrophe for inflationary cosmology and should induce a profound re-evaluation of the funding in the theory sector.

To my best knowledge Lubos Motl is the only string theorist who has reacted publicly. He did this in genuinely Lubosian manner by rewriting completely the history of science by claiming that string theorists have in fact never made predictions about super-symmetry at LHC. This view about past is in extremely sharp contrast with his own earlier posting predicting that SUSY will be found with 90 per cent or higher probability at LHC (for details see Peter Woit's posting). Lubos even made bets for SUSY at LHC! Now Lubos concludes that those string people who have possibly wasted their time with low energy phenomenology (that is physics) should concentrate on pure string theory. Huh! The practically lethal doses of reality from LHC have only strengthened the cognitive immune system of Lubos. Whatever happens, string theory is correct and anyone who says something different has forgotten his morning medication. Amen. Peter Woit also summarizes the evolution of superstring hype from outstanding promises to almost complete silence by some representative examples.

Saturday, July 23, 2011

About exclusion plots for Higgs

During the three days of the Europhysics conference I had the opportunity to see very many exclusion plots for Higgs. Phil Gibbs did wonderful work here. To understand the following it is good to look at these plots.

While trying to understand the message of these plots I realized that my understanding of the procedure yielding the exclusion plots for Higgs at various mass values is very poor. Outside the competence pressures of the academic community there is a strong temptation to conclude that it is better to leave the non-poetic side of physics to experimentalists and concentrate on deep theory. Sounds good, but it is just an excuse telling about the light-hearted laziness of a dilettante. I try to summarize how little I understand.

  1. The basic question is whether the standard model without Higgs is able to mimic the Higgs within experimental resolution. Basically one must compare the theory without Higgs to that with Higgs. The convenient normalization is by the total cross section for producing final states of a particular kind in the production and subsequent decay of a genuine Higgs. Experimentally we cannot tell whether final states of this kind are actually due to the decay of a genuine Higgs.

    Remark: What the model without Higgs means is not quite clear since Higgs is needed to explain massivation and could appear as virtual states. For sufficiently high energies its presence is necessary in order to avoid violation of unitarity.

  2. One must estimate, in the model without Higgs, the production cross section for the final states which cannot be distinguished from those produced in the decays of Higgs. For instance, the total mass of the state possibly representing decay products of Higgs cannot be too far from the Higgs mass. One can do this both for the real data and for the theory without Higgs. Typically the resulting cross section, normalized by the cross section for production via a genuine Higgs, is above unity since there are many final states about which one cannot say whether they are decay products of a genuine Higgs or fakes.

  3. In the typical exclusion plot the wiggly curve corresponds to the production cross section of a fake or genuine Higgs (or something analogous to it) deduced from the experimental data, and the smooth curve to the cross section obtained by a simulation using the model without Higgs. If these curves are near each other, there is no need to assume Higgs at that particular mass value, provided the statistics is good. Unfortunately it is not good enough yet, so that the issue of Higgs remains unsettled although the remaining non-excluded region is very narrow (its identification depends on the blog).

  4. If the experimental curve is much above the smooth one for a certain range of mass values, one can conclude that Higgs might be there with mass in this range. If the experimental curve is much below the expected one, one can ask whether there is some effect causing "anti-Higgs" behavior by reducing the cross section for the production of the states mimicking decay products of Higgs: destructive interference between Feynman diagrams could lead to this kind of effect. The propagator of some unknown particle other than Higgs could cause destructive interference because the sign of its real part changes above the pole. This might perhaps be used as an additional criterion for identifying the masses of M89 hadrons. In fact, for the 325 GeV bump identified in terms of the ρ and ω of M89 hadron physics, the 3 sigma deficit visible in some slides at 340 GeV might be due to this effect.

  5. The deviation between the experimental and theoretical curves could be due to fluctuations, so that one has to use a measure for the probability of a fluctuation with a given amplitude. Gaussian probability distributions for the fluctuations are typically assumed, and here the familiar notion of sigma creeps in. The number of sigmas measures how improbable it is that the deviation is a mere fluctuation. If it is large one can conclude that one cannot explain the deviation without assuming Higgs or something else producing a similar effect.
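As a toy illustration of the last two items (not anything the experiments actually do in detail), the following Python sketch converts a deviation measured in sigmas into the probability of a pure background fluctuation, and computes a crude Gaussian 95% CL upper limit on the signal strength mu = sigma/sigma_Higgs. Real exclusion plots use the more involved CLs machinery; this only shows the logic, with all example numbers invented for illustration.

```python
# Toy illustration of the statistics behind exclusion plots (mine, not the real procedure).
import math

def one_sided_p_value(n_sigma: float) -> float:
    """Probability that a Gaussian background fluctuation is at least this large."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

def mu_95_upper_limit(n_obs: float, bkg: float, sig: float, err: float) -> float:
    """Crude Gaussian 95% CL upper limit on mu = sigma/sigma_Higgs, where sig is the
    expected signal yield for mu = 1 and err is the background uncertainty."""
    return (n_obs - bkg + 1.645 * err) / sig

print("p-value of a 2 sigma excess:", one_sided_p_value(2.0))   # ~0.023
print("p-value of a 3 sigma excess:", one_sided_p_value(3.0))   # ~0.0013
# example: 100 +- 10 expected background events, 105 observed, 20 signal events at mu = 1
print("toy 95% CL limit on mu:", mu_95_upper_limit(105, 100, 20, 10))   # ~1.07, so mu = 1 not excluded
```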

Thursday, July 21, 2011

Tension is rising!

The tension in particle physics circles is rising and bloggers are eagerly waiting for results from the Europhysics HEP conference opened today. It is not an exaggeration to say that these days could be the most significant period in the history of physics. This posting has evolved during three days through additions as new data has emerged in the blogs. I see these three days as a process in which the TGD based view about physics developed at the level of detail thanks to the altruistic efforts of many bloggers freely sharing information and ideas.

Phil Gibbs has done wonderful work by presenting data related to the Higgs search in organized form in his blog: viXra log actually became a forum for a vivid discussion. Also the postings of Lubos Motl during the last years, and especially during the last months, have been extremely helpful as I have been trying to develop a coherent view about what particle physics in the TeV range looks like in the TGD Universe. There has been no intentional effort to collaborate, no one has paid anything to anyone of us, and our views about physics are very different - to put it mildly! Despite this, the invisible collaboration whose members did not even realize that they form a collaboration has worked excellently! I can understand this only by looking at ourselves as part of a much bigger intelligence - I call this entity Mother Gaia.

In this framework I see the blog activity as an important part of the problem solving of Mother Gaia using its brand new nervous system- the web - as a powerful tool to allow different views about world to interact. Just the attempt to understand what amount of technology and analysis is needed to produce a single plot providing information about Higgs mass makes it clear that we as individuals have extremely limited abilities. But although ants/neurons are rather stupid, the nest/brain in some miraculous manner is able to behave like an intelligent organism with goal directed behavior. I believe that this is possible only because the ant nest is conscious. In the same manner Mother Gaia is conscious super intellect and is using ourselves to its own purposes.

I see the Europhysics conference and the blog activity accompanying it as a part of the ambitious project of Mother Gaia, whose greatest passion is to understand the basic laws of physics. The professionals receiving a monthly salary are part of the necessary neuron population but not enough. Also outsiders and outlaws are needed, because only they can have a genuine bird's eye view. This is because they do not get lost in the details, for the simple reason that this is in practice impossible without belonging to a research group as a specialist. Blogs are their communication forum. Could it be that viXra log - a forum for crackpots - played a historical role in filling in the details of the big picture? Mother Gaia probably knows and allows the reader to decide himself;-)!

A. Negative results concerning standard SUSY and Higgs

The preliminary negative results relate to two big questions. Is Higgs there? Is the standard view about super-symmetry correct? My guess is that the results from LHC will weaken the hopes that the answers to these questions could be positive. This would have enormous significance for the future of theoretical particle physics.

A.1. What is the fate of standard SUSY?

  1. It is already now clear that there is not much room for standard super-symmetry and LHC will provide new results during next days.

    Addition: Phil Gibbs presents nice graphs about the talks held yesterday. Standard SUSY exclusion limits have been made tighter by ATLAS using 1/fb of data.

    The two basic parameters are the mass parameters called m1/2 and m0, and they are related to the squark and gluino mass scales in a one-to-one manner. The plot presented in Phil's blog implies that squark and gluino masses must be above 850 GeV and 800 GeV respectively, if SUSY is there at all. Another plot gives lower limits equal to 1 TeV. Standard SUSY has a lot of parameters and die-hard believers can always argue that the next accelerator will observe standard SUSY. The history of physics however teaches that if a theory works at all, it works excellently from the beginning.

    Completely inappropriate side remark: What does 1/fb mean? Experimenters quote the total amount of collected data as an integrated luminosity, which can be expressed as a density of collisions per unit area. If this density is 1/fb, one has collected data corresponding to one collision per femtobarn of cross section. 1 fb corresponds to a square with a side of about 10^(-21.5) meters: the size scale of the proton is roughly 10^(-15) meters, so that there are roughly 10^13 collisions per proton-sized area. The total number of events for a given process is obtained by multiplying the integrated luminosity by the cross section of that process. Quite an impressive amount of data has already been analyzed.
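A small numerical restatement of the side remark (my own arithmetic): 1 fb = 10^-43 m², from which the quoted length scale and the ~10^13 collisions per proton-sized area follow directly, and the expected event count for a given process is the integrated luminosity times that process's cross section.

```python
# Quick check of the femtobarn arithmetic above.
fb_in_m2 = 1e-43                    # 1 femtobarn = 1e-15 barn, 1 barn = 1e-28 m^2
lumi     = 1.0 / fb_in_m2           # integrated luminosity of 1/fb expressed in 1/m^2
proton_r = 1e-15                    # rough proton size scale in m
print("side of a femtobarn-sized square:", fb_in_m2 ** 0.5, "m")   # ~3e-22 m, i.e. 10^(-21.5) m
print("collisions per proton-sized area:", lumi * proton_r ** 2)   # ~1e13
sigma_fb = 1000.0                   # a 1 pb cross section expressed in fb
print("expected events for a 1 pb process at 1/fb:", 1.0 * sigma_fb)   # 1000 events
```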

    A.2. What about Higgs?

    The overall conclusion seems to be that standard model Higgs does not exist. This suits well with TGD inspired vision.

    The existing data disfavor the existence of Higgs in the interesting mass range as the plots published by D0 collaboration in March suggest. Tomorrow CDF should provide more precise data. Also ATLAS collaboration will give data.

    Maybe the photon indeed eats the remaining Higgs component, as I have been repeatedly suggesting. This representation of massive particles also makes the application of the twistor approach possible and brings in an infrared cutoff from the massivation of the physical particles. This happens in the TGD Universe, where gauge bosons and Higgs both transform like 3+1 under SU(2). The mechanism is much more general and applies to all spins assignable to partonic 2-surfaces (gluons, superpartners etc...). In the case of spin one particles this conclusion is more or less forced by an extremely simple kinematical argument, if one accepts that particles are bound states of massless primary fermions with standard model quantum numbers assignable to the throats of wormhole contacts.

    Addition: Lubos presents a characteristic example of misleading rhetoric in the title Fermilab: Higgs is probably between 114 and 137 GeV of his posting. What Fermilab actually says is that if Higgs is there at all, it is most probably in this energy range! Lubos would be an excellent propaganda minister for any nation!

    Addition: The conclusion after the second day of the Europhysics conference is that there is no discovery of Higgs so far. There is some evidence for the decays of a virtual Higgs with mass in the range 130-150 GeV to WW pairs (see below). There is however no reason for excluding the alternative interpretation in terms of decays of a virtual M89 pion (the 145 GeV CDF bump) to a WW pair (plus possibly a photon). Lubos believes that Higgs is in the range 111-131 GeV because there are a couple of bumps there, which are however within the 1 sigma band. In the range 140-150 GeV the deviation is slightly more than 2 sigma, so that in my view localizing a new particle, not necessarily Higgs, to this interval is more in accordance with the basic ideas of probability. In any case, we can perhaps cautiously conclude that TGD survived the Big Day for the Higgs boson. Some bloggers conclude that the mainstream approach is in grave difficulties, but it is wiser to wait for further results.

    Addition (25.7.2011): The Higgs story got a new twist on Saturday. I did not have an opportunity to follow the developments yesterday since I had a marvelous time with the families of my children, but it is never too late to receive good news.

    Recall that on Friday ATLAS and CMS at LHC reported signs of what might be Higgs or something else around 140-145 GeV - see my previous comment written on Friday evening. Now Lubos told that also D0 and CDF report similar signs, so that both Tevatron and LHC seem to end up with similar conclusions.

    In the TGD framework this Higgs-like state would of course be the M89 pion with mass 145 GeV, detected already by CDF with 4 sigma significance a few months ago but not reported by D0 and ATLAS and then forgotten by most bloggers: a very common reaction in this highly emotional era of quarter economy;-)! TGD predicts also the 325 GeV state (actually two of them: M89 ρ and ω with very nearly degenerate masses). For the 325 GeV state firm evidence emerged from CDF during the first day of the conference. This state is a complete mystery in the standard model.

    Maybe the situation is now settled. TGD is the theory! After these long lonely years Nature simply forces us to accept TGD because it is the only theory that works and predicts! Isn't this marvelous! But: still every-one "has never heard" about TGD. Big Science is also great comedy and the players are masters of their art form: let's enjoy also this aspect of Big Science;-).

    B. What about stringy exotics?

    The string theory inspired predictions about microscopic black holes, strong gravity, large extra dimensions, Randall-Sundrum gravitons, split supersymmetry, and various exotic objects form the hard core of the super string hype. Peter Woit informs that neither CMS nor ATLAS see such objects.

    C. What about TGD predictions?

    Restricting the attention to TGD, LHC could provide also many positive results. There are many indications for the predictions of TGD from CDF, D0, and dark matter searches (say DAMA), and also earlier anomalies, some of which were discovered already in the seventies. There are many questions which LHC probably answers in the near future.

    C.1. Is M89 physics there?

    Is M89 hadron physics there? There are many indications discussed here. The CDF bump at 150 GeV (not detected by D0) would be identifiable as the pion π(89). The masses of u(89) and d(89) are, from a generalization of an argument applying to the ordinary pion, 102 GeV, and quark jets with this mass are predicted. Quite generally, jets with "too high" transversal momenta from the decays of M89 hadrons are the signature of this physics. Indications for them already exist. Indications exist for M89 ρ and ω besides the pion, and also for J/Ψ and bottomonium as well as for the analog of the λ baryon with mass around 390 GeV, for which however the quarks should be ordinary: only the hadronic space-time sheet would correspond to M89, and the state can be seen as one for which the hadronic space-time sheet is heated from the p-adic temperature corresponding to k=107 to that corresponding to k=89. By using the p-adic length scale hypothesis and the model for ordinary hadrons one can deduce rather detailed predictions for the masses of the particles involved, and QCD could help enormously in estimating scattering rates.

    Addition: Just after adding this posting to my blog I looked at the blog of Lubos and found that there is a sharp peak at 327 GeV in ZZ to four leptons. This peak, or actually two of them, was reported already earlier. If one identifies the 145 GeV CDF bump as the M89 pion, a simple argument predicts the masses of the M89 ρ and ω mesons: they are very near to each other and the prediction is around 325 GeV! It really begins to look like M89 hadron physics is there!! Dare I hope that TGD will finally be taken seriously by the establishment and that my long banishment from the academic world will end with a rehabilitation?!

    Addition: The 145 GeV CDF bump was also discussed in an attempt to find an explanation for why CDF sees it and D0 does not. ATLAS reported their study concerning the 145 GeV bump and found nothing. The situation remains open. If these particles are dark in the TGD sense, very delicate effects are possible, since darkness also in this weaker sense means the missing-energy property before the transformation to visible matter. Recall that CMS saw what Lubos calls a "crazy deficit" of events at 325 GeV, at which CDF now sees a sharp resonance. Maybe the darkness implies that detection is very sensitive to the parameters of the situation. The history of dark matter in the TGD sense, beginning in the seventies from the detection of what would correspond to dark colored variants of electrons forming electro-pions, is a series of forgotten anomalies: could the common explanation be that these particles are detected only if they decay to ordinary matter in the detector volume?

    Addition: Matt Strassler notices that ATLAS has reported an excess in H→WW between 2 sigma and 2.8 sigma in the range 130-150 GeV. Phil Gibbs gives a link to the ATLAS slides. If the decay of a virtual Higgs can produce this effect, why could the decay of a virtual M89 pion with mass 145 GeV to WW not do the same?

    Addition: I liked very much the argument presented by Lubos, presumably inspired by the following piece of data (despite the fact that it was wrong; Lubos indeed removed his blog posting later!).

    Andrey Korytov presents the summary of the CMS search for the Standard Model Higgs thus far. Again 6 different studies combined together. 95 per cent exclusion ranges are 149-206, 300-440, and much of the region from 200-300. 90 per cent exclusion 145-480. Interesting excesses possible 120-145 but statistical significance hard to evaluate at this time (but somewhat smaller excess than ATLAS sees.)

    One can indeed wonder whether the mere presence of these exclusion regions, or more precisely the downward tips inside them with a sufficient number of sigmas, carries some important message. This might be the case.

    The argument of Lubos goes as follows, when appropriately generalized and put in the correct context so that it applies. In processes where a virtual boson - be it Higgs or an M89 meson or something else one happens to believe in - decays to on-mass-shell particles such as a WW pair, a downward tip is generated, since the sign of the real part of the propagator changes at the pole and destructive interference with other contributions to the process takes place. The effect is smoothed out by the finite width of the resonance but is still present. If this interference is not taken into account in Monte Carlo simulations, one can obtain apparent exclusion regions and multi-sigma tips inside them. If this is the case, then at least in some cases the exclusion ranges could be seen as indications for resonances with masses somewhat below the excluded range. The original slides of Korytov contained also a near 3-sigma deficit near 340 GeV having a possible interpretation as a signature of a virtual ρ(89)/ω(89). Unfortunately, this deficit did not appear in the later slides.

    If one accepts this argument, one can ask whether the deficit giving rise to the exclusion range between 149-206 GeV could be seen as a signature for the 145 GeV CDF bump interpreted as the M89 pion. Unfortunately, the curves are safely within the 1 sigma band in this exclusion range, so that the argument does not seem to bite. One can however argue that since the normalization is by the cross section for the model with Higgs, a similar interference effect in the normalization cancels the interference effect, so that the situation remains unsettled unless the interference effect is considerably larger than for a real Higgs! In the ATLAS summary about Higgs to WW this effect is seen at 340 GeV and could be interpreted in terms of the 325 GeV ρ/ω.
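The propagator argument can be made concrete with a minimal sketch (mine; the 325 GeV mass and 10 GeV width are purely illustrative): for a Breit-Wigner propagator 1/(s - m^2 + i m Γ) the real part is negative below the pole, zero at it, and positive above it, so interference with other amplitudes flips sign across the resonance.

```python
# Minimal illustration of the sign change of the real part of a Breit-Wigner
# propagator 1/(s - m^2 + i*m*Gamma) across the pole. Values are illustrative only.
m, gamma = 325.0, 10.0                           # hypothetical resonance mass and width, GeV

def propagator(sqrt_s: float) -> complex:
    s = sqrt_s ** 2
    return 1.0 / (s - m ** 2 + 1j * m * gamma)

for sqrt_s in (300.0, 320.0, 325.0, 330.0, 350.0):
    p = propagator(sqrt_s)
    print(f"sqrt(s) = {sqrt_s:5.0f} GeV   Re = {p.real:+.2e}   Im = {p.imag:+.2e}")
```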

    C.2. Is supersymmetry in TGD sense present?

    Sparticles would be composites of particle and right-handed neutrino and would decay to particle and neutrino. R-parity conservation would be broken by the mixing of right handed and left handed neutrino. The TGD counterpart for the decays of gluinos to three quark jets allowed by standard SUSY would be the decays of M89 baryons.

    Addition: Do the above discussed limits on the squark and gluino masses deduced for standard SUSY apply in TGD? The cautious answer is "No". Basically everything boils down to the detection of the production of SUSY particles. R-parity conservation must hold true in good approximation in standard SUSY to make the proton stable enough, and this requires sparticles to be produced in pairs. Sparticles decay in a cascade-like manner, producing as an outcome the lightest supersymmetric particles, called neutralinos, which are stable by R-parity conservation. Neutralinos showing themselves as missing energy are what one tries to detect.

    In the TGD framework the right-handed neutrino generates SUSY, so the situation is totally different. Sparticles could decay to particle and neutrino in the detection volume. The only standard model events of this kind are decays of electroweak gauge bosons to lepton pairs. A decay event producing a particle + neutrino with total quantum numbers differing from those of an electroweak gauge boson would serve as a unique signature of SUSY in the TGD sense. But how can we tell that a neutrino rather than a neutralino is in question: perhaps from masslessness? If the lifetime of the sparticle is long enough, the situation could be rather similar to that in standard SUSY and the deduced limits might apply. Recall that also the view about massivation is totally different in the TGD framework and there are no TGD counterparts for the above mentioned SUSY parameters m0 and m1/2.

    Addition: From the CMS conference page I found a link to an interesting eprint. The conclusion of the Search for New Physics with a Mono-Jet and Missing Transverse Energy in pp Collisions at s^(1/2) = 7 TeV is that there is no significant evidence for monojets plus missing transverse energy. This could exclude light squarks decaying to quark and neutrino. The graphs however suggest that there is a small surplus at high transverse momenta. In the leptonic sector anomalous lepton + missing energy events would be the signature. In the standard physics framework these decays could be erroneously interpreted in terms of anomalous production of W bosons or their heavier exotic counterparts. The article Search for First Generation Scalar Leptoquarks in the eνjj Channel in pp Collisions at s^(1/2) = 7 TeV excludes leptoquarks producing in their decays an electron or neutrino and a quark. Unfortunately the second lepton is now always charged, so that the selection criteria exclude the ννjj events predicted by TGD SUSY. The article Search for a W' boson decaying to a muon and a neutrino in pp collisions at s^(1/2) = 7 TeV reports that there is no significant excess of these events. Also here the graphs show a little excess at the highest transverse masses.

    Addition: If the decays of the sparticle to particle and neutrino are slow, the bounds obtained from LHC apply also in TGD. The simplest guess for the squark and slepton mass scales is based on the observation that charged leptons correspond to three subsequent Mersennes/Gaussian Mersennes labeled by k = (127,113,107), whereas for quarks the integer k satisfies k ≤ 113. If one assumes that also charged sleptons correspond to this kind of Mersennes, this leaves only the option k = (89,79,61) for charged sleptons and k ≤ 79 for squarks. All charged sfermions apart from the selectron would have masses above 13 TeV. The selectron would have mass 262 GeV and sneutrinos could have considerably lower masses. Weak gaugino masses would most naturally correspond to a mass scale near M89. This scenario satisfies the recent bounds even for a small breaking of R-parity conservation. This scenario predicts correctly the g-2 anomaly of the muon, assuming only that sneutrino masses are low enough.

    C.3 What about TGD based explanation of family replication phenomenon

    In TGD family replication has a topological explanation in terms of the genus of the partonic 2-surface. Exotic bosons, which can be regarded as flavor octets characterized by pairs of genera (g1, g2) assignable to the two wormhole throats of the wormhole contact carrying fermion and antifermion numbers, are predicted. Are they there? This predicts a new kind of flavor non-conserving neutral currents. There are some indications for them from the forward-backward asymmetry in top pair production observed both by CDF and D0.

    Addition: Tommaso Dorigo tells that CMS finds no evidence for forward-backward asymmetry in top-pair production in proton-proton collisions. Earlier both CDF and D0 reported asymmetry in proton-antiproton collisions: this was reported also by Tommaso. These findings were described in Europhysics 2011 and also in Nature.

    If both measurements are correct, one can conclude that the asymmetry must relate to the scattering of the valence quarks of the proton from the valence antiquarks of the antiproton in Tevatron. This explains why the asymmetry is not present in the CMS analysis studying p-p collisions. This supports the interpretation in terms of scattering by an exchange of a flavor octet of gluons (see this).

    Addition: The exchange of flavor octet weak gauge bosons could also give additional contributions to the mechanism generating CP breaking, since new box diagrams involving two exchanges of flavor octet weak bosons contribute to the mixings of quark pairs in mesons. The exchanges giving rise to an intermediate state of two top quarks are expected to give the largest contribution to the mixing of the neutral quark pairs making up the meson. This involves the exchange of a flavor octet W boson analogous to the usual exchange of the flavor singlet boson. This might explain the reported anomalous like-sign muon asymmetry in BBbar decay (see this), suggesting that the CP breaking in this system is roughly 50 times larger than predicted by the CKM matrix. The new diagrams would only amplify the CP breaking associated with the CKM matrix rather than bringing in any new source of CP breaking. This mechanism also increases the CP breaking in the KKbar system, which is known to be anomalously high as well.

    C.4 What about TGD based view about color

    One of the key distinctions between TGD and QCD is the different view about color. Both leptons and quarks are predicted to have colored excitations. Are heavy colored excitations of leptons there? The mass of the M89 electron would be obtained by multiplying the mass of the ordinary electron by a factor 2^((127-89)/2) = 2^19 and is about 250 GeV in the case that M89 characterizes these particles. One can also imagine colored excitations of quarks. Again indications exist.

    C.5 Hierarchy of Planck constants and TGD based view about dark matter

    Is TGD view about dark matter based on hierarchy of integer valued multiples of Planck constant (in effective sense) correct? This notion of darkness is much weaker than the standard view and has most dramatic implications in living matter but could also imply strange effects at LHC since dark particles could be detected only if they transform to ordinary matter in the detection volume. This might relate to the discrepancy between CDF and D0 results concerning the 145 GeV bump reported by CDF.

D. Summary

From the TGD point of view the summary would be the following. The evidence for the failure of the standard views about SUSY, Higgs, and massivation gives strong support for the TGD based vision. The evidence for M89 physics is rather strong if one accepts the argument of Lubos described above. The crucial challenge is to understand why D0 and ATLAS do not see the 145 GeV bump. My proposal is that M89 hadrons are dark in the TGD sense and can leak out of the detection volume and, depending on the details of the detection geometry, can manifest themselves as missing energy or as decay products. Various cuts applied in the analysis could eliminate this kind of signal. There are also slight indications for the TGD based view about SUSY as monojets + missing energy erroneously identified as decay products of W', realized as a slight excess at high transverse momenta. I have not been reading the other sessions of the conference (say the QCD session), so that there might be surprises in store.

Should I regard these exciting days as a victory for TGD, or is all this just the reckless speculation of a dilettante? I do not know!

Wednesday, July 20, 2011

Last minute prediction!

While updating the chapter about the p-adic model for hadronic masses (see this) I found, besides some silly numerical errors, also a gem that I had forgotten. For the pion the contributions to mass squared from the color-magnetic spin-spin interaction, the color Coulombic interaction, and super-symplectic gluons cancel, and the mass is in excellent approximation given by m^2(π) = 2m^2(u) with m(u) = m(d) = 0.1 GeV in good approximation. That only quarks contribute is the TGD counterpart for the almost Goldstone boson character of the pion, meaning that its mass is due only to the massivation of quarks. The value of the p-adic prime is p ≈ 2^k with k(u) = k(d) = 113, and the mass of the charged pion is predicted with an error of .2 per cent.

If the reduction of the pion mass to the mere quark mass holds true for all scaled variants of the ordinary hadron physics, one can deduce the value of the u and d quark masses from the mass of the pion of M89 hadron physics and vice versa. The mass estimate is 145 GeV if one identifies the bump claimed by CDF (see this and this) as the M89 pion. Recall that D0 did not detect the CDF bump (see this). I have discussed possible reasons for the discrepancy in an earlier posting in terms of the hypothesis that dark quarks are in question.

From this one can deduce that the p-adic prime p ≈ 2^k for the u and d quarks of M89 physics corresponds to k=93, using m(u,93) = 2^((113-93)/2) m(u,113) with m(u,113) ≈ .1 GeV. For the top quark one has k=94, so that a very natural transition to a new hadron physics takes place. The predicted mass of π(89) is 144.8 GeV, consistent with the value claimed by CDF. What makes the prediction non-trivial is that the possible quark masses come as half-octaves, meaning exponential sensitivity with respect to the p-adic length scale.

The common mass of the u(89) and d(89) quarks is 102 GeV in good approximation, and quark jets with mass peaked around 100 GeV could serve as a signature for them. The direct decays of π(89) to M89 quarks are of course not allowed kinematically.
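For the record, the arithmetic of the last two paragraphs in one small Python snippet (a restatement of the numbers above, nothing new):

```python
# With m(u,113) = m(d,113) = 0.1 GeV and m(pi)^2 = 2*m(u)^2, moving the u and d
# quarks from k = 113 to k = 93 fixes the M89 quark and pion masses.
m_u_113 = 0.1                                    # GeV, ordinary u/d valence quark mass
m_u_93  = m_u_113 * 2 ** ((113 - 93) / 2)        # p-adic scaling by 2^10
m_pi_89 = (2 * m_u_93 ** 2) ** 0.5               # pion mass from quark masses alone
print("m(u,89) = m(d,89) =", m_u_93, "GeV")      # 102.4 GeV
print("m(pi,89)          =", round(m_pi_89, 1), "GeV")   # ~144.8 GeV, cf. the CDF bump
```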

For a summary about indications for M89 see appropriate section in the chapter New Particle Physics Predicted by TGD: Part I of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy". See also the short pdf article Is the new boson reported by CDF pion of M89 hadron physics? at my homepage.

Monday, July 18, 2011

Has CMS collaboration observed strange M89 baryon?

In his recent posting Lubos tells about a near 3-sigma excess of a 390 GeV 3-jet RPV-gluino-like signal reported by the CMS collaboration in the article Search for Three-Jet Resonances in p-p collisions at sqrt(s)=7 TeV. This represents one of the long awaited results from LHC and there are good reasons to consider it at least half-seriously. This posting contained in its original version some errors which have been corrected.

Gluinos are produced in pairs and, in the model based on standard supersymmetry, decay to three quarks. The observed 3-jets would correspond to a decay to a uds quark triplet. The decay would be R-parity breaking. The production rate would however be too high for standard SUSY, so that something else is involved if the 3 sigma excess is real.

Signatures for standard gluinos correspond to signatures for M89 baryons in TGD framework

In the TGD Universe gluinos would decay to ordinary gluons and a right-handed neutrino mixing with the left-handed one, so that a gluino in the TGD sense is excluded as an explanation of the 3-jets. In the TGD framework the gluino candidate would naturally be replaced with the k=89 variant of the strange baryon λ decaying to a uds quark triplet. Also 3-jets resulting from the decays of the proton and neutron and of Δ resonances are predicted. The mass of the ordinary λ is m(λ,107) = 1.115 GeV. Naive scaling by a factor 512 would give the mass m(λ,89) = 571 GeV, which is considerably higher than 390 GeV. Naive scaling would predict scaled up copies of the ordinary light hadrons, so that the model is testable.

It is quite possible that the bump is a statistical fluctuation. One can however reconsider the situation to see whether a less naive scaling could allow the interpretation of 3-jets as decay products of M89 λ baryon.

Massivation of hadrons in TGD framework

Let us first look the model for the masses of nucleons in p-adic thermodynamics (see this).

  1. The basic model for baryon masses assumes that mass squared - rather than energy as in QCD, or mass as in the naive quark model - is additive at the space-time sheet corresponding to a given p-adic prime, whereas masses are additive if they correspond to different p-adic primes. The mass contains besides quark contributions also a "gluonic contribution", which dominates in the case of baryons. The additivity of mass squared follows naturally from the string mass formula and distinguishes dramatically between TGD and QCD. The value of the p-adic prime p ≈ 2^k characterizing a quark depends on the hadron: this explains the mass differences between baryons and mesons. In the QCD approach the contribution of quark masses to nucleon masses is found to be less than 2 per cent from experimental constraints. In the TGD framework this applies only to sea quarks, for which the masses are much lighter, whereas the light valence quarks have masses of order 100 MeV.

    For a mass formula additive with respect to quark mass squared, the quark masses in the proton would be around 100 MeV. The masses of u, d, and s quarks are in good approximation 100 MeV if the p-adic prime corresponds to k=113, which characterizes the nuclear space-time sheet and also the space-time sheet of the muon. The contribution to the proton mass is therefore about 3^(1/2) × 100 MeV.

    Remark: The masses of u and d sea quarks must be of order 10 MeV to achieve consistency with QCD. In this case p-adic primes characterizing the quarks are considerably larger. Quarks with mass scale of order MeV are important in nuclear string model which is TGD based view about nuclear physics (see this).

  2. If color magnetic spin-spin splitting is neglected, p-adic mass calculations lead to the following additive formula for mass squared.

    M(baryon) = M(quarks) + M(gluonic) , M^2(gluonic) = n m^2(107) .

    The value of the integer n can be predicted from a model for the TGD counterpart of the gluonic contribution (see this). m^2(107) corresponds to the p-adic mass squared associated with the Mersenne prime M107 = 2^107 - 1 characterizing the hadronic space-time sheet responsible for the gluonic contribution to the mass squared. One has m(107) = 233.55 MeV by scaling from the electron mass me ≈ 5^(1/2) × m(127) ≈ 0.5 MeV and from m(107) = 2^((127-107)/2) × m(127).

  3. For proton one has

    M(quarks) = [∑quarks m^2(quark)]^(1/2) ≈ 3^(1/2) × 100 MeV

    for k(u)=k(d)=113 (see this).
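A small numerical check of this additive mass formula, using only the inputs quoted above (m(107) = 233.55 MeV and the 3^(1/2) × 100 MeV quark contribution), shows that the integer n needed to reproduce the proton mass comes out close to 11; this is my own cross-check, not part of the original text.

```python
# Cross-check of the additive baryon mass formula M(baryon) = M(quarks) + sqrt(n)*m(107),
# using the inputs quoted in the text.
m107  = 0.23355                      # GeV, p-adic mass unit of M_107
m_q   = 3 ** 0.5 * 0.1               # GeV, quark contribution for k(u) = k(d) = 113
m_p   = 0.938                        # GeV, proton mass
n_fit = ((m_p - m_q) / m107) ** 2    # value of n needed to reproduce the proton mass
print("quark contribution:", round(m_q, 3), "GeV")   # ~0.173 GeV
print("gluonic n needed  :", round(n_fit, 2))        # ~10.7, close to the integer 11
```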

Super-symplectic gluons as TGD counterpart for non-perturbative aspects of QCD

A key difference as compared to QCD is that the TGD counterpart for the gluonic contribution would contain also that due to "super-symplectic gluons" besides the possible contribution assignable to ordinary gluons.

  1. Super-symplectic gluons do not correspond to pairs of quark and antiquark at the opposite throats of a wormhole contact, as ordinary gluons do, but to a single wormhole throat carrying a purely bosonic excitation corresponding to a color Hamiltonian of CP2. They therefore correspond directly to wave functions in WCW ("the world of classical worlds") and can be seen as genuinely non-perturbative objects allowing no description in terms of a quantum field theory in a fixed background space-time.

  2. The description of the massivation of super-symplectic gluons using p-adic thermodynamics allows one to estimate the integer n characterizing the gluonic contribution. Also super-symplectic gluons are characterized by the genus g of the partonic 2-surface, and in the absence of topological mixing the g=0 super-symplectic gluons are massless and do not contribute to the ground state mass squared in p-adic thermodynamics. It turns out that a more elegant model is obtained if the super-symplectic gluons suffer a topological mixing assumed to be the same as for U type quarks. Their contributions to the mass squared would be (5,6,58) × m^2(107) with these assumptions.

  3. The gluonic contribution (M(nucleon) - M(quarks))/M(nucleon) is roughly 82 per cent of the proton mass, so that the quark contribution is about 18 per cent. In the QCD approach experimental constraints imply that the sum of quark masses is less than 2 per cent of the proton mass. Therefore one has consistency with the QCD approach if one uses the linearization of the mass squared formula as an effective linear mass formula.

What happens in the transition M107→ M89?

What happens in the transition M107→ M89 depends on how the quark and gluon contributions depend on the Mersenne prime.

  1. One can also scale the "gluonic" contribution to the baryon mass, which should be the same for the proton and λ. Assuming that the color magnetic spin-spin splitting and the color Coulombic interaction expressed in terms of conformal weight are the same as for the ordinary baryons, the gluonic contribution to the mass of p(89) corresponds to conformal weight n=11, reduced from its maximal value n = 3×5 = 15 corresponding to three super-symplectic gluons (see this). The reduction is due to the negative color Coulombic conformal weight. This gives Mg = 11^(1/2) × 512 × m(107), m(107) = 233.6 MeV, that is Mg = 396.7 GeV, which happens to be very near to the mass of about 390 GeV of the CMS bump (a numerical cross-check follows after this list). The facts that quarks appear already in light hadrons in several p-adic length scales and that the quark and gluonic contributions to the mass are additive raise the question whether the state in question corresponds to a p-adically hot (1/Tp ∝ log(p) ≈ k log(2)) gluonic/hadronic space-time sheet with k=89 containing ordinary quarks giving a small contribution to the mass squared. A kind of overheating of the hadronic space-time sheet would be in question.

  2. The option for which quarks have masses of thermally stable M89 hadrons with quark masses deduced from the questionable 150 GeV CDF bump identified as the pion of M89 physics does not work.

    1. If both gluonic and quark contributions scale up by factor 512, one obtains m(p,89)=482 GeV and m(λ,89)=571 GeV. The values are too large.

    2. A more detailed estimate gives the same result. One can deduce the scaling of the quark contribution to the baryon mass by generalizing the condition that the mass of the pion is in good approximation just m(π) = 2^(1/2) m(u,107) (Goldstone property). One obtains that the u and d quarks of M89 hadron physics correspond to k=93, whereas the top quark corresponds to k=94: the transition between the hadron physics would therefore be natural. One obtains m(u,89) = m(d,89) = 102 GeV in good approximation: note that this predicts quark jets with mass around 100 GeV as a signature of M89 hadron physics.

      The contribution of quarks to the proton mass would be Mq = 3^(1/2) × 2^((113-93)/2) m(u,107) ≈ 173 GeV. By adding the quark contribution to the gluonic contribution Mg = 396.7 GeV, one obtains m(p,89) = 469.7 GeV, which is rather near to the naively scaled mass 482 GeV and too large. For λ(89) the mass is even larger: if the λ(89)-p(89) mass difference obeys the naive scaling, one has m(λ,89) - m(p,89) = 512 × [m(λ,107) - m(p,107)]. One obtains m(λ,89) = m(p,89) + m(s,89) - m(u,89) = 469.7 + 89.6 GeV = 559.3 GeV, rather near to the naive scaling estimate 571 GeV. This option fails.
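As referenced in item 1, here is a small numerical cross-check (mine) of the two mass estimates quoted above: the naive 512-fold scaling of the ordinary λ and the gluonic estimate Mg = 11^(1/2) × 512 × m(107).

```python
# Cross-check of the two M89 baryon mass estimates quoted above.
m107     = 0.2336                    # GeV, p-adic mass unit of M_107
m_lambda = 1.115                     # GeV, mass of the ordinary lambda baryon
print("naive 512-fold scaling of lambda    :", round(512 * m_lambda, 1), "GeV")            # ~571 GeV
print("gluonic estimate sqrt(11)*512*m(107):", round(11 ** 0.5 * 512 * m107, 1), "GeV")    # ~396.7 GeV
# the 390 GeV CMS 3-jet excess lies close to the second number, as stated above
```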

Maybe I would be happier if the 390 GeV bump would turn out to be a fluctuation (as it probably does) and were replaced with a bump around 570 GeV plus other bumps corresponding to nucleons and Δ resonances and heavier strange baryons. The essential point is however that the mass scale of the gluino candidate is consistent with the interpretation as λ baryon of M89 hadron physics. Quite generally, the signatures of R-parity breaking standard SUSY have interpretation as signatures for M89 hadron physics in TGD framework.

For a summary about indications for M89 see appropriate section in the chapter New Particle Physics Predicted by TGD: Part I of "p-Adic length scale hypothesis and dark matter hierarchy". See also the short pdf article Is the new boson reported by CDF pion of M89 hadron physics? at my homepage.

Wednesday, July 13, 2011

Quantum model for remote replication of DNA

The idea about remote replication, transcription and translation of genes in terms of electromagnetic field patterns is very attractive and would be in accordance with the wave DNA vision. This requires a coding of DNA nucleotides. I have proposed several codings of this kind.

  1. In the DNA as topological quantum computer model, a quark and an antiquark at the ends of a flux tube connecting a DNA nucleotide to a lipid of the nuclear or cell membrane take care of the coding. Also sequences of dark nucleons giving rise to dark nuclei realize the analogs of DNA, RNA, tRNA, and amino-acids as well as the vertebrate genetic code (see this). Dark nucleon sequences could correspond to the phantom DNA discovered by Gariaev's group.

  2. The quantum antenna hypothesis represents one of the oldest ideas of TGD inspired quantum biology: molecules would act like quantum antennas. Frequency coding would be very natural for groups of molecules participating in the same reaction: the flux tubes connecting the molecules would carry the radiation inducing a resonant antenna interaction, and phase transitions reducing Planck constant would bring the reacting molecules near to each other. Magnetic flux tubes connecting the molecules would be an essential element of the mechanism. Remote replication would represent an example of a situation in which the hbar changing phase transition does not take place. If one wants a coding of individual molecules - such as DNA nucleotides - by frequency, in turn coded by the value of hbar for a given photon energy (E = hf), one is forced to make ad hoc assumptions and it is difficult to find any plausible scenario. The quantum antenna mechanism could make possible the remote replication for which the findings of Montagnier's group give support, as well as the remote transcription for which the work of Gariaev's group gives some evidence.

In the sequel a model for the coding of DNA in terms of radiation patterns is discussed. There are three experimental guidelines: the phantom DNA identified as dark nucleon sequences in TGD framework and the evidence for remote activation of DNA transcription - both discovered by Gariaev's group - are assumed as the first two key elements of the model. The remote replication of DNA suggested by the experimental findings of Montagnier's group serves as a further guideline in the development of the model.

Polymerase chain reaction (PCR) is the technique used in the experiments of Montagnier's group. DNA polymerase catalyzes the formation of DNA from existing DNA sequences serving as a template. Since the catalytic interaction of DNA polymerase takes place with already existing DNA sequence, the only possibility is that first some conjugate DNA sequences are generated by remote replication after which DNA polymerase uses these sequences as templates to amplify them to original DNA sequences. Whether the product consists of original DNA or its conjugate can be tested.

The model inspires the proposal that the magnetic body of a polar molecule codes for it using dark nucleon sequences assignable to the hydrogen bonds between the molecule and the surrounding ordered water layer. The quantum antenna mechanism would allow the immune system to modify itself by developing ordinary DNA coding for amino-acids attaching to and thus "catching" the polar molecule. The mechanism could be behind water memory and homeopathic healing. Every polar molecule in living matter would have a dark nucleon sequence or several of them (as in the case of amino-acids) serving as its name. This would also associate a unique dark nucleon sequence with the magnetic body of DNA, so that the DNA-dark DNA association would be automatic. The same applies to mRNA, tRNA, and amino-acids.

Before continuing I want to express my gratitude to Peter Gariaev for posing a question which led to the realization of the connection between quantum antenna hypothesis, remote replication, and genetic code and its generalization.

1. The findings that one should understand

It is good to start by summarizing the experimental findings that the model should explain.

  1. One should be able to identify phantom DNA. This identification explains the findings about phantom DNA if ordinary and dark DNA have common resonance frequencies and therefore behave like resonantly interacting quantum antennae.

  2. The earlier findings of Gariaev's group suggest remote gene expression, which becomes possible if the DNA of the sender can activate the DNA of the receiver by radiation. Direct activation could be based on an electromagnetic signal between the DNA of the sender and the ordinary conjugate DNA of the receiver. Scattering from ordinary and possibly also phantom DNA would generate this kind of signal. The challenge is to explain why the activation obeys the genetic code in the sense that a given DNA sequence activates only a similar DNA sequence.

  3. The claim of Montagnier's team is that the radiation generated by DNA affects water in such a manner that it behaves as if it contained the actual DNA. A brief summary of the experiment of Montagnier and collaborators is in order.

    1. Two test tubes containing 100 bases long DNA fragments were studied. Both tubes were subjected to 7 Hz electromagnetic radiation. Earth's magnetic field was eliminated to prevent its possible interference (the cyclotron frequencies in Earth's magnetic field are in the EEG range, and one of the family secrets of biology and neuroscience since the seventies is that cyclotron frequencies in magnetic fields have biological effects on the vertebrate brain). The frequencies around 7 Hz correspond to cyclotron frequencies of some biologically important ions in the endogenous magnetic field of .2 Gauss, which would explain the findings. This field is 2/5 of the nominal value of the Earth's magnetic field.

    2. What makes the situation so irritating for skeptics, who have been laughing for decades at homeopathy and water memory, is that the repeated dilution process used for homeopathic remedies was applied to DNA in this case. The solution containing no detectable amounts of DNA (the dilution factor was 10^(-12)) was placed in the second test tube, whereas the first test tube contained the 100 bases long DNA in the original concentration.

    3. After 16 to 18 hours both tubes were subjected to the polymerase chain reaction (PCR), which builds DNA from its basic building bricks using the DNA polymerase enzyme. What is so irritating from the point of view of a skeptic is that DNA was generated also in the test tube containing the highly diluted water. Water in the presence of the second test tube seems to be able to cheat the polymerase by mimicking the presence of the actual DNA, which in the usual situation serves as a template for building copies of DNA. One could also speak of an analog of quantum teleportation. Note that the presence of both test tubes - and therefore some kind of communication between the samples - is absolutely essential for the process to take place: repeated dilution alone is not enough.

2. The model of remote replication consistent with DNA as topological quantum computer model

The basic assumptions are that the scattered radiation, the flux tubes of the magnetic body of DNA along which the radiation propagates, and the quarks and antiquarks at the ends of the flux tubes form a system able to serve as a template for the formation of the conjugate of ordinary DNA. To understand how remote replication could take place, some further assumptions are necessary.

  1. The flux tubes emanating from DNA are parallel and condensed at a 2-D flux sheet having DNA at its first boundary so that DNA nucleotides can attach to the flux tubes at the second boundary. The attached nucleotides would lie along the same line and would form a DNA sequence in the remote replication process.

  2. The quantum antenna interaction takes place between a group of molecules participating in a given reaction so that they have a common antenna frequency as a resonance frequency. The frequencies characterize the radiation propagating along the magnetic flux tubes connecting the molecules, and could come as sub-harmonics of the frequency of (in the case considered) visible light from the formula

    E = h_n f , h_n = n×h , n = 1, 2, 3, ... .

    Here E is the fixed energy of the photon. h_n denotes the value of Planck constant, which in the TGD Universe can have an infinite number of values coming as multiples of the ordinary Planck constant h.

    For a given photon energy E one obtains harmonics of the basic wavelength

    λ(n) = c/f(n) = n λ_0 .

    The wavelength would correspond to the length of the flux tube, proportional to n (a small numerical illustration of this scaling is given below). DNAs with flux tubes characterized by different values of n would correspond to different levels in the evolutionary hierarchy. In TGD inspired theory of consciousness the value of h_n serves as a measure for the time scale of planned action and memory span, and the neurons of the frontal lobe would represent the highest level in the hierarchy.

  3. If the resonance frequency is the same for all nucleotides, frequency cannot distinguish between DNA nucleotides. In the model of DNA as topological quantum computer the quark (u or d) and antiquark (u-bar or d-bar) at the ends of the flux tube code for A, T, C, G. This model is the simplest one and does not require any additional assumptions about frequency coding. It also allows resonant interaction at several frequencies: the scattering of visible light from DNA indeed produces a wide spectrum of frequencies interpreted in terms of dark variants of visible photons.

    One can criticize the assumption that a particular quark or antiquark is associated with the flux tube ending at a particular nucleotide. At this moment this assumption does not have a convincing dynamical explanation. Presumably such an explanation would rely on the minimization of the interaction energy.

  4. What is needed is a model explaining why the resonant antenna frequency does not depend on the nucleotide: obviously the frequency should relate to something shared by all nucleotides. An energy level associated with the sugar-phosphate backbone of DNA is what comes first to mind. A more exotic option is a transition involving the quark-antiquark pair. Since the electromagnetic field for non-vacuum extremals is accompanied by a classical color field, the exchange of gluons between quark and antiquark suggests itself as the quantum antenna interaction distinguishing between nucleotides.

The quantum antenna mechanism is extremely general and flexible and might be a fundamental mechanism of bio-catalysis allowing also communication between visible and dark matter sectors. The antenna mechanism is of course central also in ordinary communications. If the biologically most relevant interactions of biomolecules take place via the quantum antenna mechanism, then also water memory and the claimed effects of homeopathically treated water might be understood (see this). The testing of the dark photon aspect of the hypothesis would require the detection of the dark photons somehow: the decay to a bunch of n ordinary photons with the same wavelength is the obvious manner to achieve this.
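The harmonic relation λ(n) = n λ_0 is easy to make concrete. The following minimal Python sketch is an illustration only; the 2 eV photon energy is an arbitrary choice representing visible light and is not fixed by the model.

    # For a fixed photon energy E, lambda_n = h_n*c/E = n*lambda_0 grows linearly with n.
    H_C_EV_NM = 1239.84  # h*c for the ordinary Planck constant, in eV*nm

    def wavelength_nm(energy_eV, n):
        """Wavelength corresponding to Planck constant h_n = n*h at fixed photon energy."""
        return n * H_C_EV_NM / energy_eV

    E = 2.0  # eV; an arbitrary visible-light energy chosen only for illustration
    for n in (1, 2, 3, 10):
        print(n, round(wavelength_nm(E, n), 1))
    # lambda_0 is about 620 nm; lambda_10 is ten times longer, about 6200 nm.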

1. Identification of phantom DNA

The observed residual coherent scattering from a chamber from which ordinary DNA has been removed inspired the notion of phantom DNA. The questions are what phantom DNA is and whether it is relevant to the remote replication of the ordinary DNA.

Phantom DNA observed in the scattering experiments could correspond to dark nucleon sequences realizing the vertebrate genetic code, with dark nucleons consisting of three quarks representing DNA, RNA, tRNA, and amino-acids as particular nucleon states (see this). The resonant interaction between ordinary and dark DNA would explain why light at the same frequencies scatters also from dark DNA in phantom DNA experiments. In Montagnier's experiments it could give rise to a positive feedback amplifying the radiation from the second sample containing DNA. Water would be living in the sense that it contains "dark DNA", and dark DNA might allow remote transcription to ordinary DNA sequences in the presence of ordinary DNA codons (triplets) and vice versa.

A skeptic can of course ask whether one could explain the experimental findings without assuming phantom DNA.

  1. In Gariaev's experiments, which inspired the notion of phantom DNA, part of the DNA could "drop" to parallel space-time sheets and have the same effect on the scattered radiation as the ordinary DNA. This explanation would however require the many-sheeted space-time of TGD - probably equally abominable to the skeptic as phantom DNA.

  2. In Montagnier's experiment the ordinary DNA contained by the water droplet could diffuse to dark space-time sheets and enter the target along the same magnetic flux tubes along which the radiation propagates. DNA polymerase would amplify this leaking DNA and produce conjugate DNA. Irradiation of the original DNA would generate the flux sheets serving as a route for the transfer. The killer test is to check whether it is indeed the conjugate of the original DNA that is produced. Again many-sheeted space-time is required.

  3. For the option based on the DNA as topological quantum computer hypothesis discussed above, remote replication would take place via the direct formation of a conjugate DNA template, and DNA polymerase would produce from this copies of the original DNA, whereas for the "trivial" option conjugate DNA is produced. Phantom DNA would not be absolutely necessary. It is however questionable whether the intensity of the radiation is high enough, and the resonant interaction with phantom DNA, which could give rise to a positive feedback, might be needed to amplify the radiation.

2. Dark DNA and frequency coding by quantum antenna mechanism

The remote transcription of dark DNA (phantom DNA) to ordinary DNA and vice versa would have quite far reaching implications for evolution since dark DNA/RNA/tRNA/amino-acids could define a virtual world serving as an R&D lab where new DNAs could be developed and, if needed, translated to ordinary DNA. The dark DNA could also be transferred through cell membranes without difficulty, in particular to germ cells. Also genetic transfer between different organisms would become possible. A second possibility is that the magnetic flux tubes mediating the dark photons traverse the cell membranes so that even the transfer of dark nucleons through the cell membrane is unnecessary. The implications for genetic engineering would be obvious.

Could one generalize the quantum antenna mechanism to the interaction between dark nucleons representing DNA triplets as entangled states of three quarks and ordinary DNA codons consisting of three unentangled nucleotides? Could similar mechanism realize genetic code assigning to dark DNA dark variants of RNA, tRNA and amino-acids via the analogs of transcription and translation processes? It seems that frequency coding, which - somewhat disappointingly - did not look natural for remote replication of ordinary DNA, is ideal for these processes so that the original idea of wave DNA would be realized at the level of dark-visible and dark-dark interactions.

The flux tubes would be associated with entire codons -DNA triplets - rather than individual nucleotides. Different DNA triplets do not form interacting groups in the sense that they should be connected by flux tubes. Therefore the simplest possibility would be frequency coding with specific resonance frequency for each DNA triplet. No quarks at the ends of the flux tubes connecting codons (not nucleotides) are needed. If one assumes that octaves correspond to the same frequency this would require odd multiples

λ(n) = (2n+1) λ_0 , n = 0, ..., 63

of λ_0 so that the longest wavelength would be 127 λ_0. In the number theoretic model of the genetic code based on the notion of Combinatorial Hierarchy (see this) codons are indeed labeled by 64 integers in the range 0,...,127 = 2^7 - 1. These integers are however not assumed to be odd. One can also consider the possibility that the frequencies are coded by the value of Planck constant, and this option leads to an interpretation of the earlier proposed TGD inspired realization of the so called divisor code suggested by Khrennikov and Nilsson in terms of the quantum antenna hypothesis. This will be discussed later on.
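As a quick sanity check of this counting, here is a minimal Python sketch (an illustration only; λ_0 = 1 is an arbitrary normalization) listing the odd multiples assigned to the 64 codons and confirming that the longest wavelength is 127 λ_0.

    # Odd-multiple frequency coding: lambda(n) = (2n+1)*lambda_0 for n = 0,...,63.
    lambda_0 = 1.0  # arbitrary unit wavelength, used only for illustration

    multipliers = [2 * n + 1 for n in range(64)]    # 1, 3, 5, ..., 127
    wavelengths = [m * lambda_0 for m in multipliers]

    print(len(multipliers))   # 64 codons
    print(max(multipliers))   # 127 = 2**7 - 1, the longest wavelength in units of lambda_0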

Support for this option comes from the phenomenon of phantom DNA demonstrating that resonant scattering of light from DNA and dark DNA occurs for the same frequencies.

Can one imagine remote transcription of dark DNA to ordinary DNA using only nucleotides as building bricks? This process would require coupling of DNA nucleotides to dark nucleons representing DNA triplets and it is not easy to imagine any simple mechanism making this possible. Already existing DNA triplets seem to be necessary.

3. Common explanation for the recent findings of Montagnier and earlier findings of Gariaev

In the experiments of Montagnier's group the outcome is remote replication, whereas the earlier experiments of Gariaev's group give evidence for remote activation of DNA transcription. Hence one expects a common underlying mechanism.

  1. The TGD based explanation of Montagnier's findings relies on the assumption that the homeopathic procedure generated a population of dark DNA nucleotides in the diluted system. The sequence of dilutions and shakings was like a series of environmental catastrophes driving the evolution of dark DNA and also feeding metabolic energy to the system. The outcome was a dark DNA population mimicking the original DNA in the test tube B. In the presence of DNA polymerase in tube B and of the second test tube A containing ordinary DNA, the dark DNA was somehow able to generate ordinary DNA in tube B. The detailed mechanism for this remained open.

  2. Could the scattered laser light have the same effect as the homeopathic procedure? This would require a direct transcription of dark DNA to ordinary DNA in the presence of DNA polymerase and nucleotides (only them!). It is very difficult to understand how this could happen. DNA polymerase very probably does not have the same catalyzing effect on dark DNA sequences as on ordinary DNA sequences. It is also difficult to imagine the build-up of ordinary DNA from nucleotides using dark nucleon sequences as templates: if frequency coded codons served as building bricks, the situation would be simpler, as already found.

  3. One must not forget that the presence of the test tube A was essential in the experiment of Montagnier: communication between the test tubes, crucial for the outcome, must have taken place. The consistency between the two experiments could be achieved if the DNA in test tube A generated the counterpart of the scattered laser signal of Gariaev's experiments, but certainly as a much weaker signal.

  4. This signal should have been amplified somehow by the presence of the dark DNA sequences in tube B so that it would have been able to generate critical amounts of the conjugate of the original DNA, amplified by DNA polymerase to a copy of the original. What suggests itself is a positive feedback loop ordinary DNA sequences → dark DNA sequences → ordinary DNA sequences ... causing the amplification of the weak signal so that it is able to induce remote replication by the proposed mechanism. This kind of feedback of signals propagating between magnetic bodies was assumed also in the model for the strange images produced by the irradiation of a DNA sample by ordinary light, interpreted as photographs of magnetic flux tubes containing dark matter (see this).

What is nice from the TGD point of view is that the consistency between the two experiments gives support also to the notion of dark DNA and its identification as phantom DNA.

4. Summary

The basic assumptions of the model of remote replication deserve a short summary.

  1. Bio-molecules would serve as receiving and sending quantum antennas, forming populations with communications between members just like higher organisms. The molecules participating in the same reaction would naturally have the same antenna frequencies. Quarks and antiquarks at the ends of the flux tubes would code for different nucleotides, and the frequencies associated with the nucleotides would be identical. The character of the classical electromagnetic field would code for a particular nucleotide.

  2. Remote replication and other remote polymerization processes would differ from the ordinary ones only in that the phase transition reducing the value of Planck constant for the flux tube, which would bring the molecules near each other, does not take place.

  3. The immediate product of remote replication would be the conjugate of the original DNA sequence, and DNA polymerase would amplify it to a copy of the original DNA sequence. This prediction could be tested by using very simple DNA sequences - say sequences consisting of two nucleotides which are not conjugates of each other. For instance, one could check what happens if conjugate nucleotides are absent from the target (neither the conjugate nor the original DNA sequence should be produced). If the target contains conjugate nucleotides but no originals, only conjugate DNA sequences would be produced - one might hope in sufficiently large amounts to be detectable.

  4. Frequency coding would be natural for quantum antenna interactions between ordinary DNA and its dark variant and also between dark variants of DNA, RNA, tRNA, and amino-acids. The reason is that dark nucleons represent the genetic code by entanglement and it is not possible to reduce the codon to a sequence of letters.

3. Possible implications

The proposed realization of remote replication seems to have rather far reaching implications for the understanding of the mechanism of homeopathy and the basic mechanisms of the immune system, as well as for the understanding of how the DNA - dark nucleon sequence association arises. One can also interpret the proposed TGD based realization of the divisor code suggested by Khrennikov (see this) as frequency coding of DNA triplets by the value of Planck constant assignable to the flux tubes emerging from DNA triplets.

1. Possible relevance for homeopathy and immune system

The TGD inspired vision about water memory assumes that the magnetic bodies of molecules dissolved into water represent the molecules in terms of the cyclotron frequencies characterizing these magnetic bodies. Molecules can lose their magnetic bodies as the hydrogen bonds connecting the molecule to the magnetic body are split. The population of these lost magnetic bodies would define a representation of the dissolved substance able to mimic it.

The hitherto unanswered questions concern the detailed structure of the magnetic body of the molecule and how it codes for the molecule. The hydrogen bonds connecting the molecule to the ordered water, forming a kind of ice cover around the molecule in the inactive state, should be a crucial aspect of the coding. If dark nucleon sequences are associated with the hydrogen bonds of this "ice layer" or generated in their splitting, as I have proposed, one can ask whether dark nucleon sequences could characterize the molecular magnetic body. If so, cyclotron resonance frequencies or more general frequencies associated with the dark DNA sequences could code for the molecule. DNA sequences would define a universal language allowing the system to name polar molecules.

The quantum antenna mechanism would in turn associate ordinary DNA sequences with the dark nucleon sequences coding for the molecule. Hence one can imagine the development of a mechanism allowing the organism to modify its DNA by adding to it genes coding for proteins characterized by the same resonance frequencies as the magnetic bodies of the invader molecules. These proteins would couple strongly to the invader molecules via the quantum antenna mechanism, and the phase transition reducing Planck constant would allow them to catch the invader molecules by attaching to them. The fact that the DNA of the immune system evolves very rapidly conforms with this vision.

2. Frequency coding for DNA sequences by the value of Planck constant as a realization of divisor code

The realization of the dark magnetic bodies of polar molecules in terms of dark nucleon sequences allows one to understand the association of dark DNA with ordinary DNA, RNA, and tRNA, making possible among other things the transcription of dark DNA to DNA and vice versa. Dark nucleon sequences would be associated with the magnetic bodies of DNA, mRNA, and tRNA. This would apply also to amino-acid sequences. Dark DNA would separate from ordinary DNA as the latter loses its magnetic body in the splitting of hydrogen bonds and suffers denaturation. A similar mechanism would cause the denaturation of other biomolecules and would mean that they "lose their names" and thus their information content and become mere organic molecules instead of living bio-molecules. This kind of association would make the emergence of the genetic code and its generalization to the naming of molecules by DNA sequences trivial.

The genetic code can be understood from the proposed natural correspondence between dark nucleon sequences and DNA, RNA, tRNA, and amino-acids. I have however also developed another realization based on the TGD variant of the so called divisor code first suggested by Khrennikov and Nilsson, and the following argument allows one to interpret it in terms of frequencies for a fixed value of photon energy, with the frequencies coded by the value of Planck constant.

  1. The observation of Khrennikov and Nilsson is the following. Consider the integers n in the range 1,...,21, obviously labeling amino-acids, and let k(n) be the number of divisors of n. Define B(k) as the number of integers n for which the number of divisors is k. It turns out that the numbers B(k) are rather near to the numbers A(k) of amino-acids coded by k codons (a small numerical check is sketched after this list). This suggests that a given amino-acid A is coded by the product of a prime p(A), which alone characterizes it, and an integer n(A) in the range 1,...,21. The codon coding for A would be characterized by the product of p(A) and some factor r(A) of n(A). With these assumptions a given codon would code for only a single amino-acid, and the number of DNAs coding for amino-acid A is the number of the factors r(A) of n(A). The codons coding for A would be coded by integers p(A)r(A) such that r(A) divides n(A). The safest assumption would be that the primes p(A) satisfy p(A) > 19 so that p(A) does not divide n(A) for any A. If p(A) is as small as possible, the value spectrum of p(A) is

    {23,29,31,37, 41,43,47,53, 59,61,67,71, 73,79,83,89, 97,101,103,107, 109}.

    What is interesting is that the Mersenne prime M_7 = 2^7 - 1 = 127 appears in the model of the genetic code based on the notion of Combinatorial Hierarchy (see this). This model assumes that DNA codons correspond to 64 integers in the range 1,...,127. This realization of the genetic code cannot however be consistent with the divisor code realized in the proposed manner, since consistency would require that the integers n(A)p(A) belong to the range 1,...,127. The prime factors of these integers can however belong to this range.

  2. The TGD inspired proposal was that the flux tube assignable to amino-acid A corresponds to hbar = p(A)×n(A)×hbar_0, whereas the DNA triplet (for quark-antiquark coding of the nucleotide rather than the triplet) coding for it is characterized by hbar = p(A)×r(A)×hbar_0 such that r(A) divides n(A).

  3. This proposal could be interpreted in terms of frequency coding by the quantum antenna mechanism. For a given photon energy E the wavelength would be coded by the value of hbar, and one would have λ_n = n λ_0, with n = p(A)n(A) for amino-acids and n = p(A)r(A) for codons. The condition that the flux tube lengths are the same for different DNA triplets would be satisfied if the common length of the flux tubes is an integer multiple of λ_0 proportional to the product of all integers appearing as factors in the integers coding for amino-acids. The common length of the flux tubes would therefore be proportional to the product ∏_A p(A) ∏_A r(A).
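The divisor statistics mentioned in item 1 are easy to reproduce. The sketch below is only a check of the counting, not part of the original proposal: it computes B(k), the number of integers n in 1,...,21 with exactly k divisors, and compares it with the standard degeneracies A(k) of the vertebrate genetic code, i.e. the number of amino-acids coded by k codons as read off from the ordinary codon table.

    # Divisor code of Khrennikov and Nilsson: compare B(k) with the code degeneracies A(k).
    def num_divisors(n):
        return sum(1 for d in range(1, n + 1) if n % d == 0)

    B = {}
    for n in range(1, 22):               # n = 1,...,21
        k = num_divisors(n)
        B[k] = B.get(k, 0) + 1

    # Standard genetic code: 2 amino-acids have 1 codon, 9 have 2, 1 has 3, 5 have 4, 3 have 6.
    A = {1: 2, 2: 9, 3: 1, 4: 5, 6: 3}

    for k in sorted(set(B) | set(A)):
        print(k, B.get(k, 0), A.get(k, 0))
    # Gives B = {1:1, 2:8, 3:2, 4:6, 5:1, 6:3}, which is indeed "rather near" to A.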

For background see the chapter Homeopathy in Many-Sheeted Space-time of "Biosystems as Conscious Holograms".

Monday, July 11, 2011

How could one calculate p-adic integrals numerically?

Riemann sum gives the simplest numerical approach to the calculation of real integrals. Also p-adic integrals should allow a numerical approach, and very probably such approaches already exist and "motivic integration" is presumably the proper term to google. The attempts of an average physicist to dig out this kind of wisdom from the vastness of the mathematical literature however lead to depression and a deep feeling of inferiority. The only manner to avoid the painful question "Whom should I blame for ever imagining that I could become a real mathematical physicist some day?" is a humble attempt to extrapolate real common sense to the p-adic realm. One must believe that the almost trivial Riemann integral has an almost trivial p-adic generalization, although this looks far from obvious.

1. A proposal for p-adic numerical integration

The physical picture provided by quantum TGD gives strong constraints on the notion of p-adic integral.

  1. The most important integrals should be over partonic 2-surfaces. Also p-adic variants of 3-surfaces and 4-surfaces can be considered. The p-adic variant of Kähler action would be an especially interesting integral, and for preferred extremals it reduces to Chern-Simons terms over 3-surfaces. One should use this definition also in the p-adic context: the reduction of a total divergence to a boundary term is not expected to take place in a numerical approach starting from the 4-dimensional Kähler action, since in the p-adic context topological boundaries do not exist. The reduction to Chern-Simons terms also means a reduction to cohomology, and p-adic cohomology indeed exists.

    At the first step one could restrict the consideration to algebraic varieties - in other words, zero loci for a set of polynomials P_i(x) - at the boundary of the causal diamond consisting of pieces of δM^4_± × CP_2. 5 equations are needed. The simplest integral would be the p-adic volume of the partonic 2-surface.

  2. The numerics must somehow rely on the p-adic topology, meaning that very large powers p^n are very small in the p-adic sense. In the p-adic context the Riemann sum makes no sense since the sum never has p-adic norm larger than the maximum p-adic norm of the summands, so that the limit would give just zero. Finite measurement resolution suggests that the analog of the limit Δx → 0 is the pinary cutoff O(p^n) = 0, n → ∞, for the function f to be integrated. In the spirit of algebraic geometry one must assume at least a power series expansion, if not even representability as a polynomial or rational function with rational or p-adic coefficients.

  3. The number theoretic approach suggests that the calculation of the volume vol(V) of a p-adic algebraic variety V as an integral should reduce to counting the solutions of the equations f_i(x) = 0 defining the variety. Together with the finite pinary cutoff this means counting the solutions of the equations f_i(x) mod p^n = 0. The p-adic volume Vol(V,n) of the variety in the measurement resolution O(p^n) = 0 would be simply the number of p-adic solutions of the equations f_i(x) mod p^n = 0 (a brute-force numerical illustration of this counting is given after this list). Although this number is expected to become infinite as a real number at the limit n → ∞, its p-adic norm is never larger than one. In the case that the limit is well-defined as a p-adic integer, one can say that the variety has a well-defined p-adic valued volume at the limit of infinite measurement resolution. The volume Vol(V,n) could behave like n_p p^n and exist as a well-defined p-adic number only if n_p is divisible by p.

  4. The generalization of the formula for the volume to an integral of a function over the volume is straightforward. Let f be the function to be integrated. One considers solutions to the condition f(x) = y, where y is a p-adic number in the resolution O(p^n) = 0 and therefore has only a finite number of values. The condition f(x) - y = 0 defines a codimension 1 sub-variety V_y of the original variety, and the integral is defined as the weighted sum ∑_y y × vol(V_y), where y runs over the finite set of allowed values of f(x), so that the calculation reduces to the calculation of volumes also in this case.
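The counting recipe of item 3 can be made concrete with a brute-force sketch. The Python code below is an illustration of the idea only: the toy curve y = x^2 in the plane stands in for the partonic 2-surface, and Vol(V,n) is simply the number of solutions modulo p^n.

    # Brute-force p-adic "volume" in the resolution O(p^n) = 0:
    # Vol(V, n) = number of solutions of f(x) = 0 in (Z/p^n)^dim.
    from itertools import product

    def volume_mod_pn(f, dim, p, n):
        """Count points x in (Z/p^n)^dim with f(x) == 0 mod p^n."""
        modulus = p ** n
        return sum(1 for x in product(range(modulus), repeat=dim)
                   if f(x) % modulus == 0)

    parabola = lambda v: v[1] - v[0] ** 2   # toy curve y = x^2, chosen only for illustration

    p = 5
    for n in (1, 2, 3):
        print(n, volume_mod_pn(parabola, 2, p, n))
    # Prints 5, 25, 125: the count grows like p^n, so it diverges as a real number
    # while its p-adic norm stays at most 1, as stated in item 3 above.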

2. General coordinate invariance

From the point of view of physics general coordinate invariance of the volume integral and more general integrals is of utmost importance.

  1. General coordinate invariance with respect to the internal coordinates of the surface is achieved by using a subset of imbedding space coordinates as preferred coordinates for the surface. This is also required if one works in an algebraic geometric setting. In the case of projective spaces and similar standard imbedding spaces of algebraic varieties natural preferred coordinates exist. In the TGD framework the isometries of M^4 × CP_2 define natural preferred coordinate systems.

  2. The question whether the formula can give rise to something proportional to the volume in the induced metric in the intersection of real and rational worlds is interesting. One could argue that one must include the square root of the determinant of the induced metric in the definition of the volume in preferred coordinates, but this might not be necessary. In fact, p-adic integration is genuine summation, whereas the determinant of the metric corresponds to a volume density and need not make sense in the p-adic context. Could the fact that the preferred coordinates transform in a simple manner under the isometries of the imbedding space (linearly under the maximal subgroup) alone guarantee that the information about the imbedding space metric is conveyed to the formula?

  3. Indeed, since the volume is defined as the number of p-adic points, the proposed formula should be invariant at least under coordinate transformations mediated by bijections of the preferred coordinates expressible in terms of rational functions. In fact, even more general bijections mapping p-adic numbers to p-adic numbers could be allowed, since they effectively mean the introduction of new summation indices. Since the determinant of the metric changes in coordinate transformations, this requires that the metric determinant is not present at all. Thus summation is what allows one to achieve the p-adic variant of general coordinate invariance.

  4. This definition of the volume and of more general integrals amounts to solving the remaining imbedding space coordinates as (in general) many-valued functions of the preferred coordinates. In the integral those branches contribute for which the solution is a p-adic number or belongs to the extension of p-adic numbers in question. By p-adic continuity the number of p-adic valued solutions is locally constant. In the case that one integrates a function over the surface, one obtains effectively a many-valued function of the preferred coordinates and can perform separate integrals over the branches.

3. Numerical iteration procedure

A convenient iteration procedure is based on the representation of the integrand f as a sum ∑_k f_k of functions associated with the different p-adic valued branches z_k = z_k(x) of the surface in the coordinates chosen and identified as a subset of preferred imbedding space coordinates. The number of contributing branches z_k is by p-adic continuity locally constant.

The function f_k - call it g for simplicity - can in turn be decomposed into a sum of piecewise constant functions by introducing first the piecewise constant pinary cutoffs g_n(x) obtained in the approximation O(p^(n+1)) = 0. One can write g as

g(x) = ∑_n h_n(x) , h_0(x) = g_0(x) ,

h_n(x) = g_n(x) - g_(n-1)(x) for n > 0 .

Note that h_n(x) is of the form h_n(x) = a_n(x) p^n, a_n(x) ∈ {0,...,p-1}, so that the representation of the integral as a sum of integrals of the piecewise constant functions h_n converges rapidly. The technical problem is the determination of the boundaries of the regions inside which these functions contribute.

The integral reduces to the calculation of the number of points for a given value of h_n(x). By the local constancy of the number of p-adic valued roots z_k(x), the number of points is of the form N_0 ∑_(k ≥ 0) p^k = N_0/(1-p), where N_0 is the number of points x with the property that not all points y = x(1+O(p)) represent p-adic points z(x). Hence a finite number of calculational steps is enough to determine completely the contribution of a given value to the integral, and the only approximation comes from the cutoff in n for h_n(x).
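The pinary decomposition used above can be illustrated with a short sketch; the function g and the prime p below are arbitrary toy choices serving only as an example.

    # Pinary cutoffs g_n(x) = g(x) mod p^(n+1) and layers h_n = g_n - g_(n-1).
    def pinary_cutoff(value, p, n):
        """Cutoff of a non-negative integer in the approximation O(p^(n+1)) = 0."""
        return value % p ** (n + 1)

    def pinary_layers(value, p, n_max):
        """Return [h_0, ..., h_n_max]; each h_n equals a_n*p^n with a_n in {0,...,p-1}."""
        layers = [pinary_cutoff(value, p, 0)]
        for n in range(1, n_max + 1):
            layers.append(pinary_cutoff(value, p, n) - pinary_cutoff(value, p, n - 1))
        return layers

    p = 5
    g = lambda x: x * x + 3 * x + 7      # toy integer-valued function
    print(pinary_layers(g(13), p, 3))    # g(13) = 215 = 0 + 3*5 + 3*5^2 + 1*5^3 -> [0, 15, 75, 125]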

4. Number theoretical universality

This picture looks nice, but it is far from clear whether the resulting integral is what the physicist wants. It is not clear whether the limit of Vol(V,n), n → ∞, exists or even should always exist.

  1. In the TGD Universe a rather natural condition is algebraic universality, requiring that the p-adic integral is proportional to a real integral in the intersection of real and p-adic worlds defined by varieties identified as loci of polynomials with integer/rational coefficients. Number theoretical universality would require that the value of the p-adic integral is a p-adic rational (or an algebraic number for extensions of p-adic numbers) equal to the value of the real integral and in the algebraic sense independent of the number field. In the eyes of a physicist this condition looks highly non-trivial. For a mathematician it should be extremely easy to show that this condition cannot hold true. If it does hold true, the equality would represent an extremely profound number theoretic truth.

    The basic idea of the motivic approach to integration is to generalize integral formulas so that the same formula applies in any number field: the specialization of the formula to a given number field would give the integral in that particular number field. This is of course nothing but number theoretical universality. Note that the existence of this kind of formula requires that in the intersection of the real and p-adic worlds real and p-adic integrals reduce to the same rationals or transcendentals (such as log(1+x) and polylogarithms).

  2. If number theoretical universality holds true, one can imagine that one just takes the real integral, expresses it as a function of the rational number valued parameters (continuable to real numbers) characterizing the integrand and the variety, and algebraically continues this expression to p-adic number fields. This would give the universal formula which can be specialized to any number field. But it is not at all clear whether this definition is consistent with the proposed numerical definition.

  3. There is also an intuitive expectation in apparent conflict with number theoretic universality. The existence of the limit for only a finite number of p-adic primes could be interpreted as a mathematical realization of the physical intuition suggesting that one can assign to a given partonic 2-surface only a finite number of p-adic primes. Indeed, quantum classical correspondence combined with the p-adic mass calculations suggests that the partonic 2-surfaces assignable to a given elementary particle in the intersection of real and p-adic worlds correspond to a finite number of p-adic primes somehow coded by the geometry of the partonic 2-surface.

    One way out of the difficulty is that the functions - say polynomials - defining the surface have as coefficients powers of e^n. For a given prime p only the powers of e^p exist p-adically, so that only the primes p dividing n would be allowed. The transcendentals of the form log(1+px) and their polylogarithmic generalizations resulting from integrals in the intersection of real and p-adic worlds would have the same effect. A second way out of the difficulty would be based on the condition that the functional integral over WCW ("world of classical worlds") converges. There is a good argument stating that the exponent of Kähler action reduces to an exponent of an integer n, and since all powers of n appear, convergence is achieved only for the p-adic primes dividing n.

5. Can number theoretical universality be consistent with the proposed numerical definition of the p-adic integral?

The equivalence of the proposed numerical integral with the algebraic definition of p-adic integral motivated by the algebraic formula in real context expressed in terms of various parameters defining the variety and the integrand and continued to all number fields would be such a number theoretical miracle that it deserves italics around it:

For algebraic surfaces the real volume of the variety equals, apart from a constant C, the number of p-adic points of the variety in the case that the volume is expressible as a p-adic integer.

The proportionality constant C can depend on the p-adic number field, and the previous numerical argument suggests that the constant could be simply the factor 1/(1-p) resulting from the sum over p-adic points in p-adic scales so short that the number of the p-adic branches z_k(x) is locally constant. This constant is indeed needed: without it the real integrals in the intersection of real and p-adic worlds giving an integer valued result I = m would correspond to functions for which the number of p-adic valued points is finite.

The statement generalizes to apply also to the integrals of rational and perhaps even more general functions. The equivalence should be considered in a weak form by allowing the transcendentals contained in the formulas to have different meanings in real and p-adic number fields. Already the integrals of rational functions contain this kind of transcendentals.

The basic objection is that the number of p-adic points cannot, without an appropriate interpretation, give something proportional to the real volume, since the real integral contains the determinant of the induced metric. As already noticed, however, the preferred coordinates for the imbedding space are fixed by the isometries of the imbedding space, and therefore the information about the metric is actually present. For constant functions the correspondence holds true, and since the recipe for performing the integral reduces to that for an infinite sum of constant functions, it might be that the miracle indeed happens.

The proposal can be tested in a very simple manner. The simplest possible algebraic variety is the unit circle defined by the condition x^2 + y^2 = 1.

  1. In the real context the circumference is 2π, a p-adic transcendental requiring an infinite-dimensional extension defined in terms of powers of 2π. Does this mean that the number of p-adic points of the circle at the limit n → ∞ for the pinary cutoff O(p^n) = 0 is ill-defined? Should one define 2π as this integral and say that the motivic integral calculus based on manipulation of formulas reduces the integrals to a combination of p-adically existing numbers and 2π? In motivic integration the outcome of the integration is indeed a formula rather than a number, and only a specialization gives it a value in a particular number field. Does 2π have a specialization to the original p-adic number field or should one introduce it via a transcendental extension?

  2. The rational points (x,y) = (k/m, l/m) of the p-adic unit circle would correspond to Pythagorean triangles satisfying k^2 + l^2 = m^2 with the general solution k = r^2 - s^2, l = 2rs, m = r^2 + s^2. Besides these there is an infinite number of p-adic points satisfying the same equation: some of the integers k, l, m would however be infinite as real integers. These points can be solved by starting from the O(p) = 0 approximation (k,l,m) → (k,l,m) mod p = (k_0,l_0,m_0). One must assume that the equations are satisfied only modulo p, so that Pythagorean triangles modulo p are the basic objects. Pythagorean triangles can also be degenerate modulo p so that either k_0, l_0, or even m_0 vanishes. Note that for the surfaces x^n + y^n = z^n no non-trivial solutions exist for x^n, y^n, z^n < p for n > 2, and all p-adic points are infinite as real integers.

    The Pythagorean condition would give a constraint between the higher powers in the expressions for k, l and m. The challenge would be to calculate the number of this kind of points. If one can choose the integers k - (k mod p) and l - (l mod p) freely and solve m - (m mod p) from the quadratic equations uniquely, the number of points of the unit circle consisting of p-adic integers must be of the form N_0/(1-p). At the limit n → ∞ the p-adic length of the unit circle would be in p-adic topology equal to the number of modulo p Pythagorean triangles (r,s) satisfying the condition (r^2+s^2)^2 < p. The p-adic counterpart of 2π would be an ordinary p-adic number depending on p. With this definition of the length of the unit circle as the number of its modulo p Pythagorean points also Pythagoras would have agreed, since in the Pythagorean world view only rational triangles were accepted.

  3. One can also look at the situation directly by solving y as y = ± (1-x^2)^(1/2). The p-adic square root exists always for x = O(p^n), n > 0. The number of these points x is 2/(1-p) taking into account the minus sign. For x = O(p^0) the square root exists for roughly one half of the integers x_0 ∈ {0,...,p-1}. The number of integers (x^2)_0 is therefore roughly (p-1)/2. The study of the p = 5 case suggests that the number of integers (1-(x^2)_0)_0 ∈ {0,...,p-1} which are squares is about (p-1)/4. Taking into account the ± sign, the number of these points is N_0 ≈ (p-1)/2. In this case the higher O(p) contribution to x is arbitrary and one obtains the total contribution N_0/(1-p). Altogether one would have (N_0+2)/(1-p) so that, eliminating the proportionality factor, the estimate for the p-adic counterpart of 2π would be (p+3)/2 (a brute-force numerical check of these counts is sketched below).

  4. One could also try a trick. Express the points of the circle as (x,y) = (cos(t), sin(t)) such that t is any p-adic number with norm smaller than one. This unit circle is definitely not the same object as the one defined as an algebraic variety in the plane. One can however calculate the number of p-adic points at the limit n → ∞. Besides t = 0, all p-adic numbers with norm larger than p^(-n) and smaller than 1 are acceptable, and one obtains as a result N(n) = 1 + p^(n-1), where "1" comes from the all-important point t = 0. One has N(n) → 1 in the p-adic sense. If t = 0 is not allowed the length vanishes p-adically. The circumference of the circle in the p-adic context would have length equal to 1 in p-adic topology so that no problems would be encountered (the numbers exp(i2π/n) would require an algebraic extension of p-adic numbers and would not exist as power series).

    The replacement of the coordinates (x,y) with the coordinate t does not respect the rules of algebraic geometry since trigonometric functions are not algebraic functions. Should one allow also exponential and trigonometric functions and their inverses besides rational functions and define the circle also in terms of these? Note that these functions are exceptional in that the corresponding transcendental extensions - say the one containing e and its powers - are finite-dimensional.

  5. To make things more complicated, one could allow algebraic extensions of p-adic numbers containing the roots of unity U_n = exp(i2π/n). This would affect the count too but give a well-defined answer if one accepts that the points of the unit circle correspond to the Pythagorean points multiplied by the roots of unity.

A question inspired by this example is whether the values of p-adic integrals as p-adic numbers could be determined by the few lowest powers of p with higher order contribution giving something proportional to an infinite power of p.
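The unit circle counts discussed in items 2 and 3 can be probed numerically; the brute-force sketch below is only an illustration, with arbitrarily chosen primes. It counts the solutions of x^2 + y^2 = 1 modulo p^n and shows that each increase of n multiplies the count by p, which is the origin of the geometric series summing to something proportional to 1/(1-p).

    # Points of the unit circle x^2 + y^2 = 1 modulo p^n, counted by brute force.
    def circle_points_mod_pn(p, n):
        modulus = p ** n
        return sum(1 for x in range(modulus) for y in range(modulus)
                   if (x * x + y * y - 1) % modulus == 0)

    for p in (5, 7, 13):
        counts = [circle_points_mod_pn(p, n) for n in (1, 2)]
        print(p, counts, counts[1] // counts[0])
    # Gives [4, 20] for p = 5, [8, 56] for p = 7, [12, 156] for p = 13: the mod p^2 count
    # is p times the mod p count, so only the leading count matters for the estimates above.

For p = 5 the leading count 4 happens to coincide with the rough estimate (p+3)/2 of item 3; for other primes the brute-force leading count (p-1 or p+1 depending on p mod 4) can be compared against that estimate.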

6. p-Adic thermodynamics for measurement resolution?

The proposed definition is rather attractive number theoretically since everything would reduce to the counting of p-adic points of algebraic varieties. The approach generalizes also to algebraic extensions of p-adic numbers. Mathematicians and also physicists love partition functions, and one can indeed assign to the volume integral a partition function as a p-adic valued power series Z(t) = ∑ v_n t^n with the coefficients v_n giving the volume in the O(p^n) = 0 cutoff. One can also define partition functions Z_f(t) = ∑ f_n t^n, with f_n giving the integral of f in the same approximation.

Could this kind of partition functions have a physical interpretation as averages over physical measurements with different pinary cutoffs? The p-adic temperature can be identified via t = p^(1/T), T = 1/k. For p-adically small temperatures the lowest terms, corresponding to the worst measurement resolution, dominate. At first this sounds counter-intuitive since usually low temperatures are thought to make possible a good measurement resolution. One can however argue that one must excite the p-adically short range degrees of freedom to get information about them. These degrees of freedom correspond to the higher pinary digits by the p-adic length scale hypothesis and to high energies by the Uncertainty Principle. Hence high p-adic temperatures are needed. Also the measurement resolution would be subject to p-adic thermodynamics rather than being freely fixed by the experimentalist.