Monday, September 28, 2015

How is the fourth phase of water discovered by Pollack formed?

Pollack's fourth phase is a new phase of water whose existence has been demonstrated convincingly by Pollack. This phase might be behind various other phases of water - such as ordered water, Brown's gas, etc. - claimed by free energy researchers.

  1. The fourth phase is formed in the experiments of Pollack in a system consisting of water bounded by a gel phase, and the new phase itself is gel-like. It consists of negatively charged regions, which can be as thick as 200 micrometers; even thicker exclusion zones (EZs) have been reported. The name "EZ" comes from the observation that the negatively charged regions exclude some atoms, ions, and molecules. The stoichiometry of the EZ is H1.5O, meaning that it consists of hydrogen-bonded pairs of H2O molecules with one proton kicked away.

  2. Where do the protons go? The TGD proposal is that they become dark protons at magnetic flux tubes outside the exclusion zone. This generates a nanoscopic or even macroscopic quantum phase. DNA double strands with negative charge - 2 negative charges per base pair - would represent an example of this kind of phase. The cell is negatively charged and would also represent an example of the fourth phase inside the cell. In TGD inspired prebiology, protocells would be formed by bubbles of the fourth phase of water, as I have proposed in the article "More Precise TGD Based View about Quantum Biology and Prebiotic Evolution". One ends up with a rather detailed model of prebiotic evolution involving also clay minerals known as phyllosilicates.

  3. Irradiation of water using IR photons with energies in the range of metabolic energy quanta - the nominal value is about .5 eV - generates the fourth phase. This suggests that metabolic energy might be used to generate EZs, for instance around DNA strands.

    Also microwave radiation in the range of a few GHz generates the fourth phase, and many other mechanisms have been suggested. Microwaves also generate the burning of water, which might therefore also involve the formation of EZs, and microwave frequencies affect microtubules strongly (see this and this). It seems that energy feed is the key concept. Maybe a spontaneously occurring self-organization process induced by a feed of energy is in question.
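
The energy scales mentioned above are easy to check with the standard photon relations E = h·f and λ = h·c/E (a quick sanity check of the orders of magnitude, not part of the model itself):

```python
H_EV_S = 4.135667696e-15   # Planck constant in eV*s
C_NM_S = 2.99792458e17     # speed of light in nm/s

def wavelength_nm(energy_ev):
    """Photon wavelength (nm) for a given energy (eV): lambda = h*c/E."""
    return H_EV_S * C_NM_S / energy_ev

def energy_ev(frequency_hz):
    """Photon energy (eV) for a given frequency (Hz): E = h*f."""
    return H_EV_S * frequency_hz

# The nominal metabolic energy quantum ~0.5 eV lands in the infrared:
print(wavelength_nm(0.5))   # ~2480 nm, i.e. about 2.5 micrometers
# A few-GHz microwave photon carries a far smaller energy:
print(energy_ev(5e9))       # ~2.1e-5 eV
```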

How is the fourth phase formed? I have considered several alternative answers.
  1. Physicist del Giudice has proposed the existence of coherence domains of size of order 1 micrometer in water. The proposal would be that the coherence domains contain water molecules with one O-H bond per two molecules at criticality for splitting, so that a metabolic energy quantum can split the bond and kick the proton out as a dark proton. The problem is how to achieve this near-ionization in such a manner that the highly regular H1.5O stoichiometry is obtained.

  2. I have also proposed that so-called water clathrates might be involved and serve at least as seeds for the generation of the fourth phase of water. Water clathrates can be seen as cages containing hydrogen-bonded water forming a crystal structure analogous to ice. Maybe the fourth phase - involving also hydrogen bonds between two water molecules before the loss of a proton - could be formed from clathrates by a phase transition liberating the dark protons.

What I have not considered earlier is a proposal inspired by the fact that the sequences of dark protons at dark magnetic flux tubes outside the fourth phase can be regarded as dark nuclei.
  1. Dark nuclei are characterized by a dark binding energy. If the dark binding energy scales like Coulomb energy, it behaves like 1/size scale and thus like 1/h_eff. The model for the dark genetic code as dark proton sequences - generated as DNA becomes negatively charged by the generation of the fourth phase of water around it - suggests that the size of dark protons is of order nanometer. This implies that the dark nuclear binding energy is in the UV region: just about the 5 eV needed to kick an O-H bond near the criticality against splitting, after which a small energy dose such as the metabolic energy quantum can split it.

  2. Could it be that the formation of EZs proceeds as a chain reaction, that is dark nuclear fusion? If so, the basic entities of life - EZs - would be generated spontaneously, as Negentropy Maximization Principle indeed predicts! Note that this involves also the formation of dark nucleus analogs of DNA, RNA, and amino acids and a realization of the genetic code! As a dark proton is added to a growing dark proton sequence, a dark or ordinary photon with energy of about 5 eV is liberated as binding energy and could (after transforming to an ordinary photon) kick a new O-H bond near criticality or over it, and external IR radiation at the metabolic energy takes care of the rest. If the system is near criticality, many other manners to get it over the border can be imagined. Just a feed of energy generating IR radiation is enough.

  3. The resulting dark nuclei can transform to ordinary nuclei, and this would give rise to ordinary cold fusion (see "Biological transmutations, and their applications in chemistry, physics, biology, ecology, medicine, nutrition, agriculture, geology" by Kervran and "The secret life of plants" by Tompkins and Bird). Biofusion of nuclei of biologically important ions such as Ca in living matter has been reported. The same phenomenon is reported also in systems in which electrolysis splits water to yield hydrogen gas: for instance, Kanarev and Mizuno report cold fusion in this kind of systems. The explanation would be in terms of the fourth phase of water: in the splitting of water not only hydrogen atoms but also dark protons and dark nuclei would be formed.
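
The scaling argument of point 1 above (binding energy ∝ 1/size, so nanometer-sized dark protons scale MeV binding energies down to the eV range) can be sketched as follows; the femtometer and nanometer sizes and the 5 MeV per-nucleon figure are illustrative assumptions, not derived constants:

```python
def scaled_binding_energy_ev(e_ordinary_mev, size_ordinary_m=1e-15, size_dark_m=1e-9):
    """If the dark binding energy scales like Coulomb energy, i.e. as 1/size,
    an ordinary nucleon-scale binding energy (MeV at femtometer size) is
    reduced by the ratio of the two size scales. Returns the result in eV."""
    return e_ordinary_mev * 1e6 * (size_ordinary_m / size_dark_m)

# ~5 MeV binding at femtometer scale -> ~5 eV at nanometer scale (UV range):
print(scaled_binding_energy_ev(5.0))   # ~5.0 eV
```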

For a summary of earlier postings see Links to the latest progress in TGD.

Are the zeros of Riemann zeta number theoretically universal?

I have already posted the following piece of text as part of an earlier posting. Since the outcome of a simple argument leads to a very powerful statement about the number theoretic anatomy of the zeros of Riemann zeta, I thought that it would be appropriate to present it separately.

Dyson's comment about the Fourier transform of the zeros of Riemann zeta is very interesting concerning Number Theoretic Universality (NTU) for Riemann zeta.

  1. The numerical calculation of the Fourier transform for the distribution of the imaginary parts iy of the zeros s = 1/2 + iy of zeta shows that it is concentrated at a discrete set of frequencies coming as log(p^n), p prime. This translates to the statement that the zeros of zeta form a 1-dimensional quasicrystal, a discrete structure whose Fourier spectrum is by definition also discrete (this of course holds for ordinary crystals as a special case). Also the logarithms of powers of primes would form a quasicrystal, which is very interesting from the point of view of the p-adic length scale hypothesis. Primes label the "energies" of elementary fermions and bosons in arithmetic number theory, whose repeated second quantization gives rise to the hierarchy of infinite primes. The energies for general states are logarithms of integers.

  2. Powers p^n label the points of the quasicrystal defined by the points log(p^n), and Riemann zeta has an interpretation as a partition function for the boson case with this spectrum. Could p^n also label the points of the dual lattice defined by iy?

  3. The existence of the Fourier transform at points x = log(p_i^n) requires p_i^(iy_a) to be a root of unity for every zero y_a. This could define the sense in which the zeros of zeta are universal. This condition also guarantees that the factors n^(-1/2-iy) appearing in zeta at the critical line are number theoretically universal (p^(1/2) is problematic for Q_p: the problem might be solved by eliminating from the p-adic analog of zeta the factor 1-p^(-s)).

    1. One obtains for the pair (p_i, s_a) the condition log(p_i)·y_a = q_ia·2π, where q_ia is a rational number. Dividing the conditions for (i,a) and (j,a) gives

      p_i = p_j^(q_ia/q_ja)

      for every zero s_a, so that the ratios q_ia/q_ja do not depend on s_a. Since the exponent is a rational number, one obtains p_i^M = p_j^N for some integers M and N, which cannot be true.

    2. Dividing the conditions for (i,a) and (i,b) one obtains

      y_a/y_b = q_ia/q_ib

      so that the ratios q_ia/q_ib do not depend on p_i. The ratios of the imaginary parts of the zeros would therefore be rational numbers - a very strong prediction - and the scaling y_a → y_a/y_1, where y_1 is the zero with the smallest imaginary part, would map the zeros to rationals.

    3. The impossible consistency conditions for (i,a) and (j,a) can be avoided if each prime and its powers correspond to their own subset of zeros and these subsets of zeros are disjoint: one would have an infinite union of sub-quasicrystals labelled by primes, and each p-adic number field would correspond to its own subset of zeros. This might be seen as an abstract analog for the decomposition of a rational to powers of primes. The decomposition would be natural if for ordinary complex numbers the contribution of the complement of this set to the Fourier transform vanishes. The conditions (i,a) and (i,b) require now that the ratios of zeros are rationals only in the subset associated with p_i.
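
The concentration of the Fourier transform at x = log(p^k) can be illustrated numerically. A minimal sketch using only the first ten zeros of zeta (hardcoded), so the magnitudes are crude compared to the 10^4-zero computation discussed below:

```python
from math import cos, log

# Imaginary parts y_a of the first ten non-trivial zeros 1/2 + i*y_a of zeta
ZETA_ZEROS = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062,
              37.586178, 40.918719, 43.327073, 48.005151, 49.773832]

def F(x, zeros=ZETA_ZEROS):
    """Cosine transform of the zero distribution over both half-axes:
    F(x) = sum_a 2*cos(x*y_a)."""
    return sum(2.0 * cos(x * y) for y in zeros)

# Even with only ten zeros the transform is visibly enhanced at x = log 2
# relative to a nearby generic point:
print(abs(F(log(2))))   # clearly larger ...
print(abs(F(0.75)))     # ... than at a generic x
```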

For the general option the Fourier transform can be a delta function for x = log(p^k), and the set {y_a(p)} contains N_p zeros. The following argument inspires the conjecture that for each p there is an infinite number N_p of zeros y_a(p) satisfying

p^(iy_a(p)) = u(p) = e^(i2π·r(p)/m(p)) ,

where u(p) is a root of unity, that is y_a(p) = 2π·(M(a) + r(p)/m(p))/log(p), forming a subset of a lattice with lattice constant y_0 = 2π/log(p), which itself need not be a zero.
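
The lattice form of y_a(p) makes the root-of-unity property manifest, since p^(iy_a(p)) = e^(i2π·(M(a) + r(p)/m(p))) = e^(i2π·r(p)/m(p)) independently of the integer M(a). A small numerical check with a hypothetical value r(p)/m(p) = 1/3 for p = 2:

```python
import cmath
from math import log, pi

def y_lattice(p, M, r, m):
    """Conjectured zeros attached to prime p: y = 2*pi*(M + r/m)/log(p)."""
    return 2 * pi * (M + r / m) / log(p)

def phase(p, y):
    """p**(i*y) = exp(i*y*log p) as a unit complex number."""
    return cmath.exp(1j * y * log(p))

p, r, m = 2, 1, 3
# Every lattice point gives the same cube root of unity, whatever M(a) is:
phases = [phase(p, y_lattice(p, M, r, m)) for M in range(5)]
print(phases[0])                                 # exp(2*pi*i/3)
print(max(abs(z - phases[0]) for z in phases))   # ~0 up to rounding
```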

In terms of the stationary phase approximation, the zeros y_a(p) associated with p would have a constant stationary phase, whereas for y_a(p_i ≠ p) the phase p^(iy_a(p_i)) would fail to be stationary. The phase e^(ixy) would be non-stationary also for x ≠ log(p^k) as a function of y.

  1. Assume that for x = q·log(p), q not rational, the phases e^(ixy) fail to be roots of unity and are random, implying the vanishing/smallness of F(x).

  2. Assume that for a given p all powers p^(iy) for y not in {y_a(p)} fail to be roots of unity and are also random, so that the contribution of the set y not in {y_a(p)} to F(x) vanishes/is small.

  3. For x = log(p^(k/m)) the Fourier transform should vanish or be small for m different from 1 (rational roots of primes) and give a non-vanishing contribution for m=1. One has

    F(x = log(p^(k/m))) = ∑_(1≤n≤N(p)) e^(i2π·kM(n,p)/(mN(n,p))) .

    Obviously one can always choose N(n,p)=N(p).

  4. For the simplest option N(p)=1 one would obtain a delta function distribution for x = log(p^k). The sum of the phases associated with y_a(p) and -y_a(p) from the two half-axes of the critical line would give

    F(x = log(p^n)) ∝ X(p^n) = 2cos(n·(r(p)/m(p))·2π) .

    The sign of F would vary.

  5. The rational r(p)/m(p) would characterize a given prime (one can require that r(p) and m(p) have no common divisors). F(x) is non-vanishing for all powers x = log(p^n) if m(p) is odd. For p=2, also m(2)=2 allows |X(2^n)| = 2. An interesting ad hoc ansatz is m(p) = p or p^(s(p)). One has periodicity in n with period m(p), that is, a logarithmic wave. This periodicity serves as a test and in principle allows one to deduce the value of r(p)/m(p) from the Fourier transform.
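
Two of the predictions above are elementary to verify numerically: the periodicity of X(p^n) in n with period m(p), and the exact cancellation of sums of roots of unity, which is the mechanism behind the vanishing of F at rational roots of primes. The value r/m = 2/7 is a hypothetical example:

```python
import cmath
from math import cos, isclose, pi

def X(n, r, m):
    """Predicted transform value at x = log(p**n): X = 2*cos(n*(r/m)*2*pi)."""
    return 2 * cos(n * (r / m) * 2 * pi)

def root_sum(k, m, N):
    """sum_{j=0}^{N-1} exp(2*pi*i*k*j/m): cancels exactly unless m divides k."""
    return sum(cmath.exp(2j * cmath.pi * k * j / m) for j in range(N))

r, m = 2, 7   # hypothetical r(p)/m(p) for some prime p
# Periodicity in the exponent n with period m(p) - the "logarithmic wave":
assert all(isclose(X(n, r, m), X(n + m, r, m), abs_tol=1e-9) for n in range(1, 30))
print([round(X(n, r, m), 3) for n in range(1, 8)])
# Ten 5th roots of unity sum to zero when the phases are not aligned ...
print(abs(root_sum(3, 5, 10)))   # ~0
# ... while aligned phases add coherently:
print(abs(root_sum(5, 5, 10)))   # 10
```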

What could one conclude from the data (see this)?

  1. The first graph gives |F(x = log(p^k))| and the second graph displays a zoomed-up part of |F(x)| for small powers of primes in the range [2,19]. In the first graph the eighth peak (p=11) is the largest one, but in the zoomed graphs this is not the case. Hence something is wrong, or the graphs correspond to different approximations, suggesting that one should not take them too seriously.

    In any case, the modulus is not constant as a function of p^k. For small values of p^k the envelope of the curve decreases and seems to approach a constant for large values of p^k (one has x < 15, e^15 ≈ 3.3×10^6).

  2. According to the first graph, |F(x)| decreases for x = k·log(p) < 8, is largest for small primes, and remains below a fixed maximum for 8 < x < 15. According to the second graph the amplitude decreases for powers of a given prime (say p=2). Clearly, the small primes and their powers have much larger |F(x)| than large primes.

There are many possible reasons for this behavior. The most plausible reason is that the sums involved converge slowly and the approximation used is not good. The inclusion of only 10^4 zeros would show the positions of the peaks but would not allow a reliable estimate of their intensities.
  1. The distribution of zeros could be such that for small primes and their powers the number of zeros is large in the set of 10^4 zeros considered. This would be the case if the distribution of the zeros y_a(p) is fractal and gets "thinner" with p, so that the number of contributing zeros scales down with p as a power of p, say 1/p, as suggested by the envelope in the first figure.

  2. The infinite sum, which should vanish, converges only very slowly to zero. Consider the contribution ΔF(p^k, p_1) of the zeros belonging to the class of a prime p_1 ≠ p to F(x = log(p^k)) = ∑_(p_i) ΔF(p^k, p_i), which includes also p_i = p. ΔF(p^k, p_1), p_1 ≠ p, should vanish in an exact calculation.

    1. By the proposed hypothesis this contribution reads as

      ΔF(p^k, p_1) = ∑_a cos[X(p^k, p_1)·(M(a,p_1) + r(p_1)/m(p_1))·2π] ,

      X(p^k, p_1) = log(p^k)/log(p_1) .

      Here a labels the zeros associated with p_1. If p^k is "approximately divisible" by p_1, in other words p^k ≈ p_1^n, the sum over a finite number of terms gives a large contribution since interference effects are small, and a large number of terms is needed to give the nearly vanishing contribution suggested by the non-stationarity of the phase. This happens in several situations.

    2. The number π(x) of primes smaller than x behaves asymptotically like π(x) ≈ x/log(x) and the prime density approximately like 1/log(x) - 1/log(x)^2, so the problem is worst for the small primes. The problematic situation is encountered most often for powers p^k of small primes p near a larger prime, and for primes p (also large ones) near a power of a small prime (the envelope of |F(x)| seems to become constant above p^k ∼ 10^3).

    3. The worst situation is encountered for p=2 and p_1 = 2^k-1, a Mersenne prime, or p_1 = 2^(2^k)+1, k ≤ 4, a Fermat prime. For (p,p_1) = (2^k, M_k) one encounters the factor X(2^k, M_k) = log(2^k)/log(2^k-1), very near to unity for large Mersenne primes. For (p,p_1) = (M_k, 2) one encounters X(M_k, 2) = log(2^k-1)/log(2) ≈ k. Examples of Mersennes and Fermats are (3,2), (5,2), (7,2), (17,2), (31,2), (127,2), (257,2),... Powers 2^k, k = 2,3,4,5,7,8,... are also problematic.

    4. Also twin primes are problematic, since in this case one has the factor X(p = p_1+2, p_1) = log(p_1+2)/log(p_1). The region of small primes contains many twin prime pairs: (3,5), (5,7), (11,13), (17,19), (29,31),...

    These observations suggest that the problems might be understood as resulting from the inclusion of too small a number of zeros.
  3. The predicted periodicity of the distribution with respect to the exponent k of p^k is not consistent with the graph for small primes unless the period m(p) for small primes is large enough. The above-mentioned effects can quite well mask the periodicity. If the first graph is taken at face value for small primes, r(p)/m(p) is near zero, and m(p) is so large that the periodicity does not become manifest for small primes. For p=2 this would require m(2) > 21, since the largest power 2^n ≈ e^15 corresponds to n ∼ 21.
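
The slow-convergence mechanism of point 2 - contributions with X(p^k, p_1) near an integer cancel only after very many terms - is the familiar behavior of partial sums of cosines with a nearly stationary phase. A toy illustration (the offsets 0.001, 0.237 and the phase 0.4 are arbitrary illustrative values):

```python
from math import cos, pi

def partial_sum(eps, N, phi=0.4):
    """sum_{M=1}^{N} cos(2*pi*eps*M + phi): the phase drifts slowly when eps
    (the distance of X from the nearest integer) is small."""
    return sum(cos(2 * pi * eps * M + phi) for M in range(1, N + 1))

# Tiny eps: the first terms interfere constructively and the partial sum is
# of order N, so very many terms are needed before it averages out:
print(abs(partial_sum(0.001, 100)))   # large, of order N
# Generic eps: rapid cancellation keeps the partial sum of order one:
print(abs(partial_sum(0.237, 100)))   # order 1
```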

To summarize, the prediction is that the zeros of zeta should divide into disjoint classes {y_a(p)} labelled by primes such that within the class labelled by p one has p^(iy_a(p)) = e^(i2π·r(p)/m(p)), so that y_a(p) = [M(a,p) + r(p)/m(p)]·2π/log(p).

What does this speculative picture mean from the point of view of TGD?

  1. A possible formulation of number theoretic universality for the poles of the fermionic Riemann zeta ζF(s) = ζ(s)/ζ(2s) could be the condition that the exponents p^(k·s_n(p)/2) = p^(k/4)·p^(ik·y_n(p)/2) exist in a number theoretically universal manner for the zeros s_n(p) for a given p-adic prime p and for some subset of integers k. If the proposed conditions hold true, the problematic factor reduces to p^(k/4), requiring that k is a multiple of 4. The number of the non-trivial generating elements of the super-symplectic algebra in a monomial creating a physical state would be a multiple of 4. These monomials would have real part of conformal weight -1. Conformal confinement suggests that these monomials are products of pairs of generators for which the imaginary parts cancel. The conformal weights are then effectively real for the exponents automatically. Could the exponential formulation of number theoretic universality effectively reduce the generating elements to those with conformal weight -1/4 and make the operators in question hermitian?

  2. The quasicrystal property might have an application to TGD. The functions of the light-like radial coordinate appearing in the generators of the supersymplectic algebra could be of the form r^s, s a zero of zeta or rather its imaginary part. The eigenstate property with respect to the radial scaling r·d/dr is natural by radial conformal invariance.

    The idea that the arithmetic QFT assignable to infinite primes is behind the scenes in turn suggests that light-like momenta assignable to the radial coordinate have energies with the dual spectrum log(p^n). This is also suggested by the interpretation of ζ as a square root of the thermodynamical partition function for a boson gas with momenta log(p) and by the analogous interpretation of ζF.

    The two spectra would be associated with radial scalings and with light-like translations of the light-cone boundary respecting the direction and light-likeness of the light-like radial vector. The log(p^n) spectrum would be associated with light-like momenta, whereas p-adic mass scales would characterize states with thermal mass. Note that the generalization of the p-adic length scale hypothesis raises the scales defined by p^n to a special physical position: this might relate to the ideal structure of adeles.

  3. Finite measurement resolution suggests that the approximations of the Fourier transform over the distribution of zeros taking into account only a finite number of zeros might have a physical meaning. This might provide additional understanding about the origins of the generalized p-adic length scale hypothesis stating that primes p ≈ p_1^k, p_1 a small prime - say a Mersenne prime - have a special physical role.

See the chapter Unified Number Theoretic Vision of "TGD as Generalized Number Theory" or the article Could one realize number theoretical universality for functional integral?.

For a summary of earlier postings see Links to the latest progress in TGD.

Thursday, September 24, 2015

Where are they - the gravitational waves?

One hundred years since Einstein proposed gravitational waves as part of his general theory of relativity, an 11-year search performed with CSIRO's Parkes telescope has failed to detect them, casting doubt on our understanding of galaxies and black holes. The work, led by Dr Ryan Shannon (of CSIRO and the International Centre for Radio Astronomy Research), is published today in the journal Science, see the article Gravitational waves from binary supermassive black holes missing in pulsar observations. See also the popular article 11-year cosmic search leads to black hole rethink.

This finding is consistent with the TGD view about blackhole-like entities (I wrote 3 blog articles inspired by the most recent Hawking hype: see this, this and this).

In the TGD Universe an ideal blackhole is a space-time region with Euclidian(!) signature of the induced metric, and the horizon would correspond to the light-like 3-surface at which the signature of the metric changes. An ideal blackhole (or rather its surface) would consist solely of dark matter. The large values of the gravitational Planck constant h_gr = GMm/v_0, where M is the mass of the blackhole and m is the mass of say an electron, would be associated with the flux tubes mediating gravitational interaction and gravitational radiation. v_0 is a parameter with dimensions of velocity - some characteristic rotational velocity, say the rotation velocity of the blackhole, would be in question.
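
To get a feeling for the magnitudes, h_gr = GMm/v_0 can be evaluated for a solar-mass blackhole and an electron; the value v_0 ≈ 1.5×10^5 m/s is an illustrative assumption, since the actual value is model dependent:

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.0546e-34   # reduced Planck constant, J*s
M_SUN = 1.989e30    # solar mass, kg
M_E = 9.109e-31     # electron mass, kg

def h_gr(M, m, v0):
    """Gravitational Planck constant h_gr = G*M*m/v0 from the text."""
    return G * M * m / v0

# Assumed characteristic velocity v0 ~ 1.5e5 m/s (illustrative only):
ratio = h_gr(M_SUN, M_E, 1.5e5) / HBAR
print(ratio)   # ~1e19: enormously larger than hbar
```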

The quanta of dark gravitational radiation would have much larger energies E = h_eff·f than one would expect on the basis of the rotation frequency, which corresponds to a macroscopic time scale. Dark gravitons would arrive as highly energetic particles along flux tubes and could decay to bunches of ordinary low energy gravitons in the detection. These bunches would be bursts rather than a continuous background and would probably be interpreted as noise. I have considered a model for this here.

See the article TGD view about blackholes and Hawking radiation.

For a summary of earlier postings see Links to the latest progress in TGD.

Some applications of Number Theoretical Universality

Number theoretic universality (NTU) in the strongest form says that all numbers involved at the "basic level" (whatever this means!) of adelic TGD are products of roots of unity and of powers of a root of e defining finite-dimensional extensions of p-adic numbers (e^p is an ordinary p-adic number). This is an extremely powerful physics inspired conjecture with a wide range of possible mathematical applications.
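
The parenthetical statement that e^p is an ordinary p-adic number holds because the p-adic valuations of the series terms p^n/n! of exp(p) grow without bound for odd p, so the series converges p-adically. The valuations can be checked with Legendre's formula for v_p(n!):

```python
def v_p_factorial(n, p):
    """p-adic valuation of n! by Legendre's formula: sum of n//p**i."""
    v, q = 0, p
    while q <= n:
        v += n // q
        q *= p
    return v

def term_valuation(n, p):
    """p-adic valuation of the series term p**n / n! of exp(p)."""
    return n - v_p_factorial(n, p)

# For odd p the valuations grow roughly like n*(p-2)/(p-1), so the terms
# tend to zero in the p-adic norm; shown here for p = 3:
print([term_valuation(n, 3) for n in range(10)])   # [0, 1, 2, 2, 3, 4, 4, 5, 6, 5]
```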

  1. For instance, the vacuum functional defined as an exponent of Kähler action for preferred extremals would be a number of this kind. One could define the functional integral as an adelic operation in all number fields: essentially as a sum of exponents of Kähler action for stationary preferred extremals, since the Gaussian and metric determinants potentially spoiling NTU would cancel each other leaving only the exponent.

  2. The implications of NTU for the zeros of Riemann zeta expected to be closely related to super-symplectic conformal weights will be discussed below.

  3. NTU generalises to all Lie groups. Exponents exp(i·n_i·J_i/n) of Lie-algebra generators define generalisations of number theoretically universal group elements and generate a discrete subgroup of a compact Lie group. Also hyperbolic "phases" based on the roots e^(m/n) are possible and make possible discretized NTU versions of all Lie groups expected to play a key role in the adelization of TGD.

    NTU generalises also to quaternions and octonions and allows one to define them as number theoretically universal entities. Note that ordinary p-adic variants of quaternions and octonions do not give rise to a number field: the inverse of a quaternion can fail to exist, since the p-adic variant of its norm squared can vanish, ∑_n x_n^2 = 0.

    NTU also allows one to define the notion of Hilbert space as an adelic notion. The exponents of the angles characterising a unit vector of the Hilbert space would correspond to roots of unity.
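
In the simplest abelian case the exponents of point 3 reduce to phases exp(i·2π·k/n) in U(1), which close into the cyclic group Z_n - a minimal illustration of how roots of unity generate a discrete subgroup of a compact Lie group:

```python
import cmath

def group_element(k, n):
    """exp(i*2*pi*k/n) as an element of U(1): a number theoretically
    universal phase, since it is a root of unity."""
    return cmath.exp(2j * cmath.pi * k / n)

# The elements for k = 0..n-1 close into the cyclic subgroup Z_n of U(1):
n = 6
g = group_element(1, n)
print(abs(g ** n - 1))                      # ~0: g**n returns to the identity
print(abs(g ** 2 - group_element(2, n)))    # ~0: powers stay in the subgroup
```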

Super-symplectic conformal weights and Riemann zeta

The existence of WCW geometry is highly nontrivial already in the case of loop spaces. A maximal group of isometries is required, and it is infinite-dimensional. The super-symplectic algebra is an excellent candidate for the isometry algebra. There is also an extended conformal algebra associated with δCD. These algebras have a fractal structure: the conformal weights for an isomorphic subalgebra are n-multiples of those for the entire algebra, giving an infinite hierarchy labelled by integers n > 0. The generating conformal weights could be poles of the fermionic zeta ζF; this demands n > 0. There would be an infinite number of generators with different non-vanishing conformal weights with the other quantum numbers fixed, whereas for ordinary conformal algebras there is only a finite number of generating elements (n=1).

If the radial conformal weights for the generators of the algebra g consist of the poles of ζF, the situation changes. ζF is suggested by the observation that fermions are the only fundamental particles in TGD.

  1. Riemann zeta ζ(s) = ∏_p 1/(1-p^(-s)), identifiable formally as the partition function ζB(s) of an arithmetic boson gas with bosons of energy log(p) and temperature 1/s = 1/(1/2+iy), should be replaced with that of an arithmetic fermionic gas, given in the product representation by ζF(s) = ∏_p (1+p^(-s)), so that the identity ζB(s)/ζF(s) = ζB(2s) follows. This gives

    ζF(s) = ζB(s)/ζB(2s) .

    ζF(s) has zeros at the zeros s_n of ζ(s) and at the pole s = 1/2 of ζ(2s). ζF(s) has poles at the zeros s_n/2 of ζ(2s) and at the pole s = 1 of ζ(s).

    The spectrum of 1/T for the generators of the algebra would be {(-1/2+iy)/2, n > 0, -1}. In p-adic thermodynamics the p-adic temperature is 1/T = 1/n and corresponds to the "trivial" poles of ζF. Complex values of temperature do not make sense in ordinary thermodynamics, but in ZEO quantum theory can be regarded as a square root of thermodynamics, and a complex temperature parameter makes sense.

  2. If the spectrum of conformal weights of the generators of the algebra (not of the entire algebra!) corresponds to the poles serving as analogs of propagator poles, it consists of the "trivial" conformal weights h = n > 0 - the standard spectrum with h=0 assignable to massless particles excluded - and the "non-trivial" weights h = -1/4 + iy/2. There is also a pole at h = -1.

    Both the non-trivial poles with real part h_R = -1/4 and the pole h = -1 correspond to tachyons. I have earlier proposed conformal confinement, meaning that the total conformal weight for the state is real. If so, one obtains for a conformally confined two-particle state corresponding to conjugate non-trivial zeros in the minimal situation h_R = -1/2, assignable to the N-S representation.

    In p-adic mass calculations the ground state conformal weight must be -5/2. The negative fermion ground state weight could explain why the ground state conformal weight must be tachyonic, -5/2. With the required 5 tensor factors one would indeed obtain this with minimal conformal confinement. In fact, an arbitrarily large tachyonic conformal weight is possible, but the physical states should always have conformal weights h > 0.

  3. h = 0 is not possible for the generators, which is reminiscent of the Higgs mechanism, for which the naive ground state corresponds to a tachyonic Higgs. h = 0 conformally confined massless states are necessarily composites obtained by applying the generators of the Kac-Moody algebra or the super-symplectic algebra to the ground state. This is the case according to p-adic mass calculations, and would suggest that the negative ground state conformal weight can be associated with the super-symplectic algebra and the remaining contribution comes from ordinary super-conformal generators. Hadronic masses, whose origin is poorly understood, could come from super-symplectic degrees of freedom. There is no need for p-adic thermodynamics in the super-symplectic degrees of freedom.
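
The identity ζF(s) = ζ(s)/ζ(2s) used above can be checked numerically from the truncated Euler product; at s = 2 one should obtain ζ(2)/ζ(4) = (π^2/6)/(π^4/90) = 15/π^2:

```python
from math import pi

def primes_up_to(n):
    """Primes up to n by a simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [i for i, ok in enumerate(sieve) if ok]

def zeta_f(s, cutoff=10000):
    """Truncated Euler product of the fermionic zeta: prod_p (1 + p**-s)."""
    prod = 1.0
    for p in primes_up_to(cutoff):
        prod *= 1.0 + p ** (-s)
    return prod

print(zeta_f(2.0))      # ~1.5198
print(15 / pi ** 2)     # 1.5198...
```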

Are the zeros of Riemann zeta number theoretically universal?

Dyson's comment about Fourier transform of Riemann Zeta is very interesting concerning NTU for Riemann zeta.

  1. The numerical calculation of Fourier transform for the distribution of the imaginary parts iy of zeros s=1/2+iy of zeta shows that it is concentrated at discrete set of frequencies coming as log(pn), p prime. This translates to the statement that the zeros of zeta form a 1-dimensional quasicrystal, a discrete structure Fourier spectrum by definition is also discrete (this of course holds for ordinary crystals as a special case). Also the logarithms of powers of primes would form a quasicrystal, which is very interesting from the point of view of p-adic length scale hypothesis. Primes label the "energies" of elementary fermions and bosons in arithmetic number theory, whose repeated second quantization gives rise to the hierarchy of infinite primes. The energies for general states are logarithms of integers.

  2. Powers pn label the points of quasicrystal defined by points log(pn) and Riemann zeta has interpretation as partition function for boson case with this spectrum. Could pn label also the points of the dual lattice defined by iy?

  3. The existence of Fourier transform for points log(pin) for any vector ya requires piiya to be a root of unity. This could define the sense in which zeros of zeta are universal. This condition also guarantees that the factor n-1/2-iy appearing in zeta at critical line are number theoretically universal (p1/2 is problematic for Qp: the problem might be solved by eliminating from p-adic analog of zeta the factor 1-p-s.

    1. One obtains for the pair (pi,sa) the condition log(pi)ya= qia2π, where qia is a rational number. Dividing the conditions for (i,a) and (j,a) gives

      pi= pjqia/qja

      for every zero sa so that the ratios qia/qja do not depend on sa. Since the exponent is rational number one obtains piM= pjN for some integers, which cannot be true.

    2. Dividing the conditions for (i,a) and (i,b) one obtains

      ya/yb= qia/qib

      so that the ratios qia/qib do not depend on pi. The ratios of the imaginary parts of zeta would be therefore rational number which is very strong prediction and zeros could be mapped by scaling ya/y1 where y1 is the zero which smallest imaginary part to rationals.

    3. The impossible consistency conditions for (i,a) and (j,a) can be avoided if each prime and its powers correspond to its own subset of zeros and these subsets of zeros are disjoint: one would have infinite union of sub-quasicrystals labelled by primes and each p-adic number field would correspond to its own subset of zeros: this might be seen as an abstract analog for the decomposition of rational to powers of primes. This decomposition would be natural if for ordinary complex numbers the contibution in the complement of this set to the Fourier trasform vanishes. The conditions (i,a) and (i,b) require now that the ratios of zeros are rationals only in the subset associated with pi.

For the general option the Fourier transform can be delta function for x=log(pk) and the
set {ya(p)} contains Np zeros. The following argument inspires the conjecture that
for each p there is an infinite number Np of zeros ya(p) satisfying

piya(p)=u(p)=e(r(p)/m(p))i2π ,

where u(p) is a root of unity that is ya(p)=2π (m(a)+r(p))/log(p) and forming a subset of a lattice
with a lattice constant y0=2π/log(p), which itself need not be a zero.

In terms of stationary phase approximation the zeros ya(p) associated with p would have constant stationary phase whereas for ya(pi≠ p)) the phase piya(pi) would fail to be stationary. The phase eixy would be non-stationary also for x≠ log(pk) as function of y.

  1. Assume that for x =qlog(p), q not a rational, the phases eixy fail to be roots of unity and are random implying the vanishing/smallness of F(x) .

  2. Assume that for a given p all powers piy for y not in {ya(p)} fail to be roots of unity and are also random so that the contribution of the set y not in {ya(p)} to F(p) vanishes/is small.

  3. For x= log(pk/m) the Fourier transform should vanish or be small for m different from 1 (rational roots of primes) and give a non-vanishing contribution for m=1. One has

    F(x= log(pk/m ) =∑1≤ n≤ N(p) e[kM(n,p)/mN(n,p)]i2π .

    Obviously one can always choose N(n,p)=N(p).

  4. For the simplest option N(p)=1 one would obtain delta function distribution for x=log(pk). The sum of the phases associated with ya(p) and -ya(p) from the half axes of the critical line would give

    F(x= log(pn)) ∝ X(pn)==2cos(n× (r(p)/m(p))× 2π) .

    The sign of F would vary.

  5. The rational r(p)/m(p) would characterize given prime (one can require that r(p) and m(p) have no common divisors). F(x) is non-vanishing for all powers x=log(pn) for m(p) odd. For p=2, also m(2)=2 allows to have |X(2n)|=2. An interesting ad hoc ansatz is m(p)=p or ps(p). One has periodicity in n with period m(p) that is logarithmic wave. This periodicity serves as a test and in principle allows to deduce the value of r(p)/m(p) from the Fourier transform.

What could one conclude from the data (see this)?
  1. The first graph gives |F(x=log(pk| and second graph displays a zoomed up part of |F(x| for small powers of primes in the range [2,19]. For the first graph the eighth peak (p=11) is the largest one but in the zoomed graphs this is not the case. Hence something is wrong or the graphs correspond to different approximations suggesting that one should not take them too seriously.

    In any case, the modulus is not constant as a function of p^k. For small values of p^k the envelope of the curve decreases and seems to approach a constant for large values of p^k (one has x < 15, e^15 ≈ 3.3× 10^6).

  2. According to the first graph |F(x)| decreases for x = k log(p) < 8, is largest for small primes, and remains below a fixed maximum for 8 < x < 15. According to the second graph the amplitude decreases for powers of a given prime (say p = 2). Clearly, the small primes and their powers have a much larger |F(x)| than large primes.

There are many possible reasons for this behavior. The most plausible reason is that the sums involved converge slowly and the approximation used is not good. The inclusion of only 10^4 zeros would show the positions of the peaks but would not allow a reliable estimate for their intensities.
  1. The distribution of zeros could be such that for small primes and their powers the number of zeros is large in the set of 10^4 zeros considered. This would be the case if the distribution of zeros ya(p) is fractal and gets "thinner" with p so that the number of contributing zeros scales down with p as a power of p, say 1/p, as suggested by the envelope in the first figure.

  2. The infinite sum, which should vanish, converges only very slowly to zero. Consider the contribution Δ F(p^k,p1) of the zeros belonging to the class of p1 ≠ p to F(x = log(p^k)) = ∑pi Δ F(p^k,pi), which includes also pi = p. Δ F(p^k,p1), p1 ≠ p, should vanish in an exact calculation.

    1. By the proposed hypothesis this contribution reads as

      Δ F(p^k,p1) = ∑a cos[X(p^k,p1)(M(a,p1) + r(p1)/m(p1)) 2π] ,

      X(p^k,p1) = log(p^k)/log(p1) .

      Here a labels the zeros associated with p1. If p^k is "approximately divisible" by p1, in other words p^k ≈ p1^n for some integer n, the sum over a finite number of terms gives a large contribution since interference effects are small, and a large number of terms is needed to give the nearly vanishing contribution suggested by the non-stationarity of the phase. This happens in several situations.

    2. The number π(x) of primes smaller than x goes asymptotically like π(x) ≈ x/log(x) and the prime density approximately like 1/log(x) - 1/log(x)^2, so that the problem is worst for the small primes. The problematic situation is encountered most often for powers p^k of small primes p near a larger prime, and for primes p (also large) near a power of a small prime (the envelope of |F(x)| seems to become constant above p^k ∼ 10^3).

    3. The worst situation is encountered for p = 2 and p1 = 2^k - 1, a Mersenne prime, and p1 = 2^(2^k) + 1, k ≤ 4, a Fermat prime. For (p,p1) = (2^k,Mk) one encounters the factor X(2^k,Mk) = log(2^k)/log(2^k - 1), very near to unity for large Mersenne primes. For (p,p1) = (Mk,2) one encounters X(Mk,2) = log(2^k - 1)/log(2) ≈ k. Examples of Mersennes and Fermats are (3,2), (5,2), (7,2), (17,2), (31,2), (127,2), (257,2),... Powers 2^k, k = 2,3,4,5,7,8,... are also problematic.

    4. Also twin primes are problematic since in this case one has the factor X(p = p1+2, p1) = log(p1+2)/log(p1). The region of small primes contains many twin prime pairs: (3,5), (5,7), (11,13), (17,19), (29,31),....

    These observations suggest that the problems might be understood as resulting from the inclusion of too small a number of zeros.
  3. The predicted periodicity of the distribution with respect to the exponent k of p^k is not consistent with the graph for small values of the prime unless the period m(p) for small primes is large enough. The above mentioned effects can quite well mask the periodicity. If the first graph is taken at face value, for small primes r(p)/m(p) is near zero and m(p) is so large that the periodicity does not become manifest. For p = 2 this would require m(2) > 21 since the largest power 2^n ≈ e^15 corresponds to n ∼ 21.
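The slow-convergence worry can be illustrated with a drastically truncated version of the sum. The sketch below uses only the first ten imaginary parts of the non-trivial zeta zeros (standard tabulated values) instead of the 10^4 of the cited computation, and assumes the naive form F(x) = ∑ over zeros of 2cos(xy); the point is purely qualitative:

```python
from math import cos, log

# First ten imaginary parts of the non-trivial zeta zeros (well-known values).
ZEROS = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062,
         37.586178, 40.918719, 43.327073, 48.005151, 49.773832]

def F_abs(x):
    """Truncated |F(x)| = |sum over zeros y of (e^(ixy) + e^(-ixy))| = 2|sum cos(x*y)|."""
    return 2 * abs(sum(cos(x * y) for y in ZEROS))

# Such a short sum can only hint at where the peaks sit; the intensities are
# unreliable, which is the point made in the text.
for p in (2, 3, 5, 7, 11):
    print(f"|F(log {p})| = {F_abs(log(p)):.3f}")
```

With 10 zeros the would-be delta peaks are badly smeared; one would need very many zeros before the non-stationary contributions interfere away.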

To summarize, the prediction is that the zeros of zeta should divide into disjoint classes {ya(p)} labelled by primes such that within the class labelled by p one has p^(iya(p)) = e^((r(p)/m(p)) i2π), so that ya(p) = [M(a,p) + r(p)/m(p)] 2π/log(p).
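The near-resonance factors X(p,p1) = log(p)/log(p1) discussed above are easy to tabulate; a minimal check for the Mersenne and twin-prime cases:

```python
from math import log

def X(p, p1):
    """The factor X(p, p1) = log(p)/log(p1) appearing in the truncated sums."""
    return log(p) / log(p1)

# X(Mk, 2) = log(2^k - 1)/log(2) lies very close to the integer k:
for k in (3, 5, 7, 13):  # Mersenne exponents giving 7, 31, 127, 8191
    print(f"k = {k}: X(2^{k}-1, 2) = {X(2**k - 1, 2):.5f}")

# For twin primes X(p1+2, p1) approaches 1 only slowly, so the region of
# small primes is the most problematic one:
for p1 in (3, 5, 11, 17, 29):
    print(f"X({p1 + 2}, {p1}) = {X(p1 + 2, p1):.5f}")
```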

What does this speculative picture mean from the point of view of TGD?

  1. A possible formulation of number theoretic universality for the poles of the fermionic Riemann zeta ζF(s) = ζ(s)/ζ(2s) could be the condition that the exponents p^(ksn(p)/2) = p^(k/4) p^(ikyn(p)/2) exist in a number theoretically universal manner for the zeros sn(p) for a given p-adic prime p and for some subset of integers k. If the proposed conditions hold true, the exponent reduces to p^(k/4), requiring that k is a multiple of 4. The number of the non-trivial generating elements of the super-symplectic algebra in the monomial creating a physical state would be a multiple of 4. These monomials would have real part of conformal weight -1. Conformal confinement suggests that these monomials are products of pairs of generators for which imaginary parts cancel. The conformal weights are however effectively real for the exponents automatically. Could the exponential formulation of number theoretic universality effectively reduce the generating elements to those with conformal weight -1/4 and make the operators in question hermitian?

  2. The quasi-crystal property might have an application to TGD. The functions of the light-like radial coordinate appearing in the generators of the supersymplectic algebra could be of the form r^s, s a zero of zeta or rather its imaginary part. The eigenstate property with respect to the radial scaling rd/dr is natural by radial conformal invariance.

    The idea that the arithmetic QFT assignable to infinite primes is behind the scenes in turn suggests that the light-like momenta assignable to the radial coordinate have energies with the dual spectrum log(p^n). This is also suggested by the interpretation of ζ as a square root of the thermodynamical partition function for a boson gas with momentum log(p) and the analogous interpretation of ζF.

    The two spectra would be associated with radial scalings and with light-like translations of the light-cone boundary respecting the direction and light-likeness of the light-like radial vector. The log(p^n) spectrum would be associated with light-like momenta whereas p-adic mass scales would characterize states with thermal mass. Note that the generalization of the p-adic length scale hypothesis raises the scales defined by p^n to a special physical position: this might relate to the ideal structure of adeles.

  3. Finite measurement resolution suggests that the approximations of Fourier transforms over the distribution of zeros taking into account only a finite number of zeros might have a physical meaning. This might provide additional understanding of the origins of the generalized p-adic length scale hypothesis stating that primes p ≈ p1^k, p1 a small prime - say a Mersenne prime - have a special physical role.

See the chapter Unified Number Theoretic Vision of "TGD as Generalized Number Theory" or the article Could one realize number theoretical universality for functional integral?.

For a summary of earlier postings see Links to the latest progress in TGD.

Sunday, September 20, 2015

"Invisible magnetic fields" as dark magnetic fields

A further victory for the notion of dark magnetic fields: scientists create a "portal" that conceals electromagnetic fields (see this). The popular article talks about wormholes and invisible magnetic fields. A couple of comments about the official terminology are in order.

"Wormhole" is a translation of "flux tube carrying monopole current". Since TGD is "bad" science, "wormhole" is the correct wording although it does not have much to do with reality. A similar practice is applied by stringy people when they speak of wormholes connecting blackholes. The original TGD terminology talks about partonic 2-surfaces and magnetic flux tubes but again: this is "bad" science. The reader can invent an appropriate translation for "bad".

"Invisible magnetic field" translates to "dark magnetic field carrying monopole flux". Dark magnetic fields give rise to one of the basic differences between TGD and Maxwellian theory: these magnetic fluxes can exist without generating currents. This makes them especially important in early cosmology since they explain the long range magnetic fields having no explanation in standard cosmology. Super-conductivity is a second application, central in TGD inspired quantum biology.

What is fantastic is that technology based on TGD is now being created without the slightest idea that it is technology based on TGD (really;-)? - sorry for my alter ego, which does not understand the importance of political correctness). Experimenters have now created a magnetic field for which the flux travels from one position to another as invisible flux. The idea is to use a flux, which propagates radially from a point (in good approximation) through a spherical super-conductor and is forced to run along dark flux tubes by the Meissner effect.

Indeed, in the TGD based model of a super-conductor the supracurrents flow along dark flux tubes as dark Cooper pairs. Since the magnetic field cannot penetrate the super-conductor, the fluxes travel along dark flux tubes carrying magnetic monopole fluxes and supra currents.

One of the applications is to guide magnetic fluxes to desired positions in MRI, and precisely targeted communication and control in general. This is one of the basic ideas of TGD inspired quantum biology.

See the chapter Criticality and dark matter.

For a summary of earlier postings see Links to the latest progress in TGD.

Saturday, September 19, 2015

Macroscopically quantum coherent fluid dynamics at criticality?

Evidence for the hierarchy of Planck constants implying macroscopic quantum coherence in quantum critical systems is rapidly accumulating. Also people having the courage to refer to TGD in their articles are gradually emerging. The most recent fluid dynamics experiment providing this kind of evidence was performed by Yves Couder and Emmanuel Fort (see for instance the article Single particle diffraction in macroscopic scale). Mathematician John W. M. Bush has commented on these findings in the Proceedings of the National Academy of Sciences, and his article provides references to a series of papers by Couder and collaborators.

The system studied consists of a tray containing water, which is subjected to vertical oscillation. The intensity of vibration is just below the critical value inducing so-called Faraday waves at the surface of the water. Although the water surface is calm, a water droplet begins to bounce and generates waves propagating along the water surface - "walkers". Walkers behave like classical particles at Bohr orbits. As they pass through a pair of slits they choose a random slit, but repeated experiments produce an interference pattern. Walkers exhibit an effect analogous to quantum tunneling, and even the analogs of quantum mechanical bound states of walkers, realized as circular orbits, emerge as the water tray rotates!

The proposed interpretation of the findings is in terms of Bohm's theory. Personally I find it very difficult to believe in this since Bohm's theory has profound mathematical difficulties. Bohm's theory was inspired by Einstein's belief in classical determinism and the idea that quantum non-determinism is not actual but reduces to the presence of hidden variables. Unfortunately, this idea led to no progress.

TGD is analogous to Bohm's theory in that classical theory is exact, but it serves only as an exact classical correlate of quantum theory: there is no attempt to eliminate quantum non-determinism. Quantum jumps occur between superpositions of entire classical time evolutions rather than their time=constant snapshots: this solves the basic paradox of the Copenhagen interpretation. A more refined formulation is in terms of zero energy ontology, which in turn forces one to generalize quantum measurement theory to a theory of consciousness.

Macroscopic quantum coherence associated with the behavior of droplets bouncing on the surface of water is suggested by the experiments. For instance, quantum measurement theory seems to apply to the behavior of a single droplet as it passes through a slit. In TGD the prerequisite for macroscopic quantum coherence would be quantum criticality at which large heff=n×h is possible. There indeed is an external oscillation of the tray containing water with an amplitude just below the criticality for the generation of Faraday waves at the surface of the water. Quantum classical correspondence states that the quantum behavior should have a classical correlate. The basic structure of classical TGD is that of hydrodynamics in the sense that dynamics reduces to conservation laws plus conditions expressing the vanishing of an infinite number of so-called super-symplectic charges - the conditions guarantee the strong form of holography and express quantum criticality. The generic solution of classical field equations could reduce to Frobenius integrability conditions guaranteeing that the conserved isometry currents are integrable and thus define global coordinates varying along the flow lines.

One should be of course very cautious. For the ordinary Schrödinger equation the system is closed; now the system is open. This is not a problem if the only function of the external vibration is to induce quantum criticality. The experiment brings to mind the old vision of Fröhlich about external vibrations as inducers of what looks like quantum coherence. In TGD framework this coherence would be forced coherence at the level of visible matter, but the oscillation itself would correspond to genuine macroscopic quantum coherence and a large value of heff. A standard example is a collection of pendula, which gradually start to oscillate in unison in the presence of a weak synchronizing signal. In the brain, neurons would start to oscillate synchronously in the presence of dark photons with large heff.

See the chapter Criticality and dark matter of "Hyperfinite factors and dark matter hierarchy".


For a summary of earlier postings see Links to the latest progress in TGD.

Thursday, September 17, 2015

Algebraic universality and the value of Kähler coupling strength

With the development of the vision about a number theoretically universal view of functional integration in WCW, a concrete vision about the exponent of Kähler action in Euclidian and Minkowskian space-time regions has emerged. The basic requirement is that the exponent of Kähler action belongs to an algebraic extension of rationals and therefore to that of p-adic numbers, and does not depend on the ordinary p-adic numbers at all - this at least for sufficiently large primes p. The functional integral would reduce in Euclidian regions to a sum over maxima since the troublesome Gaussian determinants that could spoil number theoretic universality are cancelled by the metric determinant for WCW.

The adelically exceptional properties of the Neper number e, the Kähler metric of WCW, and the strong form of holography posing extremely strong constraints on preferred extremals could make this possible. In Minkowskian regions the exponent of imaginary Kähler action would be a root of unity. In Euclidian space-time regions it would be expressible as a power of some root of e, which is unique in the sense that e^p is an ordinary p-adic number so that e is p-adically an algebraic number - the p:th root of e^p.

These conditions give conditions on the Kähler coupling strength αK = gK^2/4π (hbar=1) identifiable as an analog of critical temperature. Quantum criticality of TGD would thus make possible number theoretical universality (or vice versa).

  1. In Euclidian regions the natural starting point is CP2 vacuum extremal for which the maximum value of Kähler action is

    SK = π^2/(2gK^2) = π/(8αK) .

    The condition reads SK = q = m/n if one allows roots of e in the extension. If one requires a minimal extension involving only e and its powers, one would have SK = n. One obtains

    1/αK = 8q/π ,

    where the rational q = m/n can also reduce to an integer. One cannot exclude the possibility that q depends on the algebraic extension of rationals defining the adele in question.

    For CP2 type extremals the value of the p-adic prime should be larger than pmin = 53. One can consider a situation in which a large number of CP2 type vacuum extremals contribute, and in this case the condition would be more stringent. The condition that the action for a CP2 extremal is smaller than 2 gives

    1/αK ≤ 16/π ≈ 5.09 .

    It seems there is a lower bound for the p-adic prime assignable to a given space-time surface inside CD, suggesting that the p-adic prime is larger than 53× N, where N is the particle number.

    This bound has no practical significance. In condensed matter the particle number is proportional to (L/a)^3 - the volume divided by the atomic volume. On the basis of p-adic mass calculations the p-adic prime can be estimated to be of order (L/R)^2. Here a is the atomic size of about 10 Angstroms and R the CP2 radius. Using R ≈ 10^4 LPlanck this gives as an upper bound for the size L of the condensed matter blob a completely super-astronomical distance L ≤ a^3/R^2 ∼ 10^25 ly, to be compared with the distance of about 10^10 ly travelled by light during the lifetime of the Universe. For a blackhole of radius rS = 2GM with p ∼ (2GM/R)^2 and consisting of particles with mass above M ≈ hbar/R one would obtain the rough estimate M > (27/2)× 10^-12 mPlanck ∼ 13.5× 10^3 TeV, trivially satisfied.

  2. The physically motivated expectation from earlier arguments - not necessarily consistent with the recent ones - is that the value of αK is quite near to the fine structure constant at the electron length scale: 1/αK ≈ 1/αem ≈ 137.035999074(44).

    The latter condition gives n = 54 = 2× 3^3 and 1/αK ≈ 137.51. The deviation from the fine structure constant is Δα/α ≈ 3× 10^-3, that is .3 per cent. For n = 53 one obtains 1/αK = 134.96 with an error of 1.5 per cent. For n = 55 one obtains 1/αK = 140.06 with an error of 2.2 per cent. Could the relatively good prediction be a mere accident, or is there something deeper involved?
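The arithmetic above is easy to verify: with SK = π/(8αK) = n one has 1/αK = 8n/π, to be compared with the measured inverse fine structure constant:

```python
from math import pi

# Inverse fine structure constant at the electron length scale, as quoted above.
alpha_em_inv = 137.035999074

# 1/alpha_K = 8n/pi for the candidate integer values of the Kahler action.
for n in (53, 54, 55):
    aK_inv = 8 * n / pi
    dev = 100 * abs(aK_inv - alpha_em_inv) / alpha_em_inv
    print(f"n = {n}: 1/alpha_K = {aK_inv:.2f} (deviation {dev:.1f}%)")
```

The n = 54 value 137.51 deviates from 137.036 by about 0.3 per cent, while the neighboring integers give deviations of 1.5 and 2.2 per cent.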

What about Minkowskian regions? It is difficult to say anything definite. For cosmic string like objects the action is non-vanishing but proportional to the area A of the string like object, and the conditions would give quantization of the area. The area of a geodesic sphere of CP2 is proportional to π. If the value of gK is the same for Minkowskian and Euclidian regions, gK^2 ∝ π^2 implies SK ∝ A/(R^2π^2), so that A/R^2 ∝ π^2 is required.

This approach leads to different algebraic structure of αK than the earlier arguments.

  1. αK is rational multiple of π so that gK2 is proportional to π2. At the level of quantum TGD the theory is completely integrable by the definition of WCW integration(!) and there are no radiative corrections in WCW integration. Hence αK does not appear in vertices and therefore does not produce any problems in p-adic sectors.

  2. This approach is consistent with the proposed formula relating the gravitational constant and the p-adic length scale. G/Lp^2 for p = M127 would now be a rational power of e, and number theoretically universal. A good guess is that G does not depend on p. As found, this could be achieved also if the volume of the CP2 type extremal depends on p so that the formula holds for all primes. αK could also depend on the algebraic extension of rationals to guarantee the independence of G on p. Note that preferred p-adic primes correspond to ramified primes of the extension so that extensions are labelled by collections of ramified primes, and the ramified prime corresponding to gravitonic space-time sheets should appear in the formula for G/Lp^2.

  3. Also the speculative scenario for coupling constant evolution could remain as such. Could the p-adic coupling constant evolution for the gauge coupling strengths be due to the breaking of number theoretical universality bringing in dependence on p? This would require a mapping of p-adic coupling strengths to their real counterparts, and the variant of canonical identification used is not unique.

  4. A more attractive possibility is that coupling constants are algebraically universal (no dependence on number field). Even the value of αK, although number theoretically universal, could depend on the algebraic extension of rationals defining the adele. In this case coupling constant evolution would reflect the evolution assignable to the increasing complexity of algebraic extension of rationals. The dependence of coupling constants on p-adic prime would be induced by the fact that so called ramified primes are physically favored and characterize the algebraic extension of rationals used.

  5. One must also remember that the running coupling constants are associated with the QFT limit of TGD obtained by lumping the sheets of many-sheeted space-time into a single region of Minkowski space. Coupling constant evolution would emerge at this limit. Whether this evolution reflects number theoretical evolution as a function of the algebraic extension of rationals is an interesting question.

See the chapter Coupling Constant Evolution in Quantum TGD and the chapter Unified Number Theoretic Vision of "Physics as Generalized Number Theory" or the article Could one realize number theoretical universality for functional integral?.

For a summary of earlier postings see Links to the latest progress in TGD.

Could one realize number theoretical universality for functional integral?

Number theoretical vision relies on the notion of number theoretical universality (NTU). In the fermionic sector NTU is necessary: one cannot speak about real and p-adic fermions as separate entities, and fermionic anti-commutation relations are indeed number theoretically universal.

By supersymmetry NTU should apply also to the functional integral over WCW (or its sector defined by a given causal diamond CD) involved with the definition of scattering amplitudes. The expression for the integral should make sense in all number fields simultaneously. At first this condition looks horrible, but the Kähler structure of WCW, the identification of the vacuum functional as the exponent of Kähler function, and the unique adelic properties of the Neper number e give excellent hopes of NTU and also predict the general form of the functional integral and of the value spectrum of Kähler action for preferred extremals.

See the chapter Unified Number Theoretic Vision of "Physics
as Generalized Number Theory" or the article Could one realize number theoretical universality for functional integral?.

For a summary of earlier postings see Links to the latest progress in TGD.

Tuesday, September 15, 2015

The effects of psychedelics as a key to the understanding of remote mental interactions?

There is a book about psychedelics titled "Inner Paths to Outer Space: Journeys to Alien Worlds through Psychedelics and Other Spiritual Technologies", written by Rick Strassman, Slawek Wojtowicz, Luis Eduardo Luna and Ede Frecska (see this). The basic message of the book is that psychedelics might make possible instantaneous remote communications with distant parts of the Universe. The basic objection is that light velocity sets stringent limits on classical communications. A second objection is that the communications require a huge amount of energy unless they are precisely targeted. The third objection is that quantum coherence on very long, even astrophysical scales is required. In TGD framework these objections do not apply.

In Zero Energy Ontology (ZEO) communications in both directions of geometric time are possible, and a kind of time-like zig-zag curve makes possible apparent superluminal velocities. Negentropic quantum entanglement provides a second manner to share mental images, say sensory information, remotely. The proposed model leads to the general idea that the attachment of information molecules such as neurotransmitters and psychedelics to a receptor is a manner to induce a remote connection involving transfer of dark photon signals in both directions of geometric time to arbitrarily long distances. The formation of a magnetic flux tube contact is a prerequisite for the connection, having an interpretation as direct attention or sense of presence. One can see living organisms as systems continually trying to build this kind of connections, created by reconnection of U-shaped flux tubes serving as magnetic tentacles. Dark matter as a hierarchy of phases with arbitrarily large values of Planck constant guarantees quantum coherence on arbitrarily long scales.

The natural TGD inspired hypothesis about what happens at the level of the brain, to be discussed in the sequel in detail, goes as follows.

  1. Psychedelics bind to the same receptors as the neurotransmitters with similar aromatic rings (a weaker assumption is that the neurotransmitters in question possess aromatic rings). This is presumably consistent with the standard explanation of the effect of classical psychedelics as a modification of serotonin uptake. This binding replaces the flux tube connection via the neurotransmitter to some part of the personal magnetic body with a connection via the psychedelic to some other system, which might be even in outer space. A communication line is created, making possible among other things remote sensory experiences.

    Magnetic fields extending to arbitrary large distances in Maxwell's theory are replaced with flux tubes in TGD framework. The magnetic bodies of psychedelics would carry very weak magnetic fields and would have very large heff - maybe serving as a kind of intelligence quotient.

  2. This would be like replacing the connection to the nearby computer server with a connection to a server at the other side of the globe. This would affect the usual function of the transmitter and possibly induce negative side effects. Clearly, the TGD inspired hypothesis gives the psychedelics a much more active role than the standard hypothesis.

  3. Psychedelics can be classified into two groups depending on whether they contain a derivative of the amino-acid trp with two aromatic rings or of phe with one aromatic ring. Also DNA nucleotides resp. their conjugates have 2 resp. 1 similar aromatic rings. This suggests that the coupling between information molecule and receptor is universal and the same as the coupling between the two bases in DNA double strand, consisting of hydrogen bonds. This hypothesis is testable since it requires that the trp:s/phe:s of the information molecule can be brought to the same positions as the phe:s/trp:s in the receptor. If also protein folding relies on this coupling, one might be able to predict the folding to a high degree.

  4. A highly suggestive idea is that molecules with aromatic rings are fundamental conscious entities at the level of molecular biology, and that more complex conscious entities are created from them by reconnection of flux tubes. DNA/RNA sequences and microtubules would be basic examples of this architecture of consciousness. If so, protein folding would be dictated by the formation of trp-phe contacts giving rise to larger conscious entities.

See the chapter Meditation, Mind-Body Medicine and Placebo: TGD point of view of "TGD Based View about Consciousness, Living Matter, and Remote Mental Interactions" or the revised article Psychedelic induced experiences as key to the understanding of the connection between magnetic body and information molecules?.

For a summary of earlier postings see Links to the latest progress in TGD.

Monday, September 07, 2015

First indications for the breaking of lepton universality due to the higher weak boson generations

Lepton and quark universality of weak interactions is a basic tenet of the standard model. Now the first indications for the breaking of this symmetry have been found.

  1. Lubos tells that LHCb has released a preprint with the title Measurement of the ratio of branching ratios (Bbar0→ Dbar*+ τ ντ)/(Bbar0→ Dbar*+ μ νμ). The news is that the measured branching ratio is about 33 per cent instead of the 25 per cent determined by mass ratios if the standard model is correct. The outcome differs by 2.1 standard deviations from the prediction so that it might be a statistical fluke.

  2. There are also indications for a second Bbar0 anomaly (see this). B mesons have long- and short-lived variants oscillating to their antiparticles and back - this relates to CP breaking. The surprise is that the second B meson - I could not figure out whether it was the short- or long-lived one - prefers to decay to eν instead of μν.

  3. There are also indications for the breaking of universality (see this) from B+ → K+e+e- and B+ → K+μ+μ- decays.

In TGD framework my first - and wrong - guess for an explanation was CKM mixing for leptons. TGD predicts that also leptons should suffer CKM mixing, induced by the different mixings of the topologies of the partonic 2-surfaces assignable to charged and neutral leptons. The experimental result would give valuable information about the values of the leptonic CKM matrix. What is new is that the decays of W bosons to lepton pairs involve the mixing matrix, and a CKM matrix deviating from the unit matrix brings effects anomalous in the standard model framework.

The origin of the mixing would be topological - usually it is postulated in a completely ad hoc manner for fermion fields. Particles correspond to partonic 2-surfaces - actually several of them, but in the case of fermions the standard model quantum numbers can be assigned to one of the partonic surfaces so that its topology becomes especially relevant. The topology of this partonic 2-surface at the end of the causal diamond (CD) is characterized by its genus - the number of handles attached to a sphere - and by its conformal equivalence class characterized by conformal moduli.

Electron and its neutrino correspond to spherical topology before mixing, muon and its neutrino to torus before mixing, etc. Leptons are modelled assuming conformal invariance, meaning that the leptons have wave functions - elementary particle vacuum functionals - in the moduli space of conformal equivalence classes known as Teichmueller space.

Contrary to the naive expectation, mixing alone does not explain the experimental finding. Taking into account mass corrections, the rates should be the same for different charged leptons since neutrinos are not identified. That mixing does not have any implications follows from the unitarity of the CKM matrix.
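The unitarity argument can be made concrete with a toy mixing matrix. The sketch below uses an arbitrary 2×2 unitary matrix (the angle and phase are made-up illustrative values, not fitted to data): when the unobserved neutrino flavor is summed over, the row-norm condition of unitarity removes all dependence on the mixing.

```python
import cmath
import math

# Generic 2x2 unitary mixing matrix parametrized by an angle and a phase.
theta, delta = 0.4, 1.1  # arbitrary illustrative values
V = [[math.cos(theta), math.sin(theta) * cmath.exp(1j * delta)],
     [-math.sin(theta) * cmath.exp(-1j * delta), math.cos(theta)]]

# Summing |V_ij|^2 over the unidentified neutrino flavors j gives 1 for every
# charged lepton i, so the total rate is independent of the mixing.
row_sums = [sum(abs(V[i][j]) ** 2 for j in range(2)) for i in range(2)]
print(row_sums)
```

The same cancellation holds for any unitary 3×3 CKM-like matrix, which is why mixing alone leaves the summed rates universal.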

The next trial is based on the prediction of 3 generations of weak bosons suggested by TGD.

  1. The TGD based explanation of family replication phenomenon in terms of genus-generation correspondence forces one to ask whether gauge bosons, identifiable as pairs of fermion and antifermion at opposite throats of a wormhole contact, could have a bosonic counterpart of family replication. Dynamical SU(3) assignable to the three lowest fermion generations/genera labelled by the genus of the partonic 2-surface (wormhole throat) means that fermions are combinatorially SU(3) triplets. Could the 2.9 TeV state - if it exists - correspond to this kind of state in the tensor product of triplet and antitriplet? The mass of the state should depend besides the p-adic mass scale also on the structure of the SU(3) state so that the mass would be different. This difference should be very small.

  2. Dynamical SU(3) could be broken so that wormhole contacts with different genera for the throats would be more massive than those with the same genera. This would give an SU(3) singlet and two neutral states, which are analogs of η′, η and π0 in Gell-Mann's quark model. The analogs of η and π0 and the analog of η′, which I have identified as the standard weak boson, would have different masses. But how large is the mass difference?

  3. These 3 states are expected to have identical masses for the same p-adic mass scale, if the mass comes mostly from the analog of hadronic string tension assignable to the magnetic flux tube connecting the two wormhole contacts associated with any elementary particle in the TGD framework (this is forced by the condition that the flux tube carrying monopole flux is closed and forms a very flattened square-shaped structure with the long sides of the square at different space-time sheets). p-Adic thermodynamics would give only a very small genus-dependent contribution to the mass if the p-adic temperature is T=1/2, as one must assume for gauge bosons (T=1 for fermions). Hence the 2.9 TeV state for which there are some indications could indeed correspond to a second Z generation. W should have a similar state at about 2.6 TeV.
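The suppression claimed in item 3 can be made concrete with a little arithmetic. A minimal sketch, assuming (this is my reading of the text, not a formula given in it) that the leading p-adic thermodynamics correction to the mass squared scales as p^(-1/T):

```python
from fractions import Fraction

# Assumption (hedged): leading p-adic thermal correction ~ p^(-1/T),
# so for gauge bosons (T = 1/2) it is O(1/p^2), for fermions (T = 1) O(1/p).
def leading_correction(p: int, inverse_T: int) -> Fraction:
    """Leading thermal correction ~ p^(-1/T) = p^(-inverse_T)."""
    return Fraction(1, p ** inverse_T)

p89 = 2**89 - 1  # Mersenne prime M89, the p-adic prime assumed for weak bosons

fermion_corr = leading_correction(p89, 1)  # T = 1
boson_corr = leading_correction(p89, 2)    # T = 1/2

print(float(fermion_corr))  # ~1.6e-27
print(float(boson_corr))    # ~2.6e-54
```

Under this assumed scaling the genus-dependent thermal contribution at T=1/2 is the square of the already tiny T=1 contribution, which is why the three states would be essentially degenerate.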
The orthogonality of the three weak boson states implies that their charge matrices in generation space are orthogonal. As a consequence, the higher generations of weak bosons do not have universal couplings to leptons and quarks. This broken universality implies small anomalies in the weak decays of hadrons due to the presence of a virtual MG,79 boson decaying to a lepton pair. These anomalies should be seen both in weak decays of hadrons producing Lν pairs via the decay of a virtual W or its partner WG,79, and in decays producing L+ L- pairs via a virtual Z or its partner ZG,79. Also γG,79 could be involved.
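The orthogonality argument can be illustrated with a toy computation. The matrices below are my own illustrative choices, not taken from TGD: three mutually orthogonal diagonal charge matrices in generation space, of which only the first couples universally:

```python
# Toy illustration (hypothetical matrices, not from the post): diagonal
# charge matrices acting on the three lepton generations (e, mu, tau),
# stored as lists of diagonal entries.
def dot(A, B):
    """Trace inner product Tr(A B) for diagonal matrices stored as lists."""
    return sum(a * b for a, b in zip(A, B))

Q1 = [1, 1, 1]    # universal coupling: the ordinary weak boson
Q2 = [1, 1, -2]   # second-generation boson: couples differently to tau
Q3 = [1, -1, 0]   # third-generation boson: distinguishes e from mu

# The three matrices are mutually orthogonal:
assert dot(Q1, Q2) == 0 and dot(Q1, Q3) == 0 and dot(Q2, Q3) == 0

# Any nonzero charge matrix orthogonal to the universal Q1 has entries
# summing to zero, hence unequal entries: universality is necessarily broken.
```

The point of the sketch: once the first charge matrix is universal, orthogonality forces generation-dependent entries on the other two, so the higher-generation bosons cannot couple universally.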

This could explain the three anomalies associated with the neutral B mesons, which, like the neutral K mesons, have long- and short-lived variants.

  1. The two anomalies involving W bosons could be understood if some fraction of the decays takes place via b → c + WG,79 followed by WG,79 → L + ν. The charge matrix of WG,79 is not universal, and CP breaking is involved. Hence one could have interference effects which increase the branching fraction to τν or eν relative to μν, depending on whether the state is the long- or short-lived B meson.

  2. The anomaly in decays of B+ producing charged lepton pairs does not involve CP breaking and would be due to the non-universality of the ZG,79 charge matrix.
TGD also allows one to consider leptoquarks as pairs of leptons and quarks, and there is some evidence for them too! I wrote a blog posting about this as well (for an article see this). Also indications for M89 and MG,79 hadron physics with scaled-up mass scales are accumulating, and QCD is shifting to the verge of revolution (see this).

It seems that TGD is really there and nothing can prevent it from showing up. I predict that the next decades in physics will be a New Golden Age of both experimental and theoretical physics. I am eagerly and impatiently waiting for theoretical colleagues to finally wake up from their 40-year-long sleep, and for CERN to again be full of working physicists also during weekends (see this);-).

See the chapter New Particle Physics Predicted by TGD: Part I of "p-Adic Length Scale Hypothesis".

For a summary of earlier postings see Links to the latest progress in TGD.

Thursday, September 03, 2015

Indication for a scaled up variant of Z boson

Both Tommaso Dorigo and Lubos Motl tell about a spectacular 2.9 TeV di-electron event not observed in previous LHC runs. A single event of this kind is of course most probably just a fluctuation, but the human mind is such that it tries to see something deeper in it - even if practically all attempts of this kind are chasing mirages.

Since the decay is leptonic, the typical question is whether the dreamed-for state could be an exotic Z boson. This is also the reaction in the TGD framework. The first question to ask is whether the weak bosons assignable to the Mersenne prime M89 have scaled-up copies assignable to the Gaussian Mersenne MG,79. The scaling factor for mass would be 2^((89-79)/2) = 32. When applied to the Z mass of about .09 TeV one obtains 2.88 TeV, not far from 2.9 TeV. Eureka!? Looks like a direct scaled-up version of Z!? W should have a similar variant around 2.6 TeV.
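The scaling arithmetic can be checked in a few lines. The rule is the naive one stated above, masses scaling by 2^((k1-k2)/2) when moving from the p ~ 2^k1 scale to p ~ 2^k2; the PDG values of the W and Z masses are assumed inputs:

```python
# Naive p-adic mass scaling: m -> m * 2^((k_from - k_to)/2).
def scaled_mass(mass_tev: float, k_from: int, k_to: int) -> float:
    """Scale a mass from the p ~ 2^k_from scale to the p ~ 2^k_to scale."""
    return mass_tev * 2 ** ((k_from - k_to) / 2)

m_Z = 0.0912  # TeV (PDG value, rounded)
m_W = 0.0804  # TeV (PDG value, rounded)

print(round(scaled_mass(m_Z, 89, 79), 2))  # 2.92 TeV, close to the 2.9 TeV event
print(round(scaled_mass(m_W, 89, 79), 2))  # 2.57 TeV, i.e. roughly 2.6 TeV
```

The factor 2^((89-79)/2) = 2^5 = 32 takes the Z mass to within about one percent of the reported di-electron invariant mass.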

TGD indeed predicts exotic weak bosons and also gluons. The TGD based explanation of the family replication phenomenon in terms of genus-generation correspondence forces one to ask whether gauge bosons, identifiable as pairs of fermion and antifermion at opposite throats of a wormhole contact, could have a bosonic counterpart of family replication. The dynamical SU(3) assignable to the three lowest fermion generations/genera, labelled by the genus of the partonic 2-surface (wormhole throat), means that fermions are combinatorially SU(3) triplets. Could the 2.9 TeV state - if it exists - correspond to this kind of state in the tensor product of triplet and antitriplet? The mass of the state should depend, besides the p-adic mass scale, also on the structure of the SU(3) state, so that the masses would differ. This difference should be very small.

The dynamical SU(3) could be broken so that wormhole contacts with different genera for the throats would be more massive than those with the same genera. This would give an SU(3) singlet and two neutral states, which are analogs of η′, η, and π0 in Gell-Mann's quark model. The analogs of η and π0 would have masses different from that of the analog of η′, which I have identified as the standard weak boson. But how large is the mass difference?

These 3 states are expected to have identical masses for the same p-adic mass scale, if the mass comes mostly from the analog of hadronic string tension assignable to the magnetic flux tube connecting the two wormhole contacts associated with any elementary particle in the TGD framework (this is forced by the condition that the flux tube carrying monopole flux is closed and forms a very flattened square-shaped structure with the long sides of the square at different space-time sheets). p-Adic thermodynamics would give only a very small genus-dependent contribution to the mass if the p-adic temperature is T=1/2, as one must assume for gauge bosons (T=1 for fermions). Hence the 2.9 TeV state could indeed correspond to this kind of state.

Can one imagine any pattern for the Mersennes and Gaussian Mersennes involved? Charged leptons correspond to electron (M127), muon (MG,113) and tau (M107): Mersenne - Gaussian Mersenne - Mersenne. Does one have a similar pattern for gauge bosons too: M89 - MG,79 - M61?
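The pattern can at least be sanity-checked numerically: the exponents must really yield Mersenne and Gaussian Mersenne primes. A sketch using only the standard library, with a probabilistic Miller-Rabin test and the Gaussian Mersenne norm computed by exact Gaussian-integer arithmetic (an exponent n counts as Gaussian Mersenne when the norm of (1+i)^n - 1 is prime):

```python
import random

def is_probable_prime(n: int, rounds: int = 30) -> bool:
    """Miller-Rabin probabilistic primality test (never rejects a prime)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def gaussian_mersenne_norm(n: int) -> int:
    """Norm |(1+i)^n - 1|^2, computed with exact Gaussian-integer arithmetic."""
    a, b = 1, 0  # (1+i)^0 = 1
    for _ in range(n):
        a, b = a - b, a + b  # multiply (a + b i) by (1 + i)
    a -= 1
    return a * a + b * b

# Pattern from the post: charged leptons <-> 127, 113 (Gaussian), 107;
# gauge bosons <-> 89, 79 (Gaussian), 61.
assert all(is_probable_prime(2**n - 1) for n in (61, 89, 107, 127))
assert all(is_probable_prime(gaussian_mersenne_norm(n)) for n in (79, 113))
print("all exponents check out")
```

For n = 79 the norm works out to 2^79 - 2^40 + 1, and for n = 113 to 2^113 - 2^57 + 1; both are prime, as are the Mersenne numbers for 61, 89, 107 and 127, so the proposed pattern is at least arithmetically consistent.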

Recall that Lubos reported a dijet at 5.2 TeV: see the earlier posting. The dijet structure suggests some meson. One can imagine several candidates, but no perfect fit if one assumes an M89 meson and applies naive scaling. For instance, if the kaon mass is scaled by a factor 2^10 rather than 512 - just like the mass of the pion to get the mass of the proposed M89 pion candidate - one obtains 4.9 TeV. Naive scaling of the 940 MeV nucleon mass by 512 would predict that the M89 nucleon has a mass of 4.8 TeV.

See the chapter New Particle Physics Predicted by TGD: Part I of "p-Adic Length Scale Hypothesis".

For a summary of earlier postings see Links to the latest progress in TGD.