Thursday, September 29, 2011

Do we really understand the solar system?

The recent experimental findings have shown that our understanding of the solar system is surprisingly fragmentary. As a matter of fact, it is so fragmentary that even new physics might find a place in the description of phenomena like the precession of the equinoxes (I am grateful to my friend Pertti Kärkkäinen for telling me about the problem) and the recent discoveries about the bullet-like shape of the heliosphere and the strong magnetic fields near its boundary, which bring to mind incompressible fluid flow around an obstacle.

The TGD inspired model is based on the heuristic idea that stars are like pearls in a necklace defined by long magnetic flux tubes carrying dark matter and a strong magnetic field, possibly accompanied by an analog of the solar wind. The heliosphere would be like a bubble in the flow defined by the magnetic field in the flux tube, inducing its local thickening. A possible interpretation is as a bubble of ordinary and dark matter in a flux tube containing dark energy: this would provide a beautiful overall view about the emergence of stars and their heliospheres as a phase transition transforming dark energy to dark and visible matter. Among other things, the magnetic walls surrounding the solar system would shield it from cosmic rays.

For details and background see the article Do we really understand the solar system? and the chapter TGD and Astrophysics of "Physics in Many-Sheeted Space-time".

Sunday, September 25, 2011

More about nasty superluminal neutrinos

The superluminal speed of neutrinos has stimulated intense email debates and blog discussions. There is now an article by the OPERA collaboration in arXiv, so superluminal neutrinos are not a rumour anymore. Even the Finnish tabloid "Iltalehti" reacted to the news, and this is really something unheard of! Maybe the finding could even stimulate a colloquium in the physics department of Helsinki University!

The reactions to the potential discovery depend on whether the person can imagine some explanation for the finding or not. In the latter case the reaction is denial: most physics bloggers have chosen this option for understandable reasons. What else could they do? The six sigma statistics does not leave much room for objections, but there could of course be some very delicate systematic error involved. Lubos wrote quite an interesting piece about possible errors of this kind and classified them into timing errors, either at CERN or in Italy, and errors in the distance measurement.

The neutrinos are highly relativistic, having an average energy of 17 GeV, much larger than the neutrino mass scale of order .1 eV. The distance between CERN and Gran Sasso is roughly 730 km, which corresponds to a travel time of T = 2.4 milliseconds. The nasty neutrinos arrived at Gran Sasso ΔT = 60.7 ± 6.9 (statistical) ± 7.4 (systematic) ns before they should have. This time corresponds to a distance ΔL = 18 m. From this it is clear that the distance and timing measurements must be extremely accurate. The claimed distance precision is 20 cm (see this).
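These numbers are easy to cross-check. A minimal back-of-the-envelope sketch, using only the baseline and ΔT quoted above:

    # Back-of-the-envelope check of the OPERA numbers
    c = 2.998e8              # speed of light, m/s
    L = 7.3e5                # CERN - Gran Sasso baseline, m
    dT = 60.7e-9             # early arrival of the neutrinos, s

    T = L / c                # light travel time
    print(T * 1e3)           # ~2.4 ms
    print(c * dT)            # ~18 m, the distance corresponding to the early arrival
    print(dT / T)            # ~2.5e-5, the relative deviation (v-c)/c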

The experimentalists tell that they have searched for all possible systematic errors that they were able to imagine. The relative deviation of the neutrino speed from the speed of light is (v-c)/c = (2.48 ± 0.28 (statistical) ± 0.30 (systematic))×10⁻⁵, which is much larger than the uncertainty in the value of the speed of light. The effect does not depend on neutrino energy. A 6.1 sigma result is in question (for sigmas see this), so if there is no systematic error, the result can be a statistical fluctuation only with a probability of about 10⁻⁹.

I already wrote about the TGD based explanation of the effect assuming that it is real. The tachyonic explanations of the finding fail because different tachyonic masses are required to explain the SN1987A data, the recent anomaly, and other similar anomalies. Tachyons are of course also in conflict with causality. I repeat here the main points and add some new points that have emerged in numerous email discussions and in the blog discussion at viXra log.

  1. It is sub-manifold geometry which allows one to fuse the good aspects of both special relativity (the existence of well-defined conserved quantities due to the isometries of the imbedding space) and general relativity (the geometrization of gravitation in terms of the induced metric). As an additional bonus one obtains a geometrization of the electro-weak and color interactions and of standard model quantum numbers. The choice of the imbedding space is unique. The new element is the generalization of the notion of space-time: space-time identified as a 4-surface has shape as seen from the perspective of the imbedding space M⁴×CP₂. The study of the field equations leads among other things to the notion of many-sheeted space-time.

  2. In many-sheeted space-time light velocity is assigned to the light-like geodesics of a space-time sheet rather than the light-like geodesics of the imbedding space M⁴×CP₂. The effective velocity determined from the time needed to travel from point A to B along different space-time sheets is different, and therefore so is the signal velocity determined in this manner. The light-like geodesics of a space-time sheet correspond in the generic case to time-like curves of the imbedding space, so that the light velocity is reduced from the maximal signal velocity.

  3. For a Robertson-Walker cosmology imbedded as a 4-surface (this is crucial!) in M⁴×CP₂ (see this) the light velocity would be about 73 per cent of the maximal one, which would be reached along light-like geodesics of the M⁴ factor, as the simple estimate of the previous posting demonstrates.

    This leaves a lot of room to explain various anomalies (problems with the determination of the Hubble constant, the apparent growth of the Moon-Earth distance indicated by laser distance measurements, ...). The effective velocity can depend on the scale of the space-time sheet along which the relativistic particles arrive (and thus on distance, distinguishing between the OPERA experiment and SN1987A); it can depend on the character of the ultra-relativistic particle (photon, neutrino, electron, ...), etc. The effect is testable by using other relativistic particles - say electrons.

  4. The energy independence of the results fits perfectly with the predictions of the model since the neutrinos are relativistic. There can be a dependence on length scale - in other words on distance - and this is needed to explain the difference in Δc/c between SN1987A and CERN. The SN1987A neutrinos were also relativistic and travelled a distance L = cT = 168,000 light years, and they arrived about ΔT = 2-3 hours earlier than the photons (see this). This gives Δc/c = ΔT/T ≈ (1.4-2)×10⁻⁹, which is considerably smaller than for the recent experiment (a back-of-the-envelope check follows after this list). Hence the tachyonic model fails, but a scale and particle dependent maximal signal velocity can explain the findings easily.

  5. The space-time sheet along which the particles propagate would most naturally correspond to a small deformation of a "massless extremal" ("topological light ray", see this) assignable to the particle in question. Many-sheeted space-time could act like a spectroscope forcing each (free) particle type to its own kind of "massless extremal". The effect is predicted to be present for any relativistic particle. A more detailed model requires a model for the propagation of the particles having as basic building bricks wormhole throats, at which the induced metric changes its signature from Minkowskian to Euclidian: the Euclidian regions have an interpretation in terms of lines of generalized Feynman graphs. The presence of a wormhole contact between two space-time sheets implies the presence of two wormhole throats carrying fermionic quantum numbers, and the massless extremal is deformed in the regions surrounding the wormhole throats. At this stage I am not able to construct a detailed model for deformed MEs carrying photons, neutrinos or other relativistic particles.
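As promised in point 4, here is the back-of-the-envelope check of the SN1987A number; a minimal sketch using only the 2-3 hour lead and the 168,000 light year distance quoted above:

    # SN1987A: relative deviation of the neutrino speed from the light speed
    hours_per_year = 365.25 * 24
    T = 168_000 * hours_per_year     # light travel time in hours
    for dT in (2.0, 3.0):            # early arrival of the neutrinos, hours
        print(dT / T)                # ~1.4e-9 ... 2.0e-9, far below OPERA's ~2.5e-5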

If I were a boss at CERN, I would suggest that the experiment be carried out for relativistic electrons, whose detection would be much easier and for which one could use a much shorter baseline.

  1. Could one use both a photon and an electron signal simultaneously to eliminate the need to measure precisely the distance between points A and B?
  2. Can one imagine using mirrors for photons and relativistic electrons and comparing the times for A→ B→ A?

As a matter of fact, there is an old result by electrical engineer Obolensky, which I have mentioned earlier (see this), stating that in circuits signals seem to travel at superluminal speed. The study continues the tradition initiated by Tesla, who started the study of what happens when relays are switched on or off in circuits.

  1. The experimental arrangement of Obolensky suggests that a part of the circuit - the base of the so-called Obolensky triangle - behaves as a single coherent quantum unit in the sense that the interaction between the relays defining the ends of the base is instantaneous: the switching of a relay induces simultaneously a signal from both ends of the base.
  2. There are electromagnetic signals propagating with velocities c₀ (with values (271 ± 1.8)×10⁶ m/s and (278 ± 2.2)×10⁶ m/s) and c₁ (200.110×10⁶ m/s): these velocities are referred to as Maxwellian velocities, and they are below the light velocity in vacuum, c = 3×10⁸ m/s. c₀ and c₁ would naturally correspond to light velocities affected by the interaction of light with the charges of the circuit.
  3. There is also a signal propagating with a velocity c₂ ((620 ± 2.7)×10⁶ m/s), which is slightly more than twice the light velocity in vacuum. Does the identification c₂ = c_max, where c_max is the maximal signal velocity in M⁴×CP₂, make sense? Could the light velocity c in vacuum correspond to a light velocity which has been reduced from c# = .73 c_max in cosmic length scales to c# = .48 c_max due to the presence of matter (see the arithmetic check after this list)? Note that this interpretation does not require that electrons propagate with a superluminal speed.
  4. If Obolensky's findings are true and interpreted correctly, simple electric circuits might allow the study of many-sheeted space-time in a garage!
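The consistency of these identifications is simple arithmetic; a minimal check using the velocities quoted in the list above:

    # Obolensky's c2 versus the identification c2 = c_max
    c = 2.998e8        # light velocity in vacuum, m/s
    c2 = 6.20e8        # the superluminal circuit signal velocity, m/s
    print(c2 / c)      # ~2.07, slightly more than twice c
    print(c / c2)      # ~0.48, i.e. c = .48 c_max, to be compared with .73 c_max in cosmic scales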

To conclude, if the finding turns out to be true, it will mean for TGD what the Michelson-Morley experiment meant for special relativity.

For background see the chapter TGD and GRT of the online book "Physics in Many-Sheeted Space-time" or the article Are neutrinos superluminal?.

Addition: Those string theorists are simply incredible. Here is This Week's hype, which appeared in New Scientist.

So if OPERA’s results hold up, they could provide support for the existence of sterile neutrinos, extra dimensions and perhaps string theory. Such theories could also explain why gravity is so weak compared with the other fundamental forces. The theoretical particles that mediate gravity, known as gravitons, may also be closed loops of string that leak off into the bulk. “If, in the end, nobody sees anything wrong and other people reproduce OPERA’s results, then I think it’s evidence for string theory, in that string theory is what makes extra dimensions credible in the first place,” Weiler says.

This is absolute nonsense. What is wrong with the physics community: why has lying become an everyday practice?

Addition: From the comment section of Peter Woit's blog I learned that M-theorists have already presented a direct modification of the TGD explanation for neutrino superluminality by replacing space-time surfaces with branes: the web is a very effective communication tool;-). My guess was that the "discovery" would take place within a week. I would not be surprised if neutrino superluminality became the last straw for drowning M-theory. Sad that we must tolerate M-theoretic nonsense for still another decade.

Tuesday, September 20, 2011

Speed of neutrino larger than speed of light?

The newest particle physics rumour is that the CERN OPERA team working at Gran Sasso, Italy has reported 6.1 sigma evidence that neutrinos move with a superluminal speed. The total travel time is measured in milliseconds and the deviation from the speed of light is nanoseconds, meaning Δc/c ≈ 10⁻⁶, which is roughly 10³ times larger than the uncertainty 4.5×10⁻⁹ in the measured value of the speed of light. If the result is true, it means a revolution in fundamental physics.

The result is not the first of its kind, and the often proposed interpretation is that neutrinos behave like tachyons. The following is the abstract of an article giving a summary of the earlier evidence that neutrinos can move faster than light.

From a mathematical point of view velocities can be larger than c. It has been shown that Lorentz transformations are easily extended in Minkowski space to address velocities beyond the speed of light. Energy and momentum conservation fixes the relation between masses and velocities larger than c, leading to the possible observation of negative mass squared particles from a standard reference frame. Current data on neutrino mass squared yield negative values, making neutrinos as possible candidates for having speed larger than c. In this paper, an original analysis of the SN1987A supernova data is proposed. It is shown that all the data measured in '87 by all the experiments are consistent with the quantistic description of neutrinos as combination of superluminal mass eigenstates. The well known enigma on the arrival times of the neutrino bursts detected at LSD, several hours earlier than at IMB, K2 and Baksan, is explained naturally. It is concluded that experimental evidence for superluminal neutrinos was recorded since the SN1987A explosion, and that data are quantitatively consistent with the introduction of tachyons in Einstein's equation.

Personally I cannot take tachyonic neutrinos seriously. I would however not choose the easy option and argue that the result is due to bad experimentation, as Lubos and Jester do. This kind of effect is actually one of the basic predictions of TGD and emerged more than 20 years ago. Also several Hubble constants are predicted, and an explanation emerges for why the distance between the Earth and the Moon seems to be increasing as an apparent phenomenon. There are many other strange phenomena which find an explanation.

In the TGD Universe space-time is a many-sheeted 4-surface in the 8-D imbedding space M⁴×CP₂, and since the light-like geodesics of a space-time sheet are not light-like geodesics of Minkowski space, it takes in general a longer time to travel from point A to B along them than along imbedding space geodesics. A space-time sheet is bumpy and wiggly, so that the path is longer. Each space-time sheet corresponds to a different light velocity as determined from the travel time. The maximal signal velocity is reached only in the ideal situation in which the space-time geodesics are geodesics of Minkowski space.
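The geometric mechanism can be illustrated with a toy computation that has nothing to do with the actual TGD field equations: a signal moving at the maximal velocity along a wiggly path covers the straight-line distance at a reduced effective speed. The sinusoidal profile and its parameters are purely illustrative:

    import numpy as np

    # Toy model: a signal moves at speed c_max along the wiggly path y = A*sin(2*pi*x/lam);
    # the effective velocity over the straight-line distance is c_max*L_straight/L_arc.
    A, lam = 0.05, 1.0                    # illustrative bump amplitude and wavelength
    x = np.linspace(0.0, 10.0, 100_001)   # straight-line distance of 10 units
    y = A * np.sin(2 * np.pi * x / lam)
    L_arc = np.sum(np.hypot(np.diff(x), np.diff(y)))
    print(L_arc / x[-1])                  # the path is a few per cent longer
    print(x[-1] / L_arc)                  # effective velocity in units of c_max, below 1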

Robertson-Walker cosmology gives a good estimate for the light velocity in cosmological scales.

  1. One can use the relationship

    da/dt = (g_aa)^(-1/2)

    relating the curvature radius a of the RW cosmology (equal to the M⁴ light-cone proper time; the light-like boundary of the cone corresponds to the moment of the Big Bang) and the cosmic time t appearing in the Robertson-Walker line element

    ds² = dt² - a² dΩ₃².

  2. If one believes in Einstein's equations in long length scales, one obtains

    (8πG/3)×ρ = (1/g_aa - 1)/a².

    One can solve g_aa from this equation and therefore get an estimate for the cosmological speed of light as

    c# = (g_aa)^(1/2).

  3. By plugging in the estimates

    a ≈ t ≈ 13.8 Gy (the actual value of t is around 10 Gy)

    ρ ≈ 5 m_p/m³ (5 protons per cubic meter)

    G = 6.7×10⁻¹¹ m³kg⁻¹s⁻²

    one obtains the estimate

    (g_aa)^(1/2) ≈ .73.
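For the reader who wants to reproduce the number, here is a minimal numerical sketch of the estimate; the inputs are exactly those listed above:

    import math

    # Estimate of c# = (g_aa)^(1/2) from (8*pi*G/3)*rho = (1/g_aa - 1)/a^2
    c = 2.998e8                  # m/s
    G = 6.7e-11                  # m^3 kg^-1 s^-2
    m_p = 1.67e-27               # proton mass, kg
    rho = 5 * m_p                # ~5 protons per cubic meter, kg/m^3
    a = 13.8e9 * 9.46e15         # 13.8 Gy expressed as a distance, m

    g_aa = 1.0 / (1.0 + (8 * math.pi * G / 3) * rho * a**2 / c**2)
    print(math.sqrt(g_aa))       # ~0.73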

What can we conclude from the result? The light velocity identified as the cosmic light velocity would be 27 per cent smaller than the maximal signal velocity. This could easily explain why the neutrinos from SN1987A arrived a few hours earlier than the photons: they just arrived along a different space-time sheet containing somewhat less matter. One could also understand the OPERA results.

If these findings survive, they will provide additional powerful empirical support for the notion of many-sheeted space-time. Sad that TGD predictions must still be verified via accidental experimental findings. It would be much easier to do the verification of TGD systematically. In any case, the Laws of Nature do not care about science policy, and I dare hope that the mighty powerholders of particle physics are sooner or later forced to accept TGD as the most respectable known candidate for a theory unifying the standard model and General Relativity.

For background see the chapter TGD and GRT of the online book "Physics in Many-Sheeted Space-time" or the article Are neutrinos superluminal?.

Sunday, September 11, 2011

Could TGD be an integrable theory?

Over the years, evidence supporting the idea that TGD could be an integrable theory in some sense has accumulated. The challenge is to show that the various ideas about what integrability means form pieces of a bigger coherent picture. Of course, some of the ideas are doomed to be only partially correct or simply wrong. Since it is not possible to know beforehand which ideas are wrong and which are right, the situation is very much like that in experimental physics, and it is easy to claim (and has been and will be claimed) that all this argumentation is useless speculation. This is the price that must be paid for the luxury of genuine thinking.

Integrable theories allow one to solve the nonlinear classical dynamics in terms of the scattering data for a linear system. In the TGD framework this translates to quantum classical correspondence. The solutions of the modified Dirac equation define the scattering data. The conjecture is that octonionic real-analyticity, with space-time surfaces identified as surfaces for which the imaginary part of the octonion represented as a biquaternion vanishes, solves the field equations. This conjecture generalizes conformal invariance to its octonionic analog. If the conjecture is correct, the scattering data should define a real-analytic function whose octonionic extension defines the space-time surface as a surface for which its imaginary part in the representation as a biquaternion vanishes. There are excellent hopes about this thanks to the reduction of the modified Dirac equation to geometric optics.

I do not bother to type 10 pages of text here but refer to the article An attempt to understand preferred extremals of Kähler action and to the chapter TGD as a Generalized Number Theory II: Quaternions, Octonions, and their Hyper Counterparts of "Physics as Generalized Number Theory".

Friday, September 02, 2011

Where did the Lithium go?

In the comment section of my blog, Ulla gave an interesting link concerning the Lithium problem, to an article by Elisabetta Caffau et al titled "An extremely primitive halo star".

What has been found is a star which is extremely poor in metallic elements ("metallic" refers to elements heavier than Li). The mystery is that not only the elements heavier than Li but also Li itself, whose average abundance is believed to be determined by cosmological rather than stellar nucleosynthesis, is only very scarcely present in this kind of star.

This finding can be combined with two other observations about anomalies in the Li abundance.

  1. The average abundance of Li in the cosmos is lower than predicted by standard cosmology by a factor between 2 and 3. See this.
  2. Also the Sun has a too low Li abundance. See this.

I proposed years ago (see this) that part of the Li has transformed to dark matter (gained a larger value of Planck constant) and therefore effectively disappeared. This process would have occurred both in the interstellar medium and in stars, so that all three Li problems would be solved at once.

One might even imagine a process in which a star gradually loses its heavier elements as they transform to dark matter and are "eaten" by a dark star formed at the same time in its vicinity. If this were the case, dark matter stars would tend to be near ordinary stars with anomalously low abundances of Li and heavier elements.

Many question marks remain. What about the rate for the phase transition to dark matter? Also the nuclei lighter than Li should be able to transform to the dark form. Why are their cosmological abundances nevertheless essentially those predicted by the standard model of primordial nucleosynthesis? Is the reason that their fusion to Li was much faster than the transformation to dark matter during primordial nucleosynthesis, whereas Li itself fused very slowly and had time to transform to dark Li?

The authors think that some process could have created a very high temperature destroying the Li in this kind of star: maybe dark matter annihilation might have caused this. This looks rather artificial to me and would not explain the too low Li abundance for other stars and for the interstellar medium.

The Li problem would rather sharply distinguish between two very different views about dark matter: dark matter as some exotic elementary particles on one hand, and dark matter as phases of ordinary matter implied by a generalization of quantum theory on the other.

For background see the chapter Nuclear String Hypothesis of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

Thursday, September 01, 2011

Entropic gravity in difficulties?

Eric Verlinde's entropic gravity is one of those fashions of present-day theoretical physics which come and go (who still remembers Lisi's "Exceptionally Simple Theory of Everything", which for a moment raised Lisi to the status of a potential follower of Einstein?). That this would happen was rather clear to me from the beginning, and I expressed my views in several postings: see this, this, and this. The idea that there are no gravitons at all and that the gravitational force is a purely thermodynamical force looks nonsensical to me on purely mathematical grounds. But what about physics? Kobakhidze wrote a paper in which he demonstrated that neutron interferometry experiments disfavor the notion of entropic gravity. The neutron behaves like a quantal particle obeying the Schrödinger equation in the gravitational field of the Earth, and this is difficult to understand if gravitation is an entropic force.

I wrote detailed comments about this in the second posting and proposed a different interpretation of the basic formulas for gravitational temperature and entropy, based on zero energy ontology, which predicts that even elementary particles are at least mathematically analogous to thermodynamical objects. The temperature and entropy would be associated with the ensemble of gravitons assigned to the flux tubes mediating the gravitational interaction: the temperature behaves naturally as 1/r² in the absence of other graviton/heat sources, and the entropy is naturally proportional to the flux tube length and therefore to the radial distance r. This allows one to understand the formulas deduced by Sabine Hossenfelder, who has written one of the rather few clear expositions about entropic gravity. (Somehow it reflects the attitudes towards women in physics that her excellent contribution was not mentioned in the reference list of the Wikipedia article. Disgusting.) Entropic gravitons are of course quite a different thing than gravitation as an entropic force.

The question about the proper interpretation of the formulas was extremely rewarding since it also led to the question of what the GRT limit of TGD could be. This led to a beautiful answer and in turn forced the question of what black holes really are in the TGD Universe. We have no empirical information about their interiors, so the general relativistic answer can be taken only as one possibility, and one which is plagued by mathematical difficulties. The blackhole horizon is quite concretely the door to new physics, so one should be very open-minded here - we really do not know what is behind the door!

The TGD based answer was surprising: black holes in the TGD Universe correspond to regions of space-time with Euclidian signature of the induced metric. In particular, the lines of generalized Feynman diagrams are blackholes in this sense. This view would unify elementary particles and blackholes. The proposal also leads to a concrete understanding of the extremely small value of the cosmological constant as an average: the cosmological constant vanishes for Minkowskian regions but is large for Euclidian regions, where it is determined by the CP₂ size.

The first article of Kobakhidze appeared in arXiv already two years ago but was not noticed by bloggers (except me, but as a dissident I am of course not counted;-). Here the fact that I was asked to act as a referee helped considerably; unfortunately I did not have time for it. The new article Once more: gravity is not an entropic force by Kobakhidze was however noticed by the media and also by physics bloggers.

Lubos came first. Lubos had however read the article carelessly (even its abstract) and went on to claim that M. Chaichian, M. Oksanen, and A. Tureanu state in their article that Kobakhidze's claim is wrong and that they support entropic gravity. This was of course not the case: the authors agreed with Kobakhidze about entropic gravity but argued that there was a mistake in his reasoning. In honor of Lubos one must say that he had noticed the problems caused by the lack of quantum mechanical interference effects already much earlier.

Also Johannes Koelman wrote about the topic, with inspiration coming from the popular web article Experiments Show Gravity Is Not an Emergent Phenomenon, itself inspired by Kobakhidze's article.

In my opinion Verlinde's view is wrong, but it would be a pity if one did not try to explain the highly suggestive formulas for the entropy- and temperature-like parameters nicely abstracted by Sabine Hossenfelder from Verlinde's work. I have already briefly described my own interpretation inspired by zero energy ontology. In the TGD framework it seems impossible to avoid the conclusion that also the mediators of other interactions are in thermal equilibrium at the corresponding space-time sheets and that the temperature is universally the Unruh temperature determined by the acceleration. Also the expression for the entropy can be deduced, as the following little argument shows.

What makes the situation so interesting is that the signs of both the temperature and the entropy are negative for repulsive interactions, suggesting thermodynamical instability. This leads to the question whether matter antimatter separation could relate to a reversal of the arrow of geometric time at the space-time sheets mediating repulsive long range interactions. This statement makes sense in zero energy ontology, where the arrow of time has a concrete mathematical content as a property of zero energy states. In the following I will consider the identification of the temperature and entropy assignable to the flux tubes mediating gravitational or other interactions. I was too lazy to deduce explicit formulas in the original version of the article about this topic, and have now added the formulas to it.

Graviton temperature

Consider first the gravitonic temperature. The natural guess for the temperature parameter is the Unruh temperature

T_gr = (hbar/2π) a ,

where a is the projection of the gravitational acceleration along the normal of a gravitational potential = constant surface. At the Newtonian limit it would be the acceleration associated with the relative coordinate, corresponding to the reduced mass and equal to a = G(m₁+m₂)/r².

One could identify T_gr also as the magnitude of the gravitational acceleration. In this case the definition would be purely local. This is in accordance with the character of temperature as an intensive property.

The general relativistic objection against this generalization is that gravitation is not a genuine force: only a genuine acceleration due to interactions other than gravity should contribute to the Unruh temperature, so that the gravitonic Unruh temperature should vanish. On the other hand, any genuine force should give rise to an acceleration. The sign of the temperature parameter would be different for attractive and repulsive forces, so that negative temperatures would become possible. Also the lack of general coordinate invariance is a heavy objection against the formula.

In the TGD Universe the situation is different. In this case the definition of the temperature as the magnitude of a local acceleration is more natural.

  1. The space-time surface is a sub-manifold of the imbedding space, and one can talk about the acceleration of a point-like particle in the imbedding space M⁴×CP₂. This acceleration corresponds to the trace of the second fundamental form for the imbedding; it is a completely well-defined and general coordinate invariant quantity and vanishes for the geodesics of the imbedding space. Since the acceleration is a purely geometric quantity, this temperature would be the same for flux sheets irrespective of whether they mediate gravitational or some other interactions, so that all kinds of virtual particles would be characterized by this same temperature.

  2. One could even generalize T_gr to a purely local position dependent parameter by identifying it as the magnitude of the second fundamental form at a given point of the space-time surface. This would mean that the temperature in question would have a purely geometric correlate. This temperature would be always non-negative. The purely local definition would also save us from possible inconsistencies in the definition of the temperature resulting from the assumption that its sign depends on whether the interaction is repulsive or attractive.

  3. The trace of the second fundamental form - call it H - and thus T_gr vanishes for minimal surfaces. Examples of minimal surfaces are cosmic strings, massless extremals, and CP₂ vacuum extremals with an M⁴ projection which is a light-like geodesic. Vacuum extremals with at most 2-D Lagrangian CP₂ projection have a non-vanishing H, and this is true also for their deformations defining the counterpart of GRT space-time. Also the deformations of cosmic strings with 2-D M⁴ projection to magnetic flux tubes with 4-D M⁴ projection are expected to be non-minimal surfaces. The same applies to the deformations of CP₂ vacuum extremals near the region where the signature of the induced metric changes. The predicted cosmic string dominated phase of primordial cosmology would correspond to a vanishing gravitonic temperature. Also generic CP₂ type vacuum extremals have a non-vanishing H.

  4. Massless extremals are an excellent macroscopic space-time correlate for gravitons. The massivation of gravitons is however strongly suggested by simple considerations encouraged by the twistorial picture; the wormhole throats connecting parallel MEs would define the basic building bricks of gravitons and bring in a non-vanishing geometric temperature, an (extremely small but non-vanishing) graviton mass, and gravitonic entropy.

    1. The M⁴ projection of a CP₂ type vacuum extremal is a random light-like curve rather than a geodesic of M⁴ (this gives rise to Virasoro conditions). The mass scale defined by the second fundamental form describing the acceleration is non-vanishing. I have indeed assigned to this scale the mixing of M⁴ and CP₂ gamma matrices inducing the mixing of M⁴ chiralities, which gives rise to massivation. The original proposal was that the trace of the second fundamental form could be identifiable as the classical counterpart of the Higgs field. One can speak of light-like randomness above a given length scale defined by the inverse of the length of the acceleration vector.

    2. This suggests a connection with p-adic mass calculations: the p-adic mass scale m_p is proportional to the acceleration and thus could be given by the geometric temperature: m_p = nR⁻¹p^(-1/2) ∼ hbar H = hbar a, where R ∼ 10⁴ L_Pl is the CP₂ radius and n some numerical constant of order unity. This would determine the mass scale of the particle and relate it to the momentum exchange along the corresponding CP₂ type vacuum extremal. The local graviton mass scale at the flux tubes mediating the gravitational interaction would be essentially the geometric temperature.

    3. Interestingly, for photons at the flux tubes mediating the Coulomb interaction in the hydrogen atom this mass scale would be of order

      hbar a ∼ e² hbar/[m_p n⁴ a₀²] ∼ 10⁻⁵/n⁴ eV,

      which is of the same order of magnitude as the Lamb shift, which corresponds to a 10⁻⁶ eV energy scale for the n=2 level of the hydrogen atom. Hence it might be possible to kill the hypothesis rather easily.

    4. Note that the momentum exchange is space-like for the Coulomb interaction, and the trace H^k of the second fundamental form would be a space-like vector. It seems that one must define the mass scale as H = (-H^k H_k)^(1/2) to get a real quantity.

    5. This picture is in line with the view that also the bosons usually regarded as massless possess a small mass serving as an IR cutoff. This vision is inspired by zero energy ontology and twistorial considerations. The prediction that Higgs is completely eaten by gauge bosons in the massivation is perhaps testable at LHC already during the year 2011.

Remark: In the MOND theory of dark matter a critical value of the acceleration is introduced. I do not personally believe in MOND, and TGD explains the galactic rotation curves without any modification of Newtonian dynamics, in terms of dark matter assignable to cosmic strings containing galaxies around them like pearls in a necklace. In the TGD framework the critical acceleration would be the acceleration below which the gravitational acceleration caused by the dark matter associated with the cosmic strings traversing the galactic plane orthogonally, behaving as 1/ρ, overcomes the acceleration caused by the galactic matter, behaving as 1/ρ². Could this critical acceleration correspond to a critical temperature T_gr, and could a critical value of H perhaps characterize also a critical magnitude for the deformation from a minimal surface extremal? The critical acceleration in Milgrom's model is about 1.2×10⁻¹⁰ m/s² and corresponds to a time scale of order 10¹¹ years (see the check below), within an order of magnitude of the age of the Universe.
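The time scale quoted in the remark is just c/a; a one-line check assuming Milgrom's value for the critical acceleration:

    # Time scale corresponding to Milgrom's critical acceleration
    c, a0 = 2.998e8, 1.2e-10              # m/s, m/s^2
    seconds_per_year = 3.156e7
    print(c / a0 / seconds_per_year)      # ~8e10 years, of order 1e11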

The formula contains the Planck constant, and the obvious question of an inhabitant of the TGD Universe is whether it can be identified with the ordinary Planck constant or with the effective Planck constant coming as an integer multiple of it (see this).

  1. For the ordinary value of hbar the gravitational Unruh temperature is extremely small. To make things more concrete, one can express the Unruh temperature in the gravitational case in terms of the Schwarzschild radius r_S = 2GM at the Newtonian limit. This gives

    T_gr = (hbar/4π r_S) [(M+m)/M] (r_S/r)² .

    Even at the Schwarzschild radius the temperature corresponds to a Compton length of order 4π r_S for m << M.

  2. Suppose that the Planck constant is the gravitational Planck constant hbar_gr = GMm/v₀, where v₀ ≈ 2⁻¹¹ (in units of c) holds true for the inner planets of the solar system (see this). This would give

    T_gr = (m/8π v₀) [(M+m)/M] (r_S/r)² .

    The value is gigantic, so one must assume that the temperature parameter corresponds to the minimum value of the Planck constant.
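To get a feeling for the orders of magnitude, here is a rough numerical sketch for the Sun-Earth system. The Unruh formula is restored to SI units as T = hbar a/(2π c k_B) (a standard rewriting, not anything specific to TGD), and v₀ = 2⁻¹¹ in units of c is taken from the text above:

    import math

    # Gravitational Unruh temperature for the Sun-Earth system, in Kelvin
    hbar, c, kB = 1.055e-34, 2.998e8, 1.381e-23
    G, M, m, r = 6.674e-11, 1.989e30, 5.972e24, 1.496e11   # Sun, Earth, 1 AU

    a = G * (M + m) / r**2                   # Newtonian acceleration, ~5.9e-3 m/s^2
    T = hbar * a / (2 * math.pi * c * kB)    # ~2.4e-23 K: extremely small
    print(T)

    # With the gravitational Planck constant hbar_gr = G*M*m/v0, v0 = c/2**11:
    hbar_gr = G * M * m / (c / 2**11)
    print(hbar_gr / hbar)                    # ~5e73, hence the gigantic temperature
    print(T * hbar_gr / hbar)                # ~1e51 K for the scaled Planck constant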

Gravitonic entropy

A good guess for the value of the gravitational entropy (the gravitonic entropy associated with the flux tube mediating the gravitational interaction) comes from the observation that it should be proportional to the flux tube length. The relationship dE = TdS suggests S ∝ φ_gr/T_gr as a first guess at the Newtonian limit. A better guess would be

S_gr = -V_gr/T_gr = [(M+m)/M] (m r/hbar) ,

where the replacement M → M+m appearing in the Newtonian equations of motion for the reduced mass has been performed in order to obtain symmetry with respect to the exchange of the masses.

The entropy would depend on the interaction mediated by the space-time sheet in question, which suggests that the generalization is

S = -V(r)/T_gr .

Here V(r) is the potential energy of the interaction. The sign of S depends on whether the interaction is attractive or repulsive and also on the sign of the temperature. For a repulsive interaction the entropy would be negative, so that the state would be thermodynamically unstable in ordinary thermodynamics.

The integration of dE = TdS in the case of the Coulomb potential gives E = V(r) - V(0) for both options. If the charge density near the origin is constant, V(r) is proportional to r² in this region, implying V(0) = 0, so that one obtains the Coulombic interaction energy E = V(r). Hence the thermodynamical interpretation makes sense formally.
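That the integration of dE = TdS gives back the potential energy can be checked symbolically in the far region, where V ∝ 1/r and T ∝ 1/r²; the constants k and q below are illustrative placeholders:

    import sympy as sp

    # Check that T*dS = dV when S = -V/T with V ~ 1/r and T ~ 1/r^2
    r, k, q = sp.symbols('r k q', positive=True)
    T = k / r**2            # temperature ~ 1/r^2
    V = q / r               # potential energy ~ 1/r
    S = -V / T              # the proposed entropy; note that S ~ r, the flux tube length

    print(sp.simplify(T * sp.diff(S, r) - sp.diff(V, r)))   # prints 0, so E(r) = V(r) + const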

The challenge is to generalize the formula for the entropy in a Lorentz invariant and general coordinate invariant manner. Basically the challenge is to express the interaction energy in this manner. Entropy characterizes the entire flux tube and is therefore a non-local quantity. This justifies the use of the interaction energy in the formula. In principle the dynamics defined by the extremals of Kähler action predicts the dependence of the interaction energy on the Minkowskian length of the flux tube, which is well-defined in the TGD Universe. Entropy should also be a scalar. This is achieved since the rest frame is fixed uniquely by the time direction defined by the time-like line connecting the tips of the CD: the interaction energy in the rest frame of the CD defines a scalar. Note that the sign of the entropy correlates with the sign of the interaction energy, so that the repulsive situation would be thermodynamically unstable; this suggests that matter antimatter asymmetry could relate to thermal instability.

See the article TGD inspired vision about entropic gravity. For background see the chapter TGD and GRT of "Physics in Many-Sheeted Space-Time".