Wednesday, August 29, 2007

To say that it is good to be good is not enough

Tommaso Dorigo wrote a nice posting about Lisa Randall's talk on black holes at the LHC. Tommaso however made a mistake: he went on to describe in a positive tone Lisa Randall as an attractive human being and even said something about her clothing. I enjoyed Tommaso's critical posting and congratulate him for the courage and ability to say something about the more human side of the life of scientists. We are after all women and men and see each other as women and men, and I see nothing bad in saying this aloud.

Clifford Johnson, one of the many hypocrites in the blog community, did not like this and portrayed Tommaso as a brutal macho. I disagree. Tommaso Dorigo is one of the physicists who has had the courage to say something even in the blogs of people with no academic status (like mine). He has posted comments in Kea's blog, and as Kea expressed it, is one of the very few male physicists who really listens to a woman physicist. After a couple of posts Clifford Johnson censored Kea's comments, which does not quite conform with the image he wants to create of himself.

Some ethics is good for your career, but practice it only publicly

When I was young I believed that the world would become better if sufficiently many people said aloud that it is good to be good. It took some time to realize that in this world we have to live in, an intelligent opportunist finds talk about noble ethical principles a perfect tool for gaining success and power. I also learned that these people unashamedly break these rules when no one is watching.

These fellows can say nice words about the position of women in science, about freedom of thought, about the need for forums for new ideas and original points of view. If there is something good in these blogs, it is that they reveal the darker side of reality quite soon. If you are a woman and want to post real comments to these blogs (not just applause), it is better to have an established academic position and a name. And irrespective of your gender, if you have something genuinely new to say and try to do it without these social qualifications, your comment is doomed to be off topic irrespective of how good your argument is. You also soon realize that this is about who is the king of the hill.

Crackpot hunting is good for your scientific status

There is also a sub-species of physics bloggers who have discovered crackpot hunting as an easy way to gain scientific respect and a large blood-thirsty audience. Select some really bad buzz word: UFOs, cold fusion, water memory (homeopathy is even better), consciousness (you can add the attribute "quantum" to achieve a more powerful effect on the mediocre), paranormal phenomena, and so on. Excellent victims are also those thinking that we might not understand everything about the second law, that physics might not be completely well understood above the intermediate boson length scale, or that biology might not be mere complexity and nonlinearity. There are many other options on the menu. If you are satisfied with a more restricted audience consisting of string model fans, you can select as a victim anyone who dares to think that M-theory is not the only possible theory of everything and dares to propose an alternative.

The basic point is that there is no need to know anything about what is really done in these pariah fields of science, and - as I have seen - the ignorance of these crackpot hunters is indeed colossal. Why should the young and busy opportunist bother to take the trouble of learning what is really involved, when a nasty comment about the mental health of scientists on the wrong side of the fence gains stormy applause from the crowd of idiots in the audience, and colleagues begin to see the aura of profound scientific realism around your head? You can also safely forget all the basic rules of science: just spout out the bad word. If someone disagrees with good justifications, you can censor the carefully constructed argument out of the discussion, and the joy of misused power comes as an additional bonus.

What is behind those bad words?

During these years I have learned that these bad words hide an entire community of serious researchers who pay a high price for their intellectual honesty and know that they will not receive during their lifetime the respect they deserve. I respect these people much more than the self-satisfied conformist career builders from Harvard who have never been forced to make personal sacrifices for the possibility of continuing their work.

The following gives some concrete content to my claim. I spent a few days in Röros, Norway: The 7th European SSE Meeting (Society for Scientific Exploration) was the title of the little conference devoted to various anomalies.

  • I heard excellent lectures about anomalous luminous phenomena ("UFOs" and little green men for the political purposes of crackpotters).

  • There were several talks related to biology: a very nice lecture about research on electromagnetic communications between biomolecules by Yolene Thomas, the collaborator of Jacques Benveniste (the crackpotter can use "homeopathy" here). There was a wonderful old-fashioned lecture by Antonio Giudetta (think of it: no PowerPoint, no slides, no written text, just the speaker) about the history of theories of biological evolution and about empirical facts demonstrating the failure of the currently popular view. I also heard nice talks by the young scientists Antonella Vannini and Ulisse di Corpo related to the possibility that we might not understand everything about time's arrow in living matter (the notions of syntropy and entropy, signals propagating backwards in time).

  • There were several interesting talks about remote mental interactions by qualified researchers, for instance the talk by Bob Jahn, the leader of the PEAR project, about machine-human interactions (here crackpotters - Peter Woit amongst them - earned easy scientific respect by letting it be understood that the project had been stopped as a scientific scandal, and of course received applause from the empty heads in the audience).

  • There was a talk about experimental findings related to the Allais effect by Dimitri Olenice. This effect is an anomaly appearing during solar eclipses, possibly of gravitational origin, discovered originally by Allais, a Nobel laureate in economics. There is no good bad word available here, but the well-informed stringy reader is of course expected to conclude that Allais and all his followers are crackpots.

    One might think that an anomaly like the Allais effect would be especially interesting for those who seriously try to understand quantum gravitation. Unfortunately not. It is better to develop one's career by building crazy 11-dimensional ad hoc constructions as proposals for fundamental physics: take a high enough number of branes and put them in various relative positions, and in some intersection something resembling our low energy phenomenology might pop up. The completely ad hoc character of these constructions brings to mind ancient myths about how the universe was created. These myths were of course not meant to be scientific theories but poetic visions: no one argued seriously that the world was created from the egg of some bird. These weird constructs of the lost generation of theoretical physicists are however meant to be taken completely seriously; they completely lack the poetic aspect, and they are mathematically and physically ugly to the point of inducing a vomiting reflex.

To sum up, I am convinced that without taking these anomalies seriously - all of which demonstrate in their own way the failure of the reductionistic dogma - real progress in physics and biology is not possible, and I am myself working in an attempt to understand these anomalies within a wider view of physics and quantum theory. It is really sad that there are so many easy-riders in the theoretical community and that it is so easy to gain scientific respectability by using these cheap tricks. Perhaps it is time to re-establish the rules of good old science also in blogs: scientific arguments must be based on real content and justifications, and playing with negatively emotional buzz words should be taken as unprofessional behavior.

Monday, August 27, 2007

A little crazy speculation about knots and infinite primes

Kea wrote about some mathematical results related to knots.
  1. Knots are very algebraic objects. The product of knots is defined in terms of the connected sum. The connected sum quite generally defines a commutative and associative product (or sum, as you wish), and one can decompose any knot into prime knots.

  2. Knots can be mapped to Jones polynomials J(K) (for instance - there are many other knot polynomials, and there are very general mathematical results about them which go over my head), and the product of knots is mapped to the product of the corresponding polynomials. The polynomials assignable to prime knots should be prime in a well-defined sense, and one can indeed define the notion of primeness for the polynomials J(K): a prime polynomial does not factor into a product of polynomials of lower degree in the extension of rationals considered.

This raises the idea that one could define the notion of a zeta function for knots. It would simply be the product of factors 1/(1-J(K)^(-s)), where K runs over prime knots. The new (to me) but very natural element in the definition would be that the ordinary prime is replaced with a polynomial prime.
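As a toy illustration of the idea (my own sketch: the Jones polynomials of the trefoil and figure-eight knots are standard, but the evaluation point t = 2 and the truncation to two knots are arbitrary choices), one can test the primeness of the polynomials in the above sense and form a truncated version of the proposed Euler product:

    # Toy sketch of a truncated "knot zeta": an Euler product over prime
    # knots with each knot represented by its Jones polynomial.
    import sympy as sp

    t = sp.symbols('t')

    jones = {
        'trefoil':  -t**-4 + t**-3 + t**-1,        # standard Jones polynomial
        'figure8':   t**-2 - t**-1 + 1 - t + t**2,
    }

    def is_prime_poly(p, var=t):
        """Primeness in the sense above: the (numerator) polynomial does
        not factor into lower-degree polynomials over the rationals."""
        numer = sp.together(p).as_numer_denom()[0]
        factors = sp.factor_list(sp.expand(numer), var)[1]
        return len(factors) == 1 and factors[0][1] == 1

    def knot_zeta(s_val, t_val=2.0):
        """Truncated product of factors 1/(1 - J(K)^(-s))."""
        prod = 1.0
        for J in jones.values():
            val = float(J.subs(t, t_val))
            prod *= 1.0 / (1.0 - val**(-s_val))
        return prod

    print([(name, is_prime_poly(J)) for name, J in jones.items()])
    print(knot_zeta(3.0))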

1. Do knots correspond to the hierarchy of infinite primes?

I have been pondering the problem of how to define the counterpart of zeta for infinite primes. The idea of replacing primes with prime polynomials would resolve the problem, since infinite primes can be mapped to polynomials. For some reason this idea however did not occur to me.

The correspondence of both knots and infinite primes with polynomials inspires the question whether d=1-dimensional prime knots might be in correspondence (not necessarily 1-1) with infinite primes. Rational or Gaussian rational infinite primes would be naturally selected: these are also selected by physical considerations as representatives of physical states, although quaternionic and octonionic variants of infinite primes can be considered.

If so, knots could correspond to a subset of states of a super-symmetric arithmetic quantum field theory with bosonic single particle states and fermionic states labelled by quaternionic primes.

  1. The free Fock states of this QFT are mapped to first order polynomials, and irreducible polynomials of higher degree have an interpretation as bound states, so that non-decomposability into a product in a given extension of rationals would correspond physically to non-decomposability into a many-particle state. What is fascinating is that the apparently free arithmetic QFT allows a huge number of bound states.

  2. Infinite primes form an infinite hierarchy which corresponds to an infinite hierarchy of second quantizations, meaning that the n-particle states of the previous level define the single particle states of the next level. At the space-time level this hierarchy corresponds to a hierarchy defined by the space-time sheets of the topological condensate: a space-time sheet containing a galaxy can behave like an elementary particle at the next level of the hierarchy.

  3. Could this hierarchy have some counterpart for knots? In one realization as polynomials, the polynomials corresponding to the infinite prime hierarchy have an increasing number of variables. Hence the first thing that comes to my uneducated mind is the hierarchy defined by the increasing dimension d of the knot. All knots of dimension d would in some sense serve as building bricks for prime knots of dimension d+1. A canonical construction recipe for knots of higher dimensions should exist.

  4. One could also wonder whether the replacement of spherical topologies for the d-dimensional knot and the (d+2)-dimensional imbedding space with more general topologies could correspond to algebraic extensions at various levels of the hierarchy, bringing into the game more general infinite primes. The units of these extensions would correspond to knots which involve the global topology in an essential manner (say knotted non-contractible circles in the 3-torus). Since the knots defining the product would in general have a topology different from the spherical one, the product of knots should be replaced with its category theoretical generalization, making higher-dimensional knots a groupoid in which spherical knots would act diagonally, leaving the topology of the knot invariant. The assignment of d-knots to the notions of n-category, n-groupoid, etc., by putting d=n is a highly suggestive idea. This is indeed natural since these structures are an outcome of a repeated abstraction process: statements about statements about ...

  5. The lowest level, d=1, D=3, would be the fundamental one, and the rest would be somewhat boring repeated second quantization;-). This is why dimension D=3 (number theoretic braids at light-like 3-surfaces!) would be fundamental for physics.

2. Further speculations

Some further comments about the proposed structure of all structures are in order.

  1. The possibility that algebraic extensions of infinite primes could allow one to describe the refinements related to the varying topologies of the knot and the imbedding space would mean a deep connection between number theory, manifold topology, sub-manifold topology, and n-category theory.

  2. n-structures would have a very direct correspondence with the physics of the TGD Universe if one assumes that repeated second quantization makes sense and corresponds to the hierarchical structure of many-sheeted space-time, where even a galaxy corresponds to an elementary fermion or boson at some level of the hierarchy. This however requires that the unions of light-like 3-surfaces and of their sub-manifolds at different levels of the topological condensate should be able to represent higher-dimensional manifolds physically, albeit not in the standard geometric sense, since the imbedding space dimension is just 8. This might be possible.

    1. As far as physics is concerned, the disjoint union of submanifolds of dimensions d_1 and d_2 behaves like a (d_1+d_2)-dimensional Cartesian product of the corresponding manifolds. This is of course used in a standard manner in wave mechanics (the configuration space of an n-particle system is identified as E^(3n)/S_n, with the division coming from statistics).

    2. If the surfaces have intersection points, one has a union of a Cartesian product with punctures (the intersection points) and of a lower-dimensional manifold corresponding to the intersection points.

    3. Note also that by posing symmetries on classical fields one can effectively obtain, from a given n-manifold, manifolds (and orbifolds) with quotient topologies.

    The megalomanic conjecture is that this kind of physical representation of d-knots and their imbedding spaces is possible using many-sheeted space-time. Perhaps even the entire magnificent mathematics of n-manifolds and their sub-manifolds might have a physical representation in terms of sub-manifolds of 8-D M^4×CP_2 with dimension not higher than the space-time dimension d=4. Could a crazy TOE builder dream of anything more out on the edge;-)!

3. The idea survives the most obvious killer test

All this looks nice, and the question is how to give a death blow to all this reckless speculation. Torus knots are an excellent candidate for performing this unpleasant task, but the hypothesis survives!

  1. Torus knots are labelled by a pair of integers (m,n), which are relatively prime. These are prime knots. Torus knots (m,n) and (r,s) with m/n = r/s are isotopic, so that any torus knot is isotopic with a knot for which m and n have no common prime power factors.

  2. The simplest infinite primes correspond to free Fock states of the supersymmetric arithmetic QFT and are labelled by pairs (m,n) of integers such that m and n do not have any common prime factors. Thus torus knots would correspond to free Fock states! Note that a prime power p^(k_p) appearing in m corresponds to a k_p-boson state with boson "momentum" p, and the corresponding power in n corresponds to a fermion state plus k_p-1 bosons.

  3. A further property of torus knots is that (m,n) and (n,m) are isotopic: this would correspond at the level of infinite primes to the symmetry mX+n → nX+m, X the product of all finite primes. Thus infinite primes are in a 2→1 correspondence with torus knots, and the hypothesis survives also this murder attempt.
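As a sanity check of the proposed dictionary, the following toy script (the encoding of the simplest infinite primes as coprime pairs (m,n) follows the text; everything else, including the names, is my own illustration) enumerates small coprime pairs and prints the two infinite primes mX+n and nX+m mapping to the same torus knot:

    # Torus knots (m,n), gcd(m,n)=1, versus the simplest infinite primes
    # m*X + n (X = formal product of all finite primes). The isotopy
    # (m,n) ~ (n,m) pairs up two infinite primes per knot.
    from math import gcd

    def torus_knot_classes(limit):
        """Unordered coprime pairs {m,n} with 2 <= m,n <= limit."""
        return {frozenset((m, n))
                for m in range(2, limit + 1)
                for n in range(2, limit + 1)
                if gcd(m, n) == 1}

    for pair in sorted(torus_knot_classes(5), key=sorted):
        m, n = sorted(pair)
        print(f"knot ({m},{n}) <-> infinite primes {m}*X+{n} and {n}*X+{m}")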

4. How to realize the representation of the braid hierarchy in many-sheeted space-time?

One can consider a concrete construction of higher-dimensional knots and braids in terms of the many-sheeted space-time concept.

  1. The basic observation is that ordinary knots can be constructed as closed braids, so that everything reduces to the construction of braids. In particular, any torus knot labelled by (m,n) can be made from a braid with m strands: the braid word in question is (σ_1...σ_(m-1))^n, or by the (m,n)=(n,m) equivalence, from n strands. The construction of infinite primes suggests that also the notion of d-braid makes sense as a collection of d-knots in (d+2)-space which move and define a (d+1)-braid in (d+3)-space (the additional dimension being defined by the time coordinate). A small script illustrating the braid word is given after this list.

  2. The notion of topological condensate should allow a concrete construction of the pairs of d- and (d+2)-dimensional manifolds. The 2-D character of the fundamental objects (partons) might indeed make this possible. Also the notion of length scale cutoff, fundamental for the topological condensate, is a crucial element of the proposed construction.
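To make the braid word remark concrete, here is a minimal sketch (the function name is mine) writing out the braid word (σ_1...σ_(m-1))^n whose closure is the (m,n) torus knot:

    from math import gcd

    def torus_braid_word(m, n):
        """Braid word (s1 ... s_{m-1})^n closing to the (m,n) torus knot."""
        if gcd(m, n) != 1:
            raise ValueError("(m,n) must be coprime for a torus knot")
        block = [f"s{i}" for i in range(1, m)]   # sigma_1 ... sigma_{m-1}
        return block * n

    # trefoil = (2,3) torus knot: word s1 s1 s1 on two strands
    print(torus_braid_word(2, 3))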

The concrete construction would proceed as follows.

  1. Consider first the lowest non-trivial level in the hierarchy. One has a collection of 3-D lightlike 3-surfaces X^3_i representing ordinary braids. The challenge is to assign to them a 5-D imbedding space in a natural manner. Where do the additional two dimensions come from? The obvious answer is that the new dimensions correspond to the 2-dimensional partonic 2-surface X^2 assignable to the 3-D lightlike surface at which these surfaces have suffered topological condensation. The geometric picture is that the X^3_i grow like plants from the ground defined by X^2 at the 7-dimensional δM^4_+×CP_2.

  2. The degrees of freedom of X^2 should be combined with the degrees of freedom of the X^3_i to form a 5-dimensional space X^5. The natural idea is that one first forms the Cartesian products X^5_i = X^3_i×X^2 and then the desired 5-manifold X^5 as their union by posing suitable additional conditions. Braiding means a translational motion of X^3_i inside X^2, defining the braid as the orbit in X^5. It can happen that X^3_i and X^3_j intersect in this process. At these points of the union one must obviously pose some additional conditions.

    Finite (p-adic) length scale resolution suggests that all points of the union at which an intersection between two or more light-like 3-surfaces occurs must be regarded as identical. In general the intersections would occur in a 2-D region of X^2, so that the gluing would take place along 5-D regions of the X^5_i, and there are therefore good hopes that the resulting 5-D space is indeed a manifold. The imbedding of the surfaces X^3_i into X^5 would define the braiding.

  3. At the next level one would consider the 5-D structures obtained in this manner and allow them to topologically condense at larger 2-D partonic surfaces in a similar manner. The outcome would be a hierarchy consisting of (2n+1)-knots in (2n+3)-dimensional spaces. A similar construction applied to the partonic surfaces gives a hierarchy of 2n-knots in (2n+2)-dimensional spaces.

  4. The notion of length scale cutoff is an essential element of the many-sheeted space-time concept. In the recent context it suggests that d-knots, represented as space-time sheets topologically condensed at the larger space-time sheet representing the (d+2)-dimensional imbedding space, could also be regarded as effectively point-like objects (0-knots), and that their d-knottiness and internal topology could be characterized in terms of additional quantum numbers. If so, d-knots could also be regarded as ordinary colored braids, and the construction at higher levels would indeed be very much analogous to that for infinite primes.

For details see the chapter TGD as a Generalized Number Theory III: Infinite Primes of "TGD as a Generalized Number Theory".

Sunday, August 26, 2007

Could one demonstrate the existence of large Planck constant photons using an ordinary camera or even bare eyes?

If ordinary light sources also generate dark photons with the same energy but with scaled up wavelength, this might have effects detectable with a camera and even with bare eyes. In the following I consider, in a rather light-hearted and speculative spirit, two possible effects of this kind appearing both in visual perception and in photos. For crackpotters possibly present in the audience I want to make clear that I love to play with ideas to see whether they work or not, and that I am ready to accept some convincing mundane explanation of these effects; I would be happy to hear about explanations of this kind. I was not able to find any such explanation from Wikipedia using words like camera, digital camera, lens, aberrations, ...

Why does light from an intense light source seem to decompose into rays?

If one also assumes that ordinary radiation fields decompose in the TGD Universe into topological light rays ("massless extremals", MEs), even stronger predictions follow. If the Planck constant equals hbar = q×hbar_0, q = n_a/n_b, MEs should possess Z_(n_a) as an exact discrete symmetry group, acting as rotations along the direction of propagation on the induced gauge fields inside the ME.

The structure of MEs should somehow realize this symmetry, and one possibility is that MEs have a wheel-like structure decomposing into radial spokes with angular distance Δφ = 2π/n_a related by the symmetries in question. This brings strongly to mind a phenomenon which everyone can observe at any time: the light from a bright source decomposes into radial rays, as if one were seeing the profile of the light rays emitted in a plane orthogonal to the line connecting the eye and the light source. The effect is especially strong if the eyes are moved.

Could this apparent decomposition into light rays reflect directly the structure of dark MEs, and could one deduce the value of n_a by just counting the number of rays in a camera picture, where the phenomenon also turned out to be visible? Note that the size of these wheel-like MEs would be macroscopic, and diffractive effects do not seem to be involved. The simplest assumption is that most of the photons giving rise to the wheel-like appearance are transformed to ordinary photons before their detection.

The discussions about this led to a little experimentation with a camera at the summer cottage of my friend Samppa Pentikäinen, quite a magician in technical affairs. When I mentioned the decomposition of light from an intense light source into rays at the level of the visual percept and wondered whether the same occurs also in a camera, Samppa decided to take photos with a digital camera directed at the Sun. The effect occurred also in this case and might correspond to a decomposition into MEs with various values of n_a but with the same quantization axis, so that the effect is not smoothed out.

What was interesting was the presence of some stronger, almost vertical "rays" located symmetrically near the vertical axis of the camera. The shutter mechanism determining the exposure time is based on the opening of a first shutter followed by the closing of a second shutter after the exposure time, so that every point of the sensor receives input for an equally long time. The area of the region determining the input is bounded by a vertical line. If macroscopic MEs are involved, the contribution of vertical rays is either nothing or all, unlike that of other rays, and this might somehow explain why their contribution is enhanced.

Addition: I learned from Samppa that the shutter mechanism is unnecessary in digital cameras, since what matters is the time for the reset of the sensors. Something in the geometry of the camera or in the reset mechanism must select the vertical direction in a preferred position. For instance, the outer "aperture" of the camera had the geometry of a flattened square.

Anomalous diffraction of dark photons

A second prediction is the possibility of diffractive effects in length scales where they should not occur. A good example is the diffraction of light coming from a small aperture of radius d. The diffraction pattern is determined by the Bessel function

J_1(x), x = kd×sin(θ), k = 2π/λ.

There is a strong light spot in the center and light rings around it whose radii increase in size as the distance of the screen from the aperture increases. Dark rings correspond to the zeros of J_1(x) at x = x_n, and the following scaling law for the nodes holds true:

sin(θ_n) = x_n×λ/(2πd).

For very small wavelengths the central spot is almost pointlike and contains most of the light intensity.

If photons of visible light correspond to a large Planck constant hbar = q×hbar_0 and are transformed to ordinary photons in the detector (say camera film or eye), their wavelength is scaled by q and one has

sin(θ_n) → q×sin(θ_n).

The size of the diffraction pattern for visible light is scaled up by q.
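A minimal numerical sketch (my own; the aperture radius d is an assumed value, and scipy's jn_zeros supplies the Bessel zeros x_n) showing how the dark-ring radii scale with q:

    # Dark-ring radii on the sensor for ordinary photons (q=1) and for
    # dark photons with wavelength scaled up by q.
    import numpy as np
    from scipy.special import jn_zeros

    lam = 1.0e-6   # wavelength in m (micron, as in the text)
    d   = 1.0e-3   # aperture radius in m (assumed for illustration)
    r   = 0.02     # aperture-to-sensor distance in m (2 cm, as in the text)

    x_n = jn_zeros(1, 3)                  # first three zeros of J_1
    for q in (1, 2**10, 2**11):           # q = 1 is the ordinary photon
        sin_theta = q * x_n * lam / (2 * np.pi * d)
        print(f"q = {q:5d}: ring radii = {np.round(r * sin_theta * 1e3, 3)} mm")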

This effect might make it possible to detect dark photons with the energies of visible photons, possibly present in ordinary light.

  1. What is needed is an intense light source, and the Sun is an excellent candidate in this respect. A dark photon beam is also needed, and n dark photons with a given visible wavelength λ could result when a dark photon with hbar = n×q×hbar_0 decays to n dark photons with the same wavelength but smaller Planck constant hbar = q×hbar_0. If this beam enters the camera or eye, one has a beam of n dark photons which forms a diffraction pattern producing the camera picture in the decoherence to ordinary photons.

  2. In the case of an aperture with the geometry of a circular hole, the first dark ring for ordinary visible photons would be at sin(θ) ≈ (π/36)×λ/d. For a distance of r = 2 cm between the sensor plane ("film") and the effective circular hole this would mean a radius of R ≈ r×sin(θ) ≈ 1.7 micrometers for micron wavelength. The actual size of the spots is of order R ≈ 1 mm, so that the value of q would be around 1000: q = 2^10 and q = 2^11 belong to the favored values for q.

  3. One can also imagine an alternative situation. If the photons responsible for the spot arrive along a single ME whose transversal thickness R is smaller than the radius of the hole, say of the order of the wavelength, the ME itself effectively defines the hole with radius R, and the value of sin(θ_n) does not depend on the value of d for d>R. Even ordinary photons arriving along MEs of this kind could give rise to an anomalous diffraction pattern. Note that the transversal thickness of the ME need not be fixed, however; it seems that the MEs would now have to be macroscopic.

  4. A similar effect results as one looks at an intense light source: bright spots appear in the visual field as one closes the eyes. If there is some more mundane explanation (I do not doubt this!), it must apply in both cases and explain also why the spots have a precisely defined color rather than being white.

  5. The only mention of diffractive aberration effects I could find concerns colored rings around, say, disk-like objects, analogous to the colors around the shadow of a disk-like object. The radii of these diffraction rings scale like the wavelength and the distance from the object.

  6. Wikipedia contains an article from which one learns that the effect in question is known as lens flare. The article states that flares typically manifest as several starbursts, circles, and rings across the picture and result from internal reflection and scattering from material inhomogeneities in the lens (such as multiple surfaces). The shape of the flares also depends on the shape of the aperture. These features conform at least qualitatively with what one would expect from diffraction if the Planck constant is large enough for photons with the energy of a visible photon.

    A second article defines flares in a more restrictive manner: lens flares result when non-image-forming light enters the lens and subsequently hits the camera's film or digital sensor, and they typically produce a polygonal shape with sides which depend on the shape of the lens diaphragm. The identification as a flare applies also to the apparent decomposition into rays, and this dependence indeed fits with the observations.

Samppa's experimentation with the digital camera demonstrated the appearance of colored spots in the pictures. If I have understood correctly, the sensors defining the pixels of the picture are in the focal plane, and diffraction at large Planck constant might explain the phenomenon. Since I did not have the diffractive mechanism in mind at the time, I did not check whether fainter colored rings might surround the bright spot.

  1. In any case, the readily testable prediction is that zooming in on a bright light source by reducing the size of the aperture should increase the size and number of the colored spots. As a matter of fact, experimentation demonstrated that focusing brought in a large number of these spots, but we did not check whether their size increased.

  2. The standard explanation predicts that the bright spots are present also with weaker illumination, but with an intensity so weak that they are not detected by the eye. The positions of the spots should also depend only on the illumination and the camera. The explanation in terms of beams of large Planck constant photons predicts this if the flux of dark photons from any light source is constant.

For background see the chapter Dark Nuclear Physics and Condensed Matter of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

Saturday, August 25, 2007

MAGIC gamma ray anomaly as evidence for many-sheeted space-time?

Kea wrote about two recent anomalies in cosmology. The first comment was about the cosmic microwave background anomaly explained in terms of a large void, of size of order 10^8 light years, existing much earlier than it should in standard cosmology.

TGD explains this in terms of the astrophysical quantum coherence of dark matter, predicting that the cosmic expansion of quantum coherent dark matter, in particular that in large voids, occurs via discrete quantum leaps increasing the value of the gravitational Planck constant and thus the quantum size of the system. The smooth expansion predicted by classical cosmology would be obtained only in the average sense. I commented on this in a previous posting.

Kea also mentioned the recently found gamma ray anomaly:

    The MAGIC gamma-ray telescope team has just released an eye-popping preprint (following up earlier work) describing a search for an observational hint of quantum gravity. What they've seen is that higher-energy gamma rays from an extragalactic flare arrive later than lower-energy ones. Is this because they travel through space a little bit slower, contrary to one of the postulates underlying Einstein's special theory of relativity -- namely, that radiation travels through the vacuum at the same speed no matter what? ...

    Either the high-energy gammas were released later (because of how they were generated) or they propagated more slowly. The team ruled out the most obvious conventional effect, but will have to do more to prove that new physics is at work -- this is one of those "extraordinary claims require extraordinary evidence" situations. ...

1. TGD based explanation at qualitative level

One of the oldest predictions of many-sheeted space-time is that the time for photons to propagate from point A to point B along a given space-time sheet depends on the space-time sheet, because the photon travels along a lightlike geodesic of the space-time sheet rather than a lightlike geodesic of the imbedding space, so that the travel time is in general longer than that given by the maximal signal velocity.

Many-sheetedness thus predicts a spectrum of Hubble constants, and the gamma ray anomaly might be a demonstration of many-sheetedness. The spectroscopy of arrival times would give information about how many sheets are involved.

Before one can accept this explanation, one must have a good argument for why the space-time sheet along which the gamma rays travel depends on their energy, and why higher energy gamma rays would move along a space-time sheet along which the distance is longer.

  1. A shorter wavelength means that the wave oscillates faster. The space-time sheet should reflect in its geometry the matter present at it. Could this mean that the space-time sheet is more "wiggly" for higher energy gamma rays and therefore the distance travelled is longer? A natural TGD inspired guess is that the p-adic length scale assignable to the gamma ray energy defines the p-adic length scale assignable to the space-time sheet of the gamma ray connecting the two systems, so that the effective velocities of propagation would correspond to p-adic length scales coming as half octaves. Note that there is no breaking of Lorentz invariance, since the gamma ray connects the two systems, and the rest system of the receiver defines a unique coordinate system in which the energy of the gamma ray has a Lorentz invariant physical meaning.

  2. One can also invent an objection. In TGD the classical radiation field decomposes into topological light rays ("massless extremals", MEs), which could quite well be characterized by a large Planck constant, in which case the decay to ordinary photons would take place at the receiving end via decoherence (the Allais effect discussed in a previous posting is an application of this picture in the case of the gravitational interaction). Gamma rays could propagate very much like a laser beam along the ME. For the simplest MEs the velocity of propagation corresponds to the maximal signal velocity, and there would be no variation in the propagation time. One can imagine two ways to circumvent the counter argument.
    1. Also topological light rays for which the light-like geodesics are replaced with light-like curves of M^4 are highly suggestive as solutions of the field equations. For these MEs the distance travelled would in general be longer than for the simplest MEs.
    2. The gluing of the ME to the background space-time by wormhole contacts (actually a representation for photons!) could force the classical signal to propagate along a zigzag curve formed by simple MEs with maximal signal velocity. The length of each piece would be of the order of the p-adic length scale. The zigzag character of the path of arrival would increase the distance between source and receiver.

2. Quantitative argument

A quantitative estimate runs as follows.

  1. The source in question is the blazar Markarian 501 with redshift z = .034. Gamma flares of duration about 2 minutes were observed, with energies in the bands .25-.6 TeV and 1.2-10 TeV. The gamma rays in the higher energy band were near its upper end and were delayed by about Δτ = 4 min with respect to those in the lower band. Using the Hubble law v = Hd with H = 71 km/s/Mpc, one obtains the estimate Δτ/τ = 1.6×10^(-14).

  2. A simple model for the induced metric of the space-time sheet along which the gamma rays propagate is as a flat metric associated with the flat imbedding Φ = ωt, where Φ is the angle coordinate of the geodesic circle of CP_2. The time component of the metric is given by

    g_tt = 1 - R^2ω^2.

    ω appears as a parameter in the model. Also the imbeddings of the Reissner-Nordström and Schwarzschild metrics contain a frequency as a free parameter, and space-time sheets are quite generally parametrized by frequencies and momentum or angular momentum like vacuum quantum numbers.

  3. ω is assumed to be expressible in terms of the p-adic prime characterizing the space-time sheet. The parametrization assumed in the following is

    ω^2R^2 = K×p^(-r).

    It turns out that r = 1/2 is the only option consistent with the p-adic length scale hypothesis. The naive expectation would have been r = 1. The result suggests the formula

    ω^2 = m_0×m_p with m_0 = K/R,

    so that ω would be the geometric mean of a slowly varying large mass scale m_0 and the p-adic mass scale m_p.

    The explanation of the p-adic length scale hypothesis, leading also to a generalization of the Hawking-Bekenstein formula, assumes that for the strong form of the p-adic length scale hypothesis, stating p ≈ 2^k with k prime, there are two p-adic length scales involved with a given elementary particle: L_p characterizes the particle's Compton length and L_k the size of the wormhole contact or throat representing the elementary particle. The guess is that ω is proportional to the geometric mean of these two p-adic length scales:

    ω^2R^2 = x/[2^(k/2)×k^(1/2)].

  4. A relatively weak form of the p-adic length scale hypothesis would be p ≈ 2^k, k an odd integer. M_127 corresponds to the mass scale m_e×5^(-1/2) in a reasonable approximation. Using m_e ≈ .5 MeV one finds that the mass scales m(k) for k = 89-2n, n = 0,1,2,...,6 are m(k)/TeV = x, with x = 0.12, 0.23, 0.47, 0.94, 1.88, 3.76, 7.50. The lower energy range contains the scales corresponding to k = 87 and 85. The higher energy range contains the scales corresponding to k = 83, 81, 79, 77. In this case the proposed formula does not make sense.

  5. The strong form of the p-adic length scale hypothesis allows only prime values of k. This would allow the Mersenne prime M_89 (intermediate gauge boson mass scale) for the lower energy range and k = 83 and 79 for the upper energy range. A rough estimate is obtained by assuming that the two energy ranges correspond to k_1 = 89 and k_2 = 79.

  6. The expression for τ reads as τ = (g_tt)^(1/2)×t. The expression for Δτ/τ is given by

    Δτ/τ = (g_tt)^(-1/2)×Δg_tt/2 ≈ R^2Δω^2 = x×[(k_2p_2)^(-1/2) - (k_1p_1)^(-1/2)] ≈ x×(k_2p_2)^(-1/2) = x×2^(-79/2)×79^(-1/2).

    Using the experimental value of Δτ/τ one obtains x ≈ .45; x = 1/2 is an attractive guess (see the arithmetic sketch below).
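For the record, the arithmetic can be checked with a few lines (my own sketch, using the formula Δτ/τ = x×(k_2 p_2)^(-1/2) with p_2 ≈ 2^(k_2) and the measured Δτ/τ; both candidate values of k_2 for the upper band are shown):

    # Solve x from Delta(tau)/tau = x * (k2 * 2**k2)**(-1/2).
    dtau_over_tau = 1.6e-14   # measured value for the Markarian 501 flare

    for k2 in (79, 83):
        scale = (k2 * 2.0**k2) ** -0.5    # (k2*p2)^(-1/2), p2 ~ 2^k2
        print(f"k2 = {k2}: scale = {scale:.3e}, x = {dtau_over_tau / scale:.3f}")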

It seems that one can fairly well say that standard cosmology is crashing down while TGD makes breakthrough after breakthrough as the interpretation becomes more and more accurate. TGD is patiently waiting;-). It will be interesting to see how long it still takes before the sociology of science finally gives in and the unavoidable happens.

For background see the chapter The Relationship Between TGD and GRT.

Allais effect as evidence for large values of gravitational Planck constant?

I have considered two models for the Allais effect. The first model was constructed several years ago and was based on the classical Z^0 force. A couple of weeks ago I considered a model based on gravitational screening. It however turned out that this model does not work. The next step was the realization that the effect might be a genuine quantum effect made possible by the gigantic value of the gravitational Planck constant: the pendulum would act as a highly sensitive gravitational interferometer.

One can present rather general counter arguments against the models based on Z^0 conductivity and gravitational screening if one takes seriously the puzzling experimental findings concerning the frequency change.

  1. The Allais effect identified as a rotation of the oscillation plane seems to be established; it seems to be always present and can be understood in terms of a torque implying a limiting oscillation plane.

  2. During solar eclipses the Allais effect however becomes much stronger. According to Olenici's experimental work the effect appears always when massive objects form collinear structures.

  3. The behavior of the change of the oscillation frequency seems puzzling. The sign of the frequency increment varies from experiment to experiment, and its magnitude varies within five orders of magnitude.
  4. There is also a quite recent finding by Popescu and Olenici, which they interpret as a quantization of the plane of oscillation of a paraconical oscillator during a solar eclipse (see this). There is also evidence that the effect is present before and after the full eclipse; the time scale is 1 hour.

1. What can one conclude about the general pattern for Δf/f?

The above findings allow one to make some important conclusions about the nature of the Allais effect.

  1. Some genuinely new dynamical effect should take place when the objects are collinear. If gravitational screening caused the effect, the frequency would always grow, but this is not the case.

  2. If stellar objects and also the ring like dark matter structures possibly assignable to their orbits are Z^0 conductors, one obtains a screening effect by polarization, and for the ring like structure the resulting effectively 2-D dipole field behaves as 1/ρ^2, so that there are hopes of obtaining large screening effects; and if the Z^0 charge of the pendulum is allowed to have both signs, one might hope to be able to explain the effect. It is however difficult to understand why this effect should become so strong in the collinear case.

  3. The apparent randomness of the frequency change suggests that an interference effect made possible by the gigantic value of the gravitational Planck constant is in question. On the other hand, the dependence of Δg/g on the pendulum suggests a breaking of the Equivalence Principle. It however turns out that the variation of the distances of the pendulum to the Sun and the Moon can explain the experimental findings, since the pendulum turns out to act as a sensitive gravitational interferometer. An apparent breaking of the Equivalence Principle could result if the effect is partially caused by genuine gauge forces, say a dark classical Z^0 force, which can have arbitrarily long range in the TGD Universe.

  4. If topological light rays (MEs) provide a microscopic description of gravitation and other gauge interactions, one can envision these interactions in terms of MEs extending from the Sun/Moon radially to the pendulum system. What comes to mind is that in a collinear configuration the signals along the S-P MEs and M-P MEs superpose linearly, so that the amplitudes are summed and the interference terms give rise to an anomalous effect with a very sensitive dependence on the difference of the S-P and M-P distances and possibly on other parameters of the problem. One can imagine several detailed variants of the mechanism. It is possible that the signal from the Sun combines with a signal from the Earth and propagates along the Moon-Earth ME, or that the interference of these signals occurs at the Earth and the pendulum.

  5. Interference suggests a macroscopic quantum effect in astrophysical length scales, and thus gravitational Planck constants given by hbar_gr = GMm/v_0, where v_0 = 2^(-11) is the favored value, should appear in the model. Since hbar_gr = GMm/v_0 depends on both masses, this could also give a sensitive dependence on the mass of the pendulum. One expects that the anomalous force is proportional to hbar_gr and is therefore gigantic as compared to the effect predicted for the ordinary value of the Planck constant.

2. Model for interaction via gravitational MEs with large Planck constant

Restricting the consideration for simplicity to gravitational MEs only, a concrete model for the situation would be as follows.

  1. The picture based on topological light rays suggests that the gravitational force between two objects M and m has the following expression:

    F_(M,m) = GMm/r^2 = ∫|S(λ,r)|^2 p(λ)dλ,

    p(λ) = hbar_gr(M,m)×2π/λ, hbar_gr = GMm/v_0(M,m).

    p(λ) denotes the momentum of the gravitational wave propagating along the ME. v_0 can depend on the (M,m) pair. The interpretation is that |S(λ,r)|^2 gives the rate for the emission of gravitational waves propagating along the ME connecting the masses, having wavelength λ, and being absorbed by m at distance r.

  2. Assume that S(λ,r) has the decomposition

    S(λ,r) = R(λ)exp[iΦ(λ)]exp[ik(λ)r]/r,

    exp[ik(λ)r] = exp[ip(λ)r/hbar_gr(M,m)],

    R(λ) = r×|S(λ,r)|.

    To simplify the treatment, the phases exp(iΦ(λ)) are assumed to be equal to unity in the sequel. This assumption turns out to be consistent with the experimental findings. Also the assumption v_0(M,P)/v_0(S,P) = 1 will be made for simplicity: these conditions guarantee the Equivalence Principle. The substitution of this expression into the above formula gives the condition

    ∫|R(λ)|^2 dλ/λ = v_0.

Consider now a model for the Allais effect based on this picture.

  1. In the non-collinear case one obtains just the standard Newtonian prediction for the net forces caused by the Sun and the Moon on the pendulum, since Z_(S,P) and Z_(M,P) correspond to non-parallel MEs and there is no interference.

  2. In the collinear case the interference takes place. If interference occurs for identical momenta, the interfering wavelengths are related by the condition

    p(λ_(S,P)) = p(λ_(M,P)).

    This gives

    λ_(M,P)/λ_(S,P) = hbar_(M,P)/hbar_(S,P) = M_M/M_S.

  3. The net gravitational force is given by

    F_gr = ∫|S(λ,r_(S,P)) + S(λ/x,r_(M,P))|^2 p(λ)dλ

    = F_gr(S,P) + F_gr(M,P) + ΔF_gr,

    ΔF_gr = 2∫Re[S(λ,r_(S,P))S*(λ/x,r_(M,P))](hbar_gr(S,P)×2π/λ)dλ,

    x = hbar_(S,P)/hbar_(M,P) = M_S/M_M.

    Here r_(M,P) is the distance between the Moon and the pendulum. The anomalous term ΔF_gr would be responsible for the Allais effect and the change of the frequency of the oscillator.

  4. The anomalous gravitational acceleration can be written explicitly as

    Δa_gr = (2GM_S/(r_Sr_M))×(1/v_0(S,P))×I,

    I = ∫R(λ)×R(λ/x)×cos[2π(y_Sr_S - x×y_Mr_M)/λ]dλ/λ,

    y_M = r_(M,P)/r_M, y_S = r_(S,P)/r_S.

    Here the parameter y_M (y_S) is used to express the distance r_(M,P) (r_(S,P)) between the pendulum and the Moon (Sun) in terms of the semi-major axis r_M (r_S) of the Moon's (Earth's) orbit. The interference term is sensitive to the ratio 2π(y_Sr_S - x×y_Mr_M)/λ. For short wavelengths the integral is not expected to give a considerable contribution, so that the main contribution should come from long wavelengths. The gigantic value of the gravitational Planck constant and its dependence on the masses implies that the anomalous force has the correct form and can also be large enough.

  5. If one poses no boundary conditions on the MEs, the full continuum of wavelengths is allowed. For very long wavelengths the sign of the cosine term oscillates, so that the value of the integral is very sensitive to the values of the various parameters appearing in it. This could explain the random looking outcome of experiments measuring Δf/f. One can also consider the possibility that MEs satisfy periodic boundary conditions, so that only the wavelengths λ_n = 2r_S/n are allowed: this implies sin(2π×y_Sr_S/λ) = 0. Assuming this, one can write the magnitude of the anomalous gravitational acceleration as

    Δa_gr = (2GM_S/(r_(S,P)r_(M,P)))×(1/v_0(S,P))×I,

    I = ∑_(n=1) R(2r_(S,P)/n)×R(2r_(S,P)/nx)×(-1)^n×cos[nπ×x×(y_M/y_S)×(r_M/r_S)].

    If R(λ) decreases as λ^k, k>0, at short wavelengths, the dominating contribution corresponds to the lowest harmonics. In all terms except the cosine terms one can approximate r_(S,P) resp. r_(M,P) with r_S resp. r_M.

  6. The presence of the alternating sum gives hopes of explaining the strong dependence of the anomaly term on the experimental arrangement. The reason is that the value of x×(y_M/y_S)×(r_M/r_S) appearing in the argument of the cosine is rather large:

    x×(y_M/y_S)×(r_M/r_S) = (y_M/y_S)×(M_S/M_M)×(r_M/r_S)×(v_0(M,P)/v_0(S,P)) ≈ 6.95671837×10^4×(y_M/y_S).

    The values of the cosine terms are very sensitive to the exact value of the factor M_Sr_M/(M_Mr_S), and the above expression is probably not quite accurate. As a consequence, the values and signs of the cosine terms are very sensitive to the value of y_M/y_S.

    The value of y_M/y_S varies from experiment to experiment, and this alone could explain the high variability of Δf/f. The experimental arrangement would act like an interferometer measuring the distance ratio r_(M,P)/r_(S,P).

3. Scaling law

The assumption of the scaling law

R(λ) = R_0×(λ/λ_0)^k

is very natural in light of conformal invariance and the masslessness of gravitons, and it allows one to make the model more explicit. With the choice λ_0 = r_S the anomaly term can be expressed in the form

Δa_gr ≈ (GM_S/(r_Sr_M))×(2^(2k+1)/v_0)×(M_M/M_S)^k×R_0(S,P)×R_0(M,P)×∑_(n=1) ((-1)^n/n^(2k))×cos[nπK],

K = x×(r_M/r_S)×(y_M/y_S).

The normalization condition reads in this case as

R_0^2 = v_0/[2π∑_n (1/n)^(2k+1)] = v_0/[2πζ(2k+1)].

Note the shorthand v_0(S/M,P) = v_0. The anomalous gravitational acceleration is given by

Δa_gr = (GM_S/r_S^2)×X×Y×∑_(n=1) [(-1)^n/n^(2k)]×cos[nπK],

X = 2^(2k)×(r_S/r_M)×(M_M/M_S)^k,

Y = 1/[π∑_n (1/n)^(2k+1)] = 1/[πζ(2k+1)].

It is clear that a reasonable order of magnitude for the effect can be obtained if k is small enough, and that this is essentially due to the gigantic value of the gravitational Planck constant.

The simplest model consistent with the experimental findings assumes v_0(M,P) = v_0(S,P) and Φ(n) = 0 and gives

Δa_gr×cos(Θ)/g = (GM_S/(r_S^2×g))×X×Y×∑_(n=1) [(-1)^n/n^(2k)]×cos(nπK),

X = 2^(2k)×(r_S/r_M)×(M_M/M_S)^k,

Y = 1/[π∑_n (1/n)^(2k+1)] = 1/[πζ(2k+1)],

K = x×(r_M/r_S)×(y_M/y_S), x = M_S/M_M.

In the formula above, Θ denotes the angle between the direction of the Sun and the horizontal plane.

4. Numerical estimates

To get a numerical grasp of the situation one can use M_S/M_M ≈ 2.71×10^7, r_S/r_M ≈ 389.1, and (M_S/M_M)×(r_M/r_S) ≈ 6.96×10^4. The overall order of magnitude of the effect would be

Δg/g ≈ X×Y×(GM_S/(r_S^2×g))×cos(Θ),

(GM_S/(r_S^2×g)) ≈ 6×10^(-4).

The overall magnitude of the effect is determined by the factor X×Y.

For k = 1 and k = 1/2 the effect is too small. For k = 1/4 the expression for Δa_gr reads as

Δa_gr×cos(Θ)/g ≈ 1.97×10^(-4)×∑_(n=1) ((-1)^n/n^(1/2))×cos(nπK),

K = (y_M/y_S)×u, u = (M_S/M_M)×(r_M/r_S) ≈ 6.95671837×10^4.

The sensitivity of the cosine terms to the precise value of y_M/y_S gives good hopes of explaining the strong variation of Δf/f and also the findings of Jeverdan. Numerical experimentation indeed shows that the sign of the cosine sum alternates and that its value increases as y_M/y_S increases in the range [1,2]. A small numerical sketch is given below, after the eccentricity estimates.

The eccentricities of the orbits of the Moon resp. the Earth are e_M = .0549 resp. e_E = .017. Denoting the semi-major and semi-minor axes by a and b, one has Δ = (a-b)/a = 1-(1-e^2)^(1/2). Δ_M = 15×10^(-4) resp. Δ_E = 1.4×10^(-4) characterizes the variation of y_M resp. y_S due to the non-circularity of the orbits of the Moon resp. the Earth. The ratio R_E/r_M = .0166 characterizes the range of the variation Δy_M = Δr_(M,P)/r_M < R_E/r_M due to the variation of the position of the laboratory. All these numbers are large enough to imply a large variation of the argument of the cosine term even for n = 1, and the variation due to the position at the surface of the Earth is especially large.
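The following minimal script (my own sketch; the truncation point and the sampled values of y_M/y_S are arbitrary choices) evaluates the alternating cosine sum for k = 1/4 and illustrates how strongly its sign and magnitude depend on y_M/y_S:

    # S(K) = sum_n (-1)^n n^(-2k) cos(n*pi*K), k = 1/4, K = u*(yM/yS).
    import numpy as np

    u = 6.95671837e4
    k = 0.25
    n = np.arange(1, 20001)   # crude truncation of the slowly converging sum

    def cosine_sum(y_ratio):
        K = u * y_ratio
        return np.sum((-1.0)**n / n**(2 * k) * np.cos(n * np.pi * K))

    for y in (1.0000, 1.0005, 1.0010, 1.0015, 1.0020):
        print(f"yM/yS = {y:.4f}: sum = {cosine_sum(y):+.4f}")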

5. Other effects

  1. One should also explain the recent finding by Popescu and Olenici, which they interpret as a quantization of the plane of oscillation of a paraconical oscillator during a solar eclipse (see this). A possible TGD based explanation would be in terms of a quantization of Δg and thus of the limiting oscillation plane. This quantization could reflect the quantization of the angular momentum of the dark gravitons decaying into bunches of ordinary gravitons and providing the pendulum with the angular momentum inducing the change of the oscillation plane. Knowledge of the friction coefficients associated with the rotation of the oscillation plane would allow one to deduce the value of the gravitational Planck constant, if one assumes that each dark graviton corresponds to its own approach to an asymptotic oscillation plane.
  2. There is also evidence for the effect before and after the main eclipse. The time scale is 1 hour. A possible explanation is in terms of a dark matter ring analogous to the rings of Jupiter surrounding the Moon. From the average orbital velocity v = 1.022 km/s of the Moon one obtains that the distance traversed by the Moon during 1 hour is R_1 = 3679 km. The mean radius of the Moon is R = 1737.10 km, so that one has R_1 = 2R with 5 per cent accuracy (2×R = 3474 km). The Bohr quantization of the orbits of the inner planets with the value hbar_gr = GMm/v_0 of the gravitational Planck constant predicts r_n ∝ n^2GM/v_0^2 and gives the orbital radius of Mercury correctly for the principal quantum number n = 3 and v_0/c = 4.6×10^(-4) ≈ 2^(-11). From the proportionality r_n ∝ n^2GM/v_0^2 one can deduce by scaling that in the case of the Moon, with M(Moon)/M(Sun) = 3.4×10^(-8), the prediction for the radius of the n = 1 Bohr orbit would be r_1 = (M(Moon)/M(Sun))×R_M/9 ≈ .0238 km for the same value of v_0. This is too small by a factor 6.45×10^(-6). r_1 = 3679 km would require n ≈ 382, or n = n(Earth) = 5 and v_0(Moon)/v_0(Sun) ≈ 2^(-4). A small arithmetic sketch follows below.
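A small arithmetic check of the two numbers above (my own sketch; the predicted n = 1 Bohr radius is taken from the text rather than recomputed):

    # (1) distance traversed by the Moon in one hour vs. twice its radius;
    # (2) ratio of the predicted n=1 Bohr radius to the required one.
    v_moon = 1.022        # km/s, average orbital velocity of the Moon
    R_moon = 1737.10      # km, mean radius of the Moon

    R1 = v_moon * 3600.0  # km traversed in one hour
    print(f"R1 = {R1:.0f} km, 2*R_moon = {2 * R_moon:.0f} km")  # ~5% agreement

    r1 = 0.0238           # km, predicted n=1 Bohr radius (from the text)
    print(f"r1/R1 = {r1 / R1:.2e}")  # the 'too small by ~6.5e-6' factor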
For details see the chapter The Relationship Between TGD and GRT of "Classical Physics in Many-Sheeted Space-time".

Tuesday, August 21, 2007

Back at home

I returned home from a little conference in Röros, Norway. The conference was about various anomalous phenomena and was organized by the Society for Scientific Exploration. The participants were science professionals and the atmosphere very warm. It is amazing to see that scientists can be critical without debunking, crackpotting, and casting personal insults, as the discussion culture in so many physics blogs would suggest. I am starting to believe again that scientists can behave like civilized human beings rather than barking and biting like mad dogs! The lectures were absolutely excellent, and for the first time in my life I got through my entire lecture;-)! This was a very enjoyable event, and I have a lot of new ideas to digest.

In her blog Kea mentions a continuous geometry in the sense of von Neumann. It is obtained by taking a finite field of order q = p^n and taking the so-called profinite limit of projective geometries P(1,q) → P(2,q) → P(4,q) → ... → P(2^n,q) → ... At the limit this geometry contains subspaces of any dimension d in the interval [0,1]. For Jones inclusions the indices are quantized as M:N = 4cos^2(π/n), n = 3,4,...

I should check what this continuous geometry of von Neumann really means before saying anything, but I cannot avoid the temptation to say that this brings strongly to mind TGD related notions suggesting some generalizations. Well...! I should stop here, but I will take the risk of making a fool of myself, as so many times before.

1. Another manner to see the continuous geometry

The inclusion sequence of tensor powers of the Clifford algebra defining hyper-finite factors is a counterpart for the inclusion sequence defining the continuous geometry. For Jones inclusions, 2^(2n) corresponds to the dimension of the matrix algebra obtained as the n-fold tensor power of the 2×2 matrix/Clifford algebra.

Something very closely related to the spinor counterpart associated with the infinite-D Clifford algebra should be in question. Complex 2-spinors define S^2 as a 1-D complex projective space. Is a quantum version of this space in question? For quantum spinors the dimension would vary in the range (1,2) in a discrete manner (the square root of the index, 1 ≤ M:N ≤ 4), and for the quantum S^2 it would not be larger than 1 if complex numbers are involved. One could also consider a restriction to real numbers.
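For concreteness, a few lines (mine) listing the quantized Jones indices and the corresponding quantum dimensions, their square roots:

    # Jones indices M:N = 4*cos^2(pi/n), n = 3,4,..., approaching 4,
    # with square roots lying in the range (1,2).
    from math import cos, pi, sqrt

    for n in range(3, 11):
        index = 4 * cos(pi / n) ** 2
        print(f"n = {n:2d}: M:N = {index:.4f}, sqrt = {sqrt(index):.4f}")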

2. Generalizing the continuous geometry keeping q = p^n fixed

I would bet that this construction generalizes considerably, since Jones inclusions correspond to the spinor representation of SU(2), and all compact groups and all their representations define inclusion series for all values of the dimension n of the Abelian group Z_n defining the quantum phase. The properties of HFFs suggest that the powers of 2 could be replaced with powers of any integer, and primes are especially interesting in this respect. All quantum counterparts of the various projective spaces associated with the spinor representations of various compact Lie groups (at least a subset of the ADE type groups) might be obtained by allowing n in q = p^n to vary. n would also correspond to the quantum phase Q = exp(i2π/n).

3. Could one replace finite fields with extensions of p-adic numbers and glue together p-adic continuous geometries?

p-Adic TGD for a given p suggests that one could generalize the continuous geometries by regarding the finite field G(p,n) as an n-dimensional algebraic extension of G(p,1) and replacing G(p,n) with an n-dimensional algebraic extension of the p-adic numbers. This could give p-adic variants of quantum projective geometry. For prime values of n the powers of the quantum phase Q = exp(i2π/n) would define a concrete representation for the units of G(p,n).

The appearance of the quantum phases Q = exp(i2π/n) also brings to mind a generalization of the notion of imbedding space involving a hierarchy of cyclic groups, inspired by the dark matter hierarchy and the hierarchy of Planck constants.

A further extension inspired by quantum TGD would be the gluing of structures with different values of p together (a generalization of the notion of number obtained by gluing reals and p-adics along common rationals and perhaps also algebraics).

    4. The stochastic process associated with Riemann Zeta

    In my own primitive physicist's way I "know" that Riemann Zeta has a fundamental role in the construction of quantum TGD: it appears in the concrete formulas for number theoretic braids, which in turn involve several number theoretical conjectures. Unfortunately, I have no rigorous articulation for these gut feelings.

    Kea also mentions a family of stochastic processes with an integer valued dynamical variable n=1,2,... The processes are parameterized by an integer s=2,3,... (for s=1 the series diverges) and the probabilities are given by n^(-s)/ζ(s), so that the partition function is Riemann Zeta at a positive integer valued argument. The thermodynamical interpretation requires that the log(n) are the eigenvalues of "energy". In arithmetic quantum field theory log(n) indeed has an interpretation as the "energy" of a many particle state (this relates closely to infinite primes). One might hope that a proper generalization of this stochastic process might give additional insights into the role of Zeta in TGD.
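
    This process is easy to simulate. A minimal Python sketch (the helper name sample_zeta is my own; scipy is assumed only for the value of ζ(s)):

```python
# Sample the integer-valued variable n with probability P(n) = n^(-s)/zeta(s),
# so that zeta(s) plays the role of the partition function and log(n) the
# role of "energy". Inversion sampling on a truncated CDF.
import random
from scipy.special import zeta  # Riemann zeta for real arguments > 1

def sample_zeta(s: int, rng: random.Random, n_max: int = 10**6) -> int:
    z = zeta(s)          # partition function
    u = rng.random()
    cdf = 0.0
    for n in range(1, n_max + 1):
        cdf += n ** (-s) / z
        if u <= cdf:
            return n
    return n_max         # tail mass beyond n_max is negligible for s >= 2

rng = random.Random(0)
print([sample_zeta(3, rng) for _ in range(10)])  # small n dominate for s = 3
```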

    The generalization of the stochastic process to an M-matrix inspired by the zero energy ontology is natural. s is analogous to the inverse temperature, and analytic continuation would mean that also complex temperatures are considered. The partition function would have an interpretation as a complex square root of the density matrix, with the complex phases identified as elements of the S-matrix in the diagonal representation. This would fit with the zero energy ontology inspired unification of the density matrix and S-matrix into a Matrix defining the coefficients of time-like entanglement between positive and negative energy states. Zeta(s) for complex values of s would thus naturally define the elements of a particular M-matrix.

    There are several questions.

    1. The first questions relate to the identification of the "energy" having values log(n). Does n label the tensor powers of the 2×2 Clifford algebra in the sequence of inclusions defining the hyper-finite factor of type II_1? Or could it correspond to n in G(p,n) and thus to the quantum phase Q and, more generally, to the dimension of the algebraic extension of p-adic numbers?

      It would seem that only the first interpretation makes sense, since there exists a large number of algebraic extensions of rationals with dimension n. Hence the first interpretation attempt would be that p(n) gives the probability that the system's state is created by the n:th tensor power of the 2×2 Clifford algebra. Different n:s should define independent states. This is not necessarily consistent with the inclusion sequence, in which lower dimensional tensor powers are included in higher dimensional ones, unless the probabilities correspond to states obtained by identifying states which differ by a state created by a lower tensor power.

    2. For the zeros of Zeta the interpretation as a stochastic process or M-matrix obviously fails. What could this mean?

      1. In statistical physics the partition function vanishes at criticality, and in TGD the zeros of Zeta correspond to quantum criticality at which the value of Planck constant can change. This suggests that at quantum criticality one should consider only cutoffs with "energy" not larger than log(n). The cutoff in the sum defining Riemann Zeta would mean that only the m≤ n tensor powers of the 2×2 Clifford algebra create the positive and negative energy states pairing to form zero energy states (see the numerical sketch after this list).

        Physically this would mean that thermodynamical states would contain only pairs of positive and negative energy states for which the positive/negative energy states have fermion number not larger than n. Note that coherent states of Cooper pairs are consistent with fermion number conservation only in zero energy ontology.

      2. If the TGD inspired conjecture that p^(iIm(s)) is an algebraic number for the zeros of zeta holds true, the partition function defined by zeta with cutoff would be an algebraic number and the cutoff M-matrix would be algebraically universal.
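
      A hedged numerical illustration of the cutoff idea (using mpmath, which supplies the nontrivial zeros via zetazero; note that on the critical line only the finite cutoff sums are defined, since the Dirichlet series itself diverges there):

```python
# Evaluate the cutoff partition function zeta_N(s) = sum_{n<=N} n^(-s)
# at the first nontrivial zero s = 1/2 + i*14.1347..., where the full
# zeta vanishes and the text suggests a quantum critical interpretation.
from mpmath import mp, mpf, zetazero

mp.dps = 20
s = zetazero(1)   # first nontrivial zero of zeta

def zeta_cutoff(s, N):
    return sum(mpf(n) ** (-s) for n in range(1, N + 1))

for N in (10, 100, 1000):
    print(N, zeta_cutoff(s, N))
```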

    To sum up, these observations support my gut feeling that p-adic physics, the hierarchy of Jones inclusions, and the hierarchy of Planck constants, together with the generalization of imbedding space inspired by it, appear as aspects of the same very general mathematical structure obtained by gluing together a large number of structures. Also the process of gluing copies of the 8-D imbedding space together along 4-D subspaces to form a larger structure should relate to this.

    Monday, August 13, 2007

    Burning saltwater by radiowaves and large Planck constant

    This morning my friend Samuli Penttinen sent an email telling about a strange discovery by engineer John Kanzius: salt water in a test tube burns when radiated by radiowaves at harmonics of the frequency f=13.56 MHz. Temperatures of about 1500 K, corresponding to an energy of about .15 eV, have been reported. One can radiate also one's hand and nothing happens. The original discovery of Kanzius was the finding that radio waves could be used to cure cancer by destroying the cancer cells. The proposal is that this effect might provide a new energy source by liberating chemical energy in an exceptionally effective manner. The power is about 200 W, so that the power used could explain the effect if it is absorbed by the salt water in a resonance-like manner.

    The energies of the photons involved are very small, multiples of 5.6×10^(-8) eV, and their effect should be very small, since it is difficult to imagine a resonant molecular transition that could cause the effect. This leads to the question whether the radiowave beam could contain a considerable fraction of dark photons for which the Planck constant is larger, so that the energy of the photons is much larger. The underlying mechanism would be a phase transition of dark photons with large Planck constant to ordinary photons with shorter wavelength, coupling resonantly to some molecular degrees of freedom and inducing the heating. The microwave oven of course comes to mind immediately.

    1. The fact that the effects occur at harmonics of the fundamental frequency suggests that rotational states of molecules are in question, as in microwave heating. Since the presence of salt is essential, the first candidate for the molecule in question is NaCl, but also HCl and water molecules can be considered. NaCl makes sense if NaCl, Na+, and Cl- are in equilibrium. The basic formula for the rotational energies is

      E(l) = E0×l(l+1), E0 = hbar^2/(2μR^2), μ = m1·m2/(m1+m2).

      Here R is the molecular radius, which by definition is deduced from the rotational energy spectrum. The energy inducing the transition l→l+1 is ΔE(l) = 2E0×(l+1).

    2. From Wikipedia one can find the molecular radii of heteronuclear di-atomic molecules such as NaCl and homonuclear di-atomic molecules such as H2. Using E0(H2) = 8.0×10^(-3) eV one obtains by scaling

      E0(NaCl) = E0(H2) × (μ(H2)/μ(NaCl)) × (R(H2)/R(NaCl))^2.

      The atomic weights are A(H)=1, A(Na)=23, A(Cl)=35.

    3. A little calculation gives f(NaCl) = 2E0/h = 14.08 GHz. The ratio to the radiowave frequency is f(NaCl)/f = 1.0386×10^3, to be compared with hbar/hbar_0 = 2^10 = 1.024×10^3. The discrepancy is about 1 per cent (see the sketch below).
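
    The scaling estimate is easy to reproduce. A sketch with assumed table values for the bond lengths (R(H2) ≈ 0.74 Å and R(NaCl) ≈ 2.36 Å; these inputs are my own, and the result shifts at the per cent level with the radii used):

```python
# Scale the H2 rotational constant to NaCl and compare the l = 0 -> 1
# transition frequency with the 13.56 MHz drive and with 2^10 = 1024.
h_eV = 4.135667e-15                      # Planck constant in eV*s

def reduced_mass(m1, m2):                # atomic mass units suffice for ratios
    return m1 * m2 / (m1 + m2)

E0_H2 = 8.0e-3                           # eV, value quoted in the text
mu_ratio = reduced_mass(1, 1) / reduced_mass(23, 35)
R_ratio = 0.74 / 2.36                    # Angstrom, assumed table values

E0_NaCl = E0_H2 * mu_ratio * R_ratio ** 2
f_NaCl = 2 * E0_NaCl / h_eV              # Hz, energy 2*E0 for l = 0 -> 1
print(f"f(NaCl) ~ {f_NaCl/1e9:.2f} GHz")
print(f"ratio to 13.56 MHz ~ {f_NaCl/13.56e6:.0f}  (2^10 = 1024)")
```

    With these inputs one lands at about 13.7 GHz and a ratio of roughly 1.0×10^3, within a couple of per cent of 2^10; the 14.08 GHz quoted above corresponds to slightly different input radii.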

    Thus dark radiowave photons could induce rotational microwave heating of the sample, and the effect could be seen as dramatic additional support for the hierarchy of Planck constants. There are several questions to be answered.

    1. Does this effect occur also for solutions of other molecules and for solvents other than water? This can be tested, since the rotational spectra are readily calculable from data which can be found on the net.

    2. Are the radiowave photons dark, or does water - which is a very special kind of liquid - induce the transformation of ordinary radiowave photons to dark photons by fusing 2^10 radiowave massless extremals (MEs) into a single ME? Does this transformation occur for all frequencies? This kind of transformation might play a key role in transforming ordinary EEG photons to dark photons and might partially explain the special role of water in living systems.

    3. Why does the radiation not induce spontaneous combustion of living matter, which contains salt? And why do cancer cells seem to burn: is the salt concentration higher inside them? As a matter of fact, there are reports about spontaneous human combustion. One might hope that there is a mechanism inhibiting this, since otherwise the military would soon be developing new horror weapons, unless it is doing this already. Is it that most of the salt is ionized into Na+ and Cl- ions, so that spontaneous combustion is avoided? And how does this relate to the sensation of spontaneous burning - a very painful sensation that some part of the body is burning?

    4. Is the heating solely due to rotational excitations? It might be that the process also induces a "dropping" of ions to larger space-time sheets, liberating zero point kinetic energy. The dropping of a proton from the k=137 (k=139) atomic space-time sheet liberates about .5 eV (0.125 eV). The measured temperature corresponds to the energy .15 eV. This dropping is an essential element of remote metabolism and provides universal metabolic energy quanta. It is also involved in TGD based models of "free energy" phenomena. No perpetuum mobile is predicted, since there must be a mechanism driving the dropped ions back to the original space-time sheets.
    5. Are there other possibilities? Yes. One can also consider the possibility that energy is fed into the rotational degrees of freedom of water molecules, as in a microwave oven, and that salt has some other function. Both mechanisms could of course be involved. This became clear after the realization that charge and spin fractionization force a further generalization of the definition of the imbedding space, by allowing also what in a loose sense might be regarded as coverings of M^4 resp. CP_2 by discrete subgroups G_a resp. G_b of SO(3), besides factor spaces.

      1. Four options result as Cartesian products of M^4/G_a (factor space as earlier) and M^4×G_a with CP_2/G_b and CP_2×G_b. Here × refers to a covering rather than a Cartesian product. M^4 is obtained by excluding the subspace M^2 remaining invariant under transformations of the Cartan algebra of the Poincare group, and CP_2 is obtained by excluding the homologically trivial geodesic sphere S^2 remaining invariant under the color isospin rotations of SO(3) subset SU(3).

      2. It is absolutely essential that M^2×S^2 corresponds to a vacuum extremal, since for it the value of Planck constant is ill defined (full quantum criticality). The full imbedding space is a union of these spaces intersecting at the quantum critical manifold, and the configuration space is a union over configuration spaces with different choices of the quantum critical manifold (choice of quantization axes), so that Poincare and color invariance are not lost.

      3. The Cartesian product of a factor space of M^4 and a covering space of CP_2 maximizes the value of Planck constant for a given choice of G_a and G_b, and the value of Planck constant is given by hbar/hbar_0 = n_a·n_b.

      The microwave frequency used in microwave ovens is 2.45 GHz, giving for the Planck constant the estimate 2.45 GHz/13.56 MHz = 180.67, equal to 180 with an error of .4 per cent (see the arithmetic below). The values of Planck constants for the (M^4/G_a)×(CP_2×G_b) option are given by hbar/hbar_0 = n_a·n_b, and n_a·n_b = 4×9×5 = 180 can result from the number theoretically simple values of the quantum phases exp(i2π/n_i) corresponding to polygons constructible using only ruler and compass. For instance, one could have n_a = 2×3 and n_b = 2×3×5. This option gives a slightly better agreement than the NaCl option.
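
      The arithmetic behind the oven estimate, for the record (the 0.4 per cent figure is just the deviation of the frequency ratio from 180):

```python
# Ratio of the microwave oven frequency to the 13.56 MHz drive,
# compared with n_a*n_b = 4*9*5 = 180.
ratio = 2.45e9 / 13.56e6
print(ratio, f"{abs(ratio - 180) / 180 * 100:.2f} %")   # 180.68, 0.38 %
```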

    Recall that one of the empirical motivations for the hierarchy of Planck constants came from the observed quantum like effects of ELF em fields at EEG frequencies on the vertebrate brain, and also from the correlation of EEG with brain function and the contents of consciousness, difficult to understand since the energies of EEG photons are ridiculously small and should be masked by thermal noise.

    In the TGD based model of EEG (actually a fractal hierarchy of EEGs) the values hbar/hbar_0 = 2^(11k), k=1,2,3,..., of Planck constant are in a preferred role. More generally, powers of two of a given value of Planck constant are preferred, which is also in accordance with the p-adic length scale hypothesis.

    For details see the chapter Dark Nuclear Physics and Condensed Matter of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

    Saturday, August 11, 2007

    About levitation, OBEs, and refrigerators

    Physicists have 'solved' mystery of levitation was this morning's headline link. See also this. I glue a little piece of text here.

    Now, in another report that sounds like it comes out of the pages of a Harry Potter book, the University of St Andrews team has created 'incredible levitation effects' by engineering the force of nature which normally causes objects to stick together.

    Professor Ulf Leonhardt and Dr Thomas Philbin, from the University of St Andrews in Scotland, have worked out a way of reversing this phenomenon, known as the Casimir force, so that it repels instead of attracts.

    Their discovery could ultimately lead to frictionless micro-machines with moving parts that levitate but they say that, in principle at least, the same effect could be used to levitate bigger objects too, even a person.

    From the article one learns also that Leonhardt and Philbin have worked out how to turn the Casimir force from attraction to repulsion using a specially developed lens placed between two objects. Nothing has been done yet in practice but the idea is fascinating.

    Levitation and large hbar

    By going to Wikipedia you learn that the Casimir force between conductors is a genuine quantum effect analogous to the van der Waals force between neutral atoms. It becomes the dominating force between conductors at submicron scales. In the case of two conducting slabs the force per unit area is given by

    F/A = -π^2×hbar×c/(240×a^4),

    where a is the distance between the slabs. The Casimir force is clearly attractive and becomes strong at short length scales.
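
    For orientation, a small sketch evaluating the slab formula at a few separations (SI units, with the ordinary value of hbar):

```python
# Casimir pressure F/A = -pi^2 * hbar * c / (240 * a^4) between ideal
# conducting slabs; the steep 1/a^4 growth explains its dominance at
# submicron scales, and the linear hbar-proportionality is manifest.
import math

hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s

def casimir_pressure(a):
    return -math.pi ** 2 * hbar * c / (240 * a ** 4)   # Pa, attractive

for a_nm in (100, 500, 1000):
    print(f"a = {a_nm:4d} nm   F/A = {casimir_pressure(a_nm * 1e-9):.2e} Pa")
```

    Scaling hbar by a factor n scales the pressure by the same factor, which is the point made below.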

    What makes this interesting from the point of view of TGD is the hbar proportionality of this force (recall the identification of dark matter as macroscopically quantum coherent large hbar phases). For large values of hbar the Casimir force increases. If the Casimir force can be made repulsive by using the proposed lens arrangement, then large hbar would amplify the levitation effect and one might perhaps consider levitation even at macroscopic scales.

    Leonhardt explains that the Casimir force is one of the basic mechanisms behind friction at nano length scales. This comes as news to me. The Casimir force increases with Planck constant, and I would have expected the friction force to be reduced as Planck constant increases. The fault in my thinking is that I associate the friction force directly with dissipation. Thinking it over again, I realize that at the quantum level the friction interaction means only an interaction leading to the formation of a bound state of two objects. This state has some binding energy, and in the lowest energy state the object sticks to the surface. The friction interaction opposes the sliding of the object along the surface, and some minimum force is required to set the object sliding. This brings first to mind a macroscopic quantum jump from the ground state to a state in which sliding takes place. The next mental image is more microscopic: the momentum feed must be larger than some critical value to induce the splitting of the bonds between the object and the surface.

    What happens as the object slides along the surface? In particular, how does the rate for the dissipation of energy depend on Planck constant? Denote by V the potential energy of the friction force. By a simple dimensional argument based on the Golden Rule, the rate for the dissipation of energy in the initial state |i> with energy E_i should look as follows:

    dE/dt ∝ (1/hbar)×∑_f |V_(i,f)|^2/(E_f-E_i).

    Here V_(i,f) is the matrix element of the interaction potential energy between the states labelled by i and f. How this rate depends on hbar depends on V, on the spectrum E_f of energies, and on the energy dependence of the sum over E_f. If the set of states remains the same, the state sum gives no dependence on hbar. One expects that the Casimir force gives to E_f a contribution proportional to hbar, and if the initial and final states in the sum have the same kinetic energy, as they naturally have in a model for dissipation, E_f-E_i would be proportional to hbar. Therefore dE/dt would not depend on hbar. Hence the first guess is that the rate of dissipation does not depend on hbar if caused by the Casimir force, whereas for friction caused by fundamental forces the rate should decrease like 1/hbar.
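
    The hbar cancellation claimed above can be checked symbolically. A sketch with sympy, assuming (as the argument does) that both V_(i,f) and E_f-E_i scale linearly with hbar for a Casimir-induced interaction:

```python
# Golden Rule scaling: dE/dt ~ (1/hbar) * |V|^2 / (E_f - E_i).
# With V ~ hbar*v and E_f - E_i ~ hbar*dE the rate is hbar-independent.
import sympy as sp

hbar, v, dE = sp.symbols('hbar v dE', positive=True)
rate = (1 / hbar) * (hbar * v) ** 2 / (hbar * dE)
print(sp.simplify(rate))   # v**2/dE: no hbar dependence left
```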

    About levitation experiences

    So: are yogis able to increase their personal hbar temporarily? I would guess "No". I have had a lot of levitation experiences during states between sleep and wakefulness, and I always feel as if I were awake. During these experiences I have even performed tests to check whether this is the case and become convinced that I really levitate. Only after waking up to the ordinary state of consciousness have I realized that it was a dream after all. These experiences as a rule involve very pleasant wavy bodily sensations, and what I find remarkable is that the usual unpleasant sensory noise is absent. Around 1985 I had a long-lasting experience in which my body was for long periods in this fantastic "enlightened" state, but this happened during waking consciousness.

    In the TGD Universe these experiences might result when my magnetic field body with large hbar moves and performs wave-like motion whereas the biological body remains as it is. Usually it would be the magnetic body which is at rest whereas the biological body moves. The interference effects for dark photons mediating communications between the biological body and the field body could generate a dynamical hologram-like representation giving rise to the sensations of movement and wavy motion. If the biological body sleeps, it does not contribute to the ordinary sensory mental images which involve this unpleasant noisy aspect. The small dissipation of the magnetic body having large hbar means that its contribution to the unpleasant noise is small. Also the train illusion and the sensation created by thinking about falling down from a cliff could be understood in terms of simulations performed by the magnetic body.

    A similar experience during wakefulness but without levitation suggests that the brain areas responsible for creating somatosensory mental images about the body must have been sleeping, while the remaining sensory input has kept the cognitive representations at the magnetic body reasonably realistic. My magnetic body has however also made rather adventurous travels around the globe, the galaxy, and Universes (the plural is not a typo!), while the crucial parts of my brain were definitely awake: I could remember these episodes and there was no wake-up following this state.

    It is interesting that during these flying experiences I have always found it impossible to go too far away. I can fly to some height but cannot continue further. In one dream involving fascinating experiences of completely dissipationless spinning and ideal rectilinear motion, my brothers brought me back as I tried to leave my apartment. That I lived on the fifth floor was probably not the reason! Could it be that the magnetic body cannot just up and leave the biological body to survive without cognitive representations, and that the collective conscious entity containing the magnetic bodies of the people with whom I strongly quantum entangle took command over the irresponsible behavior of my magnetic body;-)?

    During experiences induced by the noise of the refrigerator (which I heard during the dream) I felt that the refrigerator strongly attracted my body. The experience was very pleasant and I had to make a difficult decision about whether I should find out what happens if I let go: I was however always so afraid that the refrigerator would "capture" my soul that I woke up. Perhaps it was a wise choice: otherwise my magnetic body might now be part of the magnetic body of that damned refrigerator;-).

    Wednesday, August 08, 2007

    Peter Woit and consciousness

    Do not miss this: Peter Woit reveals to us what is science and what is not. For instance, the attempts to understand consciousness are not science and cannot be science, since Peter knows that this cannot lead to any directly experimentally testable theory. Peter also makes it clear that he has not bothered to read anything relating to quantum theory and consciousness, since he knows that it is pseudoscience.

    Again and again I find it amusing that people not doing science themselves know best what is not science: even without trying. It is our luck that we have all these ignorant crackpots who do not know that trying to answer questions not posed in textbooks is pseudoscience. It is also amazing that these Peter Woits have not noticed that the history of science again and again ridicules the besserwissers who make ignorance and laziness a virtue.

    Tuesday, August 07, 2007

    A modest proposal

    There is again a lot of activity related to the dream of taking Finland to the top of science. As a rule, some organizational changes are seen as the magic trick to reach this goal. The peak activity seems to have a period of roughly ten years: as in the case of superstring revolutions, I strongly suspect a correlation with sunspot activity.

    Each period of peak activity has its own buzz words. Last time one of the buzz words was "critical mass". There was a common realization among decision makers that the period of individuals in science is over, that science has become an industry where individual researchers are the unlucky guys on the assembly line, and that only top leaders matter. Also the words "rotation" and "accountability", loaned from the business world, were heard often. The great promise of the organizational gurus in physics was that before the year 2005 we would have a Nobel in physics. I have learned that the methods to achieve this goal created deep fear and horror in the young candidates for heroes of scientific labour. My humble opinion is that a person who is told that his survival depends on whether he makes a scientific discovery within five years will not make this discovery. Or has anyone heard about great scientific discoveries made during the last five minutes before being hanged?

    At this time "innovation" has become the central buzz word. Now it has however been realized that besides top leaders also a researcher is needed to carry out successful research. The gifted scientist is modelled as a cognitive athlete who wants to be the best. It is also admitted that a researcher can have some personal traits. For instance, he can suffer from narcissism, and the idea is to cleverly utilize this personal weakness to trick him into becoming a top performer. It is however not assumed that a top researcher might have a soul, since this would reduce the predictive power of an otherwise simple and elegant model.

    In line with the top-to-bottom philosophy, organizational changes are also now seen as the magic manner of getting to the Promised Land. One fellow says that we must fuse together three different universities to combine art, commerce, and technology: MIT serves as a model here. A second guru is sure that by branding every possible product of scientific activity and making every idea a commercial product, Finland will become the leader of science. The third wise guy thinks that very big is very beautiful: we must increase dramatically the funding of science, build the counterpart of Harvard, and not tolerate small universities anymore.

    Well, this all is so big, so big, so big. And there are also many unasked questions. Is the academic assembly line really the only tool to manufacture top researchers? Could it be that a brilliant young researcher with an internet connection might be able to decide himself what is the most interesting problem to solve and perhaps learn by himself all that is needed to achieve this? Could it be that the old power-greedy men at the top of the tower lost contact with real research long ago and are not able to tell young minds what to do?

    Is Big Science the problem?

    Why this view of decision makers about scientists as spiritual idiots? And why do we believe in formal organizations rather than trusting individuals? Is "Big Science" the name of the disease? Is the real problem that scientific decision making in big organizations has become politics, so that instead of facts, complex social forces - the fear of losing face being one of the most important among them - determine the decisions? To mention a familiar example: in theoretical physics the proponents of quantum field theory, superstrings, and loop quantum gravity are much like political parties fighting for money, and the question whether these theories have anything new and interesting to say about reality has long ago become irrelevant.

    The superstring hegemony has during the last years become a symbol of a colossal power system of science which has totally lost its connection with reality. Although postdoc positions for string theorists are becoming very rare, this hegemony will not lose its influence until the professors producing superstring publications have resigned. The situation in neuroscience and biology is not much better, although the loss of contact with reality is not so manifest. The fact however is that dogmas which have long ago turned out to be in conflict with experimental reality (introns as junk DNA, the basic beliefs about the cell membrane as a pump-channel system, the view about how memories are represented, the view about consciousness as a function comparable to sweating or urination performed by a "consciousness module" somewhere in the brain - to mention only some of the most idiotic beliefs) remain official truths.

    Could we learn something from the art of simple household?

    What about accepting a spoonful of realism from everyday household life, where no one dreams of becoming the Big Boss? Science needs first rate thinkers to produce great visions. Even a small country like Finland can produce a couple of thinkers per century, and perhaps the era of the internet might amplify this rate somewhat. If such a thinker emerges, he or she does not need very much. Thinkers want to understand: they do not dream of becoming academic mafiosos or leaders of big projects writing grant applications and sitting in meetings. Thinkers do not want to waste their precious time in continual travelling from conference to conference around the globe now that the web has been invented. Even the most passionate thinker has however some basic metabolic needs, and their printers and computers break down now and then. Could one imagine that a country like Finland could afford, say, some euros per month (1000 would be enough!) for a person having the passion and ability to think and requiring nothing else but the minor prerequisites for doing it?

    Well, this question was rhetorical. I know that this is not possible, because it is so utterly simple and small and requires only some good will and real wisdom, which is an equally rare natural resource as genuine thinkers. Finland is full of individuals who could pay the needed money from their own pocket, but they will not do it because they are afraid of being regarded as crazy.

    In any case, money does not seem to be a problem as far as theoretical science is concerned. What applies to thinkers need not however apply to experimental scientists - at least as we understand them in our scientific world view. In this belief system the testing of modern theories of physics requires a galaxy-sized particle accelerator, and the superstring hegemony even declares that we must give up the hope of predicting and testing anything. Not all of us agree. Some minds - thrown outside the establishment, of course - are working hard to convey the message that our belief system in this and also many other respects is wrong - pathetically wrong. Sooner or later these heretics will achieve their goal, and some day the era of Big Science will be seen as one of the worst periods of intellectual stagnation that humankind has ever experienced. This might mean that the golden time of experimental physics might be here again and that big science might transform from gigantic projects, with individuals acting like mindless machine parts, into an activity of individuals requiring not much more than intelligence, curiosity, and an open mind.

    A crazy suggestion

    I conclude with a crazy suggestion for scientific decision makers. Why not spend a different weekend - a kind of intellectual carnival during which every belief is challenged? Why not - just during this weekend - carefully listen to what the heretics are saying? Why not be just for once intellectually honest - just as during those golden student days - and instead of routinely emitting the little magic word "crackpot", plunge into a real scientific discussion, listening and developing real counter arguments based on content? Why not - just during this weekend - behave like a decent human being rather than a third rate politician?

    Saturday, August 04, 2007

    Chern-Simons action in hydrodynamical interpretation and Maxwell hydrodynamics as toy model for TGD

    Today Kea told about Terence Tao's posting 2006 ICM: Etienne Ghys, “Knots and dynamics”. The posting tells about really amazing mathematical results related to knots.

    1. Chern-Simons as helicity invariant

    Tao mentions helicity as an invariant of fluid flow. The Chern-Simons action defined by the induced Kähler gauge potential for lightlike 3-surfaces has an interpretation as helicity when the Kähler gauge potential is identified as the fluid velocity. This flow can be continued to the interior of the space-time sheet. Also the dual of the induced Kähler form defines a flow at the light-like partonic surfaces but not in the interior of the space-time sheet. The lines of this flow can be interpreted as magnetic field lines. This flow is incompressible and represents a conserved charge (Kähler magnetic flux). The question is which of these flows should define the number theoretical braids. Perhaps both of them can appear in the definition of the S-matrix and correspond to different kinds of partonic matter (electric/magnetic charges, quarks/leptons?,...). The second kind of matter could not flow in the interior of the space-time sheet. Or could an interpretation in terms of electric-magnetic duality make sense?
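
    For reference, the textbook identity behind this identification (abelian Chern-Simons form, written here in LaTeX and up to an overall normalization):

$$
CS(A)\;=\;\int_M A\wedge dA\;=\;\int_M \epsilon^{ijk}A_i\,\partial_j A_k\,d^3x\;=\;\int_M \vec{A}\cdot\vec{B}\,d^3x,\qquad \vec{B}=\nabla\times\vec{A}.
$$

    With A identified as the fluid velocity u, the last integral becomes the fluid helicity ∫ u·(∇×u) d^3x mentioned by Tao.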

    Helicity is not gauge invariant, and this is as it must be in the TGD framework, since CP_2 symplectic transformations induce a U(1) gauge transformation which deforms the space-time surface and modifies the induced metric as well as the classical electroweak fields defined by the induced spinor connection. Gauge degeneracy is transformed to spin glass degeneracy.

    2. Maxwell hydrodynamics

    In TGD Maxwell's equations are replaced with field equations which express conservation laws and are thus hydrodynamical in character. With this background, the idea that the analogy between gauge theory and hydrodynamics might be applied also in the reverse direction is natural. Hence one might ask what kind of relativistic hydrodynamics results if one assumes that the action principle is Maxwell action for the four-velocity u^α, with a constraint term saying that light velocity is the maximal signal velocity.

    1. For massive particles the length of the four-velocity equals 1: u^α u_α = 1. In the massless case one has u^α u_α = 0. This condition means the addition of the constraint term

      λ(u^α u_α - ε)

      to the Maxwell action. ε=1 (ε=0) holds for massive (massless) flow. In the following the notation of electrodynamics is used to make the comparison with electrodynamics easier.

    2. The constraint term destroys gauge invariance by allowing one to express A^0 in terms of A^i, but in general the constraint is not equivalent to a choice of gauge in electrodynamics, since the solutions to the field equations with the constraint term are not solutions of the field equations without it. One obtains field equations for an effectively massive em field with the Lagrange multiplier λ having an interpretation as a photon mass depending on the space-time point:

      j^α = ∂_β F^(αβ) = λA^α,

      A^α ≡ u^α,

      F^(αβ) = ∂^β A^α - ∂^α A^β.

    3. In the electrodynamic context the natural interpretation would be in terms of spontaneous massivation of the photon, which seems to occur for both values of ε. The analog of the em current, given by λA^α, is in general non-vanishing and conserved. This conservation law is quite a strong additional constraint on the hydrodynamics. What is interesting is that the breaking of gauge invariance does not lead to a loss of charge conservation.

    4. One can solve λ by contracting the equations with A_α to obtain λ = j^α A_α for ε=1. For ε=0 one obtains j^α A_α = 0, stating that the field does not dissipate energy: λ can however be non-vanishing unless the field equations imply j^α=0. One can say that for ε=0 spontaneous massivation can occur. For ε=1 massivation is present from the beginning and the dissipation rate determines the photon mass: a natural interpretation would be in terms of thermal massivation of the photon. Non-tachyonicity fixes the sign of the dissipation term, so that the thermodynamical arrow of time is fixed by causality.

    5. For ε=0 massless plane wave solutions are possible and one has ∂^α ∂_β A^β = λA^α. λ=0 is obtained in the Lorentz gauge, which is consistent with the condition ε=0. Also superpositions of plane waves with the same polarization and direction of propagation are solutions of the field equations: these solutions represent dispersionless, precisely targeted pulses. For superpositions of plane waves with 4-momenta which are not all parallel, λ is non-vanishing, so that the non-linear self interactions due to the constraint can be said to induce massivation. In asymptotic states, for which gauge symmetry is not broken, one expects a decomposition of solutions into regions of space-time carrying this kind of pulses, which brings to mind final states of particle reactions containing free photons with fixed polarizations.

    6. Gradient flows satisfying the conditions A_α = ∂_α Φ and A^α A_α = ε give rise to identically vanishing hydrodynamical gauge fields, and λ=0 holds true. These solutions are vacua, since the energy momentum tensor vanishes identically. There is a huge number of this kind of solutions, and spin glass degeneracy suggests itself. Small deformations of these vacuum flows are expected to give rise to non-vacuum flows.

    7. The counterparts of charged solutions are of special interest. For ε=0 the solution (u^0,u^r) = (Q/r)(1,1) is a solution of the field equations outside the origin and corresponds to the electric field of a point charge Q. In fact, for ε=0 any ansatz (u^0,u^r) = f(r)(1,1) satisfies the field equations for a suitable choice of λ(r), since the ratio of the equations associated with j^0 and j^r gives an equation which is trivially satisfied. For ε=1 the ansatz (u^0,u^r) = (cosh(u),sinh(u)), expressing the solution in terms of the hyperbolic angle u, linearizes the field equation obtained by dividing the equations for j^0 and j^r to eliminate λ. The resulting equation is

      ∂_r^2 u + 2∂_r u/r = 0

      as for the ordinary Coulomb potential, and one obtains (u^0,u^r) = (cosh(u_0+k/r), sinh(u_0+k/r)) with u_0 and k constants (checked symbolically below). The charge of the solution approaches the value Q = sinh(u_0)·k at the limit r→ ∞ and diverges at the limit r→ 0. The charge increases exponentially as a function of 1/r near the origin, rather than logarithmically as in QED, and an interpretation in terms of thermal screening suggests itself. The hyperbolic ansatz might simplify the field equations considerably also in the general case.
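
      A quick symbolic check of this solution (sympy; u_0 and k as free constants):

```python
# Verify that u(r) = u0 + k/r solves the radial Coulomb equation
# d^2u/dr^2 + (2/r) du/dr = 0 quoted above.
import sympy as sp

r, u0, k = sp.symbols('r u0 k', positive=True)
u = u0 + k / r
residual = sp.diff(u, r, 2) + 2 * sp.diff(u, r) / r
print(sp.simplify(residual))   # 0, so the ansatz solves the equation
```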

    3. Similarities with TGD

    There are strong similarities with TGD, which suggest that the proposed model might provide a toy model for the dynamics defined by Kähler action.

    1. Also in TGD the field equations are essentially hydrodynamical equations stating the conservation of various isometry charges. Gauge invariance is broken for the induced Kähler field although the Kähler charge is conserved. There is a huge vacuum degeneracy corresponding to the vanishing of the induced Kähler field, and the interpretation is in terms of spin glass degeneracy.

    2. Also in TGD the dissipation rate vanishes for the known solutions of the field equations, and a possible interpretation is as space-time correlates for asymptotic non-dissipating self organization patterns.

    3. In the TGD framework massless extremals represent the analogs of superpositions of plane waves with fixed polarization and propagation direction, representing targeted and dispersionless propagation of a signal. The gauge currents are light-like and non-vanishing for these solutions. The decomposition of the space-time surface into space-time sheets representing particles is a much more general counterpart for the asymptotic solutions of Maxwell hydrodynamics with vanishing λ.

    4. In the TGD framework one can indeed consider the possibility that the four-velocity assignable to a macroscopic quantum phase is proportional to the Kähler potential. In this kind of situation one could speak of quantal Maxwell hydrodynamics. In this case, however, ε could be a function of position.

    If TGD is taken seriously, these similarities force one to ask whether Maxwell hydrodynamics might be interpreted as a nonlinear variant of real electrodynamics. One must however notice that in TGD the em field is proportional to the induced Kähler form only in special cases and is in general non-vanishing also for vacuum extremals.

    For the construction of extremals of Kähler action see the chapter Basic Extremals of Kähler action of "Classical Physics in Many-Sheeted Space-time".