Sunday, October 15, 2017

From RNA world to RNA-tRNA world to RNA-DNA-tRNA world to DNA-RNA-protein world: how it went?

I have already told earlier about how the transition from RNA world to RNA-tRNA world to DNA-RNA-protein world might have taken place in the TGD Universe. Last night I realized a more detailed mechanism for the last step of the transition, relying on the TGD based general model of bio-catalysis based on heff=n×h phases of ordinary matter at dark magnetic flux tubes. It also became clear that a DNA-RNA-tRNA world very probably preceded the transition to the last world in the sequence. Therefore I glue below the appropriately modified earlier posting.

I encountered a highly interesting work related to the emergence of RNA world: warmly recommended. For a popular article see this.

First some basic terms for the possible reader of the article. There are three key enzymes involved in the process, which is believed to lead to the formation of longer RNA sequences able to replicate.

  1. Ribozyme is a piece of RNA acting as catalyst. In RNA world RNA had to serve also as a catalyst. In DNA world proteins took this task but their production requires DNA and transcription-translation machinery.

  2. RNA ligase promotes the fusion of RNA fragments into a longer one in the presence of ATP, which transforms to AMP and diphosphate and provides metabolic energy presumably going to the fusion. In TGD Universe this would involve the generation of an atom (presumably hydrogen) with a non-standard value of heff=n×h having smaller binding energy scales, so that ATP is needed. These dark bonds would be involved with all bio-catalytic processes.

  3. RNA polymerase promotes the polymerization of RNA from building bricks. It looks to me like a special kind of ligase adding only a single nucleotide to an existing sequence. In TGD Universe heff=n×h atoms would be involved, as would magnetic flux tubes carrying the dark analog of DNA with codons replaced with dark proton triplets.

  4. RNA recombinase promotes the exchange of pieces of the same length between RNA strands. Topologically this corresponds to two reconnections occurring at the points defining the ends of the piece. In TGD Universe these reconnections would occur for magnetic flux tubes containing the dark variant of DNA and would induce the corresponding processes at the level of chemistry.

Self-ligation should take place: RNA strands would serve as ligases for the generation of longer RNA strands. The smallest RNA sequence exhibiting self-ligation activity was found to be a 40-nucleotide RNA, shorter than expected. It had the lowest efficiency but the highest functional flexibility to ligate substrates to itself. R18 - an established RNA polymerase model - had the highest efficiency and highest selectivity.

What I can say about the results is that they give support for the notion of RNA world.

The work is related to the vision about RNA world proposed to precede the DNA-RNA-protein world. Why I found it so interesting is that it relates to one particular TGD inspired glimpse of what happened in primordial biology.

In TGD Universe it is natural to imagine three worlds: RNA world, RNA-tRNA world, and DNA-RNA-protein world. For an early, rather detailed version of the idea about the transition from RNA world to DNA-RNA-protein world, not yet recognizing the RNA-tRNA world as an intermediate step, see this.

  1. RNA world would contain only RNA. Protein enzymes would not be present in RNA world and RNA itself should catalyze the processes needed for polymerization, replication, and recombination of RNA. Ribozymes are the RNA counterparts of enzymes. In the beginning RNA would itself act as a ribozyme catalyzing these processes.

  2. One can also try to imagine an RNA-tRNA world. The predecessors of tRNA molecules, containing just a single amino-acid, could have catalyzed the fusion of an RNA nucleotide to a growing RNA sequence in accordance with the genetic code. Amino-acid sequences would not have been present at this stage since there would be no machinery for their polymerization.

  3. One can consider a transition from this world to a DNA-RNA-tRNA world. This would mean storage of the genetic information in DNA, from which it would have been transcribed by using a polymerase consisting of RNA. This phase would have required the presence of a cell membrane like structure since DNA is stabilized inside membranes or at them. The transition to this world should have involved reverse transcription catalyzed by an RNA based reverse transcriptase. Being a big evolutionary step, this transition should involve a phase transition increasing the value of heff=n× h.

  4. My earlier proposal has been that a transition from RNA world to DNA-RNA-protein world took place. The transition could have also taken place from DNA-RNA-tRNA world to a world containing also amino-acid sequences and have led to a rapid evolution of catalysis based on amino-acid sequences.

    The amino-acid sequences originating from tRNA, which originally catalyzed RNA replication, stole the place of RNA sequences as the end products of RNA replication. The ribosome started to function as a translator of RNA sequences to amino-acid sequences rather than as a replicator of them to RNAs! The roles of protein and RNA changed! Instead of the RNA in tRNA, the amino-acid in tRNA joined to the sequence! The existing machinery started to produce amino-acid sequences!

    Presumably the modification of the ribosome or tRNA involved the addition of protein parts to the ribosome, which led to a quantum critical situation in which the roles of proteins and RNA polymers could change temporarily. When protein production became possible even temporarily, the produced proteins began to modify the ribosome further to become even more favorable for the production of proteins.

    But how to produce the RNA sequences? The RNA replication machinery was stolen in the revolution. DNA had to do that via transcription to mRNA! DNA had to emerge before the revolution or at the same time and make possible the production of RNA via transcription of DNA to mRNA. The most natural option corresponds to "before", that is the DNA-RNA-tRNA world. DNA could have emerged during the RNA-tRNA era together with reverse transcription of RNA to DNA, with RNA sequences defining ribozymes acting as reverse transcriptase. This would have become possible after the emergence of the predecessor of the cell membrane. After that step DNA sequences and amino-acid sequences would have been able to make the revolution together so that RNA as the master of the world was forced to become a mere servant!

    The really science fictive option would be the identification of the reverse transcription as time reversal of transcription. In zero energy ontology (ZEO) this option can be considered at least at the level of dark DNA and RNA providing the template of dynamics for ordinary matter.

How could the copying of an RNA strand to its conjugate strand, catalyzed by the amino-acid of tRNA, have transformed into the translation of RNA to an amino-acid sequence? Something certainly changed.
  1. The change must have occurred most naturally in tRNA or - less plausibly - in the predecessor of the ribosome machinery. A change in the chemical structure of tRNA is not a plausible option. Something more than chemistry is required, and in TGD Universe dark matter localized at magnetic flux tubes is the natural candidate.

  2. Evolution corresponds in TGD Universe to a gradual increase of heff=n× h. A dramatic evolutionary step indeed took place. The increase of the value of heff for some structural element of tRNA could have occurred so that the catalysis for amino-acid sequences instead of that for RNA sequences started to occur.

  3. The general model for bio-catalysis in TGD Universe involves a contraction of magnetic flux tubes by a reduction of heff, bringing together the reacting molecules associated with the flux tubes: this explains the magic looking ability of biomolecules to find each other in the dense molecular soup. The reduction of heff for some dark atom(s) of some reacting molecule(s) to a smaller value liberates temporarily energy allowing to kick the reactants over a potential wall so that the reaction can occur (atomic binding energies scale as 1/heff^2). After that the liberated energy is absorbed and the ordinary atom transforms back to a dark atom.

    In the recent case heff associated with a dark atom (or atoms) of tRNA could have increased so that the binding energy liberated would have increased and allowed to overcome a higher potential wall than before. If the potential wall needed to overcome in the fusion of additional amino-acid to a growing protein is higher than that in the fusion of additional RNA to a growing RNA sequence, this model could work.

  4. The activation energy for the addition of an amino-acid should be larger than that for an RNA nucleotide. A calculated estimate for the activation energy for the addition of an amino-acid is 63.2 eV. An estimate for the activation energy for the addition of an RNA nucleotide in the temperature range 37-13 C is in the range 35.6-70.2 eV. An estimate for the activation energy for the addition of a DNA nucleotide is 58.7 eV. The value in the case of RNA would be considerably smaller than that in the case of amino-acids at physiological temperature. For DNA the activation energy would be somewhat smaller than for amino-acids. This is consistent with the proposed scenario. I am not able to decide how reliable these estimates are.

The natural first guess is that the dark atoms are hydrogen atoms. It is however not at all clear whether "ordinary" hydrogen atoms correspond to heff/h=n=1.
  1. Randell Mills has proposed the notion of hydrino atom to explain anomalous energy production and EUV radiation in the 10-20 nm range taking place in certain electrolytic systems and having no chemical explanation. The proposal of Mills is that the hydrogen atom can make, in the presence of a catalyst, a transition to a lower energy state with a reduced size. I have already earlier considered some TGD inspired models for the hydrino. The resemblance with the claimed cold fusion suggests that the energy production in the two cases might involve the same mechanism.

    I have considered two models for the findings (see this). The first model is a variant of the cold fusion model that might explain the energy production and the observed radiation in the EUV energy range. The second model is a variant of the hydrino atom assuming that the ordinary hydrogen atom corresponds to heff/h=nH>1 and that a catalyst containing hydrogen atoms with a lower value nh<nH could induce a phase transition transforming hydrogen atoms to hydrinos with binding energy spectrum scaled up by the factor (nH/nh)^2 and radii scaled down by (nh/nH)^2. The findings of Mills favour the value nH=6.

  2. Suppose the transition is analogous to photon emission so that it is a Δ J=1 transition of the hydrogen atom. There are two simple options: either the direction of the electron spin changes but the orbital angular momentum remains unaffected, or the orbital angular momentum of the electron changes by Δ L=1 but the spin direction does not change.

    The simplest assumption is that the principal quantum numbers in the initial and final states are ni=1 and nf≥ ni. Assume first that the initial state is (nHi,ni=1) with Li=0 and the final state is (nHf,nf≥ ni).

  3. Consider the energy difference between the initial state with (nHi,ni=1) and the final state with (nHf,nf). The initial binding energy is the ordinary binding energy of the thought-to-be hydrogen atom in the ground state: Ei= Ef(nHf/nHi)^2 ≈ 13.6 eV. Here Ef denotes the final ground state binding energy. The final state binding energy is Ef,nf= Ef/nf^2.

    The liberated energy defining the order of magnitude for the activation energy (thermodynamical quantity) is given by

    Δ E = Ef,nf - Ei = Ef/nf^2 - Ef(nHf/nHi)^2 = Ei[(nHi/nHf)^2 nf^-2 - 1].

    The condition Δ E > 0 gives

    nHi/nHf >nf .

    For nHi/nHf=nf one has Δ E=0: for nf=2 this occurs for instance for (nHi,nHf)∈ {(2,1),(6,3)} and for nf=3 for (nHi,nHf)=(6,2). For nf=2 and nHf=1 the condition Δ E>0 gives nHi > 2.

  4. Consider first ni=nf=1, for which the spin direction of the electron changes if the transition is analogous to photon emission. By putting nf=1 in the above equation one obtains a formula for the transition energy in this case. For instance, (nHi,ni)=(6,1)→ (nHf,nf)=(3,1) would correspond to Δ E=40.8 eV, perhaps assignable to RNA polymerization, and the transition (nHi,ni)=(7,1)→ (nHf,nf)=(3,1) to Δ E= 60.4 eV, perhaps assignable to amino-acid polymerization and DNA polymerization. Note that nH=6 is supported by the findings of Mills.

  5. The table below gives the liberated energies Δ E in the transitions (nHi,ni=1)→ (nHf,nf=2) in some cases (for a small numerical check see the sketch after this list).


    (nHi,ni)   (nHf,nf)   Δ E/eV
    (3,1)      (1,2)      17.0
    (4,1)      (1,2)      40.8
    (4,1)      (2,2)      0.0
    (5,1)      (1,2)      71.4
    (5,1)      (2,2)      7.7
    (6,1)      (1,2)      109.0
    (6,1)      (2,2)      17.0


    The transitions (4,1)→ (1,2) resp. (5,1)→ (1,2) might give rise to the
    activation energies associated with RNA resp. amino-acid polymerization.

  6. If the ordinary hydrogen atom and atoms in general correspond to heff/h=n=1, the liberated energies would be below the ground state energy E0=13.6 eV of the hydrogen atom and considerably below the above estimates. For heavier atoms the binding energy scale would be Z^2-fold and already for carbon with Z=6 a factor 36 higher. It is difficult to obtain Δ E in the scale suggested by the estimates for the activation energies.
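
The numbers above can be reproduced with a few lines of arithmetic. A minimal sketch in Python, assuming only the ordinary ground-state binding energy Ei=13.6 eV and the formula for Δ E derived above (the values printed in the comments are those quoted in the text):

```python
# Liberated energy in the transition (nHi, ni=1) -> (nHf, nf), using
# Delta E = E_i * [ (nHi/nHf)^2 / nf^2 - 1 ] with E_i = 13.6 eV.

E_i = 13.6  # eV, ordinary ground-state binding energy of hydrogen


def liberated_energy(n_Hi, n_Hf, n_f):
    """Energy (eV) liberated in the transition (n_Hi, 1) -> (n_Hf, n_f)."""
    return E_i * ((n_Hi / n_Hf) ** 2 / n_f ** 2 - 1.0)


# Rows of the table above (nf = 2):
for n_Hi, n_Hf in [(3, 1), (4, 1), (4, 2), (5, 1), (5, 2), (6, 1), (6, 2)]:
    print(f"({n_Hi},1) -> ({n_Hf},2): {liberated_energy(n_Hi, n_Hf, 2):6.1f} eV")

# The nf = 1 transitions mentioned in the text:
print(liberated_energy(6, 3, 1))  # 40.8 eV, candidate for RNA polymerization
print(liberated_energy(7, 3, 1))  # ~60.4 eV, candidate for amino-acid/DNA polymerization
```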

One could try to test whether tRNA could be modified to a state in which RNA is translated to RNA sequences rather than to proteins. This would require a reduction of heff=n× h for the dark atom in question.

See the article From RNA world to RNA-tRNA world to RNA-DNA-tRNA world to DNA-RNA-protein world: how it went? or the chapter Evolution in Many-Sheeted Space-Time of "Genes and Memes".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, October 12, 2017

Could the precursors of perfectoids emerge in TGD?

The work of Peter Scholze based on the notion of perfectoid has raised a lot of interest in the community of algebraic geometers. One application of the notion relates to the attempt to generalize algebraic geometry by replacing polynomials with analytic functions satisfying suitable restrictions. Also in TGD this kind of generalization might be needed at the level of M4× CP2, whereas at the level of M8 algebraic geometry might be enough. The notion of perfectoid as an extension of the p-adic numbers Qp allowing all p:th roots of the p-adic prime p is central and provides a powerful technical tool when combined with its dual, which is a function field with characteristic p.

Could perfectoids have a role in TGD? The infinite-dimensionality of a perfectoid is in conflict with the vision about the finiteness of cognition. For other p-adic number fields Qq, q≠ p, the extension containing p:th roots of p would however be finite-dimensional even in the case of a perfectoid. Furthermore, one has an entire hierarchy of almost-perfectoids allowing p^m:th roots of p-adic numbers. The larger the value of m, the larger the number of points in the extension of rationals used, and the larger the number of points in cognitive representations consisting of points with coordinates in the extension of rationals. The emergence of almost-perfectoids could be seen in the adelic physics framework as an outcome of evolution forcing the emergence of increasingly complex extensions of rationals.

See the article Could the precursors of perfectoids emerge in TGD?.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, October 09, 2017

What does cognitive representability really mean?

I had a debate with Santeri Satama about the notion of number, leading to the question of what cognitive representability of a number could mean. This inspired the writing of an article discussing the notion of cognitive representability. Numbers in the extensions of rationals are assumed to be cognitively representable in terms of points common to real and various p-adic space-time sheets (correlates for sensory and cognitive). One allows extensions of p-adics induced by the extension of rationals in question and the hierarchy of adeles defined by them.

One can however argue that algebraic numbers do not allow a finite representation as rational numbers do. A weaker condition is that the coding of the information about the algorithm producing the cognitively representable number contains a finite amount of information although it might take an infinite time to run the algorithm (say containing infinite loops). Furthermore, cognitive representations in the TGD sense are also sensory representations allowing to represent algebraic numbers geometrically (2^(1/2) as the diagonal of the unit square). The Stern-Brocot tree associated with continued fractions indeed allows to identify rationals as finite paths connecting the root of the S-B tree to the rational in question. Algebraic numbers can be identified as infinite periodic paths so that a finite amount of information specifies the path. Transcendental numbers would correspond to infinite non-periodic paths. A very close analogy with chaos theory suggests itself.
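
A minimal sketch of the idea, assuming the standard mediant construction of the Stern-Brocot tree (the function below is purely illustrative): a positive rational is reached from the root 1/1 by a finite sequence of left/right moves, which is the finite path referred to above.

```python
from fractions import Fraction


def stern_brocot_path(x: Fraction) -> str:
    """Return the finite left/right path from the root 1/1 of the
    Stern-Brocot tree to the positive rational x (mediant construction)."""
    lo, hi = Fraction(0, 1), None          # hi = None stands for 1/0 = infinity
    node, path = Fraction(1, 1), ""
    while node != x:
        if x < node:                       # descend into the left subtree
            hi, path = node, path + "L"
        else:                              # descend into the right subtree
            lo, path = node, path + "R"
        if hi is None:                     # mediant of lo = a/b and 1/0 is (a+1)/b
            node = Fraction(lo.numerator + 1, lo.denominator)
        else:
            node = Fraction(lo.numerator + hi.numerator,
                            lo.denominator + hi.denominator)
    return path


print(stern_brocot_path(Fraction(3, 7)))   # "LLRR": a finite path for a rational
# An algebraic number like 2**0.5 would correspond to an infinite periodic path
# in the sense described in the text, so only the period needs to be stored.
```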

See the article What does cognitive representability really mean?

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Friday, October 06, 2017

More about dark nucleosynthesis

In the sequel a more detailed view about dark nucleosynthesis is developed using the information provided by the first book of Krivit. This information allows to make the nuclear string model much more detailed and to connect CF/LENR with the so called X boson anomaly and other nuclear anomalies.

1. Not only sequences of dark protons but also of dark nucleons are involved

Are only dark proton sequences at magnetic flux tubes involved, or can these sequences consist of nuclei so that one would have a nucleus consisting of nuclei? From the first book I learned that the experiments of Urutskoev demonstrate that there are 4 peaks for the production rate of elements as a function of atomic number Z. Furthermore, the amount of mass assignable to the transmuted elements is nearly the mass lost from the cathode. Hence also cathode nuclei should end up at the flux tubes.

  1. Entire target nuclei can become dark in the sense described and end up at the same magnetic flux tubes as the protons coming from the bubbles of the electrolyte, and participate in dark nuclear reactions with the incoming dark nuclei: the dark nuclear energy scale would be much smaller than MeV. For a heavy water electrolyte D must become a dark nucleus: the distance between p and n inside D would be the usual one. A natural expectation is that the flux tubes connect the EZs and the cathode.

    In the transformation to ordinary nuclear matter these nuclei of nuclei would fuse to ordinary nuclei and liberate nuclear energy associated with the formation of ordinary nuclear bonds.

  2. The transformation of protons to neutrons in strong electric fields, observed already by Sternglass in 1951, could be understood as a formation of flux tubes containing dark nuclei and producing neutrons in their decays to ordinary nuclei. The needed voltages are in the kV range, suggesting that the scale of the dark nuclear binding energy is of order keV, implying heff/h=n∼ 2^11 - roughly the ratio mp/me (see the arithmetic sketch after this list).

  3. Remarkably, also in ordinary nuclei the flux tubes connecting nucleons to a nuclear string would be long, much longer than the nucleon Compton length (see this and this). By the ordinary Uncertainty Principle (heff=h) the length of the flux tube to which the binding energy is assigned would correspond to the nuclear binding energy scale of order a few MeV. This would be also the distance between the dark heff=n× h nuclei forming the dark nuclear string! The binding energy would be scaled down by 1/n.

    This suggests that the n→ 1 phase transition does not affect the lengths of the flux tubes but only turns them to loops, and that the distance between nucleons as measured in M4× CP2 is therefore scaled down by 1/n. Coulomb repulsion between protons does not prevent this if the electric flux between protons is channelled along the long flux tubes rather than along the larger space-time sheet, so that the repulsive Coulomb interaction energy is not affected in the phase transition! This line of thought obviously involves the notion of space-time as a 4-surface in a crucial manner.

  4. Dark nuclei could have also ordinary nuclei as building bricks in accordance with the fractality of TGD. Nuclei at dark flux tubes would be ordinary and the flux tube portions - bonds - between them would have large heff and thus have a length considerably longer than in ordinary nuclei. This would give sequences of ordinary nuclei with dark binding energy: a similar situation is actually assumed to hold true for the nucleons of ordinary nuclei connected by analogs of dark mesons with masses in the MeV range (see this).
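
The order-of-magnitude argument of item 2 can be spelled out in a couple of lines; a minimal sketch with illustrative numbers (the binding energy per nucleon and the keV value are assumptions, only the ratio matters):

```python
# Order-of-magnitude check: dark nuclear binding energies in the keV range
# (suggested by the kV voltages) versus the ordinary MeV scale, with binding
# energies scaled down as 1/n, n = heff/h.

E_nuclear = 7e6   # eV, typical ordinary binding energy per nucleon (illustrative)
E_dark = 3e3      # eV, keV scale suggested by the needed voltages (illustrative)

n = E_nuclear / E_dark
print(f"n = E_nuclear/E_dark ~ {n:.0f}")          # a few thousand
print(f"2**11 = {2**11}, mp/me ~ {1836.15:.0f}")  # both of the same order
```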

Remark: In the TGD inspired model for quantum biology dark variants of biologically important ions are assumed to be present. Dark proton sequences having as basic entangled unit 3 protons analogous to a DNA triplet would represent analogs of DNA, RNA, amino-acids and tRNA (see this). The genetic code would be realized already at the level of dark nuclear physics and the bio-chemical realization would represent a kind of shadow dynamics. The number of dark codons coding for a given dark amino-acid would be the same as in the vertebrate genetic code.

2. How are dark nuclei transformed to ordinary nuclei?

What happens in the transformation of dark nuclei to ordinary ones? Nuclear binding energy is liberated, but how does this occur? If gamma rays are generated, one should invent also now a mechanism transforming gamma rays to thermal radiation. The findings of Holmlid provide valuable information here and lead to a detailed qualitative view about the process and also allow to sharpen the model for ordinary nuclei.

  1. Holmlid (see this and this) has reported the rather strange finding that muons (mass 106 MeV), pions (mass 140 MeV) and even kaons (mass 497 MeV) are emitted in the process. This does not fit at all to ordinary nuclear physics with a natural binding energy scale of a few MeV. It could be that a considerable part of the energy is liberated as mesons decaying to lepton pairs (pions also to gamma pairs) but with energies much above the upper bound of about 7 MeV for the range of energies missing from the detected gamma ray spectrum (this is discussed in the first part of the book of Krivit). As if hadronic interactions would enter the game somehow! Even condensed matter physics and nuclear physics at the same coffee table are too much for a mainstream physicist!

  2. What happens when the liberated total binding energy is below the pion mass? There is experimental evidence for what is called the X boson (see this), discussed from the TGD point of view here. In the TGD framework X is identified as a scaled down variant π(113) of the ordinary pion π=π(107). X is predicted to have mass m(π(113))= 2^((107-113)/2) m(π)≈ 16.68 MeV, which conforms with the mass estimate for the X boson. Note that k=113 resp. k=107 corresponds to the nuclear resp. hadronic p-adic length scale. For low mass transmutations the binding energy could be liberated by emission of X bosons and gamma rays.

  3. I have also proposed that the pion and also other neutral pseudo-scalar states could have p-adically scaled variants with masses differing by powers of two. For the pion the scaled variants would have masses 8.5 MeV, m(π(113))= 17 MeV, 34 MeV, 68 MeV, m(π(107))= 136 MeV, ..., and also these could be emitted and decay to lepton pairs or gamma pairs (see this). The emission of scaled pions could be a faster process than the emission of gamma rays and would allow to emit the binding energy with a minimum number of gamma rays (a small numerical sketch follows this list).
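
A minimal numerical sketch of the scaling used above, assuming the p-adic mass scaling by powers of 2^(1/2) and taking m(π(107))=136 MeV as in the text:

```python
# p-adic mass scaling: m(pi(k)) = 2**((107 - k)/2) * m(pi(107)),
# k = 107 the hadronic and k = 113 the nuclear p-adic length scale.

m_pi_107 = 136.0  # MeV, value used for the ordinary neutral pion in the text


def scaled_pion_mass(k):
    return 2 ** ((107 - k) / 2) * m_pi_107


print(scaled_pion_mass(113))  # 17.0 MeV, the X boson candidate pi(113)

# Variants differing by powers of two, as listed above:
for divisor in (16, 8, 4, 2, 1):
    print(m_pi_107 / divisor)  # 8.5, 17.0, 34.0, 68.0, 136.0 MeV
```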

There is indeed evidence for pion like states (for TGD inspired comments see this).
  1. The experimental claim of Tatischeff and Tomasi-Gustafsson is that the pion is accompanied by pion like states organized on a Regge trajectory and having masses 60, 80, 100, 140, 181, 198, 215, 227.5, and 235 MeV.

  2. A further piece of evidence for scaled variants of the pion comes from two articles by Eef van Beveren and George Rupp. The first article is titled First indications of the existence of a 38 MeV light scalar boson. The second article has the title Material evidence of a 38 MeV boson.

The above picture suggests that the pieces of dark nuclear string connecting the nucleons are looped and the nucleons collapse to a nucleus sized region. On the other hand, the emission of mesons suggests that these pieces contract to much shorter pieces with a length of the order of the Compton length of the meson responsible for the binding, and that the binding energy is emitted as a single quantum or very few quanta. Strings cannot however retain their length (albeit becoming looped with ends very near to each other in M4× CP2) and contract at the same time! How could one unify these two conflicting pictures?
  1. To see how TGD could solve the puzzle, consider what elementary particles look like in TGD Universe (see this). Elementary particles are identified as two-sheeted structures consisting of two space-time sheets with Minkowskian signature of the induced metric connected by CP2 sized wormhole contacts with Euclidian signature of the induced metric. One has a pair of wormhole contacts and both of them have two throats analogous to blackhole horizons serving as carriers of elementary particle quantum numbers.

    Wormhole throats correspond to homologically non-trivial 2-surfaces of CP2, being therefore Kähler magnetically charged monopole like entities. A wormhole throat at a given space-time sheet is necessarily connected by a monopole flux tube to another throat, now the throat of the second wormhole contact. Flux tubes must be closed and therefore consist of 2 "long" pieces connecting wormhole throats at different parallel space-time sheets plus 2 wormhole contacts of CP2 size scale connecting these pieces at their ends. The structure resembles an extremely flattened rectangle.

  2. The alert reader can guess the solution of the puzzle now. The looped string corresponds to the string portion at the non-contracted space-time sheet and the contracted string to that at the contracted space-time sheet! The first sheet could have the ordinary value of Planck constant but a larger p-adic length scale of the order of electron's p-adic length scale L(127) (it could correspond to the magnetic body of the ordinary nucleon (see this)) and the second sheet could correspond to the heff=n× h dark variant of the nuclear space-time sheet with n=2^11 so that the size scales are the same.

    The phase transition heff→ h occurs only for the flux tubes of the second space-time sheet, reducing the size of this space-time sheet to that of the nuclear k=137 space-time sheet of size ∼ 10^-14 meters. The portions of the flux tubes at this space-time sheet become short, at most of the order of the nuclear size scale, which roughly corresponds to the pion Compton length. The contraction is accompanied by the emission of the ordinary nuclear binding energy as pions, their scaled variants, and even heavier mesons. This happens if the mass of the dark nucleus is large enough to guarantee that the total binding energy makes the emission possible. The first space-time sheet retains its size and the flux tubes at it retain their length but become loopy since their ends must follow the ends of the shortened flux tubes.

  3. If this picture is correct, most of the energy produced in the process could be lost as mesons, possibly also their scaled variants. One should have some manner to prevent the leakage of this energy from the system in order to make the process effective energy producer.

This is only a rough overall view and it would be unrealistic to regard it as final: one can indeed imagine variations. But even in its recent rough form it seems to be able to explain all the weird looking aspects of CF/LENR/dark nucleosynthesis.

See the chapter Cold fusion again of "Hyper-finite Factors and Dark Matter Hierarchy" or the article Cold fusion, low energy nuclear reactions, or dark nuclear synthesis?

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Comparison of Widom-Larsen model with TGD inspired models of CF/LENR or whatever it is

I cannot avoid the temptation to compare WL to my own dilettante models, for which also WL has served as an inspiration. I have two models explaining these phenomena in my own TGD Universe. Both models rely on the hierarchy of Planck constants heff=n× h (see this and this) explaining dark matter as ordinary matter in heff=n× h phases emerging at quantum criticality. heff implies scaled up Compton lengths and other quantal lengths, making possible quantum coherence in longer scales than usually.

The hierarchy of Planck constants heff=n× h has now rather strong theoretical basis and reduces to number theory (see this). Quantum criticality would be essential for the phenomenon and could explain the critical doping fraction for cathode by D nuclei. Quantum criticality could help to explain the difficulties to replicate the effect.

1. Simple modification of WL does not work

The first model is a modification of WL and relies on dark variant of weak interactions. In this case LENR would be appropriate term.

  1. Concerning the rate of the weak process e+p→ n+ν the situation changes if heff is large enough and rather large values are indeed predicted. heff could be large also for weak gauge bosons in the situation considered. Below their Compton length weak bosons are effectively massless and this scale would scale up by factor n=heff/h to almost atomic scale. This would make weak interactions as strong as electromagnetic interactions and long ranged below the Compton length and the transformation of proton to neutron would be a fast process. After that a nuclear reaction sequence initiated by neutron would take place as in WL. There is no need to assume that neutrons are ultraslow but electron mass remains the problem. Note that also proton mass could be higher than normal perhaps due to Coulomb interactions.

  2. As such this model does not solve the problem related to the too small electron mass. Nor does it solve the problem posed by gamma ray production.

2. Dark nucleosynthesis

Also second TGD inspired model involves the heff hierarchy. Now LENR is not an appropriate term: the most interesting things would occur at the level of dark nuclear physics, which is now a key part of TGD inspired quantum biology.

  1. One piece of inspiration comes from the exclusion zones (EZs) of Pollack (see this and this), which are negatively charged regions (see this, this, and this).

    Also the work of the group of Prof. Holmlid (see this and this) not yet included in the book of Krivit was of great help. TGD proposal (see this and this) is that protons causing the ionization go to magnetic flux tubes having interpretation in terms of space-time topology in TGD Universe. At flux tubes they have heff=n× h and form dark variants of nuclear strings, which are basic structures also for ordinary nuclei but would have almost atomic size scale now.

  2. The sequences of dark protons at flux tubes would give rise to dark counterparts of ordinary nuclei, proposed to be also nuclear strings but with a dark nuclear binding energy whose scale is measured using as natural unit MeV/n, n=heff/h, rather than MeV. The most plausible interpretation is that the field body/magnetic body of the nucleus has heff= n× h and is scaled up in size. n=2^11 is favoured by the fact that from Holmlid's experiments the distance between dark protons should be about the electron Compton length.

    Besides protons also deuterons and even heavier nuclei can end up at the magnetic flux tubes. They would however preserve their size and only the distances between them would be scaled up to about the electron Compton length, on the basis of the data provided by Holmlid's experiments (see this and this).

    The reduced binding energy scale could solve the problems caused by the absence of gamma rays: instead of gamma rays one would have much less energetic photons, say X rays assignable to n=2^11 ≈ mp/me. For infrared radiation the energy of photons would be about 1 eV and the nuclear energy scale would be reduced by a factor of about 10^-6-10^-7: one cannot exclude this option either. In fact, several options can be imagined since an entire spectrum of heff is predicted. This prediction is testable (a short numerical illustration follows this list).

    Large heff would also induce quantum coherence in a scale between the electron Compton length and the atomic size scale.

  3. The simplest possibility is that the protons are just added to the growing nuclear string. In each addition one has (A,Z)→ (A+1,Z+1) . This is exactly what happens in the mechanism proposed by Widom and Larsen for the simplest reaction sequences already explaining reasonably well the spectrum of end products.

    In WL the addition of a proton is a four-step process. First e+p→ n+ν occurs at the surface of the cathode. This requires a large electron mass renormalization and fine tuning of the electron mass to be very nearly equal to but higher than the n-p mass difference.

    There is no need for these questionable assumptions of WL in TGD. Even the assumption that weak bosons correspond to large heff phase might not be needed but cannot be excluded with further data. The implication would be that the dark proton sequences decay rather rapidly to beta stable nuclei if dark variant of p→ n is possible.

  4. EZs and accompanying flux tubes could be created also in the electrolyte: perhaps in the region near the cathode, where bubbles are formed. For the flux tubes leading from the system to the external world most of the fusion products as well as the liberated nuclear energy would be lost. This could partially explain the poor replicability of the claims about energy production. Some flux tubes could however end at the surface of the catalyst under some conditions. Even in this case the particles emitted in the transformation to ordinary nuclei could be such that they leak out of the system, and Holmlid's findings indeed support this possibility.

    If there are negatively charged surfaces present, the flux tubes can end at them since the positively charged dark nuclei at the flux tubes, and therefore the flux tubes themselves, would be attracted by these surfaces. The most obvious candidate is the catalyst surface, to which electronic charge waves were assigned by WL. One can wonder whether already Tesla observed in his experiments the leakage of dark matter to various surfaces of the laboratory building. In the collision with the catalyst surface dark nuclei would transform to ordinary nuclei releasing all the ordinary nuclear binding energy. This could create the reported craters at the surface of the target and cause heating. One cannot of course exclude that nuclear reactions take place between the reaction products and target nuclei. It is quite possible that most dark nuclei leave the system.

    It was in fact Larsen who realized that there are electronic charge waves propagating along the surface of some catalysts, and for good catalysts such as gold they are especially strong. This suggests that electronic charge waves play a key role in the process. The analog of the WL proposal would be that due to the positive electromagnetic interaction energy the dark protons of dark nuclei could have a rest mass higher than that of the neutron (just as in ordinary nuclei) and the reaction e+p→ n+ν would become possible.

  5. Spontaneous beta decays of protons could take place inside dark nuclei just as they occur inside ordinary nuclei. If the weak interactions are as strong as electromagnetic interactions, dark nuclei could rapidly transform to beta stable nuclei containing neutrons: this is also a testable prediction. Also dark strong interactions would proceed rather fast and the dark nuclei at magnetic flux tubes could be stable in the final state. If dark stability means the same as ordinary stability, then also the isotope shifted nuclei would be stable. There is evidence that this is the case.
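
The alternative energy scales mentioned in item 2 can be illustrated with a few lines; a minimal sketch assuming simply E_dark = E_nuclear/n with an illustrative ordinary nuclear scale:

```python
# Photon energy scale from the reduced dark binding energy: E_dark ~ E_nuclear/n.
# n = 2**11 ~ mp/me gives the keV (X ray) range; n ~ 10**6 - 10**7 gives ~eV photons.

E_nuclear = 7e6  # eV, ordinary nuclear binding energy scale (illustrative)

for n in (2**11, 1e6, 1e7):
    print(f"n = {n:>10.0f}: E_dark ~ {E_nuclear / n:10.3f} eV")
```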

Neither "CF" nor "LENR" is appropriate term for TGD inspired option. One would not have ordinary nuclear reactions: nuclei would be created as dark proton sequences and the nuclear physics involved is in considerably smaller energy scale than usually. This mechanism could allow at least the generation of nuclei heavier than Fe not possible inside stars and supernova explosions would not be needed to achieve this. The observation that transmuted nuclei are observed in four bands for nuclear charge Z irrespective of the catalyst used suggest that catalyst itself does not determined the outcome.

One can of course wonder whether even "transmutation" is an appropriate term now. Dark nucleosynthesis, which could in fact be the mechanism of also ordinary nucleosynthesis outside stellar interiors explaining how elements heavier than iron are produced, might be a more appropriate term.

See the chapter Cold fusion again of "Hyper-finite Factors and Dark Matter Hierarchy" or the article Cold fusion, low energy nuclear reactions, or dark nuclear synthesis?

For a summary of earlier postings see Latest progress in TGD.

Three books about cold fusion/LENR

Steven Krivit has written three books or one book in three parts - as you wish - about cold fusion (shortly CF in the sequel) - or low energy nuclear reactions (LENR) - which is the prevailing term nowadays and preferred by Krivit. The term "cold fusion" can be defended only on historical grounds: the process cannot be cold fusion. LENR relies on the Widom-Larsen model (WL) trying to explain the observations using only the existing nuclear and weak interaction physics. Whether LENR is here to stay is still an open question. TGD suggests that even this interpretation is not appropriate: the nuclear physics involved would be dark and associated with heff=n× h phases of ordinary matter having identification as dark matter. Even the term "nuclear transmutation" would be challenged in the TGD framework and "dark nuclear synthesis" looks like a more appropriate term.

The books were a very pleasant surprise for many reasons, and I have been able to develop my own earlier overall view by adding important details and missing pieces and by coming to understand the relationship to the Widom-Larsen model (WL).

1. What are the books about?

There are three books.

  1. "Hacking the atom: Explorations in Nuclear Research, vol I" considers the developments between 1990-2006. The first key theme is the tension between two competing interpretations. On one hand, the interpretation as CF involving necessarily new physics besides ordinary nuclear fusion and plagued by a direct contradiction with the expected signatures of fusion processes, in particular those of D+D→ 4He. On the other hand, the interpretation as LENR in the framework of WL in which no new physics is assumed and neutrons and weak interactions are in a key role.


    The second key theme is the tension between two competing research strategies.

    1. The first strategy tried to demonstrate convincingly that heat is produced in the process - commercial applications were the basic goal. This led to many premature declarations about the solution of energy problems within a few years and provided excellent weapons for the academic world opposing cold fusion on the basis of textbook wisdom.

    2. The second strategy studied the reaction products and demonstrated convincingly that nuclear transmutations (isotopic shifts) took place. This aspect did not receive attention in public, and the attempts to ridicule the field have directed attention to the first approach and to the use of the term "cold fusion".

    According to Krivit, the CF era ended around 2006, when Widom and Larsen proposed their model in which LENR would be the mechanism (see this). The Widom-Larsen model (WL) can however be criticized for some unnatural looking assumptions: the electron is required to have a renormalized mass considerably higher than the real mass; the neutrons initiating nuclear reactions are assumed to have ultralow energies below the thermal energy of the target nuclei. This requires the electron mass to be larger than but extremely near to the neutron-proton mass difference (see this, this, this, and this). The gamma rays produced in the process are assumed to transform to infrared radiation.

    To my view, WL is not the end of the story. New physics is required. For instance, the work of professor Holmlid and his team (see this and this) has provided fascinating new insights into what might be the mechanism of what has been called nuclear transmutations.

  2. "Fusion Fiasco: Explorations in Nuclear Research, vol II" discusses the developments during 1989 when cold fusion was discovered by Fleischmann and Pons (see this) and interpreted as CF. It soon turned out that the interpretation has deep problems and CF got the label of pseudoscience.

  3. "Lost History: Explorations in Nuclear Research, vol III" tells about a surprisingly similar sequence of discoveries, which has been cleaned away from the history books of science because it did not fit with the emerging view about nuclear physics and condensed matter physics as completely separate disciplines. Although I had seen some remarks about this era, I had not become aware of what really happened. It seems that discoveries can be accepted only when the time is mature for them, and it is far from clear whether the time is ripe even now.

What I say in the sequel necessarily reflects my limitations as a dilettante in the field of LENR/CF. My interest in the topic has lasted for about two decades and comes from different sources: LENR/CF is an attractive application for the unification of fundamental interactions that I have developed for four decades now. This unification predicts a lot of new physics - not only at the Planck length scale but in all length scales - and it is of course fascinating to try to understand LENR/CF in this framework.

For instance, while reading the book, I realized that my own references to the literature have been somewhat random and not always appropriate. I do not have any systematic overall view about what has been done in the field: here the book does a wonderful service. It was a real surprise to find that the first evidence for transmutation/isotope shifts emerged already about a century ago and also how soon isotope shifts were re-discovered after the Pons-Fleischmann discovery. The insistence on the D+D→ 4He fusion model remains for an outsider as mysterious as the refusal of mainstream nuclear physicists to consider the possibility of new nuclear physics. One new valuable bit of information was the evidence that it is the cathode material that transforms to the isotope shifted nuclei: this helped to develop my own model in more detail.

Remark: A comment concerning the terminology. I agree with the author that cold fusion is not a precise or even correct term. I have myself taken CF as nothing more than a letter sequence and defended this practice to myself as a historical convention. My conviction is that the phenomenon in question is not nuclear fusion, but I am not at all convinced that it is LENR either. Dark nucleosynthesis is my own proposal.

What did I learn from the books?

Needless to say, the books are extremely interesting, for both the layman and the scientist - say a physicist or chemist. The books provide a very thorough view about the history of the subject. There is also an extensive list of references to the literature. Since I am not an experimentalist and feel myself a dilettante in this field as a theoretician, I am unable to check the correctness and reliability of the data presented. In any case, the overall view is consistent with what I have learned about the situation during the years. My opinion about WL is however different.

I have been working with ideas related to CF/LENR (or nuclear transmutations), but found that the books provided also completely new information, and I became aware of some new critical points.

I have had a rather imbalanced view about transmutations/isotopic shifts and it was a surprise to see that they were discovered already in 1989 when Fleischmann and Pons published their work. Even more, the premature discovery of transmutations a century ago (1910-1930), interpreted by Darwin as a collective effect, was new to me. Articles about transmutations were published in prestigious journals like Nature and Naturwissenschaften. The written history is however the history of winners, and all traces of this episode disappeared from the history books of physics after the establishment of the standard model of nuclear physics assuming that nuclear physics and condensed matter physics are totally isolated disciplines. The developments after the establishment of the standard model relying on the GUT paradigm look to me surprisingly similar.

Sternglass - still a graduate student - wrote around 1947 to Einstein about his preliminary ideas concerning the possibility to transform protons to neutrons in strong electric fields. It came as a surprise to Sternglass that Einstein supported his ideas. I must say that this increased my respect for Einstein even further. Einstein's physical intuition was marvellous. In 1951 Sternglass found that at voltages in the kV range protons could be transformed to neutrons with an unexpectedly high rate. This is strange since the process is kinematically impossible for free protons: it can however be seen as support for the WL model.

Also scientists are humans with their human weaknesses and strengths and the history of CF/LENR is full of examples of both light and dark sides of human nature. Researchers are fighting for funding and the successful production of energy was also the dream of many people involved. There were also people, who saw CF/LENR as a quick manner to become millionaire. Getting a glimpse about this dark side was rewarding. The author knows most of the influential people, who have worked in the field and this gives special authenticity to the books.

It was a great service for the reader that the basic view about what happened was stated clearly in the introduction. I noticed also that with some background one can pick up any section and start to read: this is a service for a reader like me. I would have perhaps divided the material into separate parts, but probably the author's less bureaucratic choice leaving room for surprise is better after all.

Who should read these books? The books would be a treasure for any physicist ready to challenge the prevailing prejudices and learn about what science is as seen from the kitchen side. Probably this period will be seen in the future as very much analogous to the period leading to the birth of atomic physics and quantum theory. Also a layman could enjoy reading the books; especially the stories about the people involved - both scientists and those funding the research and academic power holders - are fascinating. The history of cold fusion is a drama which one can see as a fight between Good and Evil, eventually realizing that also Good can divide into Good and Evil. This story teaches a lot about the role of egos in all branches of science and in all human activities. Highly rationally behaving science professionals can suddenly start to behave completely irrationally when their egos feel threatened.

My hope is that the books could wake up the mainstream colleague to finally realize that CF/LENR - or whatever you wish to call it - is not pseudoscience. Most workers in the field are highly competent, intellectually honest, and have had so deep a passion for understanding Nature that they have been ready to suffer all the humiliations that the academic hegemony can offer for dissidents. The results about nuclear transmutations are genuine and pose a strong challenge for the existing physics, and in my opinion force us to give up the naive reductionistic paradigm. People building unified theories of physics should be keenly aware of these phenomena challenging the reductionistic paradigm even at the level of nuclear and condensed matter physics.

2. The problems of WL

For me the first book, representing the state of CF/LENR as it was around 2004, was the most interesting. In his first book Krivit sees the 1990-2004 period as a gradual transition from the cold fusion paradigm to the realization that nuclear transmutations occur and the fusion model does not explain this process.

The basic assumption of the simplest fusion model was that the fusion D+D → 4He explains the production of heat. This excluded the possibility that the phenomenon could take place also in light water with deuterium replaced with hydrogen. It however turned out that also ordinary water allows the process. The basic difficulty is of course the Coulomb wall, but the model has also difficulties with the reaction signatures, and the production rate of 4He is too low to explain the heat production. Furthermore, gamma rays accompanying 4He production were not observed. The occurrence of transmutations is a further problem. Production of Li was observed already in 1989, and later the Russian trio Kucherov, Savvatimova, and Karabut detected tritium, 4He, and heavy elements. They also observed modifications at the surface of the cathode down to a depth of 0.1-1 micrometers.

Krivit sees LENR as a more realistic approach to the phenomena involved. In LENR the Widom-Larsen model (WL) is the starting point. This would involve no new nuclear physics. I also see WL as a natural starting point but I am skeptical about understanding CF/LENR in terms of existing physics. Some new physics seems to be required and I have been doing intense propaganda for a particular kind of new physics (see this).

WL assumes that the weak process proton (p) → neutron (n), occurring via e+ p→ n+ν (e denotes electron and ν neutrino), is the key step in cold fusion. After this step the neutron finds its way to the nucleus easily and the process continues in the conventional sense as an analog of the r-process assumed to give rise to elements heavier than iron in supernova explosions, and leads to the observed nuclear transmutations. Essentially one proton is added in each step, decomposing to four sub-steps involving the beta decay n→ p and its reversal.
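
To make the stepwise picture concrete, here is a minimal bookkeeping sketch, assuming only the net effect of one WL cycle as described above (the starting isotope is purely illustrative):

```python
# Widom-Larsen-type bookkeeping: e + p -> n + nu produces a slow neutron,
# neutron capture gives (A, Z) -> (A+1, Z), and beta decay of the captured
# neutron gives (A+1, Z) -> (A+1, Z+1).  Net effect per cycle: (A, Z) -> (A+1, Z+1).

def wl_cycle(A, Z):
    A, Z = A + 1, Z      # neutron capture
    A, Z = A, Z + 1      # subsequent beta decay n -> p inside the nucleus
    return A, Z

nucleus = (58, 28)       # hypothetical starting isotope (illustrative only)
for step in range(4):
    nucleus = wl_cycle(*nucleus)
    print(f"after cycle {step + 1}: (A, Z) = {nucleus}")
```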

There are however problems.

  1. Already the observations of Sternglass suggest that e+ p→ n+ν occurs. e+ p→ n+ν is however kinematically impossible for free particles. e should have a considerably higher effective mass, perhaps caused by collective many-body effects. e+ p→ n+ν could occur in the negatively charged surface layer of the cathode provided the sum of the rest masses of e and p is larger than that of n. This requires a rather large renormalization of the electron mass, claimed to be due to the presence of strong electric fields. Whether there really exists a mechanism increasing the effective mass of the electron is far from obvious, and strong nuclear electric fields are proposed to cause this.

  2. The second problematic aspect of WL is the extreme slowness of the rate of the beta decay transforming proton to neutron. For ultraslow neutrons the cross section for the absorption of a neutron to a nucleus increases as 1/vrel, vrel the relative velocity, and in principle could compensate the extreme slowness of the weak decays. The proposal is that the neutrons are ultraslow. This is satisfied if the sum of the rest masses of e and p is only slightly larger than the neutron mass. One would have mE ≈ mn-mp+Δ En, where Δ En is the kinetic energy of the neutron. To obtain the correct order of magnitude for the rate of neutron absorptions, Δ En should indeed be extremely small. One should have Δ E=10^-12 eV so that Δ E/mp ≈ 10^-21! This requires fine tuning and it is difficult to believe that the electric field causing the renormalization could be so precisely fine-tuned (the orders of magnitude are checked in the short sketch after this list).

    Δ E corresponds to an extremely low temperature of about 10^-8 K; it is hard to imagine this at room temperature. The thermal energy of the target nucleus at room temperature is of the order 10^-11 A mp, A the mass number. Hence it would seem that the thermal motion of the target nuclei masks the effect.

  3. One should also understand why the gamma rays emitted in the ordinary nuclear interactions after neutron absorption are not detected. The proposal is that gamma rays somehow transform to infrared photons, which would cause the heating. This would be a collective effect involving quantum entanglement of electrons. One might hope that by quantum coherence the neutron absorption rate could be proportional to N^2 instead of N, where N is the number of nuclei involved. This looks logical but I am not convinced about the physical realizability of this proposal.
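
The orders of magnitude in the second objection can be checked with standard constants; a minimal sketch:

```python
# Check of the orders of magnitude quoted above.

k_B = 8.617e-5           # eV/K, Boltzmann constant
m_p = 938.272e6          # eV, proton mass
dm_np = 1.293e6          # eV, neutron-proton mass difference

dE = 1e-12               # eV, required neutron kinetic energy
print(dE / m_p)          # ~1e-21, i.e. Delta E / mp ~ 10^-21
print(dE / k_B)          # ~1e-8 K, effective neutron "temperature"
print(dm_np + dE)        # ~1.3 MeV, required effective electron mass mE
print(1.5 * k_B * 300)   # ~0.04 eV, thermal energy at room temperature
print(1.5 * k_B * 300 / m_p)  # ~4e-11, of order 10^-11 in units of mp
```
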
In my opinion these objections are really serious. In the following two posts I will discuss in detail the TGD based view about these phenomena interpreted in terms of dark nucleosynthesis.

See the chapter Cold fusion again of "Hyper-finite Factors and Dark Matter Hierarchy" or the article Cold fusion, low energy nuclear reactions, or dark nuclear synthesis?

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, September 20, 2017

From RNA world to RNA-tRNA world to DNA-RNA-protein world: how it went?

I encountered a highly interesting work related to the emergence of RNA world: warmly recommended. For a popular article see this.

First some basic terms for the possible reader of the article. There are three key enzymes involved in the process, which is believed to lead to the formation of longer RNA sequences able to replicate.

  1. Ribozyme is a piece of RNA acting as catalyst. In RNA world RNA had to serve also as a catalyst. In DNA world proteins took this task but their production requires DNA and transcription-translation machinery.

  2. RNA ligase promotes the fusion of RNA fragments into a longer one in the presence of ATP, which transforms to AMP and diphosphate and provides metabolic energy presumably going to the fusion. In TGD Universe this would involve the generation of an atom (presumably hydrogen) with a non-standard value of heff=n×h having smaller binding energy scales, so that ATP is needed. These dark bonds would be involved with all bio-catalytic processes.

  3. RNA polymerase promotes the polymerization of RNA from building bricks. It looks to me like a special kind of ligase adding only a single nucleotide to an existing sequence. In TGD Universe heff=n×h atoms would be involved, as would magnetic flux tubes carrying the dark analog of DNA with codons replaced with dark proton triplets.

  4. RNA recombinase promotes the exchange of pieces of the same length between RNA strands. Topologically this corresponds to two reconnections occurring at the points defining the ends of the piece. In TGD Universe these reconnections would occur for magnetic flux tubes containing the dark variant of DNA and would induce the corresponding processes at the level of chemistry.

Self-ligation should take place: RNA strands would serve as ligases for the generation of longer RNA strands. The smallest RNA sequence exhibiting self-ligation activity was found to be a 40-nucleotide RNA, shorter than expected. It had the lowest efficiency but the highest functional flexibility to ligate substrates to itself. R18 - an established RNA polymerase model - had the highest efficiency and highest selectivity.

What I can say about the results is that they give support for the notion of RNA world.

The work is related to the vision about RNA world proposed to precede the DNA-RNA-protein world. Why I found it so interesting is that it relates to one particular TGD inspired glimpse of what happened in primordial biology.

In TGD Universe it is natural to imagine three worlds: RNA world, RNA-tRNA world, and DNA-RNA-protein world. For an early, rather detailed version of the idea about the transition from RNA world to DNA-RNA-protein world, not yet recognizing the RNA-tRNA world as an intermediate step, see this.

  1. RNA world would contain only RNA. Protein enzymes would not be present in RNA world and RNA itself should catalyze the processes needed for polymerization, replication, and recombination of RNA. Ribozymes are the RNA counterparts of enzymes. In the beginning RNA would itself act as a ribozyme catalyzing these processes.

  2. One can also try to imagine an RNA-tRNA world. The predecessors of tRNA molecules, containing just a single amino-acid, could have catalyzed the fusion of an RNA nucleotide to a growing RNA sequence in accordance with the genetic code. Amino-acid sequences would not have been present at this stage since there would be no machinery for their polymerization.

  3. My own proposal was that the transition from RNA world to DNA-RNA-protein world took place as a revolution. The amino-acids of tRNA catalyzing RNA replication stole the place of RNA sequences as the end products of RNA replication. The ribosome started to function as a translator of RNA sequences to amino-acid sequences rather than as a replicator of them to RNAs! The roles of protein and RNA changed! Instead of the RNA in tRNA, the amino-acid in tRNA joined to the sequence! The existing machinery started to produce amino-acid sequences! The servant became the master!

    Presumably the modification of the ribosome involved the addition of protein parts to the ribosome, which led to a quantum critical situation in which the roles of proteins and RNA polymers could change temporarily. When protein production became possible even temporarily, the produced proteins began to modify the ribosome further to become even more favorable for the production of proteins.

    But how to produce the RNA sequences? The RNA replication machinery was stolen in the revolution. DNA had to do that via transcription to mRNA! DNA had to emerge before the revolution or at the same time and make possible the production of RNA via transcription of DNA to mRNA. DNA could have emerged during the RNA-tRNA era together with reverse transcription of RNA to DNA, with RNA sequences defining ribozymes acting as reverse transcriptase. This would have become possible after the emergence of the predecessor of the cell membrane. After that step DNA sequences and amino-acid sequences would have been able to make the revolution together so that RNA as the master of the world was forced to become a mere servant!

    The really science fictive option would be identification of the reverse transcription as time reversal of transcription. In zero energy ontology (ZEO) this option can be considered at least at the level of dark DNA and RNA providing the template of dynamics for ordinary matter.

How could this be tested? Could for instance the ribosome be modified to a state in which RNA is translated to RNA sequences rather than to proteins?

See the chapter Evolution in Many-Sheeted Space-Time of "Genes and Memes".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Tuesday, September 19, 2017

Super-number fields: does physics emerge from the notion of number?

A proposal that all physics emerges from the notion of number field is made. The first guess for the number field in question would be complexified octonions, for which the inverse exists except at the complexified light-cone boundary: this has an interpretation in terms of propagation of signals with light-velocity in the 8-D sense. The emergence of fermions however requires super-octonions as a super variant of the number field. Rather surprisingly, it turns out that super-number theory makes perfect sense. One can define the inverse of a super-number, the notion of primeness makes sense, and one can construct explicitly the super-primes associated with ordinary primes. The prediction of a new piece of number theory can be argued to be a strong support for the integrity of TGD.

See the article Super-number fields: does physics emerge from the notion of number?.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Saturday, September 16, 2017

Is the quantum leakage between different signatures of the real sectors of the complexified M8 possible?

Complexified octonions have led to a dramatic progress in the understanding of TGD. One cannot however avoid a radical question about the fundamentals.

  1. The basic structure at the M8 side consists of complexified octonions. The metric tensor for the complexified inner product for complexified octonions (no complex conjugation with respect to i for the vectors in the inner product) can be taken to have any signature (ε1,...,ε8), εi=+/- 1. By allowing some coordinates to be real and some coordinates imaginary one obtains effectively any signature from, say, a purely Euclidian signature. What matters is that the restriction of the complexified metric to the allowed sub-space is real. These sub-spaces are linear Lagrangian manifolds for the Kähler form representing the commuting imaginary unit i. There is an analogy with wave mechanics. Why should M8 - actually M4 - be such a special real section? Why not some other signature?

  2. The first observation is that the CP2 point labelling the tangent space is independent of the signature so that the problem reduces to the question why M4 rather than some other signature (ε1,..,ε4). The intersection of real subspaces with different signatures and the same origin (t,r)=0 is the common sub-space with the same signature (the rule is spelled out in the short sketch after this list). For instance, for (1,-1,-1,-1) and (-1,-1,-1,-1) this subspace is the 3-D t=0 plane sharing with CD the lower tip of CD. For (-1,1,1,1) and (1,1,1,1) the situation is the same. For (1,-1,-1,-1) and (1,1,-1,-1) z=0 holds in the intersection, which shares with the lower boundary of CD the boundary of a 3-D light-cone. One obtains in a similar manner boundaries of 2-D and 1-D light-cones as intersections.

  3. What about CDs in various signatures? For a fully Euclidian signature the counterparts of the interiors of CDs reduce to 4-D intervals t∈ [0,T] and their exteriors, and thus the space-time varieties representing incoming particles reduce to pairs of points (t,r)=(0,0) and (t,r)= (T,0): it does not make sense to speak about external particles. For other signatures the external particles correspond to 4-D surfaces and the dynamics makes sense. The CDs associated with the real sectors intersect at the boundaries of lower-dimensional CDs: these lower-dimensional boundaries are analogous to subspaces of Big Bang (BB) and Big Crunch (BC).

  4. I have not found any good argument for selecting M4=M1,3 as a unique signature. Should one allow also other real sections? Could the quantum numbers be transferred between sectors of different signature at BB and BC? The counterpart of the Lorentz group acting as a symmetry group depends on the signature and would change in the transfer. Conservation laws should be satisfied in this kind of process if it is possible. For instance, in the leakage from M4=M1,3 to Mi,j, say M2,2, the intersection would be M1,2. Momentum components for which the signature changes should vanish if this is true. The angular momentum quantization axis normal to the plane is defined by two axes with the same signature. If the signatures of these axes are preserved, the angular momentum projection in this direction should be conserved. The amplitude for the transfer would involve an integral over either boundary component of the lower-dimensional CD.

    Final question: Could the leakage between signatures be detected as disappearance of matter for CDs in elementary particle scales or lab scales?
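
The intersection rule used in these examples is simple enough to state as code; a minimal sketch assuming diagonal metrics, so that two real sections share exactly the coordinate directions on which their signs agree:

```python
# Intersection of two real sections of complexified M8 with diagonal metrics:
# the common subspace consists of the coordinate directions on which the two
# sign choices agree.

def intersection_signature(sig1, sig2):
    return tuple(s1 for s1, s2 in zip(sig1, sig2) if s1 == s2)

# (1,-1,-1,-1) and (-1,-1,-1,-1): 3-D Euclidian t=0 plane
print(intersection_signature((1, -1, -1, -1), (-1, -1, -1, -1)))  # (-1, -1, -1)

# (1,-1,-1,-1) and (1,1,-1,-1): 3-D Minkowskian section, whose light-cone
# boundary is the one referred to in the text
print(intersection_signature((1, -1, -1, -1), (1, 1, -1, -1)))    # (1, -1, -1)
```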

See the article Does M8-H duality reduce classical TGD to octonionic algebraic geometry?: part II.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.