To do, soon

… these are some notes to self; the future depends, as usual, on future inputs to my brain.

1. It seems possible to use the chemical concrete machine formalism for building some very concrete “lifeforms”, i.e. families of graphs which, under the usual statistical interactions with other molecules in the “reactor”, satisfy some definition of life. Far-fetched? Not really: we already see hints that combinators are multipliers (thus reproduction can be achieved with zippers and combinators). A strange way to see what a purpose of logic in biology could be.

2. There is a hidden symmetry in the moves of the chemical concrete machine which indicates that there is no need for the termination gate (i.e. there is no garbage); instead, this symmetry (inspired by the duality between multipliers and co-multipliers) puts the idea of recursion in a new light.

3. It seems easy to reformulate the cartesian method, as described here, into a sort of its opposite. Of course, in such a way that it makes sense: not necessarily as a scientific method, but as a method of creation, maybe relevant for understanding how life proceeds.

4. I have large quantities of mathematical facts and research which are on hold, not reported here, or reported only in a very indirect way.

There is a partly interesting, partly boring period coming, I feel it. On one side, I am working on putting some of the work reported in this open notebook into article form, which is fun; but on the other side I ask myself: why? Is there any need, other than organizing things a bit? Is it useful? Is this a satisfying method of communication?

But is the blog a better method? Other? Suggestions?


“Visual awareness” by Koenderink

What follows is an excerpt from the ebook Visual awareness by Jan Koenderink. The book is part of a collection published by The Clootcrans Press.

What does it mean to be “visually aware”? One thing, due to Franz Brentano (1838-1917), is that all awareness is awareness of something. One says that awareness is intentional. This does not mean that the something exists otherwise than in awareness. For instance, you are visually aware in your dreams, when you hallucinate a golden mountain, remember previous visual awareness, or have pre-visions. However, the case that you are visually aware of the scene in front of you is fairly generic.

The mainstream account of what happens in such a generic case is this: the scene in front of you really exists (as a physical object) even in the absence of awareness. Moreover, it causes your awareness. In this (currently dominant) view the awareness is a visual representation of the scene in front of you. To the degree that this representation happens to be isomorphic with the scene in front of you the awareness is veridical. The goal of visual awareness is to present you with veridical representations. Biological evolution optimizes veridicality, because veridicality implies fitness.  Human visual awareness is generally close to veridical. Animals (perhaps with exception of the higher primates) do not approach this level, as shown by ethological studies.

JUST FOR THE RECORD these silly and incoherent notions are not something I ascribe to!

But it neatly sums up the mainstream view of the matter as I read it.

The mainstream account is incoherent, and may actually be regarded as unscientific. Notice that it implies an externalist and objectivist God’s Eye view (the scene really exists and physics tells how), that it evidently misinterprets evolution (for fitness does not imply veridicality at all), and that it is embarrassing in its anthropocentricity. All this should appear to you as in the worst of taste if you call yourself a scientist.  [p. 2-3]

___________________

I hold similar views, most recently expressed in the post Ideology in the vision theater (though not with the same mastery as Koenderink, of course). Recall that “computing with space”, the main theme of this blog/open notebook, is about rigorously understanding (and maybe using) the “computation” done by the visual brain, with the purpose of understanding what space IS. This is formulated in arXiv:1011.4485 as “Plato’s hypothesis”:

(A) reality emerges from a more primitive, non-geometrical, reality in the same way as
(B) the brain constructs (understands, simulates, transforms, encodes or decodes) the image of reality, starting from intensive properties (like a bunch of spiking signals sent by receptors in the retina), without any use of extensive (i.e. spatial or geometric) properties.
___________________
Never mind my motivations; the important message is that Koenderink’s critique is a hard-science point of view on a hard-science piece of research. It is not just a lexical game (although I recognize the value of such games as well; but, as a mathematician, I am naturally inclined towards hard science).

A less understood problem in sub-riemannian geometry (I)

A complete, locally compact riemannian manifold is a length metric space by the Hopf-Rinow theorem. The problem of the intrinsic characterization of riemannian spaces asks for the recovery of the manifold structure and of the riemannian metric from the distance function induced by the length functional.

For 2-dimensional riemannian manifolds the problem was solved by A. Wald in 1935. In 1948 A.D. Alexandrov introduced his famous curvature (defined via comparison triangles) and proved that, under mild smoothness conditions on this curvature, one can recover the differential structure and the metric of a 2-dimensional riemannian manifold. In 1982 Alexandrov proposed as a conjecture that a characterization of riemannian manifolds (of any dimension) is possible in terms of metric (sectional) curvatures (of the type introduced by Alexandrov) and weak smoothness assumptions formulated in metric terms (such as Hölder smoothness).

The problem was solved by Nikolaev in 1998, in the paper A metric characterization of Riemannian spaces, Siberian Adv. Math. 9 (1999), 1-58. Nikolaev’s solution can be summarized like this: he starts with a locally compact length metric space (satisfying some technical conditions), then

  • he constructs a (family of) intrinsically defined tangent bundle(s) of the metric space, by using a generalization of the cosine formula (recalled after this list) for estimating a kind of distance between two curves emanating from different points; this leads to a generalization of the tangent bundle of a riemannian manifold endowed with the canonical Sasaki metric;
  • he defines a notion of sectional curvature at a point of the metric space, as a limit of a function of nondegenerate geodesic triangles, the limit being taken as these triangles converge (in a precise sense) to the point;
  • the sectional curvature function thus constructed is required to satisfy a Hölder continuity condition (thus a regularity condition formulated in metric terms);
  • he then proves that the metric space is isometric with (the metric space associated to) a riemannian manifold of precise (weak) regularity (the regularity being related to the regularity of the sectional curvature function).
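
For reference, the cosine formula mentioned in the first item is classical: for three points x, y, z of a metric space (X,d), the Euclidean law of cosines defines the comparison angle at x,

\tilde{\angle}_{x}(y,z) = \arccos \frac{d(x,y)^{2} + d(x,z)^{2} - d(y,z)^{2}}{2\, d(x,y)\, d(x,z)},

and Alexandrov-type curvature is expressed by comparing geodesic triangles (and such angles) with their Euclidean model triangles. This is only the classical starting point; Nikolaev's generalization is in the cited paper.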

Sub-riemannian spaces are length metric spaces as well, and any riemannian space is a sub-riemannian one. It is not clear at first sight why the characterization of riemannian spaces does not extend to sub-riemannian ones. In fact, there are two problematic steps in such a program of extending Nikolaev’s result to sub-riemannian spaces:

  • the cosine formula, as well as the Sasaki metric on the tangent bundle, has no counterpart in sub-riemannian geometry (because there is, basically, no statement canonically corresponding to the Pythagorean theorem);
  • the sectional curvature at a point cannot be introduced by means of comparison triangles, because sub-riemannian spaces do not behave well with respect to this triangle comparison idea, as proved by Scott Pauls.

In 1996 M. Gromov formulated the problem of the intrinsic characterization of sub-riemannian spaces. He takes the Carnot-Caratheodory (or CC) distance (this is the name of the distance constructed on a sub-riemannian manifold from the differential geometric data, generalizing the construction of the riemannian distance from the riemannian metric) as the only intrinsic object of a sub-riemannian space. Indeed, in the linked article, section 0.2.B, he writes:

If we live inside a Carnot-Caratheodory metric space V we may know nothing whatsoever about the (external) infinitesimal structures (i.e. the smooth structure on V, the subbundle H \subset T(V) and the metric g on H) which were involved in the construction of the CC metric.
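
In the notation of the quote, the CC distance is obtained by minimizing the g-length of horizontal paths; this is the standard definition (and, by the Chow-Rashevskii theorem, the infimum is over a nonempty set when H is bracket-generating):

d_{CC}(x,y) = \inf \left\{ \int_{0}^{1} \sqrt{g(\dot{\gamma}(t), \dot{\gamma}(t))}\, dt \; : \; \gamma(0) = x, \; \gamma(1) = y, \; \dot{\gamma}(t) \in H_{\gamma(t)} \right\}.
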
He then formulates the goal:
Develop a sufficiently rich and robust internal CC language which would enable us to capture the essential external characteristics of our CC spaces.
He proposes as an example to recognize the rank of the horizontal distribution, but in my opinion this is something much less essential than to “recognize” the “differential structure”, in the sense proposed here as the equivalence class under local equivalence of dilation structures.
As in Nikolaev’s solution for the riemannian case, the first step towards the goal is to have a well defined, intrinsic notion of tangent bundle. The second step would be to be able to go to higher order approximations, eventually towards a curvature.
My solution is to base everything on dilation structures. The solution is not “pure”, because it introduces another ingredient besides the CC distance: the field of dilations. However, I believe it is illusory to think that, in the general sub-riemannian case, we may be able to get a “sufficiently rich and robust” language without it. As an example, even the best known fact, namely that the metric tangent spaces of a (regular) sub-riemannian manifold are Carnot groups, was previously not known to be an intrinsic fact. Let me explain: all proofs, except the one using dilation structures, use non-intrinsic ingredients, like the differential calculus on the differential manifold which enters into the construction of the CC distance. Therefore it was not known (indeed, not even understood as a problem) whether this result is intrinsic or an artifact of the proof method.
It turns out that it is not an artifact, provided we accept dilation structures as intrinsic.
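
To give an idea of what is being accepted (a rough sketch only; the precise axioms are in the course notes cited at the end): a dilation structure on a metric space (X,d) assigns to each point x and each small \varepsilon > 0 a dilation \delta^{x}_{\varepsilon}, defined near x and fixing x, such that the rescaled distances converge uniformly,

\frac{1}{\varepsilon}\, d\left(\delta^{x}_{\varepsilon} u, \, \delta^{x}_{\varepsilon} v\right) \;\longrightarrow\; d^{x}(u,v) \quad \text{as } \varepsilon \to 0,

where d^{x} is the distance on the metric tangent space at x.
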
There is a bigger question lingering behind, once we are ready to think about intrinsic properties of sub-riemannian spaces:  what is a sub-riemannian space? The construction of such spaces uses notions and results which are by no means intrinsic (again differential structures, horizontal bundles, and so on).
Therefore I understand Gromov’s stated goal as:
Give a minimal, axiomatic, description of sub-riemannian spaces.
[Adapted from the course notes Sub-riemannian geometry from intrinsic viewpoint.]

Chemical concrete machine, detailed (IV)

As preparation for the Turing computation properties of the chemical concrete machine, in this post I shall explain what multipliers and co-multipliers are.

Basically, multipliers and co-multipliers are molecules which self-multiply. More precisely, the next figure gives the definition:

[Figure: zip_split_5]

Here A and A' are molecules from the formalism of the chemical concrete machine and 1 and 2 are labels. The blue arrow stands for any allowed chemical reaction.
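
As a toy illustration (my own framing in Python, not part of the formalism: react, fan_out and fan_in are hypothetical stand-ins for the allowed chemistry), the definition can be read behaviorally: a multiplier, wired into a fan-out node, reacts into two copies of itself, and a co-multiplier does the same through a fan-in node.

def is_multiplier(react, molecule):
    # Wire the molecule into a fan-out node, run the allowed reactions,
    # and check whether two copies of the molecule come out.
    return react(("fan_out", molecule)) == [molecule, molecule]

def is_co_multiplier(react, molecule):
    # Dual test: a co-multiplier self-multiplies through a fan-in node.
    return react(("fan_in", molecule)) == [molecule, molecule]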

Question: by close examination of the previous posts on graphic lambda calculus, can you identify any multiplier? Or co-multiplier?

If not, then be patient, because in a future post I shall give plenty of examples, especially ones connected with logic.

Further on, we shall see that \beta zippers, introduced in Chemical concrete machine, detailed (III), multiply in a very graphic way, kind of like what happens with the DNA of a cell when it divides. Let’s see.

We want to know if a zipper can be a multiplier. In the following figure we see what happens in the presence of DIST enzymes:

[Figure: zip_split_1]

The reaction continues:

[Figure: zip_split_2]

Now the zipper has multiplied into two zippers, but they are still connected. We need more information about A, B, C, D and A', B', C', D'. Remark that:

[Figure: zip_split_4]

[Figure: zip_split_3]

In conclusion: if A, B, C, D are multipliers and A', B', C', D' are co-multipliers, then the zipper is a multiplier!

__________________

Return to the chemical concrete machine tutorial.

Chemical concrete machine, detailed (III)

This is a first post about what the chemical concrete machine can do. I concentrate here on geometry-like actions.

_____________________

1. Lists and locks.  Suppose you have a family of molecules which you want to release into the medium in a given order. This corresponds to having a list of molecules, which is “read” sequentially. I shall model this with the help of the zipper from graphic lambda calculus.

Suppose that the molecules we want to manipulate have the form A \rightarrow A', with A and A' from the family of “other molecules” and \rightarrow an arrow, in the model described in Chemical concrete machine, detailed (I).  Here are three zippers (lists).

[Figure: zip_list_1]

The first zipper, called a \beta zipper, behaves in the following way. In the presence of \beta^{+} enzymes there is only one reaction site available, namely the one involving the red and green nodes in the neighbourhood of D, D'. So there is only one reaction possible with a \beta^{+} enzyme, which has as a result the molecule D \rightarrow D' and a new, shorter \beta zipper. This new zipper again has only one reaction site, this time involving the nodes in the neighbourhood of C, C', so the reaction with the enzyme \beta^{+} gives C \rightarrow C' and a new, shorter zipper. The reaction continues like this, releasing in order the molecules B \rightarrow B', then A \rightarrow A' and finally E \rightarrow E'.
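
Here is a minimal sketch in Python of this sequential behavior (a toy abstraction of the figure, not of the graph rewriting itself; the class and all names are mine): the zipper exposes only its outermost reaction site, so each reaction with the matching enzyme frees exactly one pair.

class Zipper:
    """Toy zipper: only the outermost pair has an available reaction site."""

    def __init__(self, pairs, enzyme):
        self.pairs = list(pairs)   # stored innermost first
        self.enzyme = enzyme       # e.g. "beta+" for a beta zipper

    def react(self, enzyme):
        """One reaction step: release the outermost pair, or do nothing."""
        if enzyme == self.enzyme and self.pairs:
            return self.pairs.pop()
        return None

# The beta zipper of the figure: E is the innermost pair, D the outermost.
z = Zipper(["E->E'", "A->A'", "B->B'", "C->C'", "D->D'"], enzyme="beta+")
while (released := z.react("beta+")) is not None:
    print("released:", released)   # D->D', C->C', B->B', A->A', E->E'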

The second zipper is called a FAN-IN zipper (or a \phi zipper). It behaves the same as the previous one, but this time in the presence of the FAN-IN enzyme \phi^{+}.

On the third row we see a mixed zipper. The first molecule D \rightarrow D' is released only in the presence of a \phi^{+} enzyme; then we are left with a \beta zipper.

This can be used to lock zippers. Look for example at the following molecule:

[Figure: zip_list_4]

called a locked \beta zipper. In the presence of \beta^{+} enzymes alone, nothing happens. If we also add \phi^{+} enzymes into the reactor, then the zipper unlocks, releasing a loop (seen as garbage) and a \beta zipper, which starts to react with the \beta^{+} enzymes.

The same idea can be used to keep a molecule inactive unless both \phi^{+} and \beta^{+} enzymes are present in the reactor. Say that we have a molecule A \rightarrow A' which is made inactive under the form presented in the following figure:

[Figure: zip_list_3]

The molecule is locked, but it has two reaction sites, one sensitive to \beta^{+}, the other sensitive to \phi^{+}. Both enzymes are needed to unlock the molecule, but there is no preferred order of reaction with the enzymes (in particular, the two unlocking reactions can happen in parallel).
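
In the same toy Python style as before (again my own abstraction, with hypothetical names), the double lock is just a pair of independent reaction sites, and the unlocking reactions commute:

class DoubleLocked:
    """Toy doubly-locked molecule: inert until both enzymes have reacted."""

    def __init__(self, payload):
        self.payload = payload
        self.locks = {"beta+", "phi+"}   # two independent reaction sites

    def react(self, enzyme):
        """React with one enzyme; return the payload once fully unlocked."""
        self.locks.discard(enzyme)
        return self.payload if not self.locks else None

m = DoubleLocked("A->A'")
print(m.react("phi+"))    # None: still locked by beta+
print(m.react("beta+"))   # A->A' is released (the order does not matter)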

_____________________

2. Sets.  Suppose now that we don’t want to release the molecules in a given order. We need to prepare a molecule which has several reaction sites available, so that multiple reactions can happen in parallel, as in the last example. Mathematically, this can be seen as a representation of the set of molecules we want to release, instead of the list of them. This is easy, as described in the next figure:

[Figure: zip_list_2]

On the first row we see what is called a \beta set. It has 4 possible reaction sites with the enzyme \beta^{+}, therefore, in the presence of this enzyme, the molecules A \rightarrow A', … , E \rightarrow E' are released at the same moment. Alternatively, we may think about a \beta set as a bag of molecules which releases (according to the probability of the reaction with a \beta^{+} enzyme) one of the four molecules A \rightarrow A', … , D \rightarrow D', at random. (It would be interesting to study the evolution of this reaction, because after the first release there are only 3 reaction sites left, …)
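
Continuing the toy Python sketches (same caveats; I assume here that every remaining reaction site is equally likely to react), a set releases its pairs in random order:

import random

def release_order(molecules, rng=None):
    """Toy 'set' of molecules: every pair has its own reaction site,
    so each enzyme reaction frees a uniformly random remaining pair."""
    rng = rng or random.Random(0)
    pool = list(molecules)
    order = []
    while pool:
        order.append(pool.pop(rng.randrange(len(pool))))
    return order

print(release_order(["A->A'", "B->B'", "C->C'", "D->D'"]))
# prints a random permutation of the four pairs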

On the second row we see a FAN-IN, or \phi set. It behaves the same as the previous one, but this time in the presence of the FAN-IN \phi^{+} enzyme.

Finally, we see a mixed set on the third row. (It should have interesting dynamics, as a function of the concentrations of the two enzymes; a toy sketch of this follows.)
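
Along the same toy lines (the enzyme names and the concentration model are my assumptions), one could simulate the mixed set by drawing, at each step, an enzyme according to its concentration and firing a random matching reaction site:

import random

def mixed_set_release(sites, conc, rng=random.Random(1)):
    """Toy mixed set: `sites` maps each pair to the enzyme its reaction
    site needs; `conc` gives the relative enzyme concentrations.  Each
    step draws an enzyme by concentration and fires a matching site."""
    sites = dict(sites)
    order = []
    while sites:
        enzymes = list(conc)
        enzyme = rng.choices(enzymes, weights=[conc[e] for e in enzymes])[0]
        matching = [m for m, e in sites.items() if e == enzyme]
        if matching:                  # otherwise this step is a no-op
            mol = rng.choice(matching)
            del sites[mol]
            order.append(mol)
    return order

print(mixed_set_release(
    {"A->A'": "beta+", "B->B'": "beta+", "C->C'": "phi+", "D->D'": "phi+"},
    conc={"beta+": 0.8, "phi+": 0.2}))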

_____________________

3. Pairs.  Actually, there is no limit but the imagination to the geometrical operations one may consider; see for example the posts Sets, lists and order of moves in graphic lambda calculus and Pair of synapses, one controlling the other (B-type NN part III). As another example, here is a more involved molecule, which produces different pairs of molecules according to the presence of \phi^{+} or \beta^{+} enzymes.

In the following figure we see how we model a pair of molecules; then two possible reactions are presented.

[Figure: zip_list_5]

The idea is that we can decide, by controlling the amounts of \beta^{+} and \phi^{+}, to couple A with D and C with B, or to couple A with B and C with D. Why does this matter? Suppose that A and B can each react with both C and D, depending of course on how close the available molecules are.

For example, say we want the A and B molecules to be, statistically, in each other’s proximity. We add some \phi^{+} enzyme to the reactor and we obtain lots of pairs (A,B)_{\beta}. Now, when we further add the \beta^{+} enzyme, then immediately after the reaction with this enzyme we have lots of pairs of molecules A and B which are physically close to one another and start to react.

Instead, if we first introduce \beta^{+} enzymes and then \phi^{+} enzymes, A would more likely react with D.
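
A last toy Python sketch (this encodes my reading of the figure, with hypothetical names; the real mechanism is the graph rewriting shown above): the outcome depends only on which enzyme acts first.

def couple(first_enzyme):
    """Toy pair-producing molecule: the first enzyme to react decides
    which molecules end up physically close to each other."""
    if first_enzyme == "phi+":
        return [("A", "B"), ("C", "D")]
    if first_enzyme == "beta+":
        return [("A", "D"), ("C", "B")]
    raise ValueError("unknown enzyme")

print(couple("phi+"))    # A near B (and C near D)
print(couple("beta+"))   # A near D (and C near B)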

_____________________

Return to the chemical concrete machine tutorial.

A sea of possibilities

… opens when I look at graphic lambda calculus as a graph rewriting system, or GRS (see also the foundational Term graph rewriting by Barendregt et al.). The first, most obvious possibility is that, by treating graphic lambda calculus as a particular GRS, I might USE already written software for applications, like the chemical concrete machine or the B-type neural networks (more about this further down). There are other possibilities, much, much more interesting from my point of view.

The reason for writing this post is that I feel a bit like the character Lawrence Pritchard Waterhouse from Neal Stephenson’s Cryptonomicon, more specifically as described by the character Alan Turing in a discussion with Rudolf von Hacklheber. Turing (the character in the book) describes science as a train with several locomotives, called “Newton”, etc. (Hacklheber suggests there’s a “Leibniz” locomotive as well), with today’s scientists in the railroad cars, and finally with Lawrence running after the train with all his strength, trying to keep pace.

When you change research subjects as much as I have, this feeling is natural, right? So, as for the lucky Lawrence from the book (lucky because he has the chance to be friends with Turing), there is only one escape for keeping pace: collaboration. Why run after the GRS train when there is already amazing research done? My graphic lambda calculus is a particular GRS, designed so that it has applications in real (non-silicon) life, like biological vision (hopefully), chemistry, etc. In real life, I believe, geometry rules, not term rewriting, not types, not any form of the arbor porphyriana. These are extremely useful tools for getting predictions out of models on (silicon) computers. Nature has a way of being massively (and geometrically) parallel, extremely fast and unpreoccupied with the cartesian method. On the other side, in order to get predictions from geometrically interesting models (like graphic lambda calculus and eventually emergent algebras), there is no better tool for simulations than the computer.

Graphic lambda calculus is not just any GRS, but one with very interesting properties, as I hope this open notebook/blog shows. So, I am not especially interested in how graphic lambda calculus falls into the general formalism of GRS, for various reasons, one being (I might be naive to think this) the heavy underlying machinery which seems to be needed to reduce the geometrically beautiful formalism to some linear writing formalism dear to logicians. But how to write a (silicon) computer program without this reduction? Mind that Nature does not need this step; for example, a chemical concrete machine may be (I hope) implemented in reality just by well mixing the right choice of substances (a gross simplification which serves to describe the fact that reality is effortlessly parallel).

All this is to call the attention of GRS specialists to the subject of graphic lambda calculus and its software implementation (in particular), which is surely more productive than becoming such a specialist myself and then using GRS for graphic lambda calculus.

Please don’t let me run after this train 🙂   The king character from Brave says it better than me (00:40 in this clip):

Now, back to the other possibilities. I’ll give you evidence for these, so that you can judge for yourself. I started from the following idea, related to the use of graphic lambda calculus for neural networks. I wrote previously that the NNs which I propose have the strange property that there is nothing circulating through the wires of the network. Indeed, a more correct view is the following: say you have a network of real neurons which is doing something. Whatever the network does, it does it by physical and chemical mechanisms. Imagine the network, then imagine, superimposed over it, a dynamical chemical reaction network which explains what the real network does. The same idea rules the NNs which I began to describe in some of the previous posts. Instead of the neurons there are graphs, which link to others through synapses, which are also graphs. The “computation” consists of graph rewriting moves, so at the end of the computation the initial graphs and synapses are “consumed”. This image fits well not with the image of the physical neural network, but with the image of the chemical reaction network superimposed over it.

I imagine this idea is not new, so I started to google for it. I have not found (yet; thank you for sending me any hints) exactly the same idea, but here is what I have found:

That’s already too much to process on one’s plate, right? But I think I have made my point: a sea of possibilities. I can think about others, instead of “running after the train”.

Research is so much fun!