Example: if-then-else in the chemical concrete machine

… along with a short discussion about what the chemical concrete machine can compute. I start with this and then go to the example.

Until now I have proved that the graph rewriting system called the “chemical concrete machine” can do combinatory logic. Therefore it can compute anything a Turing machine can. I shall be more specific, because to the eyes of somebody who is not used to functional programming, this might seem outrageous.

It can compute anything which can be computed by using any programming language based on lambda calculus, like Haskell or Scala. And then some more (as regards only the things those languages can do without using any syntactic sugar, of course). The procedure is straightforward: translate the (essentially) combinatory logic terms into graphs, then let the enzymes (moves) do their magic. I shall give the example of the if-then-else structure a bit later.
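To make "combinatory logic computes" concrete, here is a minimal sketch in Python (the representation is mine, not part of the formalism): terms are nested tuples, and the S, K, I rewrite rules are applied repeatedly, which is the textual analogue of letting the enzymes act on the graphs.

```python
# Combinatory logic terms as nested tuples: ("app", f, x); leaves are strings.
# Rewrite rules:  I x -> x;  K a b -> a;  S a b c -> a c (b c).

def app(*ts):
    """Left-associated application: app(a, b, c) = ((a b) c)."""
    t = ts[0]
    for u in ts[1:]:
        t = ("app", t, u)
    return t

def step(t):
    """One leftmost-outermost reduction step; returns (new_term, changed)."""
    if isinstance(t, str):
        return t, False
    _, f, x = t
    if f == "I":
        return x, True                                   # I x -> x
    if isinstance(f, tuple) and f[1] == "K":
        return f[2], True                                # K a b -> a
    if isinstance(f, tuple) and isinstance(f[1], tuple) and f[1][1] == "S":
        a, b, c = f[1][2], f[2], x
        return app(a, c, app(b, c)), True                # S a b c -> a c (b c)
    g, changed = step(f)
    if changed:
        return ("app", g, x), True
    y, changed = step(x)
    return ("app", f, y), changed

def normalize(t, fuel=1000):
    """Reduce until no rule applies (with a fuel bound, since terms may diverge)."""
    for _ in range(fuel):
        t, changed = step(t)
        if not changed:
            return t
    raise RuntimeError("no normal form within fuel")
```

For example, `normalize(app("S", "K", "K", "v"))` gives `"v"`, the well-known fact that S K K behaves as the identity.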

There are, of course, interesting questions to ask, among them:

  • do there exist real molecules and real enzymes which react with one another as in the chemical concrete machine formalism? Here I need your help, dear chemists.
  • is this an example of a sequential or parallel computation?
  • what evaluation procedure is used in the chemical concrete machine?

The second question is the easiest to answer: it is a parallel computation. The molecules (graphs) are mixed in a reactor with a choice of enzymes, and then all possible reactions can occur in parallel, at all times. (Realistic models will have a probabilistic part attached, but there is nothing special here: the chemical concrete machine, initialized with molecules and enzymes, is just a chemical reaction network, so any procedure for attaching the probabilistic part to a chemical reaction network can also be used in conjunction with the chemical concrete machine.)

But we can also prepare the initial molecules such that the computation is sequential. It suffices to use zippers. More later. For the moment, it is worth mentioning that the chemical concrete machine (as well as graphic lambda calculus) is already proven to be more powerful than lambda calculus. Why? For at least two reasons:

  • lambda calculus, or combinatory logic, are just sectors of the formalisms, i.e. they correspond to only a part of what the formalisms can do. There are other sectors, as well, for example the tangle diagram sector.
  • they are naturally parallel, as long as we use only local moves, as is the case for the chemical concrete machine. Indeed, I wrote that a graph is a “molecule”, but this is only a way of speaking, because molecules would be better identified with connected graphs. In these formalisms, however, the graphs are not supposed to be connected: any (finite) collection of graphs (i.e. molecules) is also a graph. The moves being local, there is no interference in the simultaneous application of several instances of the (local) moves in different places, for different molecules (connected subgraphs), or even for the same molecule, as long as the places where the moves are applied are distinct from one another. On the other hand, lambda calculus and combinatory logic are naturally sequential.
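The locality claim can be illustrated with a toy model (the reactor, the molecules and the rewrite rules below are stand-ins I made up, not the actual chemical concrete machine): moves applied at distinct places commute, so any interleaving of them gives the same result.

```python
# A toy "reactor": molecules are strings; a local move rewrites one molecule
# in place.  Moves at different positions never interfere, so both orders of
# application give the same reactor state -- the sense in which the machine
# is naturally parallel.

def move(reactor, i, rule):
    """Apply a local rewrite rule to the molecule at position i."""
    r = list(reactor)
    r[i] = rule(r[i])
    return r

# Two made-up local rules, standing in for enzyme-driven moves:
beta = lambda m: m.replace("(\\x.x)a", "a")
dist = lambda m: m.replace("b", "bb")

reactor = ["(\\x.x)a", "b", "c"]
one_order   = move(move(reactor, 0, beta), 1, dist)
other_order = move(move(reactor, 1, dist), 0, beta)
assert one_order == other_order == ["a", "bb", "c"]
```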

The third question, concerning the evaluation procedure, will also be explored in further posts. Care has to be taken here because there are no variables in these formalisms (which translates into a lower demand for different species of real molecules, otherwise needed just to name variables). So it is about the order of moves, right? The short answer is that it depends: sometimes the computation done by the chemical machine can be seen as greedy evaluation, sometimes as lazy evaluation.

Let me make the point again that the chemical concrete machine formalism should somehow be seen as part of the beautiful idea of algorithmic chemistry. So it’s not so unearthly.

Finally, it is well known that lambda calculus and Turing machines are the two pillars of computation. For historical reasons, chemists seem to concentrate only on the emulation of Turing machines (please correct me if I’m wrong). The main idea of algorithmic chemistry, as far as I understand, is that a sufficiently complex chemical network has the capacity to do lambda calculus. But if you are determined to use only Turing machines for chemical computing, then, supposing the algorithmic chemistry idea is true, you have to translate the natural language of lambda calculus into the Turing machine frame. This is a tarpit: it quickly becomes very hard to follow. Instead, why not use lambda calculus as it is for the really powerful applications of chemical computing, and in parallel use one of the excellent languages for simulating chemical computing in silico?

_________________________

The if-then-else construct has already been explored in graphic lambda calculus, see Teaser: B-type neural networks in graphic lambda calculus (II). Here I shall do a much simpler example, just by making the right translations, as explained at the beginning of this post.

In lambda calculus, there are terms called TRUE, FALSE and IFTHENELSE, which are the Church encodings of the booleans true, false and if-then-else. The associated graphs in the chemical concrete machine are:

bckw_111

Take two other molecules A, B, with one exit each; in particular you may think that they correspond to terms in lambda calculus (but this is not mandatory). Then IFTHENELSE TRUE A B should become A. In the chemical concrete machine, with only beta+ enzymes, look what happens:

bckw_112

Along this chain of reactions there is no choice other than the one in the figure. Why? Because at every step there is essentially only one reaction site available to the enzyme beta+ (in the region of the reactor represented in the figure, of course). The result is, unsurprisingly, compatible with the lambda calculus version, except that A and B are not required to be (graphs corresponding to) lambda terms. They can be anything, for example from the family of “other molecules”.

In lambda calculus, IFTHENELSE FALSE A B should reduce to B. In the chemical concrete machine, look what happens:

bck_113

The previous remarks apply here as well.
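For readers more comfortable with lambda terms than with graphs, here are the Church encodings written as plain Python lambdas; this is the lambda calculus reading of the two reaction chains above (a sketch of the encodings, not of the graph rewriting itself).

```python
# Church encodings of the booleans and of if-then-else:
TRUE  = lambda a: lambda b: a          # selects its first argument
FALSE = lambda a: lambda b: b          # selects its second argument
IFTHENELSE = lambda p: lambda a: lambda b: p(a)(b)

# A and B can be anything, not only lambda terms:
A, B = "medicine A", "medicine B"
assert IFTHENELSE(TRUE)(A)(B)  == A
assert IFTHENELSE(FALSE)(A)(B) == B
```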

With a little imagination, if we look closer at what TRUE and FALSE are doing, then we can adapt IFTHENELSE to what I’ve called a B-type NN synapse and obtain a molecule which, upon detection of a certain molecule, releases the medicine A, and upon detection of another, the medicine B.

_____________________

Return to the chemical concrete machine tutorial.

My idea about UD

Here is what I believe UD is doing. This can surely be optimized in many ways, but the basic idea is: put the z-buffer in 3D space.

I. Preparing the database.

1. Imagine a 3D cubic lattice which is as big as the bounding box of your 3D scenery. Take coordinates aligned with the lattice. Put the camera at coordinates (0,0,0) and surround it with a cube C. The faces of the cube C will serve as the cubemap we want to construct. Each face is covered by pixels at a given resolution. We already have the following parameters to play with, given in the units of the chosen coordinate system: the step of the lattice, the dimension of the cube C, the resolution of a face of C.

2. Render, by any means you like, the lattice as seen from the (0,0,0) pov, on the 6 “screens”, i.e. the faces of C. We have 6 view frusta; the discussion will be the same for each of them. In order to render them you need to put small balls, or squares facing the relevant face of the cubemap, or whatever, at the lattice nodes, so we have another parameter here: the diameter of the ball or square. As a result of the rendering you now know the following:

  • for any lattice atom, you know to which pixel on which face it corresponds. You may do better for lattice atoms which are inside the cube C, namely “ignore” them: attribute to them an IGNORE label; otherwise attribute to each lattice atom the 2D coordinates of its pixel in the cubemap, together with the information saying which face of the cubemap the pixel is on,
  • you have information about the scale, which you attach to the lattice atoms, like this: if two neighbouring lattice atoms project on the same pixel, attach IGNORE to both. If the ball/square/whatever of a lattice atom projects on more than one pixel, attach to it a number SCALE approximately proportional to the square root (thanks ine) of the number of pixels it projects to (or to the dimension of the bounding box of those pixels).

Of course, you don’t want to take a huge lattice, with very small balls. That’s all in the parameters choice.

3.  Now take your database of 3D points, i.e. the real one which you eventually want to render, UD style. I shall ignore, for the sake of the argument, how the database is implemented: as an octree or otherwise, or even whether the database is made of 3D points or of more complex objects, like polygons. Put this database in the coordinates chosen first, such that, for example, if you are working with octrees, the cells of the lattice correspond to some level of the octree. Attach to each atom of the lattice the supplementary information: the points from the database which are within the ball surrounding the atom, or else the label VOID. Alternatively, think that any point from the database is inside some lattice cell and “project” it on each corner of the cell (i.e. attach to each lattice atom a list of points from the database which are in the neighbourhood of that atom; the nil list corresponds to VOID).

4.  Let’s see what we have, supposing for example that we use octrees. We add the 3D lattice to the 3D database of points (by using lists of points as explained at 3.), and to any lattice atom we also attach the information explained at point 2.

How to compress this efficiently? There is a new parameter here, namely the level of the octree, and also how the color information, for example, is stored in the octree. Of course, this is a matter of recursion: at points 1-3 we may take finer and finer resolutions and lattice steps, starting from a very coarse lattice and resolution, and so on, trying to figure out a correct recursion procedure. That’s work to be done; it is not trivial, but it is somewhat straightforward once you figure it out.

II. The pipeline and rendering.

The problem is that we want to be able to get, very fast, from the database constructed at (I), only the points which are needed for realtime rendering when the camera is at coordinates (x,y,z).

This problem splits into two different ones:

  • at the start of the realtime UD rendering, we want to cull something close to the minimum number of 3D points, when the camera is at (0,0,0). According to the information given by Euclideon, a good algorithm should be able to do this in about 1s.
  • then, we need a procedure to take what we need from the database when we change the pov from (x,y,z) to (x+1,y,z) (or alike). This should be much faster, allowing for realtime rendering.

As a preparation, let’s remark that:

  1. if the camera is at (0,0,0), then we already know where each lattice point projects; it is written in the database. So we just need to start from a pixel, at a given resolution (recursion again), and choose from the database only the lattice atoms which are CLOSE, in decreasing order of SCALE, and from those the real 3D points which are neighbours (of course we have to use the octree structure). We get for each pixel a number of points of the order of log(dimension of the world).
  2. if the camera is at (x,y,z), we also know where each point from the 3D database projects, because we read it from the data attached to the lattice atom which, translated by (x,y,z), is the neighbour of the point. We also get the SCALE parameter from this.
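A sketch of remark 2, with all names hypothetical (mine, for illustration only): the precomputed projection data is indexed by lattice atoms, and a camera translation becomes a translated dictionary lookup.

```python
# proj[atom] holds a precomputed (face, u, v, scale) for each lattice atom,
# as seen from the camera at (0,0,0).  For a camera at integer offset cam,
# a database point p reads its projection from the lattice atom nearest to
# p, translated back by cam.

def lookup(proj, p, cam):
    atom = tuple(round(p[i]) - cam[i] for i in range(3))
    return proj.get(atom)   # None = a hole in the image, needs a refill

# A one-entry toy table:
proj = {(3, 0, 0): ("+x", 128, 128, 1)}
# Camera moved to (1,0,0): the point near (4,0,0) reuses atom (3,0,0).
assert lookup(proj, (4.2, 0.1, -0.3), (1, 0, 0)) == ("+x", 128, 128, 1)
```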

1. We use remark 1 to solve the first problem, namely what comes through the pipeline from the huge 3D database to the pre-rendering buffer B when we start with the camera at (0,0,0). The buffer contains about (number of pixels) x log(dimension of the world) 3D points, along with the pixel coordinates where they project and with SCALE. This is fed directly to the rendering procedure, which you can choose freely, but which is almost trivial.

2. What happens when we move the camera? We update the pre-rendering buffer B only by updating the pixel and scale information for the 3D points already in the buffer, getting the relevant data from the huge database only by a translation (addition). In this case two things might happen: there is a point which exits the buffer, or there is a hole in the image.

_________________________

Is this making any sense? Please let me know, because I have been asked repeatedly what I really believe.

Example: decorations of S,K,I combinators in simply typed graphic lambda calculus

Continuing from   Simply typed graphic lambda calculus  , let’s look at decorations for the S,K,I combinators.

We work in the lambda calculus sector of the graphic lambda calculus, which contains graphs in GRAPH which are associated to terms in lambda calculus by the algorithm explained here.

We want to decorate the arrows of such graphs, according to the rules given in the following figure:

bckw_101

Decorations are denoted by letters \sigma, \tau, … or by letters a, b, c, … (what’s in a name?). These decorations are called “types”. There is only one operation, called “\rightarrow“, which takes a pair of types (\sigma, \tau) and returns the type \sigma \rightarrow \tau.

In algebraic terms, here is what is happening. We start with a graph in GRAPH and decorate all its arrows with different letters from an alphabet (of types). Then we associate to it the magma with:

  • generators being the (decorations of the) arrows
  • relations being the ones from the rules of decorations given in the previous figure.

If the magma is free,  then we say that the graph is well typed. That means we can find a finite collection of types, with no relation between them, such that all arrows can be decorated by some element of the free magma generated by that collection of types.

_____________________

Remark 1:  Please, don’t let me reinvent the wheel.  Surely somewhere in the big work on type theory, there is something analogous done already. Just send me citations and I shall add them with due acknowledgements.

_____________________

Remark 2: This is exactly the same idea which is used in knot theory, where we associate to a knot diagram its knot quandle. But with a twist: while in knot theory we look for quandles which are not free (we cannot eliminate all relations between the generators), in this simply typed lambda calculus we concentrate only on those graphs which have free associated magmas. Please look at the lower right corner of the previous figure, where the rule of decoration for the \varepsilon gate is given. (We are not going to use it in relation to the lambda calculus sector.) As you see, this is compatible with the trivial quandle with the operation xy = y.

_____________________

Remark 3: I am not going to take the extremely interesting path of cartesian closed categories, basically because of my strong bias against anything cartesian. The initial source of this bias comes from the experience with sub-riemannian geometry, where any application of cartesian ideas, like slicing a problem until it becomes 1-dimensional, solving it by 1d calculus and analysis techniques, then reassembling the slices, always leads to “rigidity results”, which have the form: if you try, then you arrive at a contradiction. Don’t get me wrong, there are amazing and extremely deep such results, but they are always in the form of a proof by contradiction.

Remark 4: The point of view from this simply typed graphic lambda calculus is surely related to the Hindley-Milner type system.

_____________________

Let’s see how this works for the I, K, S combinators, then for the \Omega combinator.

The procedure is the following:

  • we number the nodes of the graph (arbitrarily)
  • we put arbitrary, different labels on each arrow of the graph
  • we write the relations which appear from each node, according to the rules of decoration
  • finally, we examine the generated magma, to see if, eliminating some generators by using the relations, we can prove that it is a free magma.
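The procedure above is close to first-order unification over the single operation \rightarrow. Here is a sketch (the representation is mine: type variables are strings, and an arrow type \sigma \rightarrow \tau is the pair (sigma, tau)); the occurs check is what detects a magma that cannot be free.

```python
def walk(t, subst):
    """Follow variable bindings to the representative of t."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Does variable v occur inside type t?  (Circularity check.)"""
    t = walk(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and (occurs(v, t[0], subst) or occurs(v, t[1], subst))

def unify(a, b, subst):
    """Extend subst so that a == b, or return None on failure.
    The occurs check rejects the circular relations that would make
    the decoration magma non-free."""
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str):
        return None if occurs(a, b, subst) else {**subst, a: b}
    if isinstance(b, str):
        return unify(b, a, subst)
    s = unify(a[0], b[0], subst)
    return None if s is None else unify(a[1], b[1], s)

# K = \x.\y.x: the relation "type(K) = a -> (b -> a)" is solvable...
assert unify("K", ("a", ("b", "a")), {}) is not None
# ...but self-application x x forces t = t -> u, which the occurs check
# rejects: the analogue of the circular relations that make Omega ill typed.
assert unify("t", ("t", "u"), {}) is None
```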

In the next figure this is done for the I combinator (the identity) and for the K combinator.

bckw_103

We obtain the right types for the identity and K combinators (there are no contexts; that’s for later). It was easy: they are “well typed” according to the definition from here.

Now, let’s look at the S combinator:

bckw_104

We have to work a bit, but not too much:

bckw_105

(For the convenience of the readers, I added in the figures the usual notations for combinators and types. )

We again obtain the right types for S, as well as for the variables involved.

_____________________

Let’s try to decorate  the combinator \Omega = (\lambda x . xx) (\lambda x. xx).

bckw_106

We obtain the following magma presentation:

bckw_107

which cannot be free, because of relations (4) and (6). So \Omega is not well typed, exactly as in simply typed lambda calculus.

Simply typed graphic lambda calculus

Let me prepare the path towards discussing types and spaces. This post contains a first proposal for a simply typed graphic lambda calculus, which is compatible with simply typed lambda calculus.

The leading idea is:

types are decorations, in the same way as elements of an emergent algebra are decorations.

Motivation: Remember that emergent algebras appeared as an effort to understand the formalism of decorated binary trees which first appeared in Dilatation structures I. Fundamentals, section 4. Various efforts have been spent on this, as witnessed in Computing with space, sections 3-6, until finally arriving at the formalism of graphic lambda calculus, as exposed in arXiv:1305.5786 [cs.LO]. The main “trick” of graphic lambda calculus is that it eliminates variables, therefore it eliminates decorations of graphs!

So, the program is that now we go back from the other side, logic, where types are decorations, as you will see, in order to connect with emergent algebras, which deal with the problem of describing spaces in a purely relational way, without any appeal to points (well, in the graphic lambda calculus formalism).

Interesting things will appear, surely.

__________________________

In this version of simply typed graphic lambda calculus we introduce types (just a bag of names \sigma, \tau, …), with one operation, denoted by \rightarrow. We use types as decorations of arrows of graphs in GRAPH, according to the following decoration rules:

bckw_101

 

Remark how the decoration of the \varepsilon gate corresponds to the trivial quandle.

I don’t claim that I can decorate any graph in GRAPH by using these rules. But when I can, then under any move from graphic lambda calculus the decoration remains compatible with the decoration rules. Indeed, this is proved in the next figure:

bckw_102

The pruning and FAN-OUT moves trivially preserve decorations.

__________________________

In another post we shall see why this is compatible with simply typed combinatory logic.

Chemical concrete machine, pit stop

I think it’s pretty clear already what the chemical concrete machine is, and there’s a fair idea about the things it can do, in principle. It is time to prepare a more conservative presentation, aka an article or two, based on the list:

There are lots of other features which were not yet discussed here, as well as surprising paths which demand future investigations.

I need as much input as possible from those who are interested in this subject, because there are many ways to tell a story, depending on the storyteller and on the audience. As you know, I am a mathematician who wants to stir up an interdisciplinary collaboration on this subject, which seems to fall in the algorithmic chemistry or biological computing fields.

So I need help from you, in two directions:

  • to calm down my math habits, for the sake of presenting a reasonably comprehensible story to non-mathematicians
  • to tell me which parts need more development, which are more interesting, so to say.

Chemical concrete machine, detailed (VI)

Let’s continue from  the post  Chemical concrete machine, detailed (V) , by adding some more facts and remarks.

_________________

We have seen that, in order to avoid applying a controlled sequence of CO-ASSOC moves, it is better to introduce a composite move, saying that a certain simple molecule is a propagator.

bckw_9

With this move the formalism of the chemical concrete machine is exactly as powerful as without it. The reason is that the move called “PROP+” can be seen as a sequence of CO-ASSOC moves and DIST+ moves.

Let’s accept this move (i.e. as if there were an enzyme associated to it) and revisit the proof that the W molecule is a multiplier.

bckw_10

_________________

Besides the B, C, K, W molecules (which correspond to the B,C,K,W system), other molecules are also interesting: the following figure presents the molecules F, I, S’ and S.

bckw_12

The molecule F may represent FALSE in the Church encoding of the booleans, along with K, which encodes TRUE. The molecule I is the identity and the molecule S corresponds to the combinator S from the SKI combinator calculus, which is

a computational system that may be perceived as a reduced version of untyped lambda calculus. It can be thought of as a computer programming language, though it is not useful for writing software. Instead, it is important in the mathematical theory of algorithms because it is an extremely simple Turing complete language.

These correspondences are established by using the algorithm for transforming lambda calculus terms into graphs in graphic lambda calculus, then by using the dictionary between the gates of graphic lambda calculus and the essential molecules of the chemical concrete machine.

What about the S’ molecule?  It is a version of the molecule S, i.e. it transforms into S by this reaction:

bckw_13

Also, there is a proof, analogous to the one for W, of the fact that S’ is a multiplier, using the PROP+ move. This proof is better than the proof for S given at the end of the post Local FAN-IN eliminates global FAN-OUT (I).

_____________________

Return to the chemical concrete machine tutorial.

Chemical concrete machine, detailed (V)

The B,C,K,W system

   is a variant of combinatory logic that takes as primitive the combinators B, C, K, and W. This system was discovered by Haskell Curry in his doctoral thesis Grundlagen der kombinatorischen Logik, whose results are set out in Curry (1930).

In this post, I shall explain what the correspondents of the B, C, K, W combinators are in the formalism of the chemical concrete machine, then I shall prove that they are multipliers. In a future post I shall prove that their correspondents in the chemical machine make the machine Turing complete.

Remark: In a sense, this was already established, by using the S, K, I combinators, in the post Local FAN-IN eliminates global FAN-OUT (I). You only have to pass from the black and white drawing conventions of graphic lambda calculus to the red and green drawings of the chemical concrete machine, and then use the result which says that combinatory logic can be implemented in graphic lambda calculus with global FAN-OUT. All in all, this gives that in the chemical concrete machine combinatory logic can be implemented without any global move.

However, I think it is more instructive to first see why combinators other than S, K, I are multipliers (which is a reformulation of the result mentioned in the remark).

___________________

Here are the four “molecules” which represent the combinators B, C, K, W.  (Via the red-green vs black-white change of notation, they can be deduced from their expressions in lambda calculus by the algorithm described here . )

bckw_2

Now, let’s prove that they are multipliers!  The proofs for B and C are very much alike, therefore I put here only the proof that B is a multiplier:

bckw_3

By the conventions of the chemical concrete machine, I mention here the enzymes which are involved in the reactions, instead of writing the moves, like in the graphic lambda calculus.

The proof that K is a multiplier is the following:

bckw_6

Notice how, in both cases, the reactions seem feasible, in the sense that there are no case distinctions; they can be accomplished by a linear process: pour some DIST enzymes, then some FAN-IN, for B; or DIST, LOC PRUNING and then FAN-IN, for K. More than that, remark that the order of the reactions does not matter, i.e. they can be accomplished by pouring all the needed enzymes at the same moment.

For the W combinator (molecule), things get a bit more complex, as was the case for the combinator S:

bckw_4

There is one reaction (or move) which needs explanation. I called it the DISENTANGLE (CO-ASSOC) reaction. It is this:

bckw_5

It can clearly be done by a controlled succession of CO-ASSOC moves (reactions). From the point of view of feasibility in the real world (provided a real implementation of the chemical concrete machine appears), it seems hard to control the exact order of application of the CO-ASSOC moves which gives the DISENTANGLE move as an effect. So, probably, we shall need a “disentangle enzyme” dedicated to this.

As an alternative, remark that in order to prove that W is a multiplier we need an application of the DISENTANGLE composite move, described in the next figure:

bckw_7

For practical (or theoretical) purposes, it is enough to take this as a move. If you are lost and don’t understand what the correspondent of this move in lambda calculus is, it is this: for any term T, if you apply a FAN-OUT gate to TT then you obtain two TT. (Recall that this is practically invisible in lambda calculus, because there is no FAN-OUT; instead, all variables have names and the pattern matching works by some magic outside the formalism.)
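To see the invisible duplication, here is a toy model of substitution (the representation is mine, for illustration): substituting T for x in the body x x silently produces two copies of T, the work that FAN-OUT makes explicit.

```python
def subst(term, var, value):
    """Capture-naive substitution on toy terms: strings are variables,
    ("app", f, x) are applications.  Every occurrence of var becomes
    a copy of value -- duplication happens implicitly."""
    if term == var:
        return value
    if isinstance(term, tuple):
        return ("app", subst(term[1], var, value), subst(term[2], var, value))
    return term

T = ("app", "f", "y")
body = ("app", "x", "x")          # the body of \x. x x
result = subst(body, "x", T)
assert result == ("app", T, T)    # two copies of T appeared
```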

In other words, what would get rid of the need for a controlled sequence of CO-ASSOC reactions for multiplying the molecule W is this: assume that the molecule which is connected to the “y” essential molecule (i.e. to the input of a FAN-OUT gate) is a “propagator”. Propagators are defined in the next figure:

bckw_8

Propagators are different from multipliers: propagators are molecules with one selected input and one selected output which “propagate along a FAN-OUT gate”, while multipliers are multiplied by a FAN-OUT gate. Propagators can in fact serve as labels, or names, which propagate along a tree of FAN-OUT gates.

_________________

Return to the chemical concrete machine tutorial.

To do, soon

… these are some notes to self; the future depends, as usual, on future inputs to my brain.

1. It seems possible to try to use the chemical concrete machine formalism to build some very concrete “lifeforms”, i.e. families of graphs which, under the usual statistical interactions with other molecules in the “reactor”, satisfy some definition of life. Far fetched? No; at least we see hints that combinators are multipliers (thus reproduction can be achieved with zippers and combinators). A strange way to see what a purpose of logic in biology could be.

2. There is a hidden symmetry in the moves of the chemical concrete machine which indicates there is no need for the termination gate (i.e. there is no garbage), instead this symmetry (inspired by the duality between multipliers and co-multipliers) puts in a new light the idea of recursion.

3. It seems easy to reformulate the cartesian method as described here into a sort of its opposite. Of course, such that it makes sense, not especially as a scientific method, but as a method of creation, maybe relevant for understanding how life proceeds.

4. I have large quantities of math facts and research put on hold, not reported here, or reported only in a very indirect way.

There is a partly interesting, partly boring period coming, I feel it. On one side, I am working on putting some of the work reported in this open notebook into article form, which is fun; but on the other side I ask myself: why? Is there any need, other than organizing things a bit? Is it useful? Is this a satisfying method of communication?

But is the blog a better method? Other? Suggestions?

“Visual awareness” by Koenderink

Further is an excerpt from the ebook Visual awareness by Jan Koenderink. The book is part of a collection, published by The Clootcrans Press.

What does it mean to be “visually aware”? One thing, due to Franz Brentano (1838-1917), is that all awareness is awareness of something. One says that awareness is intentional. This does not mean that the something exists otherwise than in awareness. For instance, you are visually aware in your dreams, when you hallucinate a golden mountain, remember previous visual awareness, or have pre-visions. However, the case that you are visually aware of the scene in front of you is fairly generic.

The mainstream account of what happens in such a generic case is this: the scene in front of you really exists (as a physical object) even in the absence of awareness. Moreover, it causes your awareness. In this (currently dominant) view the awareness is a visual representation of the scene in front of you. To the degree that this representation happens to be isomorphic with the scene in front of you the awareness is veridical. The goal of visual awareness is to present you with veridical representations. Biological evolution optimizes veridicality, because veridicality implies fitness.  Human visual awareness is generally close to veridical. Animals (perhaps with exception of the higher primates) do not approach this level, as shown by ethological studies.

JUST FOR THE RECORD these silly and incoherent notions are not something I ascribe to!

But it neatly sums up the mainstream view of the matter as I read it.

The mainstream account is incoherent, and may actually be regarded as unscientific. Notice that it implies an externalist and objectivist God’s Eye view (the scene really exists and physics tells how), that it evidently misinterprets evolution (for fitness does not imply veridicality at all), and that it is embarrassing in its anthropocentricity. All this should appear to you as in the worst of taste if you call yourself a scientist.  [p. 2-3]

___________________

I hold similar views, last expressed in the post Ideology in the vision theater (but not with the same mastery as Koenderink, of course). Recall that “computing with space“, which is the main theme of this blog/open notebook, is about rigorously understanding (and maybe using) the “computation” done by the visual brain, with the purpose of understanding what space IS. This is formulated in arXiv:1011.4485 as “Plato’s hypothesis”:

(A) reality emerges from a more primitive, non-geometrical, reality in the same way as
(B) the brain constructs (understands, simulates, transforms, encodes or decodes) the image of reality, starting from intensive properties (like a bunch of spiking signals sent by receptors in the retina), without any use of extensive (i.e. spatial or geometric) properties.
___________________
Never mind my motivations; the important message is that Koenderink’s critique is a hard science point of view about a hard science piece of research. It is not just a lexical game (although I recognize the value of such games as well, but as a mathematician I am naturally inclined towards hard science).

A less understood problem in sub-riemannian geometry (I)

A complete, locally compact riemannian manifold is a length metric space by the Hopf-Rinow theorem. The problem of intrinsic characterization of riemannian spaces asks for the recovery of the manifold structure and of the riemannian metric from the distance function coming from the length functional.

For 2-dim riemannian manifolds the problem was solved by A. Wald in 1935. In 1948 A.D. Alexandrov introduced his famous curvature (which uses comparison triangles) and proved that, under mild smoothness conditions on this curvature, one can recover the differential structure and the metric of the 2-dim riemannian manifold. In 1982 Alexandrov proposed as a conjecture that a characterization of a riemannian manifold (of any dimension) is possible in terms of metric (sectional) curvatures (of the type introduced by Alexandrov) and weak smoothness assumptions formulated in a metric way (for example Hölder smoothness).
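For readers unfamiliar with this circle of ideas, the Alexandrov-type metric curvature can be sketched (notation mine; see Nikolaev’s paper for the precise definition) as a limit of angle excess over comparison area:

```latex
% Sketch (notation mine): T a nondegenerate geodesic triangle with
% upper angles \alpha, \beta, \gamma; \sigma(T) the area of the
% Euclidean comparison triangle with the same side lengths.
\delta(T) = \alpha + \beta + \gamma - \pi, \qquad
K(x) = \lim_{T \to x} \frac{\delta(T)}{\sigma(T)} .
```

Everything in this display is expressed with distances only, which is the point of a metric characterization.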

The problem was solved by Nikolaev in 1998, in the paper A metric characterization of Riemannian spaces, Siberian Adv. Math. 9 (1999), 1-58. Nikolaev’s solution can be summarized like this: he starts with a locally compact length metric space (and some technical details), then

  •  he constructs a (family of) intrinsically defined tangent bundle(s) of the metric space, by using a generalization of the cosine formula to estimate a kind of distance between two curves emanating from different points. This leads him to a generalization of the tangent bundle of a riemannian manifold endowed with the canonical Sasaki metric.
  • He defines a notion of sectional curvature at a point of the metric space, as a limit of a function of nondegenerate geodesic triangles, the limit being taken as these triangles converge (in a precise sense) to the point.
  • The sectional curvature function thus constructed is required to satisfy a Hölder continuity condition (thus a regularity formulated in metric terms).
  • He then proves that the metric space is isometric with (the metric space associated to) a riemannian manifold of precise (weak) regularity (the regularity is related to the regularity of the sectional curvature function).

Sub-riemannian spaces are length metric spaces as well. Any riemannian space is a sub-riemannian one. It is not clear at first sight why the characterization of riemannian spaces does not extend to sub-riemannian ones. In fact, there are two problematic steps in such a program of extending Nikolaev’s result to sub-riemannian spaces:

  • the cosine formula, as well as the Sasaki metric on the tangent bundle, has no counterpart in sub-riemannian geometry (because there is, basically, no statement canonically corresponding to the Pythagorean theorem);
  • the sectional curvature at a point cannot be introduced by means of comparison triangles, because sub-riemannian spaces do not behave well with respect to this triangle comparison idea, as proved by Scott Pauls.

In 1996 M. Gromov formulates the problem of intrinsic characterization of sub-riemannian spaces.  He takes the Carnot-Caratheodory (or CC) distance (this is the name of the distance constructed on a sub-riemannian manifold from the differential geometric data we have, which generalizes the construction of the riemannian distance from the riemannian metric) as the only intrinsic object of a sub-riemannian space. Indeed, in the linked article, section 0.2.B. he writes:

If we live inside a Carnot-Caratheodory metric space V we may know nothing whatsoever about the (external) infinitesimal structures (i.e. the smooth structure on V, the subbundle H \subset T(V) and the metric g on H) which were involved in the construction of the CC metric.
He then formulates the goal:
Develop a sufficiently rich and robust internal CC language which would enable us to capture the essential external characteristics of our CC spaces.
He proposes as an example to recognize the rank of the horizontal distribution, but in my opinion this is, say, something much less essential than to “recognize” the “differential structure”, in the sense proposed here as the equivalence class under local equivalence of dilation structures.
As in Nikolaev solution for the riemannian case, the first step towards the goal is to have a well defined, intrinsic, notion of tangent bundle. The second step would be to be able to go to higher order approximations, eventually towards a curvature.
My solution is to base everything on dilation structures. The solution is not “pure”, because it introduces another ingredient besides the CC distance: the field of dilations. However, I believe it is illusory to think that, in the general sub-riemannian case, we may get a “sufficiently rich and robust” language without it. As an example, even the best known fact, namely that the metric tangent spaces of a (regular) sub-riemannian manifold are Carnot groups, was previously not known to be intrinsic. Let me explain: all proofs, except the one using dilation structures, use non-intrinsic ingredients, like differential calculus on the differential manifold which enters in the construction of the CC distance. Therefore it was not known (indeed, it was not even understood as a problem) whether this result is intrinsic or an artifact of the proof method.
Well, it is not, it turns out, if we accept dilation structures as intrinsic.
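For orientation, here is a rough sketch of the central object (my paraphrase; see the course notes cited at the end of this post for the precise axioms). A dilation structure provides, around each point x, contractions towards x, and the metric tangent space appears as a rescaled limit of the distance:

```latex
% Sketch (paraphrase): for each point x, dilations
% \delta^x_\varepsilon : U(x) \to X with \delta^x_\varepsilon x = x,
% contracting towards x. By axiom, the rescaled distances converge:
d^x(u,v) = \lim_{\varepsilon \to 0} \frac{1}{\varepsilon}\,
           d\!\left(\delta^x_\varepsilon u, \; \delta^x_\varepsilon v\right),
% and (U(x), d^x) is the metric tangent space at x.
```

In this language, the statement that tangent spaces of regular sub-riemannian manifolds are Carnot groups becomes a statement about the limit distances d^x.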
There is a bigger question lingering behind, once we are ready to think about intrinsic properties of sub-riemannian spaces:  what is a sub-riemannian space? The construction of such spaces uses notions and results which are by no means intrinsic (again differential structures, horizontal bundles, and so on).
Therefore I understand Gromov’s stated goal as:
Give a minimal, axiomatic, description of sub-riemannian spaces.
[Adapted from the course notes Sub-riemannian geometry from intrinsic viewpoint.]

Chemical concrete machine, detailed (IV)

As a preparation for the Turing computation properties of the chemical concrete machine, in this post I shall explain what multipliers and co-multipliers are.

Basically,  multipliers and co-multipliers are molecules which self-multiply.  More precisely, in the next figure we see the definition of those:

zip_split_5

Here A  and A' are molecules from the formalism of the chemical concrete machine and 1 and 2 are labels. The blue arrow means any chemical reaction which is allowed.

Question: by close examination of the previous posts on graphic lambda calculus, can you identify any multiplier? or co-multiplier?

If not, then be patient, because in a future post I shall give plenty examples of those, especially connected with logic.

Further, we shall see that \beta zippers, introduced in Chemical concrete machine, detailed (III), multiply in a very graphic way, kind of like what happens with the DNA of a cell when it divides. Let’s see.

We want to know if a zipper can be a multiplier. In the following figure we see what happens in the presence of DIST enzymes:

zip_split_1

The reaction continues:

zip_split_2

Now, the zipper multiplied into two zippers, but they are still connected.  We need more information about A, B, C, D and A', B', C', D'.   Remark that:

zip_split_4

zip_split_3

In conclusion: if A, B, C, D are multipliers and A', B', C', D' are co-multipliers, then the zipper is a multiplier!
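For intuition only, here is a multiset-rewriting caricature of the defining reaction of a multiplier (the species names and the helper function are mine; the real definition is the graphical one in the figure above). A co-multiplier would be modeled the same way, with arrows reversed.

```python
from collections import Counter

# Toy sketch: in the presence of the DIST enzymes, one copy of a
# multiplier molecule becomes two (per copy present in the soup).

def step(soup, multipliers):
    """Apply one self-multiplication to every multiplier species present."""
    out = Counter(soup)
    for species, n in soup.items():
        if species in multipliers:
            out[species] += n          # A + enzyme -> A + A  (per copy)
    return out

soup = Counter({"zipper": 1, "inert": 3})
soup = step(soup, multipliers={"zipper"})
assert soup["zipper"] == 2 and soup["inert"] == 3
```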

__________________

Return to the chemical concrete machine tutorial.

Chemical concrete machine, detailed (III)

This is a first post about what the chemical concrete machine can do. I concentrate here on geometric-like actions.

_____________________

1. Lists and locks.  Suppose you have a family of molecules which you want to release into the medium in a given order. This corresponds to having a list of molecules, which is “read” sequentially. I shall model this with the help of the zipper from graphic lambda calculus.

Suppose that the molecules we want to manipulate have the form A \rightarrow A', with A and A' from the family of “other molecules” and \rightarrow an arrow, in the model described in Chemical concrete machine, detailed (I).  Here are three zippers (lists).

zip_list_1

The first zipper, called a \beta zipper, behaves in the following way. In the presence of \beta^{+} enzymes, there is only one reaction site available, namely the one involving the red and green nodes in the neighbourhood of D, D'. So there is only one reaction possible with a \beta^{+} enzyme, which has as a result the molecule D \rightarrow D' and a new, shorter \beta zipper. This new zipper has only one reaction site, this time involving nodes in the neighbourhood of C, C', so the reaction with the enzyme \beta^{+} gives C \rightarrow C' and a new, shorter zipper. The reaction continues like this, freeing in order the molecules B \rightarrow B', then A \rightarrow A' and E \rightarrow E'.
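Since the figures cannot be reproduced here, a toy sketch may help fix ideas. Assuming only what the text says (exactly one reaction site at a time, outermost pair first), a \beta zipper behaves like a stack; all names below are my own illustrative choices.

```python
# Toy sketch: a beta zipper as a stack of inactive pairs; each reaction
# with a beta+ enzyme frees the outermost pair only, so release is
# forced to be sequential.

class Zipper:
    def __init__(self, pairs, enzyme="beta+"):
        self.pairs = list(pairs)   # innermost first, outermost last
        self.enzyme = enzyme

    def react(self, enzyme):
        """One reaction step: only the outermost pair is an available site."""
        if enzyme != self.enzyme or not self.pairs:
            return None            # no reaction site for this enzyme
        return self.pairs.pop()    # frees D -> D', then C -> C', ...

zipper = Zipper([("A", "A'"), ("B", "B'"), ("C", "C'"), ("D", "D'")])
released = []
while True:
    out = zipper.react("beta+")
    if out is None:
        break
    released.append(out)
# release order is forced: D, C, B, A
```

A \phi zipper would be the same object with `enzyme="phi+"`, and a mixed zipper a stack whose entries remember which enzyme unlocks them.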

The second zipper is called a FAN-IN zipper (or a \phi zipper). It behaves the same as the previous one, but this time in the presence of the FAN-IN enzyme \phi^{+}.

On the third row we see a mixed zipper. The first molecule D \rightarrow D' is released only in the presence of a \phi^{+} enzyme; then we are left with a \beta zipper.

This can be used to lock zippers. Look for example at the following molecule:

zip_list_4

called a locked \beta zipper. In the presence of only \beta^{+} enzymes, nothing happens. If we add into the reactor also \phi^{+} enzymes, then the zipper unlocks, by releasing a loop (that’s seen as garbage) and a \beta zipper which starts to react with \beta^{+} enzymes.

The same idea can be used for keeping a molecule inactive unless both \phi^{+} and \beta^{+} enzymes are present in the reactor.  Say that we have a molecule A \rightarrow A' which is made inactive under the form presented in the following figure

zip_list_3

The molecule is locked, but it has two reaction sites, one sensible to \beta^{+}, the other sensible to \phi^{+}. Both enzymes are needed for unlocking the molecule, but there is no preferred order of reaction with the enzymes (in particular these reactions can happen in parallel).
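The double lock can be sketched in the same toy style; the class name and enzyme strings below are my own illustrative naming, not part of the formalism.

```python
# Hypothetical sketch: a molecule locked behind two independent reaction
# sites, one sensitive to beta+ and one to phi+. Both locks must be
# removed (in either order, possibly in parallel) before the payload is
# active.

class DoubleLock:
    def __init__(self, payload):
        self.payload = payload
        self.locks = {"beta+", "phi+"}

    def react(self, enzyme):
        self.locks.discard(enzyme)   # each enzyme removes its own lock

    @property
    def active(self):
        return not self.locks        # active only when both locks are gone

m = DoubleLock("A -> A'")
m.react("phi+")            # order does not matter
assert not m.active        # still locked: the beta+ site is untouched
m.react("beta+")
assert m.active            # both enzymes seen, molecule released
```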

_____________________

2. Sets. Suppose now that we don’t want to release the molecules in a given order. We need to prepare a molecule which has several reaction sites available, so that multiple reactions can happen in parallel, as in the last example. Mathematically, that could be seen as a representation of the set of molecules we want to free, instead of the list of them.  This is easy, as described in the next figure:

zip_list_2

On the first row we see what is called a \beta set. It has 4 possible reaction sites with the enzyme \beta^{+}, therefore, in the presence of this enzyme, the molecules A \rightarrow A', … , E \rightarrow E' are released at the same moment. Alternatively, we may think about a \beta set as a bag of molecules which releases (according to the probability of the reaction with a \beta^{+} enzyme) one of the four molecules A \rightarrow A', … , D \rightarrow D', at random. (It should be interesting to study the evolution of this reaction, because now there are only 3 reaction sites left, …)

On the second row we see a FAN-IN, or \phi set. It behaves the same as the previous one, but this time in the presence of the FAN-IN \phi^{+} enzyme.

Finally, we see a mixed set on the third row. (It should have an interesting dynamics, as a function of the concentrations of the two enzymes.)
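Here is a hedged sketch of the set behaviour, with illustrative names and a fixed random seed; the real release probabilities would of course come from the chemistry, not from a pseudorandom generator.

```python
import random

# Sketch: a beta set exposes every pair as a simultaneously available
# reaction site, so each enzyme encounter picks one pair at random.

def react_once(molecule_set, rng):
    """Remove and return one randomly chosen pair; all sites are equivalent."""
    site = rng.choice(sorted(molecule_set))
    molecule_set.remove(site)
    return site

rng = random.Random(0)
bag = {("A", "A'"), ("B", "B'"), ("C", "C'"), ("D", "D'")}
order = [react_once(bag, rng) for _ in range(4)]   # some random release order
```

Contrast with the zipper: here every run may release the molecules in a different order, while the zipper admits exactly one order.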

_____________________

3. Pairs.  Actually, there is no limit but the imagination to what geometrical operations to consider, see for example the posts Sets, lists and order of moves in graphic lambda calculus and Pair of synapses, one controlling the other (B-type NN part III) . As another example, here is a more involved molecule, which produces different pairs of molecules, according to the presence of \phi^{+} or \beta^{+} enzymes.

In the following figure we see how we model a pair of molecules, then two possible reactions are presented.

zip_list_5

The idea is that we can decide, by controlling the amount of \beta^{+} or \phi^{+}, to couple A with D and C with B, or to couple A with B and C with D. Why? Suppose that A and B can react with both C and D, depending of course on how close the available molecules are.

For example, suppose we want the A and B molecules to be, statistically, one in the proximity of the other. We add to the reactor some \phi^{+} enzyme and we obtain lots of pairs (A,B)_{\beta}. Now, when we further add the \beta^{+} enzyme, then, immediately after the reaction with this enzyme, we are going to have lots of pairs of molecules A and B which are physically close to one another, starting to react.

Instead, if we first introduce \beta^{+} enzymes and then \phi^{+} enzymes, then A would react more likely with D.

_____________________

Return to the chemical concrete machine tutorial.

A sea of possibilities

… opens when I look at graphic lambda calculus as a graph rewriting system  (see also the foundational Term graph rewriting by Barendregt et al.)  The first, most obvious one, is that by treating graphic lambda calculus as a particular GRS, I might USE already written software for applications, like the chemical concrete machine or the B-type neural networks (more about this further). There are other possibilities, much, much more interesting from my point of view.

The reason for writing this post is that I feel a bit like the character Lawrence Pritchard Waterhouse from Neal Stephenson’s Cryptonomicon, more specifically as described by the character Alan Turing in a discussion with Rudolf  von Hacklheber. Turing (character in the book) describes science as a train with several locomotives, called “Newton”, etc (Hacklheber suggests there’s a “Leibniz” locomotive as well), with today’s scientists in the railroad cars and finally with Lawrence running after the train with all his forces, trying to keep the pace.

When you change research subjects as much as I did, this feeling is natural, right? So, as for the lucky Lawrence from the book (lucky because he has the chance to be friends with Turing), there is only one escape for keeping the pace: collaboration. Why run after the GRS train when there is already amazing research done?  My graphic lambda calculus is a particular GRS, which is designed so that it has applications in real (non-silicon) life, like biological vision (hopefully), chemistry, etc.  In real life, I believe, geometry rules, not term rewriting, not types, not any form of the arbor porphyriana. These are extremely useful tools for getting predictions on (silicon) computers out of models. Nature has a way to be massively (and geometrically) parallel, extremely fast and unpreoccupied with the cartesian method. On the other side, in order to get predictions from geometrically interesting models (like graphic lambda calculus and eventually emergent algebras) there is no tool for simulations better than the computer.

Graphic lambda calculus is not just any GRS, but one which has very interesting properties, as I hope this open notebook/blog shows. So, I am not especially interested in the way graphic lambda calculus falls into the general formalism of GRS, for various reasons, among them (I might be naive to think so) the heavy underlying machinery which seems to be used to reduce the geometrically beautiful formalism to some linear writing formalism dear to logicians. But how to write a (silicon) computer program without this reduction? Mind that Nature does not need this step; for example, a chemical concrete machine may be (I hope) implemented in reality just by well mixing the right choice of substances (a gross simplification which serves to describe the fact that reality is effortlessly parallel).

All this to call the attention of GRS specialists to the subject of graphic lambda calculus and its software implementation (in particular), which is surely more productive than becoming such a specialist myself and then using GRS for graphic lambda calculus.

Please don’t let me run after this train 🙂   The king  character from Brave says better than me  (00.40 in this clip)  :

Now, back to the other possibilities.   I’ll give you evidence for these, so that you can judge for yourself.  I started from the following idea, which is related to the use of graphic lambda calculus for neural networks. I wrote previously that the NN which I propose have the strange property that there’s nothing circulating through the wires of the network. Indeed, a more correct view is the following: say you have a network of real neurons which is doing something. Whatever the network does, it does by physical and chemical mechanisms. Imagine the network, then imagine, superimposed over the network, a dynamical chemical reaction network which explains what the real network does.  The same idea rules the NN which I am beginning to describe in some of the previous posts. Instead of the neurons there are graphs which link to others through synapses, which are also graphs. The “computation” consists in graph rewriting moves, so at the end of the computation the initial graphs and synapses are “consumed”. This image fits well not with the image of the physical neural network, but with the image of the chemical reaction network which is superimposed on it.

I imagine this idea is not new, so I started to google for this. I have not found (yet, thank you for sending me any hints)  exactly the same idea, but here is what I have found:

That’s already too much on one’s plate, right? But I think I have made my point: a sea of possibilities. I can think about others, instead of “running after the train”.

Research is so much fun!

Chemical concrete machine, detailed (II)

Continues from the  Chemical concrete machine, detailed (I).  In this part I connect with  the previous post  Chemical concrete machine, short list of gates and moves, then there is a discussion part.

__________________

Let’s look again at the two examples of chemical reactions from the first part:

ess_4

These reactions look as being complementary, as if one is the inverse of another. Indeed, this is the case, but for seeing this better we need to refine a bit the notations.

For each reaction an enzyme is specified, \beta^{+} for the first reaction and \beta^{-} for the second reaction, and moreover a reaction site, denoted by a closed red dashed curve. The meaning is that the enzyme reacts with the molecule by binding at the reaction site and modifying it, without any other action on the rest of the molecule.

Practically, each enzyme changes a small preferred pattern into another. The rest of the molecule does not take part in the reaction, and for this reason it is better to concentrate exclusively on the reaction sites and how they change.

The next figure represents this change, for the pair of complementary enzymes \beta^{+} and \beta^{-}.

ess_5

(The crossing from the right hand side of the figure is not real, remember that the molecules float in a 3D container. The 2D drawings we make are projections in the plane of what we see in space.)

The drawing is made so as to make clear the correspondence between the free ends and beginnings of arrows pointing to, or coming from, the rest of the space.

Now it is clear why the two reactions are inverses of each other. Finally, we represent both reactions like this:

ess_6

In graphic lambda calculus, this is the most important move, namely the graphic beta move. It corresponds to the beta reduction from lambda calculus.
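To make the correspondence concrete for readers who do not know lambda calculus, here is a minimal term-level sketch of what the graphic beta move computes (the term representation and helper names are mine, and alpha-renaming is deliberately ignored).

```python
# Minimal sketch: applying an abstraction to an argument substitutes the
# argument for the bound variable. Terms are strings (variables),
# ('lam', x, body) abstractions, and (f, a) applications.

def subst(t, x, v):
    """Substitute v for free occurrences of x in t (no alpha-renaming)."""
    if isinstance(t, str):
        return v if t == x else t
    if t[0] == 'lam':
        _, y, body = t
        return t if y == x else ('lam', y, subst(body, x, v))
    f, a = t
    return (subst(f, x, v), subst(a, x, v))

def beta(term):
    """One beta step at the root: (('lam', x, body), arg) -> body[x := arg]."""
    (tag, x, body), arg = term
    assert tag == 'lam'
    return subst(body, x, arg)

# (lambda x. x x) applied to y  ->  y y
assert beta((('lam', 'x', ('x', 'x')), 'y')) == ('y', 'y')
```

The graphic beta move does the same thing, but as a purely local rewiring of the graph, with no global substitution pass.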

As simple as it might seem to the eyes of a biochemist, it has tremendous powers. Besides graphic lambda calculus, it also appears in topology research related to various theories in quantum physics; see the UNZIP move described in The algebra of knotted trivalent graphs and Turaev’s shadow world.

__________________

I can now describe the allowed chemical reactions with enzymes in the model of the chemical concrete machine. I reproduce further the last figure from the post Chemical concrete machine, short list of gates and moves , with the understanding that it describes such chemical reactions, with the notational conventions detailed here.

Reactions with enzymes are called “moves”, as in graphic lambda calculus. Moreover, for each reaction-move is given a link to the description of the said move in graphic lambda calculus.

convention_2

convention_3

convention_6

convention_4

The last move, elimination of loops,

convention_5

is not a chemical reaction, however! Its meaning is that loops can be considered garbage (elements of GARB). This is in contradiction with the assumption that GARB consists of reaction products which don’t react further with the molecules. Sometimes we might need reactions between loops and enzymes, like \beta^{-}.   This is a subject which will be discussed later, but the gist of it is that GARB may also be seen as the collection of reaction products which we don’t want to detect at the end of the chain of reactions.

The list of allowed chemical reactions is minimal. We may add other reactions, like reactions between the “other molecules”, this will also be discussed later.

In the next post I shall describe what the chemical machine can do, in principle.

__________________

Discussion.    We have now at our disposal a universe of molecules and a family of allowed chemical reactions. In order to get a usable model we need a bit more. The natural addition would be to put these reactions in the form of a chemical reaction network, or, equivalently, a Petri net. This is a path to follow, which will give explicit, numerical predictions about the behaviour of the chemical concrete machine.

We cannot apply Petri nets directly to the situation at hand, because the chemical reactions, as described here, are between reaction sites, and not between molecules. A molecule may have several reaction sites; moreover, there are reaction sites which involve two molecules. After each possible reaction with an enzyme, the resulting molecule, or molecules, may have several reaction sites as well.

Moreover, reaction sites which involve two molecules, like the site consisting of two arrows possibly belonging to different molecules, are plentiful. We need to make physical assumptions, like saying that a reaction site must be localized in 3D space, otherwise we have a combinatorial explosion of possible reactions involving one, or a pair of, molecules.
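To illustrate the kind of probabilistic bookkeeping such a model would need, here is a naive Gillespie-style sketch over reaction-site counts; the site names and the rates are purely illustrative assumptions, not part of the formalism.

```python
import random

# Naive sketch: pick the next reaction with probability proportional to
# the number of currently available reaction sites of each kind times an
# (assumed) rate constant; each reaction consumes one site.

def simulate(site_counts, rates, steps, rng):
    history = []
    for _ in range(steps):
        weights = {k: site_counts[k] * rates[k] for k in site_counts}
        total = sum(weights.values())
        if total == 0:
            break                       # no sites left, nothing reacts
        r = rng.uniform(0, total)
        for kind, w in sorted(weights.items()):
            r -= w
            if r <= 0:
                site_counts[kind] -= 1  # the chosen site is consumed
                history.append(kind)
                break
    return history

rng = random.Random(1)
trace = simulate({"beta_site": 4, "phi_site": 2},
                 {"beta_site": 1.0, "phi_site": 1.0}, 6, rng)
# after 6 steps all 6 sites are consumed, in some random interleaving
```

A real model would also create new sites after each reaction (as the posts above describe), which is exactly where the Petri-net bookkeeping becomes nontrivial.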

These problems related to reaction sites have surely been studied elsewhere. One possible place to learn from might be, in the view of a naive mathematician like me, the research around the SCHEMA algorithm.   Are reaction sites schemas?

This brings us to the question of the real implementation of the chemical concrete machine. It is clear that the subject belongs to the synthetic biology field and that there is a strong need for a collaborative help from specialists in this field.

There are two possible ways for such an implementation:

  • to construct molecules like the essential ones, with the purpose of manipulating interesting other molecules, which appear in the model as the “other molecules”. Indeed, as we shall see, the chemical concrete machine can perform logical computation (it is Turing complete) in a much simpler form than imitating the structure of a silicon computer (instead, it uses extreme functional programming), and also it can perform geometrical operations, like hoarding and grouping interesting “other” molecules, releasing them at the detection of a chemical signal, and so on, again in a much simpler way than by imitating a mechanical large scale machine, or a silicon computer with actuators and effectors.
  • or to recognize chemical reactions in reality, for example in a living organism, which satisfy the chemical concrete machine formalism. In this second case, which might be the more likely one due to the simplicity of the formalism, the information we get from the chemical concrete machine model is semantic, e.g. if we want to “convince” a living cell to do a certain logical computation, we may exploit the chemical concrete machine formalism, embodied in the cell by recognizing the reactions which are meaningful for the formalism, and then trying to find out how to exploit them (basically how to detect or amplify their effects). This second way seems to be related to the extremely interesting path opened by Fontana and Buss, saying that essentially lambda calculus is something a sufficiently complex chemical reaction network manifests.

Chemical concrete machine, detailed (I)

This is the first in a series of posts which have the purpose to explain in detail the chemical concrete machine. Please read the previous posts with the tag chemical concrete machine and also consult the tutorial on graphic lambda calculus, if needed.

As a motivation, see  Extreme functional programming done with biological computers.

_____________________

I look forward to comments, corrections and contributions.

_____________________

In the following I shall use the name “molecule” in a very loose sense. A molecule is seen as made by other smaller molecules (for example atoms are molecules) which are connected by chemical bonds, or by other molecules called “arrows”. Therefore, a molecule is seen as a graph with nodes which are smaller molecules and arrows which connect those nodes.

There might exist molecules with free arrows, or bonds.

Besides molecules, I shall also use “enzymes”. A chemical reaction will always involve two molecules, or a molecule and an enzyme.  The product of the reaction is a molecule plus garbage, denoted “GARB”, which can be a collection of molecules which are inactive (i.e. they cannot be part of any other chemical reaction).

We shall work with a list A, B, C, ... of unspecified molecules, which can react (or not) one with another; this is left to the choice of the user of the chemical concrete machine. Besides this bag of names of molecules, we have a short list of essential molecules, which are the ingredients of the machine.

For convenience, arrows and loops (i.e. closed arrows) are seen as molecules.

All this is described in the following figure.

ess_1

On the first row you see the list of essential molecules, named respectively

  • “app”, corresponding to the application gate in (graphic) lambda calculus
  • “y”  ,  corresponding to the FAN-OUT gate in graphic lambda calculus
  • “lambda”, corresponding to the lambda abstraction gate in (graphic) lambda calculus
  • “f” , corresponding to the \varepsilon gate in graphic lambda calculus

For the moment these correspondences don’t mean much, therefore I leave the explanation for later. There are four essential molecules, called app, y, lambda and f.

On the second row we see a loop, an arrow (which are considered molecules, as written before), and a molecule called “term”, which corresponds in graphic lambda calculus to the termination gate.

In the last part of the figure you see the list of enzymes. What is to be noticed is that any enzyme comes in two flavors: “+” and “-“. The reason for this is the following. Any chemical reaction which involves a molecule and an enzyme will correspond to a move in the list of moves of the chemical concrete machine  . If you examine these moves then you shall notice that they go in both directions. Therefore, such a move appears as a pair of unidirectional moves. By convention, the moves from left to right are denoted by “+” and the moves from right to left are denoted by “-“. To each such move is associated an enzyme.
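The “+”/“-” convention can be sketched as a pair of opposite rewrites. The patterns below are opaque strings standing in for the graphical reaction sites; this encoding is entirely my own illustration, not the formalism itself.

```python
# Sketch: each move of the machine is a reversible rewrite, split into a
# "+" enzyme (left to right) and a "-" enzyme (right to left).

MOVES = {"beta": ("app-lambda pair", "two crossing arrows")}

def apply_enzyme(state, name, sign):
    """Rewrite the first occurrence of the relevant pattern, if present."""
    lhs, rhs = MOVES[name]
    src, dst = (lhs, rhs) if sign == "+" else (rhs, lhs)
    if src not in state:
        return state                 # no reaction site: nothing happens
    i = state.index(src)
    return state[:i] + [dst] + state[i+1:]

s = ["app-lambda pair"]
s = apply_enzyme(s, "beta", "+")     # beta+ rewrites the site
s = apply_enzyme(s, "beta", "-")     # beta- undoes it
assert s == ["app-lambda pair"]
```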

_____________________

With molecules and essential molecules we build larger molecules. The physical, concrete way of constructing such molecules is left unspecified (although it can be done in the formalism of the chemical concrete machine).  Here are some examples of larger molecules.

ess_2

We imagine that all molecules float inside a 3D container. There may be several copies of the same molecule in the container.

Generally, the rule of building molecules is that they are made by other molecules and essential ones, simply by connecting the arrows and respecting the arrows orientations. More technically, with the correspondences specified previously, the rules of building molecules are exactly the ones for building graphs in the set GRAPH described in the post Introduction to graphic lambda calculus. The only new thing is that we admit also a list A, B, C, ... of unspecified nodes with unspecified valences, which model “other molecules”.

_____________________

So, in this simplified view of reality, molecules are such graphs. Let us see how they react with enzymes. We cannot simply write reactions as

molecules + enzyme = molecules + GARB

because these molecules may be complicated beasts (graphs) and because, as we shall see, the enzymes prefer to react with certain patterns (subgraphs) of molecules. For specifying how the reaction takes place we need:

  • molecules
  • an enzyme
  • and a “reaction site”, which is a small part of the initial collection of  molecules.

The reaction site has to be present in the molecule, otherwise the reaction cannot happen.

In the following figure I consider two examples of reactions (which correspond to graphic beta moves in the realm of graphic lambda calculus).

ess_3

In the first row we see a reaction between a molecule and the enzyme \beta^{+}, which results in two other molecules and some GARB.  There is a small region in the initial molecule, marked by a dashed red closed curve, which represents a reaction site for the \beta^{+} enzyme.

In the second row the same reaction is written in a simpler form. The red “+” sign is eliminated, the two molecules which are obtained are juxtaposed, as if they are floating in the 3D container, and the GARB is ignored. Moreover, the enzyme \beta^{+} points towards the reaction site.

The rows 3 and 4 describe another reaction. At closer inspection, it’s a reaction which can be interpreted as the inverse of the first one.  Let’s examine directly the 4th row (which is obtained from the 3rd row by the same procedure as the 2nd row was obtained from the 1st).  The reaction site of the enzyme \beta^{-} is a pair of arrows from two different molecules. The resulting molecule is the same as the initial molecule from the previous reaction.

_____________________

In the next post I shall continue with the chemical reactions which are allowed, with all details.

Extreme functional programming done with biological computers

After a very interesting hangout with Eugenio Battaglia and Gerd Moe-Behrens emerged this idea: to use functional programming for biological computing. It is not straightforward to explain, therefore I use this post in order to clarify a bit my understanding of the problem.

As a background for this description, consider that I am a mathematician, which is a mixed blessing, because even if some ideas seem easy in mathematical terms, they turn out to be hard to explain without explicit recourse to mathematics. On the other hand, I am ignorant concerning chemistry and biology, and therefore handicapped in discussions with specialists outside my narrow mathematical world.

Further I shall describe things from my point of view, within the mentioned constraints which shape my understanding. I am not sure if I gave the right description of the gist of the discussion.

1. Apparently, as Gerd Moe-Behrens explains in The biological microprocessor, or how to build a computer with biological parts, there is a trend to use the architecture of a silicon computer for building a biological computer. In particular this means trying to build boolean logic gates, or memory.  It is an important observation that whatever happens in a living cell which resembles computation (in a vaguer sense than Turing computation) is not necessarily implemented as in a silicon computer. Instead, synthetic biology tries to import concepts from silicon computers into the bio realm, mainly because these concepts are already familiar. Hence the accent on bits and boolean logic, although not exclusively (other studies concern for example membrane computing, or neural networks, or continuously evolving dynamical systems).

2. On the other hand, functional programming is one of the main programming paradigms, a very elegant one, receiving more and more attention in the competition with the better known imperative programming. Yet evaluation of boolean circuits, which is taken as a role model in many studies of biological computing, is more natural in the imperative programming paradigm. However, functional programming has its roots in the lambda calculus, one of the two pillars of computation, along with the Turing machine (of equivalent power, but with a different philosophy behind it).

3. In 1994, Fontana and Buss, in a pair of articles, introduce the idea that lambda calculus is a kind of natural formalization of the bare bones of  chemistry.  Molecules are seen as lambda terms, collision of molecules is seen as the application operation in lambda calculus, and  the abstraction operation from lambda calculus “captures the role of functional groups embedded in the reactants” [source, p. 11].

4. By the previous points, it is only natural to try to use the functional programming paradigm for biological computing. This means in particular no longer chasing logic gates and boolean circuits.

5. It is interesting to notice that Fontana and Buss use lambda calculus with eta reduction, i.e. the terms (or molecules) are identified with functions, in the sense that the molecule A is identified with the function B \mapsto AB. Functional programming, as far as I understand, also uses this eta reduction, being more interested in types.

6. Independently, I built graphic lambda calculus, which is even more powerful than  lambda calculus. One of the interesting parts of graphic lambda calculus is that it makes clear that the beta reduction (which is the main reduction move in lambda calculus) is local in nature (read “simple to implement in reality”) and the eta reduction, as innocuous as it might look to our powerful brains, is in fact a global reduction move (here you see the eta reduction in graphic lambda calculus).  Therefore, I believe that “extreme” functional programming (i.e. without extensionality) is more likely to be implemented in reality (i.e. outside a silicon computer).

7. In graphic lambda calculus we work with graphs, (some of them) corresponding to lambda terms. We could identify them with chemical molecules, as it’s done in algorithmic chemistry. But there are some differences between algorithmic chemistry and my chemical concrete machine proposal, namely that the application and the abstraction operations from lambda calculus are, in the concrete machine, ingredients of molecules, while moves (reductions) are identified with enzymes, thus reductions are certain chemical reactions.

8. This leads us to the last point: it is intriguing to investigate whether “extreme” functional programming (i.e. functional programming without extensionality) could be implemented in biological computing. The advantage of this extreme functional programming paradigm is that the implementation might be much simpler than mimicking a silicon computer, because lambda calculus already looks like a kind of chemistry. Another advantage, coming from the fact that the chemical concrete machine uses the graphic lambda calculus formalism, is that geometric operations (like bonding two molecules together, or performing reduction steps in parallel) are simpler to describe in graphic lambda calculus than in the usual lambda calculus.
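The Fontana-Buss correspondence from point 3 can be made concrete in a few lines of code. Below is a minimal sketch in a toy term encoding of my own (the tuple representation, the function names and the example molecules are conventions of the sketch, not anything from their papers): molecules are lambda terms, a collision of molecules A and B forms the application (A B), and the reaction product is the normal form of that application.

```python
# terms: ('var', x) | ('lam', x, body) | ('app', fun, arg)

def subst(t, x, val):
    # substitution; capture-avoidance is not needed for the closed example
    # terms used below (a simplifying assumption of this toy sketch)
    if t[0] == 'var':
        return val if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, val))
    return ('app', subst(t[1], x, val), subst(t[2], x, val))

def normalize(t):
    # normal-order reduction to normal form (terminates for the terms below)
    if t[0] == 'app':
        f = normalize(t[1])
        if f[0] == 'lam':
            return normalize(subst(f[2], f[1], t[2]))
        return ('app', f, normalize(t[2]))
    if t[0] == 'lam':
        return ('lam', t[1], normalize(t[2]))
    return t

def collide(a, b):
    # collision of two molecules = application; the reaction = reduction
    return normalize(('app', a, b))

I = ('lam', 'x', ('var', 'x'))                  # identity molecule
K = ('lam', 'x', ('lam', 'y', ('var', 'x')))    # constant-maker molecule
assert collide(I, K) == K   # I swallows K and releases it unchanged
```

For instance, colliding the identity molecule with any other molecule releases that molecule unchanged, which is the chemical reading of the reduction (I B) -> B.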

The disadvantage is that it is hard to understand lambda calculus without extensionality. It is already hard to understand functional programming for someone with an imperative programming mindframe. Without extensionality, it is even harder not to fall into habits of thinking which might be misleading. I know this, because two years ago all this lambda calculus was totally opaque to me. Why not use equality? Why not use functions? Is this mathematics or some crazy invention of logicians? Now I know better, of course. But for somebody who is used to boolean logic and (like me) to functional analysis, it looks crazy at first. If you go further, then you start to love it.
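To illustrate point 6 in term notation (again a toy encoding of my own; in graphic lambda calculus the globality of eta shows up through the fan-out trees, but the flavour is visible already here): the eta move λx.(f x) -> f carries the side condition “x does not occur free in f”, which forces a walk over the whole subterm f, while a beta step only inspects the redex itself.

```python
# terms: ('var', x) | ('lam', x, body) | ('app', fun, arg)

def free_vars(t):
    # the *global* part: collecting free variables walks the whole subterm
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'lam':
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def eta_step(t):
    """Rewrite ('lam', x, ('app', f, ('var', x))) to f when legal."""
    if (t[0] == 'lam' and t[2][0] == 'app' and t[2][2] == ('var', t[1])
            and t[1] not in free_vars(t[2][1])):   # the global side condition
        return t[2][1]
    return t

f = ('var', 'f')
assert eta_step(('lam', 'x', ('app', f, ('var', 'x')))) == f
# no reduction when x occurs free in the head:
bad = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))
assert eta_step(bad) == bad
```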

Xanadu rules for OA publishing

Any new project in OA publishing meets high expectations. That is why most of them fail. The truth is that, until now, and with few exceptions, no such project has met the expectations of us, anonymous creators of net content.

I am interested in research and in communicating it, particularly in mathematical research (but math is everywhere and the best challenges are now in the sciences of the living, so it does not matter much what kind of research we are discussing).

Discussions about new OA publication proposals, I noticed, quickly turn to some sensitive points, which are usually not considered by the creators. Central among those is: we need more interactive forms of communicating research. The notion of the article as the main communication vehicle is challenged, because if we are willing to allow the article to bear perpetual online examination and commenting then, at some point, we obtain a complex product, with many contributors, with many levels of complexity, something which is no longer an article, but something else.

For simplicity, let’s call such an object a “document”, which may have various versions in space and in time (i.e. a version 3 in London and a version 2 in Singapore).

Contributors to documents are called “creators”, be they authors, reviewers or commenters.

We have a kind of a mess, right? Something not quite easy to handle by the www, something which will entropically turn to chaos.

Not quite. It would be so in the world of the html link.

There is an old attempt, the Project Xanadu, with its tumblers and enfilades, which looks to me like an ideal system of meaningful organisation of scientific communication. We don’t need the whole net to devolve in time to the ideas of the ’60s and then re-evolve with tumblers instead of html links. Instead, for those of us who are nerdy, have long attention spans and want to communicate original, creative research, the rules of the Project Xanadu could be taken as an example of what would be nice to have.

In the following I reproduce those rules, but with some words replaced, like “server” with “journal” (that’s a bad word, journal, but for the moment I don’t have another) and “user” with “creator”. Also, “document” is to be understood in the sense explained previously. Finally, I shall strike through the rules which I don’t think apply (or which I don’t support, because they are the kind of evil thing a legacy publisher would adore).

Here they are:  (source for the original rules)

  1. Every journal is uniquely and securely identified.
  2. Every journal can be operated independently or in a network.
  3. Every creator is uniquely and securely identified.
  4. Every creator can search, retrieve, create and store documents.
  5. Every document can consist of any number of parts each of which may be of any data type.
  6. Every document can contain links of any type including virtual copies (“transclusions“) to any other document in the system accessible to its owner.
  7. Links are visible and can be followed from all endpoints.
  8. Permission to link to a document is explicitly granted by the act of publication.
  9. Every document can contain a royalty mechanism at any desired degree of granularity to ensure payment on any portion accessed, including virtual copies (“transclusions”) of all or part of the document.
  10. Every document is uniquely and securely identified.
  11. Every document can have secure access controls.
  12. Every document can be rapidly searched, stored and retrieved without creator knowledge of where it is physically stored.
  13. Every document is automatically moved to physical storage appropriate to its frequency of access from any given location.
  14. Every document is automatically stored redundantly to maintain availability even in case of a disaster.
  15. Every journal provider can charge their creators at any rate they choose for the storage, retrieval and publishing of documents.
  16. Every transaction is secure and auditable only by the parties to that transaction.
  17. The client-server communication protocol is an openly published standard. Third-party software development and integration is encouraged.

___________________

UPDATE: Among the Xanadu Projects, as listed on this page of Xanadu Australia, there is

Committed (persistent) online publishing

Algorithmic chemistry and the chemical concrete machine

I just discovered the work of Fontana and Buss on Algorithmic Chemistry. The ideas appear to be very close to what I am trying to do with the chemical concrete machine (independently, because of my relative ignorance of past research in the field, but I’m improving my knowledge day by day).

I shall jump directly to differences. At first sight there are a lot of ideas which look the same, mainly that lambda terms are molecules and chemical reactions are related to reduction in lambda calculus.

The main differences are:

  • for the chemical concrete machine, molecules are not just any abstract list, but a particular one, expressed as certain graphs,
  • I don’t use lambda calculus, but a more powerful version, graphic lambda calculus, which works directly with such “molecules” (i.e. graphs in GRAPH), being free from any linear writing convention or variable names,
  • chemical reactions between such molecules do not correspond to lambda abstraction or to the application. Instead, the chemical reactions correspond to reduction moves, mainly to the graphic beta move,
  • molecules are not functions, because of the lack of extensionality (eta reduction), which is good, because it makes the calculus much more concrete than with extensionality (but this is an argument which needs more details, that’s for later). For now I shall just mention my belief that extensionality and types are unnecessary relics of thought (controversial, I know) and that one of the main stumbling blocks on the path of reconciling syntax with semantics is keeping types and extensionality (the short argument against these is that there’s nothing like types or extensionality in any biological brain; they are probably just another manifestation of the cartesian disease).

But otherwise my proposal of the chemical concrete machine is clearly in the field of algorithmic chemistry!

Message in a bottle

I don’t know how you function, but I tend to send messages, containing ideas which turn around in my head, to whoever I think might be interested, as an invitation to dialogue, or as a request for help.

Reading such messages afterwards, I almost always notice that it is not obvious what my intentions and motivations are, even to me.

Nevertheless, on a longer time scale, these become transparent. Some look a bit like messages in a bottle which I send from a remote island and years after I receive them on another shore.

These poetic lines, partially motivated by the sun intake and the proximity of the sea, are an introduction to such a bottled message, which I just received and moreover one which can be used as evidence for the long time scale phenomenon.

I use now an older laptop, the one which I had with me when I was in Rio. I don’t know if I wrote this before, or if anybody cares (but indulge me to use this place as a log, besides being an open notebook), but the strongest feeling I had back in Rio was one of total freedom. And what does a free mathematician do these days? Well, I decided I would like to switch to biology, of course.

I have a folder on my laptop, created in Rio, called “next”. What’s that? I asked myself today and opened it. Inside there’s a latex file “bio_next”. Aha, let’s see, I don’t remember what it’s about.

In the file is a collection of quotes from messages sent to somebody I shall not mention, because they are part of strange messages which probably did not have any meaning from the receiver’s point of view.

But I invite you to compare these messages with the content of the chorasimilarity open notebook, in order to prove that there’s a long time scale coherence. Hopefully, now I am a bit wiser and I keep a log.

I shall mention also, because, as somebody said, the net is not subtle, that the purpose of sharing these messages is to invite you, dear reader, to dialogue.

Here is the content of the file (latex commands and names/places  excluded):

nov 14 2007:   I hope to be in ..  at the beginning of December (but not 100 percent sure). I could pass by … if you are there, because I am very curious about your opinion. Mine is that even if these contributions are unworthy, they show the existence of an infinite field of analysis. So maybe this could nuance the old dichotomy discrete-continuum a bit.

feb 1, 2008:      On another note, I am (was) impressed by your turning to biology and now am part of a project in molecular dynamics (more or less). So now I am struggling with questions like: what do “shape” and “function” mean for a biochemist? Very non bourbakian notions.

oct 5, 2009:  I think that the paper is difficult for someone not in the right frame of mind, and rigid. I also believe that it is important, less because it gives a kind of axiomatization of SR geometry, and more because it shows that there is a world of analysis outside the regular paths. And I think it is relevant for understanding our presuppositions concerning space, but I noticed that this is not a viewpoint to discuss in a math paper for publication.

nov 16, 2009:   Question (to your curiosity): if a squirrel (monocular vision times two eyes) would do analysis, what would it look like?  My tentative answer: the axiom A4 (with the approximate difference operator, the one from the last section of Bellaiche’s SR paper) of dilatation structures seems natural to us, binocular vision beings. To a squirrel, the most natural is to base everything on the approximate inverse operator ( inv , see the previous “Emergent algebras as generalizations of differentiable algebras, with applications” http://arxiv.org/abs/0907.1520 ). Therefore, a squirrel would start to do analysis on symmetric spaces, not on vector spaces as we do.

jan 5, 2010:    Question: The basic components of dilatation structures (and therefore of any scheme, let’s say, of finite differences) are dilatations. A dilatation is just a gate with two entries and one output. The calculus with dilatations consists in finding interesting binary trees with nodes being dilatations, such that when \varepsilon goes to 0, the output of the tree converges.  Likewise, boolean calculus is based on the transistor (say, a NAND gate) and any circuit made of transistors is (in multiple ways) representable as a binary tree.  The trees appearing in the axioms of dilatation structures are different from the ones appearing in the axioms of boolean algebras, but the question remains: find interesting circuits and their algebra, but for dilatation structures.  So I have a question to ask you: if you think about dilatation gates as elementary molecules, what is their chemistry? That is, have you seen (binary) trees and/or algebraic rules involving such objects in chemistry?  Is maybe process algebra another ingredient to add? For process algebra see http://www.lfcs.inf.ed.ac.uk/reports/89/ECS-LFCS-89-86/ and the wiki entry http://en.wikipedia.org/wiki/Process_calculus  I would appreciate any help/idea.

feb 12, 2010:   Otherwise, concerning my last mail, now I have an answer, namely dilations in metric spaces are approximate quandle operations (Reidemeister move III is true in the limit of the rescaling which gives the tangent space)! It is somehow natural, if you think about the Alexander quandle, which is an “exact” quandle due to linearity, and the quandle operation is just an Euclidean dilation.  Sorry that I can’t explain more precisely in a few words, but this means that there is a whole algebraic avenue to explore!

jun 10, 2010:       This is just to communicate some ideas about what it means to simulate (computations in) a space, which I discovered seem to be related to some viewpoints in studies of human vision, like Koenderink’s “Brain a geometry engine”.
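A postscript to the feb 12, 2010 message: the claim about the Alexander quandle is easy to check numerically. Here is a minimal sketch (my own encoding; the value of EPS and the sample points are arbitrary choices): the Euclidean dilation dil(a, b) = a + \varepsilon (b - a) is an Alexander quandle operation, i.e. it is idempotent and right self-distributive, the latter being the algebraic content of Reidemeister move III.

```python
EPS = 0.3   # any fixed scale parameter in (0, 1)

def dil(a, b, eps=EPS):
    """Euclidean dilation of b toward a (points as coordinate tuples)."""
    return tuple(x + eps * (y - x) for x, y in zip(a, b))

a, b, c = (1.0, 2.0), (-3.0, 0.5), (4.0, -1.0)

# idempotence: a * a = a
assert dil(a, a) == a

# right self-distributivity: (a * b) * c == (a * c) * (b * c)
lhs = dil(dil(a, b), c)
rhs = dil(dil(a, c), dil(b, c))
assert all(abs(x - y) < 1e-12 for x, y in zip(lhs, rhs))
```

Self-distributivity holds exactly here because the Euclidean dilation is linear; for dilations in a general metric space it holds only approximately, in the rescaling limit the message refers to.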