Past vitalism in extreme functional programming

In vitalism everything is a function, just like in lambda calculus with eta reduction.


With chemlambda we are somewhere corresponding to The Sceptical Chymist: or Chymico-Physical Doubts and Paradoxes, by Boyle.

Recall that chemlambda (and GLC) rejects extensionality, because extensionality is a global move, not a local one.
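As a reminder (in ordinary lambda calculus notation, not the graphical one used by chemlambda), extensionality enters lambda calculus through the eta rule:

```latex
\lambda x.\,(f\,x) \;\to_{\eta}\; f
\qquad \text{provided } x \notin \mathrm{FV}(f)
```

Eta encodes the extensional principle that functions agreeing on all arguments are equal, $(\forall x.\ f\,x = g\,x) \Rightarrow f = g$, a statement quantified over all possible inputs, which is the sense in which it is a global move rather than a local one.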

Therefore, chemlambda and GLC are not functional, properly speaking. Which is good, actually, for a variety of reasons, much the same as those for which chemistry is better than vitalism.

In another post, this way of seeing was called “extreme functional programming”.

I hope this might help the understanding of the distributed GLC model of computation and the reasons behind the choices made in the model.



1. Criticism of vitalism supports the view that functional programming with extensionality is a reenactment of vitalism.

Vitalism has sometimes been criticized as begging the question by inventing a name. Molière had famously parodied this fallacy in Le Malade imaginaire, where a quack “answers” the question of “Why does opium cause sleep?” with “Because of its soporific power.”[29] Thomas Henry Huxley compared vitalism to stating that water is the way it is because of its “aquosity”.[30] His grandson Julian Huxley in 1926 compared “vital force” or élan vital to explaining a railroad locomotive’s operation by its élan locomotif (“locomotive force”). [quote from the wiki source on vitalism]

2. From the wiki source:

The Sceptical Chymist: or Chymico-Physical Doubts & Paradoxes is the title of Robert Boyle‘s masterpiece of scientific literature, published in London in 1661. In the form of a dialogue, the Sceptical Chymist presented Boyle’s hypothesis that matter consisted of atoms and clusters of atoms in motion and that every phenomenon was the result of collisions of particles in motion. For these reasons Robert Boyle has been called the founder of modern chemistry by J. R. Partington.[1]


3. Chemlambda is based on a chemical metaphor. The initial name of the formalism is “chemical concrete machine”, in contrast with the “chemical abstract machine” of Berry and Boudol.



Microbes take over and then destroy the HAL 9000 prototype

Today was a big day for the AI specialists and their brainchild, the HAL 9000. Finally, the decision was made to open the isolation bubble which separated the most sophisticated artificial intelligence from the Net. They expected that somehow their brainchild would survive unharmed when exposed to the extremely dynamic medium of decentralized, artificial-life based computation we all use every day.

As the video made by Jonathan Eisen shows, in about 9 seconds after the super-intelligence was taken out of the quarantine and relayed to the Net, “microbes take over and then destroy the” HAL 9000 prototype.

After the experiment, one of the creators of the HAL 9000 told us: “Maybe we concentrated too much on higher level aspects of the mind. We aim for understanding intelligence and rational behaviour, but perhaps we should learn this lesson from Nature, namely that real life is a wonderful, complex tangle of local, low level interactions, and that rational mind is a very fragile epiphenomenon. We tend to take for granted the infrastructure of life which runs in the background.”

“I was expecting this result” said a Net designer. “The strong point of the Alife decentralized functioning of the Net is exactly this: like the microbes, the Net needs no semantics to function. This is what keeps us free from the All Seeing Eye, corporate-clouds based Net which was the rule some years ago. This is what gives everybody the trust to use the Net.”



This is another post which responds to the challenge from Alife vs AGI. You are welcome to suggest another one or to make your own.


From a stain on the wall to five visual languages

Do you know about the “stain on the wall”  creativity technique of Leonardo da Vinci? Here is a quote [source used]:

I will not forget to insert into these rules, a new theoretical invention for knowledge’s sake, which, although it seems of little import and good for a laugh, is nonetheless, of great utility in bringing out the creativity in some of these inventions.    This is the case if you cast your glance on any walls dirty with such stains or walls made up of rock formations of different types.  If you have to invent some scenes, you will be able to discover them there in diverse forms, in diverse landscapes, adorned with mountains, rivers, rocks, trees, extensive plains, valleys, and hills. You can even see different battle scenes and movements made up of unusual figures,  faces with strange expressions,  and myriad things which you can  transform into a complete and proper form constituting part of similar walls and rocks. These are like the sound of bells, in whose tolling, you hear names and words that your imagination conjures up.

I propose to you  five graphical formalisms, or visual languages, towards the goal of “computing with space”.

They all come from a “stain on the wall”, reproduced here (it is the beginning of the article What is a space? Computations in emergent algebras and the front end visual system, arXiv:1009.5028), with some links to more detailed explanations and related material which I invite you to follow.

Or better, treat them as a stain on the wall. To share, to dream about, to create, to discuss.

In mathematics “spaces” come in many flavours. There are vector spaces, affine spaces, symmetric spaces, groups and so on. We usually take such objects as the stage where the plot of reasoning is laid. But in fact what we use, in many instances, are properties of particular spaces which, I claim, can be seen as coming from a particular class of computations.

There is though a “space” which is “given” almost beyond doubt, namely the physical space where we all live. But as regards perception of this space, we know now that things are not so simple. As I am writing these notes, here in Baixo Gavea, my eyes are attracted by the wonderful complexity of a tree near my window. The nature of the tree is foreign to me, as are the other smaller beings growing on or around the tree. I can make some educated guesses about what they are: some are orchids, there is a smaller, iterated version of the big tree. However, somewhere in my brain, at a very fundamental level, the visible space is constructed in my head, before the stage where I am capable of recognizing and naming the objects or beings that I see.


The five visual languages are to be used with the decentralized computing model called Distributed GLC.  They point to different aspects, or they try to fulfil different goals.

They are:



Birth and death of zipper combinators (II)

Continues from   Birth and death of zipper combinators (I).

In the following is presented another mechanism of birth/death of zipper combinators, which is not using a garbage collecting arrow and termination node.

For any zipper combinator A there is a number n and a succession of n CLICK, ZIP and LOC PRUNING moves such that



So this time   we transform a zipper combinator connected to a termination node into a bunch of loops.

Proof. The first part is identical with the one from the previous post, namely we remark that we can reduce the initial zipper combinator with the exit connected to a termination node into a finite collection of simpler zippers.


This is done by a succession of LOC PRUNING moves for (+) half-zippers.

We shall use then the following succession of moves, in order to “unlock” (-) half-zippers.


If we use this for the I zipper combinator then we can transform it into one loop.


The same trick is used for transforming the zipper combinator K into two loops.


The zipper combinator S is transformed into three loops, starting by the same trick.


The proof is done.


It follows that we can replace garbage collection by ELIM LOOPS, a move which appeared in earlier formulations of chemlambda.

Seen from the birth point of view, if we have enough loops then we can construct any zipper combinator (with the exit arrow connected to a termination node).


Birth and death of zipper combinators (I)

Because zipper logic is a graph rewriting system which does not use variable names, just like GLC and chemlambda, it needs ways to multiply, distribute or kill things.

See  Chemlambda, universality and self-multiplication  for multiplication, distribution or propagation phenomena.

In this post we shall see how zipper combinators die or are born, depending on the sense of reading the moves. This can happen in several ways, one of them explained in this post.

This can also be seen as the statement: we can do garbage collection in chemlambda, GLC or zipper logic.



Zipper combinators.   The set of zipper combinators is the smallest set of zipper graphs with the properties:

  •  it contains the S, K, I zipper graphs defined in  the next figure


  • for any natural number n > 0 and for any n+1 zipper combinators, the zipper graph obtained by connecting the out arrows of the zipper combinators to the in arrows of the (+n) half-zipper is a zipper combinator.


  • any zipper graph which is obtained by applying a zipper logic move to a zipper combinator is a zipper combinator.
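The inductive definition above parallels the usual definition of combinator terms built from S, K and I. As a rough illustration only (these are ordinary combinator terms, not zipper graphs, and the names `Atom` and `App` are hypothetical, chosen just for this sketch), the closure under n-ary application can be written as:

```python
# Sketch: SKI combinator terms, mirroring the inductive definition above.
# Plain combinator terms, not zipper graphs; App plays the role of joining
# n+1 combinators through a (+n) half-zipper.

class Term:
    pass

class Atom(Term):
    """One of the base combinators S, K, I."""
    def __init__(self, name):
        assert name in ("S", "K", "I")
        self.name = name
    def __repr__(self):
        return self.name

class App(Term):
    """An n-ary application, n > 0: a head applied to n arguments,
    corresponding to n+1 combinators joined by a (+n) half-zipper."""
    def __init__(self, head, *args):
        assert len(args) > 0
        self.head, self.args = head, args
    def __repr__(self):
        return "(" + " ".join(map(repr, (self.head,) + self.args)) + ")"

# Example: the term S K K (which behaves like the identity I)
t = App(Atom("S"), Atom("K"), Atom("K"))
print(t)  # (S K K)
```

The third closure property (closure under zipper logic moves) has no counterpart here, since these terms carry no graph structure to rewrite.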



I want to prove that for any zipper combinator A, there is a number n and  a succession of n  LOCAL PRUNING,  CLICK and ZIP moves such that:


Proof. Because of the LOC PRUNING move satisfied by (+) half-zippers, it follows that by a finite number of these moves we can achieve the effect described in the next figure, for any zipper combinator A (of course, the number of these moves depends on A):


It is left to prove that the free arrow ending with a termination node can “eat”, one by one, all the I, K, S zipper combinators.

For the I combinator, it happens like this:


The case of the combinator K is described next:


The combinator S is absorbed by the following succession of moves:



The proof is done.


We can read all backwards, by saying that an arrow connected to a termination node can give birth to  any zipper combinator, with the out arrow connected to a termination node.





Halfcross way to pattern recognition (in zipperlogic)

Inspired by the Zipper logic and knot diagrams post, here is an alternate encoding of chemlambda.

This is part of the series on zipper logic (branded in this post as  “zipperlogic”).  The last post is Zipper logic (VI) latest version, detailed.

As can be seen in that post, zipperlogic is equivalent with chemlambda, but it has two interesting qualities: it is more intuitive and it has the CLICK move.

The CLICK move transforms a pair of opposed half-zippers into a zipper, which is then unzipped with the ZIP move. While the ZIP move is equivalent with the graphic beta move, there is apparently no correspondent of the CLICK move.

The CLICK move becomes useful when we use other realizations of the zipperlogic than chemlambda.    In the one where half-zippers are realized as towers of crossings, the CLICK move turns out to be a pattern recognition move, and the ZIP move becomes the familiar R2 (Reidemeister 2) move, applied to that pattern.

That is why CLICK is interesting: because in order to apply moves in chemlambda, we have first to identify the patterns where these moves may be used.

Now, I want to justify this in the following.  I shall not aim for another realization of zipperlogic, but for one of chemlambda, inspired by the one of zipperlogic seen as acting on towers of crossings.

I shall use half-crossings.  Recall that in the post Slide equivalence of knots and lambda calculus (I) I wrote:

Louis Kauffman proposes in his book Knots and Physics  (part II, section “Slide equivalence”), the notion of slide equivalence. In his paper “Knotlogic” he uses slide equivalence (in section 4) in relation to the self-replication phenomenon in lambda calculus. In the same paper he is proposing to use knot diagrams as a notation for the elements and operation of a combinatory algebra (equivalent with untyped lambda calculus).


Obviously, we have four gates, like in the lambda calculus sector of the graphic lambda calculus. Is this a coincidence?


So this post can be seen as an attempt to answer this question.

But the halfcrossings which I use here are different from the ones defined by Louis Kauffman. There might be a way to transform ones into the others, but I have not found it yet.


Here is the encoding of chemlambda by halfcrossings:


Remark that each of the halfcrossings has a dangling, vanishing thread, like in the previous post Bacterial conjugation is beta reduction.

[I shall come back in later posts to the relevance of this formalism for horizontal gene transfer.]

Look at this as a new notation for chemlambda nodes and just replace  the green and red nodes by these halfcrossings in order to get the right moves for the halfcrossings.

With an exception: the CLICK move. This move consists in joining neighbouring dangling threads, in two situations, one related to the beta move, the other related to the FAN-IN move.

Look how the beta move appears with halfcrossings and the CLICK move used for pattern recognition (in the figure this is called “pattern id”):


Nice, right?


Now, the other CLICK move, involved into the identification of the pattern appearing  in the FAN-IN move.


In a future post I shall look at the DIST moves, in this encoding.



Sometimes an anonymous review is “a tale told by an idiot …”

… “full of sound and fury, signifying nothing.” And the editor believes it, even if it is self-contradictory, after sitting on the article for half a year.

There are two problems:

  • the problem of time: you write a long and dense article, which may be hard to review, and the referee, instead of declining to review it, keeps it until the editor presses him to write a review; then he writes some fast, crappy report, much below the quality required by the work.
  • the problem of communication: there is no two way communication with the author. After waiting a considerable amount of time, the author has nothing else to do than to re-submit the article to another journal.

Both problems could be easily solved by open peer-review. See Open peer-review as a service.

The referee can well be anonymous, if he wishes, but a dialogue with the author and, more important, with other participants could only improve the quality of the review (and by way of consequence, the quality of the article).

I reproduce further such a review, with comments. It is about the article “Sub-riemannian geometry from intrinsic viewpoint” arXiv:1206.3093 .  You don’t need to read it, maybe excepting the title, abstract and contents pages, which I reproduce here:

Sub-riemannian geometry from intrinsic viewpoint
Marius Buliga
Institute of Mathematics, Romanian Academy
P.O. BOX 1-764, RO 014700
Bucuresti, Romania
This version: 14.06.2012


Gromov proposed to extract the (differential) geometric content of a sub-riemannian space exclusively from its Carnot-Caratheodory distance. One of the most striking features of a regular sub-riemannian space is that it has at any point a metric tangent space with the algebraic structure of a Carnot group, hence a homogeneous Lie group. Siebert characterizes homogeneous Lie groups as locally compact groups admitting a contracting and continuous one-parameter group of automorphisms. Siebert result has not a metric character.
In these notes I show that sub-riemannian geometry may be described by about 12 axioms, without using any a priori given differential structure, but using dilation structures instead.
Dilation structures bring forth the other intrinsic ingredient, namely the dilations, thus blending Gromov metric point of view with Siebert algebraic one.
MSC2000: 51K10, 53C17, 53C23

1 Introduction       2
2 Metric spaces, groupoids, norms    4
2.1 Normed groups and normed groupoids      5
2.2 Gromov-Hausdorff distance     7
2.3 Length in metric spaces       8
2.4 Metric profiles. Metric tangent space      10
2.5 Curvdimension and curvature     12

3 Groups with dilations      13
3.1 Conical groups     14
3.2 Carnot groups     14
3.3 Contractible groups   15

4 Dilation structures  16
4.1 Normed groupoids with dilations     16
4.2 Dilation structures, definition    18

5 Examples of dilation structures 20
5.1 Snowflakes, nonstandard dilations in the plane    20
5.2 Normed groups with dilations    21
5.3 Riemannian manifolds    22

6 Length dilation structures 22
7 Properties of dilation structures    24
7.1 Metric profiles associated with dilation structures    24
7.2 The tangent bundle of a dilation structure    26
7.3 Differentiability with respect to a pair of dilation structures    29
7.4 Equivalent dilation structures     30
7.5 Distribution of a dilation structure     31

8 Supplementary properties of dilation structures 32
8.1 The Radon-Nikodym property    32
8.2 Radon-Nikodym property, representation of length, distributions     33
8.3 Tempered dilation structures    34
9 Dilation structures on sub-riemannian manifolds   37
9.1 Sub-riemannian manifolds    37
9.2 Sub-riemannian dilation structures associated to normal frames     38


10 Coherent projections: a dilation structure looks down on another   41
10.1 Coherent projections     42
10.2 Length functionals associated to coherent projections    44
10.3 Conditions (A) and (B)     45

11 Distributions in sub-riemannian spaces as coherent projections    45
12 An intrinsic description of sub-riemannian geometry    47
12.1 The generalized Chow condition     47
12.2 The candidate tangent space    50
12.3 Coherent projections induce length dilation structures  53

Now the report:


Referee report for the paper

 Sub-riemannian geometry from intrinsic viewpoint

Marius Buliga

New York Journal of Mathematics (NYJM).

One of the important theorems in sub-riemannian geometry is a result
credited to Mitchell that says that Gromov-Hausdorff metric tangents
to sub-riemannian manifolds are Carnot groups.
For riemannian manifolds, this result is an exercise, while for
sub-riemannian manifolds it is quite complicate. The only known
strategy is to define special coordinates and using them define some
approximate dilations. With this dilations, the rest of the argument
becomes very easy.
Initially, Buliga isolates the properties required for such dilations
and considers
more general settings (groupoids instead of metric spaces).
However, all the theory is discussed for metric spaces, and the
groupoids leave only confusion to the reader.
His claims are that
1) when this dilations are present, then the tangents are Carnot groups,
[Rmk. The dilations are assumed to satisfy 5 very strong conditions,
e.g., A3 says that the tangent exists – A4 says that the tangent has a
multiplication law.]
2) the only such dilation structures (with other extra assumptios) are
the riemannian manifolds.
He misses to discuss the most important part of the theory:
sub-riemannian manifolds admit such dilations (or, equivalently,
normal frames).
His exposition is not educational and is not a simplification of the
paper by Mitchell (nor of the one by Bellaiche).

The paper is a cut-and-past process from previous papers of the
author. The paper does not seem reorganised at all. It is not
consistent, full of typos, English mistakes and incomplete sentences.
The referee (who is not a spellchecker nor a proofread) thinks that
the author himself could spot plenty of things to fix, just by reading
the paper (below there are some important things that needs to be

The paper contains 53 definitions – fifty-three!.
There are 15 Theorems (6 of which are already present in other papers
by the author of by other people. In particular 3 of the theorems are
already present in [4].)
The 27 proofs are not clear, incomplete, or totally obvious.

The author consider thm 8.10 as the main result. However, after
unwrapping the definitions, the statement is: a length space that is
locally bi-lipschitz to a commutative Lie group is locally
bi-lipschitz to a Riemannian manifold. (The proof refers to Cor 8.9,
which I was unable to judge, since it seems that the definition of
“tempered” obviously implies “length” and “locally bi-lipschitz to the

The author confuses the reader with long definitions, which seems very
general, but are only satisfied by sub-riemannian manifolds.
The definitions are so complex that the results are tautologies, after
having understood the assumptions. Indeed, the definitions are as long
as the proofs. Just two examples: thm 7.1 is a consequence of def 4.4,
thm 9.9 is a consequence of def 9.7.

Some objects/notions are not defined or are defined many pages after
they are used.

Small remarks for the author:

def 2.21 is a little o or big O?

page 13 line 2. Which your convention, the curvdim of a come in infinite.
page 13 line -2. an N is missing in the norm

page 16 line 2, what is \nu?

prop 4.2 What do you mean with separable norm?

page 18 there are a couple of “dif” which should be fixed.
in the formula before (15), A should be [0,A]

pag 19 A4. there are uncompleted sentences.

Regarding the line before thm 7.1, I don’t agree that the next theorem
is a generalisation of Mitchell’s, since the core of his thm is the
existence of dilation structures.

Prop 7.2 What is a \Gamma -irq

Prop 8.2 what is a geodesic spray?

Beginning of sec 8.3 This is a which -> This is a

Beginning of sec 9 contains a lot of English mistakes.

Beginning of sec 9.1 “we shall suppose that the dimension of the
distribution is globally constant..” is not needed since the manifold
is connected

thm 9.2 rank -> step

In the second sentence of def 9.4, the existence of the orthonormal
frame is automatic.


Now, besides some of the typos, the report is simply crap:

  • the referee complains that I’m doing it for groupoids, then says that what I am doing applies only to subriemannian spaces.
  • before, he says that in fact I’m doing it only for riemannian spaces.
  • I never claim that there is a main result in this long article, but somehow the referee mentions one of the theorems as the main result, while I am using it only as an example showing what the theory says in the trivial case, the one of riemannian manifolds.
  • the referee says that I don’t treat the sub-riemannian case. He should decide which of his various claims is true; take a look at the contents to get an opinion.
  • I never claim what the referee thinks are my two claims, both being of course wrong,
  • in the claim 1) (of the referee) he does not understand that the problem is not the definition of an operation, but the proof that the operation is a Carnot group one (I pass the whole story that in fact the operation is a conical group one, for regular sub-riemannian manifolds this translates into a Carnot group operation by using Siebert, too subtle for the referee)
  • the claim 2) is self-contradictory just by reading only the report.
  • 53 definitions (it is a very dense course), 15 theorems and 27 proofs, which are dismissed with no argument as “not clear, incomplete, or totally obvious”.
  • but he goes on hunting the typos, thanks, that’s essential to show that he did read the article.

There is a part of the text which is especially perverse: “The paper is a cut-and-past process from previous papers of the author.”

Mind you, this is a course based on several papers, most of them unpublished! Moreover, every contribution from previous papers is mentioned.

Tell me what to do with these papers: being unpublished, can I use them for a paper submitted to publication? Or else, they can be safely ignored because they are not published? Hmm.

This shows to me that the referee knows what I am doing, but he does not like it.

Fortunately, all the papers, published or not, are available on the arXiv with the submission dates and versions.



See also previous posts:




Bacterial conjugation is beta reduction

I come back to the idea from the post Click and zip with bacterial conjugation, with a bit more details. It is strange, maybe, but perhaps it is less strange than many other ideas circulating on the Net around brains and consciousness.


The thing is that bacteria can’t act based on semantics, they are more stupid than us. They have physical or chemical mechanisms which obviate the need for semantic filters.

Bacteria are much simpler than brains, of course, but the discussion is relevant to brains as collections of cells.

The idea: bacterial conjugation is a form of  beta reduction!

On one side we have a biological phenomenon, bacterial conjugation. On the other side we have a logic world concept, beta reduction, which is the engine that moves lambda calculus, one of the two pillars of computation.
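In standard lambda calculus notation (a reminder, not the graphical version used in chemlambda), beta reduction is the single rewrite rule:

```latex
(\lambda x.\,M)\,N \;\to_{\beta}\; M[x := N]
```

That is: wherever the pattern “a function λx.M applied to an argument N” appears, it may be replaced by the body M with N substituted for x.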

What is the relation between semantics, bacterial conjugation and beta reduction?

Lambda calculus is a rewrite system, with the main rewrite being beta reduction. A rewrite system, basically, says that whenever you see a certain pattern in front of you then you can replace this pattern by another.
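To make “see a pattern, replace it” concrete, here is a toy string-rewriting sketch (my own illustration, not chemlambda or lambda calculus; the rules and function names are invented for this example):

```python
# Minimal sketch of a rewrite system: whenever a pattern occurs
# in the term, replace one occurrence by the right-hand side.
# A toy string-rewriting example, not chemlambda itself.

RULES = [
    ("aa", "b"),   # whenever you see "aa", replace it by "b"
    ("bb", "a"),
]

def rewrite_once(term):
    """Apply the first rule whose pattern occurs in the term."""
    for pattern, replacement in RULES:
        if pattern in term:
            return term.replace(pattern, replacement, 1)
    return None  # no pattern found: the term is in normal form

def normalize(term, max_steps=100):
    """Rewrite until no pattern applies (or the step budget runs out)."""
    for _ in range(max_steps):
        nxt = rewrite_once(term)
        if nxt is None:
            return term
        term = nxt
    return term

print(normalize("aaaa"))  # "aaaa" -> "baa" -> "bb" -> "a", prints "a"
```

Each rewrite step is local: it looks at a small part of the term and leaves the rest untouched, which is exactly the property the post keeps emphasizing.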

Graphic lambda calculus is a graph rewrite system which is more general than lambda calculus. A graph rewrite system is like a rewrite system which used graphs instead of lines of text, or words. If you see certain  graphical patterns then you can replace them by others.

Suppose that Nature uses (graphical) rewrite systems in the biological realm, for example suppose that bacteria interactions can be modeled by a graph rewrite system. Then there has to be a mechanism which performs the recognition of the pattern involving two bacteria in interaction.

When two bacteria interact there are at least two ingredients:  spatial proximity (SP) and chemical interaction (CI).

SP is something we can describe and think about easily, but from the point of view of a microbe our easy description is void. Indeed, two bacteria in SP can’t be described as pairs of numerically close coordinates, unless each of the microbes has an internal representation of a coordinate system, which is stupid to suppose. Moreover, I think it is too much to suppose that each microbe has an internal representation of itself and of its neighbouring microbes. This would be a kind of bacterial cartesian theater.

You see, even trying to describe what could be SP for a pair of bacteria does not make much sense.

CI happens when SP is satisfied (i.e. for bacteria in spatial proximity). There is of course a lot of randomness in this, which has to be taken into account, but it does not change the fact that SP is something hard to make sense of from the point of view of bacteria.

In Distributed GLC we think about bacteria as actors (and not agents) and about SP as connections between actors. Those connections between actors change in a local, asynchronous way, during the CI (which is the proper graph rewrite, after the pattern between two actors in SP is identified).

In this view, SP between actors, this mysterious almost philosophical relation which is forced upon us after we renounce at the God eye point of view, is described as an edge in the actors diagram.

Such an edge, in Distributed GLC, is always related to an oriented edge (arrow) in the GLC (or chemlambda) graph which is doing the actual computation. Therefore, we see that arrows in GLC or chemlambda graphs (may) have more interpretations than being chemical bonds in (artificial) chemistry molecules.

Actually, this is very nice, but hard to grasp: there is no difference between CI and SP!

Now, according to the hypothesis from this post and from the previous one, the mechanism which is used by bacteria for graph rewrite is to grow pili.

The following image (done with the tools I have access to right now) explains more clearly how bacterial conjugation may be (graphic) beta reduction.


In the upper part of the figure we see the lambda abstraction node (red) and the application node (green) as encoded by crossings. They are strange crossings, see the post Zipper logic and knot diagrams. Here the crossings are represented with half of the upper passing thread erased.

Now, suppose that the lambda node is (or is managed by) a bacterial cell and that the application node is (managed by) another bacterial cell. The fact that they are in SP is represented in the first line under the blue separation line in the picture. At the left of the first row (under the blue horizontal line), SP is represented by the arrow which goes from the lambda node (of the bacterium at left) to the application node (of the bacterium at right). At the right of the first row, this SP arrow is represented as the upper arc which connects the two crossings.

Now the process of pattern recognition begins. In Nature, that is asymmetric: one of the bacterial cells grows a pilus. In this diagrammatic representation, things are symmetric (maybe a weakness of the description). The pilus growth is represented as the CLICK move.

This brings us to the last row of the image. Once the pattern is recognized (or in place) the graph reduction may happen by the ZIP move. In the crossing diagram this is represented by a R2 move, which itself is one of the ways to represent (graphic) beta moves.

Remark that in this process we have two arcs:  the upper arc from the RHS crossing diagram (i.e the arc which represents the SP) and the lower arc appeared after the CLICK move (i.e. the pilus connecting the two bacteria).

After the ZIP move we get two (physical) pili; this corresponds to the last row in the diagram of bacterial conjugation, let me reproduce it again here from the wiki source:



After the ZIP move the arc which represents SP is representing a pilus as well!


Click and zip with bacterial conjugation

Bacterial conjugation may be a tool for doing the CLICK and ZIP in the real world.  Alternatively, it may serve as inspiration for designing the behaviour 1 of a GLC actor in distributed GLC.

The description of bacterial conjugation, as taken from the linked wiki page:


Conjugation diagram 1- Donor cell produces pilus. 2- Pilus attaches to recipient cell and brings the two cells together. 3- The mobile plasmid is nicked and a single strand of DNA is then transferred to the recipient cell. 4- Both cells synthesize a complementary strand to produce a double stranded circular plasmid and also reproduce pili; both cells are now viable donors.

Step 2 looks like  a CLICK move from zipper logic:


Step 4 looks like a ZIP move:


Not convinced? Look then at the CLICK move as seen when zippers are made of crossing diagrams:


On the other side, the ZIP move is a form of graphic beta move.  Which is involved in the behaviour 1 of GLC actors.

Imagine that each bacterium is an actor. You have a pair of (bacteria/actors) which (are in proximity/connected in the actor diagram) and they proceed to (bacterial conjugation/behaviour 1). In the most general form, which actually involves up to 6 actors, the bacteria :a and :b interact like this:



(In this figure we see only two nodes, each one belonging to one actor.) The case of bacterial conjugation is when there are only two actors, i.e. :a = :c = :f and :b = :d = :e. Each of the new arrows which appeared after the graphic beta move could be seen as a pilus.

It is easy to describe, but the mechanism of bacterial conjugation is fascinating. Can it be harnessed for (this type of) computation?

UPDATE:  searching for “plasmid computing”,  found Computing with DNA by operating on plasmids by T. Head, G. Rozenberg, R.S. Bladergroen, C.K.D. Breek, P.H.M. Lommerse, H.P. Spaink, BioSystems 57 (2000) 87 – 93.

They have a way to compute with plasmids. In this post is argued that bacterial conjugation itself (regardless of the plasmid involved) can be used as the main graph rewrite for doing (a graphic version of) lambda calculus, basically.



Zipper logic (VI) latest version, detailed

Zipper logic is a graph rewrite system. It consists of a class of graphs, called zipper graphs, and a collection of moves (graph rewrites) acting on zipper graphs.

Let’s start by defining the zipper graphs. Such a graph is made by the basic ingredients described in the next two figures.

First there are two types of half-zippers and one type of zipper. For any natural number n there is a (-n) half-zipper (first row), a (+n) half-zipper and an (n) zipper.


At the right you see that these are just nodes with oriented arrows. At the left you see a more intuitive notation, which will be used further.

The numbering of the arrows indicates that there is an order on those arrows.

Besides the half-zippers and zippers, there are the already familiar (a) fanout and (b) fanin nodes from chemlambda. To them, we add (c) arrows, termination nodes and loops.


A zipper graph is formed of a finite number of these ingredients, connected according to the arrow orientations. Note that there may be arrows with one or both ends free, and that a zipper graph does not have to be connected.
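As a reading aid, the ingredients above can be encoded as a small data structure. This is a sketch under assumptions of my own (the class names and the sign/teeth encoding are mine, not the formalism's notation).

```python
from dataclasses import dataclass, field

# Half-zippers carry a sign (-1 or +1) and n ordered teeth (arrows);
# the other ingredients are the chemlambda fanout/fanin nodes plus
# arrows, termination nodes and loops. Names are illustrative.

@dataclass
class HalfZipper:
    sign: int                                   # -1 for a (-n), +1 for a (+n) half-zipper
    n: int                                      # number of teeth; their order matters
    teeth: list = field(default_factory=list)   # ordered arrow endpoints

@dataclass
class Node:
    kind: str                                   # 'fanout', 'fanin', 'arrow', 'termination', 'loop'
    ports: list = field(default_factory=list)

# A zipper graph: a finite, possibly disconnected collection of these,
# with arrows connected according to orientation (free ends allowed).
graph = [HalfZipper(sign=-1, n=2), HalfZipper(sign=+1, n=3), Node('fanout')]
```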

Now for the zipper moves. There are the TOWER moves, which serve to stack half-zippers on top of one another.


There is the CLICK move, described in the next figure for a particular case. In general, the CLICK move creates a zipper from two opposed half-zippers, possibly with a remainder, which is a half-zipper. It is very intuitive.


You shall see later that CLICK is a very funny move, one which formalizes the identification of a pattern.

The ZIP move is the one which gives the name to zipper logic. It looks like the action of zipping or unzipping a zipper.


The composite CLICK + ZIP plays the role of the graphic beta move, but with a subtlety: CLICK is like identifying the right pattern for the graphic beta move, and ZIP is like actually applying it.
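Reduced to tooth counts, the CLICK + ZIP composite can be sketched as follows. This is a toy model under my own encoding; the function names and the (size, sign, leftover) return convention are assumptions, not notation from zipper logic itself.

```python
def click(minus_n, plus_m):
    """CLICK pairs a (-n) half-zipper with an opposed (+m) half-zipper,
    creating a zipper of size min(n, m); the rest, if any, stays a
    half-zipper. Returns (zipper_size, leftover_sign, leftover_teeth)."""
    size = min(minus_n, plus_m)
    if minus_n > plus_m:
        return size, -1, minus_n - plus_m   # a (-k) half-zipper remains
    return size, +1, plus_m - minus_n       # a (+k) half-zipper remains (k may be 0)

def zip_move(zipper_size, minus_arrows, plus_arrows):
    """ZIP dissolves the zipper by connecting the i-th arrow of one
    side to the i-th arrow of the other, tooth by tooth, like the
    graphic beta move applied zipper_size times."""
    return list(zip(minus_arrows[:zipper_size], plus_arrows[:zipper_size]))

size, sign, rest = click(2, 3)          # size == 2, a (+1) half-zipper remains
pairs = zip_move(size, ['a1', 'a2'], ['b1', 'b2', 'b3'])
# pairs == [('a1', 'b1'), ('a2', 'b2')]
```

The two-step structure mirrors the subtlety above: `click` only finds and isolates the pattern, while `zip_move` performs the actual rewiring.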

Then you have the DIST moves, like in chemlambda, but for half-zippers:


And then there are the LOCAL PRUNING moves, some for half-zippers and others just like those in chemlambda.


Finally,  there are some moves (among them the very important FAN-IN move) which involve only the familiar nodes from chemlambda.


That’s it!

It looks very much like chemlambda, right? That is true, with the subtlety of CLICK added, which is exploited when we find models of the zipper logic outside chemlambda.


Microbiome OS

Your computer could be sitting alone and still be completely outnumbered, for your operating system is home to millions of tiny passengers: chemlambda molecules.

The programs making up the operating system of your computer consist of around ten million lines of code, but you harbour a hundred million artificial-life molecular beings. For every line of code in your ancient Windows OS, there are 100 virtual bacterial ones. This is your ‘microbiome’ OS, and it has a huge impact on your social life, your ability to interact with the Internet of Things, and more. The way you use your computer, in turn, affects them. Everything from the forums we visit to the way we use the Internet for our decentralized computations influences the species of bacteria that take up residence in our individual microbiome OS.


Text adapted from the article Microbiome: Your Body Houses 10x More Bacteria Than Cells, which I found by reading this G+ post by Lacerant Plainer.

This is a first example of a post responding to the challenge from Alife vs AGI. For the reader's convenience, I reproduce it here:

In this post I want to propose a challenge. What I have in mind, rather vague but possibly fun, is to develop through exchanges a “what if” world where, for example, the interesting thing about computers is not AI but artificial biology. Not consciousness, but metabolism; not problem solving, but survival. This is also related to the IoT, which is a bridge between the two worlds. Now, the virtual world could be as alive as the real one. Alive in the Avida sense, in the sense that it might be like a jungle, with self-reproducing, metabolic artificial beings occupying all virtual niches, beings designed by humans for various purposes. The behaviour of these virtual creatures is not limited to the virtual, due to the IoT bridge. Think about it: if I can play a game in a virtual world (i.e. interact both ways with a virtual world), then why couldn't a virtual creature interact with the real world? Humans and social manipulations included.

If you start to think about this possibility, then it looks a bit like this. OK, let's write such autonomous, decentralized, self-sustained computations to achieve a purpose. It may be any purpose achievable by computation, be it secure communications, money replacements, or low-level AI city management. What stops others from writing their own creatures, one, for example, just for the fun of writing the name Justin across half of the world, by building at the right GPS coordinates sticks with small mirrors on top, so that from orbit they all shine as the pixels of that name? Recall the IoT bridge and the many effects in the real world which can be achieved by really distributed, but cooperative, computations and human interactions. Next: why not write a virus to get rid of all these distributed joke programs which run low level in all phones, antennas and fridges? A virus to kill those viruses. A super quick self-reproducer to occupy as much as possible of the cheap computing capabilities. A killer of it. And so on. A seed, like in Neal Stephenson, only that the seed is not real but virtual, and it does not work on nanotechnology, but on any technology connected to the net via the IoT.

Stories? Comics? Fake news? Jokes? Should be fun!



Chemlambda, universality and self-multiplication

Louis Kauffman and I submitted the following article:

M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication,   arXiv:1403.8046


The article abstract is:

We present chemlambda (or the chemical concrete machine), an artificial chemistry with the following properties: (a) it is Turing complete, (b) it has a model of decentralized, distributed computing associated to it, (c) it works at the level of individual (artificial) molecules, subject to reversible, but otherwise deterministic, interactions with a small number of enzymes, (d) it encodes information in the geometrical structure of the molecules and not in their numbers, (e) all interactions are purely local in space and time.

This is part of a larger project to create computing, artificial chemistry and artificial life in a distributed context, using topological and graphical languages.

Please enjoy the nice text and the 21 figures!

In this post I want to explain in a few words what this larger project is, because it is something open to anybody to contribute to and play with.

We look at the real living world as something ruled by chemistry. Everywhere in the real world there are local chemical interactions, local in space and time. There is no global, absolute point of view needed to give meaning to this living world. Viruses, bacteria and even prebiotic chemical entities formed the scaffold of this world until very recently, when the intelligent armchair philosophers appeared and invented what they call “semantics”. Before the meaning of objects, there was life.

Likewise, we may imagine a near future where the virtual world of the Internet is seamlessly interlinked with the real world, by means of the Internet of Things and artificial chemistry.

Usually we are presented with a future where artificial intelligences and rational machines give expert advice or make decisions based on statistics of real-life interactions between us humans, or between us and the objects we manipulate. This future is one of gadgets, useful objects and virtual bubbles for the Bayesian generic human. Marginally, in this future, we humans may chit-chat and ask corporations for better gadgets or for more useful objects. This is the future of cloud computing, that is, centralized distributed computing.

This future world does not look at all like the real world.

Because the real world is not centralized. Because individual entities which participate in the real world do live individual lives and have individual interactions.

Because we humans want to discuss and interact with others more than we want better gadgets.

We imagine instead a future of a virtual world based on decentralized computing with an artificial chemistry, a world where individual entities, real or virtual, interact by means of an artificial chemistry, instead of being babysat by statistically benevolent artificial intelligences.

Moreover, the Internet of Things, the bridge between the real and the virtual world, should be designed as a translation tool between real chemistry and artificial chemistry. Translation of what? Of decentralized, purely local computations.

This is the goal of the project, to see if such a future is possible.

It is a fun goal and there is much to learn and play with. It is by no means something which appeared from nowhere; instead, it is a natural project, based on many bits of past and present research.