All posts by chorasimilarity

Mathematics is everywhere! Open to collaborations.

Sometimes an anonymous review is “a tale told by an idiot …”

… “full of sound and fury, signifying nothing.” And the editor believes it, even if it is self-contradictory, after sitting on the article for half a year.

There are two problems:

  • the problem of time: you write a long and dense article, which may be hard to review, and the referee, instead of declining to review it, keeps it until the editor presses him for a review; then he writes a fast, crappy report, far below the quality the work requires.
  • the problem of communication: there is no two-way communication with the author. After waiting a considerable amount of time, the author has nothing left to do but resubmit the article to another journal.

Both problems could be easily solved by open peer-review. See Open peer-review as a service.

The referee can well remain anonymous, if he wishes, but a dialogue with the author and, more importantly, with other participants could only improve the quality of the review (and, by way of consequence, the quality of the article).

I reproduce further such a review, with comments. It is about the article “Sub-riemannian geometry from intrinsic viewpoint”, arXiv:1206.3093. You don’t need to read it, except perhaps the title, abstract and contents pages, which I reproduce here:

Sub-riemannian geometry from intrinsic viewpoint
Marius Buliga
Institute of Mathematics, Romanian Academy
P.O. BOX 1-764, RO 014700
Bucuresti, Romania
This version: 14.06.2012


Gromov proposed to extract the (differential) geometric content of a sub-riemannian space exclusively from its Carnot-Caratheodory distance. One of the most striking features of a regular sub-riemannian space is that it has at any point a metric tangent space with the algebraic structure of a Carnot group, hence a homogeneous Lie group. Siebert characterizes homogeneous Lie groups as locally compact groups admitting a contracting and continuous one-parameter group of automorphisms. Siebert's result does not have a metric character.
In these notes I show that sub-riemannian geometry may be described by about 12 axioms, without using any a priori given differential structure, but using dilation structures instead.
Dilation structures bring forth the other intrinsic ingredient, namely the dilations, thus blending Gromov's metric point of view with Siebert's algebraic one.
MSC2000: 51K10, 53C17, 53C23

1 Introduction       2
2 Metric spaces, groupoids, norms    4
2.1 Normed groups and normed groupoids      5
2.2 Gromov-Hausdorff distance     7
2.3 Length in metric spaces       8
2.4 Metric profiles. Metric tangent space      10
2.5 Curvdimension and curvature     12

3 Groups with dilations      13
3.1 Conical groups     14
3.2 Carnot groups     14
3.3 Contractible groups   15

4 Dilation structures  16
4.1 Normed groupoids with dilations     16
4.2 Dilation structures, definition    18

5 Examples of dilation structures 20
5.1 Snowflakes, nonstandard dilations in the plane    20
5.2 Normed groups with dilations    21
5.3 Riemannian manifolds    22

6 Length dilation structures 22
7 Properties of dilation structures    24
7.1 Metric profiles associated with dilation structures    24
7.2 The tangent bundle of a dilation structure    26
7.3 Differentiability with respect to a pair of dilation structures    29
7.4 Equivalent dilation structures     30
7.5 Distribution of a dilation structure     31

8 Supplementary properties of dilation structures 32
8.1 The Radon-Nikodym property    32
8.2 Radon-Nikodym property, representation of length, distributions     33
8.3 Tempered dilation structures    34
9 Dilation structures on sub-riemannian manifolds   37
9.1 Sub-riemannian manifolds    37
9.2 Sub-riemannian dilation structures associated to normal frames     38


10 Coherent projections: a dilation structure looks down on another   41
10.1 Coherent projections     42
10.2 Length functionals associated to coherent projections    44
10.3 Conditions (A) and (B)     45

11 Distributions in sub-riemannian spaces as coherent projections    45
12 An intrinsic description of sub-riemannian geometry    47
12.1 The generalized Chow condition     47
12.2 The candidate tangent space    50
12.3 Coherent projections induce length dilation structures  53

Now the report:


Referee report for the paper

 Sub-riemannian geometry from intrinsic viewpoint

Marius Buliga

New York Journal of Mathematics (NYJM).

One of the important theorems in sub-riemannian geometry is a result
credited to Mitchell that says that Gromov-Hausdorff metric tangents
to sub-riemannian manifolds are Carnot groups.
For riemannian manifolds, this result is an exercise, while for
sub-riemannian manifolds it is quite complicate. The only known
strategy is to define special coordinates and using them define some
approximate dilations. With this dilations, the rest of the argument
becomes very easy.
Initially, Buliga isolates the properties required for such dilations
and considers
more general settings (groupoids instead of metric spaces).
However, all the theory is discussed for metric spaces, and the
groupoids leave only confusion to the reader.
His claims are that
1) when this dilations are present, then the tangents are Carnot groups,
[Rmk. The dilations are assumed to satisfy 5 very strong conditions,
e.g., A3 says that the tangent exists - A4 says that the tangent has a
multiplication law.]
2) the only such dilation structures (with other extra assumptios) are
the riemannian manifolds.
He misses to discuss the most important part of the theory:
sub-riemannian manifolds admit such dilations (or, equivalently,
normal frames).
His exposition is not educational and is not a simplification of the
paper by Mitchell (nor of the one by Bellaiche).

The paper is a cut-and-past process from previous papers of the
author. The paper does not seem reorganised at all. It is not
consistent, full of typos, English mistakes and incomplete sentences.
The referee (who is not a spellchecker nor a proofread) thinks that
the author himself could spot plenty of things to fix, just by reading
the paper (below there are some important things that needs to be

The paper contains 53 definitions – fifty-three!.
There are 15 Theorems (6 of which are already present in other papers
by the author of by other people. In particular 3 of the theorems are
already present in [4].)
The 27 proofs are not clear, incomplete, or totally obvious.

The author consider thm 8.10 as the main result. However, after
unwrapping the definitions, the statement is: a length space that is
locally bi-lipschitz to a commutative Lie group is locally
bi-lipschitz to a Riemannian manifold. (The proof refers to Cor 8.9,
which I was unable to judge, since it seems that the definition of
“tempered” obviously implies “length” and “locally bi-lipschitz to the

The author confuses the reader with long definitions, which seems very
general, but are only satisfied by sub-riemannian manifolds.
The definitions are so complex that the results are tautologies, after
having understood the assumptions. Indeed, the definitions are as long
as the proofs. Just two examples: thm 7.1 is a consequence of def 4.4,
thm 9.9 is a consequence of def 9.7.

Some objects/notions are not defined or are defined many pages after
they are used.

Small remarks for the author:

def 2.21 is a little o or big O?

page 13 line 2. Which your convention, the curvdim of a come in infinite.
page 13 line -2. an N is missing in the norm

page 16 line 2, what is \nu?

prop 4.2 What do you mean with separable norm?

page 18 there are a couple of “dif” which should be fixed.
in the formula before (15), A should be [0,A]

pag 19 A4. there are uncompleted sentences.

Regarding the line before thm 7.1, I don’t agree that the next theorem
is a generalisation of Mitchell’s, since the core of his thm is the
existence of dilation structures.

Prop 7.2 What is a \Gamma -irq

Prop 8.2 what is a geodesic spray?

Beginning of sec 8.3 This is a which -> This is a

Beginning of sec 9 contains a lot of English mistakes.

Beginning of sec 9.1 “we shall suppose that the dimension of the
distribution is globally constant..” is not needed since the manifold
is connected

thm 9.2 rank -> step

In the second sentence of def 9.4, the existence of the orthonormal
frame is automatic.


Now, besides some of the typos, the report is simply crap:

  • the referee complains that I’m doing it for groupoids, then says that what I am doing applies only to sub-riemannian spaces.
  • before that, he says that in fact I’m doing it only for riemannian spaces.
  • I never claim that there is a main result in this long article, yet somehow the referee singles out one of the theorems as the main result, while I use it only as an example showing what the theory says in the trivial case, that of riemannian manifolds.
  • the referee says that I don’t treat the sub-riemannian case. He should decide which of his various claims is true; take a look at the contents to form an opinion.
  • I never make what the referee thinks are my two claims, both of which are of course wrong.
  • in claim 1) (the referee’s) he does not understand that the problem is not the definition of an operation, but the proof that the operation is a Carnot group one (I pass over the whole story that the operation is in fact a conical group one; for regular sub-riemannian manifolds this translates into a Carnot group operation by using Siebert, too subtle for the referee).
  • claim 2) is self-contradictory just from reading the report.
  • 53 definitions (it is a very dense course), 15 theorems and 27 proofs, which are dismissed with no argument as “not clear, incomplete, or totally obvious”.
  • but he goes on hunting typos; thanks, that’s essential to show that he did read the article.

There is a part of the text which is especially perverse: “The paper is a cut-and-past process from previous papers of the author.”

Mind you, this is a course based on several papers, most of them unpublished! Moreover, every contribution from previous papers is mentioned.

Tell me what to do with these papers: being unpublished, can I use them for a paper submitted for publication? Or can they be safely ignored because they are not published? Hmm.

This shows to me that the referee knows what I am doing, but he does not like it.

Fortunately, all the papers, published or not, are available on the arXiv with the submission dates and versions.







Bacterial conjugation is beta reduction

I come back to the idea from the post Click and zip with bacterial conjugation, with a bit more detail. It is strange, maybe, but perhaps less strange than many other ideas circulating on the Net around brains and consciousness.


The thing is that bacteria can’t act based on semantics; they are more stupid than us. They have physical or chemical mechanisms which obviate the need for semantic filters.

Bacteria are much simpler than brains, of course, but the discussion is relevant to brains as collections of cells.

The idea: bacterial conjugation is a form of  beta reduction!

On one side we have a biological phenomenon, bacterial conjugation. On the other side we have a logic world concept, beta reduction, which is the engine that moves lambda calculus, one of the two pillars of computation.

What is the relation between semantics, bacterial conjugation and beta reduction?

Lambda calculus is a rewrite system, with the main rewrite being beta reduction. A rewrite system, basically, says that whenever you see a certain pattern in front of you then you can replace this pattern by another.

Graphic lambda calculus is a graph rewrite system which is more general than lambda calculus. A graph rewrite system is like a rewrite system which uses graphs instead of lines of text, or words. If you see certain graphical patterns then you can replace them by others.
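To make the rewrite idea concrete, here is a minimal sketch of beta reduction on terms represented as nested tuples. This is my own illustrative code, not the GLC or chemlambda formalism; names like `beta_step` and `subst` are invented for the example, and substitution assumes bound variable names are all distinct (no capture handling).

```python
def subst(term, var, value):
    """Substitute value for the free variable var in term.
    Assumes all bound variable names are distinct (no capture handling)."""
    if isinstance(term, str):
        return value if term == var else term
    tag = term[0]
    if tag == "lam":
        _, x, body = term
        return term if x == var else ("lam", x, subst(body, var, value))
    if tag == "app":
        _, f, a = term
        return ("app", subst(f, var, value), subst(a, var, value))
    raise ValueError(tag)

def beta_step(term):
    """Apply one beta reduction at the outermost redex found, if any.
    The rewrite: whenever the pattern ('app', ('lam', x, body), arg)
    is seen, replace it by body with x substituted by arg."""
    if isinstance(term, str):
        return term
    tag = term[0]
    if tag == "app":
        _, f, a = term
        if isinstance(f, tuple) and f[0] == "lam":
            _, x, body = f
            return subst(body, x, a)   # (\x.body) a -> body[x:=a]
        return ("app", beta_step(f), beta_step(a))
    if tag == "lam":
        _, x, body = term
        return ("lam", x, beta_step(body))
    return term

# (\x. x) y reduces in one step to y
identity_applied = ("app", ("lam", "x", "x"), "y")
print(beta_step(identity_applied))  # -> y
```

The point of the post is precisely that in a graph version this textual pattern matching is replaced by a physical mechanism of pattern recognition.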

Suppose that Nature uses (graphical) rewrite systems in the biological realm; for example, suppose that bacteria interactions can be modeled by a graph rewrite system. Then there has to be a mechanism which performs the recognition of a pattern involving two bacteria in interaction.

When two bacteria interact there are at least two ingredients:  spatial proximity (SP) and chemical interaction (CI).

SP is something we can describe and think about easily, but from the point of view of a microbe our easy description is void. Indeed, two bacteria in SP can’t be described as pairs of coordinate numbers which are numerically close, unless each of the microbes has an internal representation of a coordinate system, which is stupid to suppose. Moreover, I think it is too much to suppose that each microbe has an internal representation of itself and of its neighbouring microbes. This is a kind of bacterial cartesian theater.

You see, even trying to describe what could be SP for a pair of bacteria does not make much sense.

CI happens when SP is satisfied (i.e. for bacteria in spatial proximity). There is of course a lot of randomness in this, which has to be taken into account, but it does not change the fact that SP is something hard to make sense of from the pov of bacteria.

In Distributed GLC we think about bacteria as actors (and not agents) and about SP as connections between actors. Those connections between actors change in a local, asynchronous way, during the CI (which is the proper graph rewrite, after the pattern between two actors in SP is identified).

In this view, SP between actors, this mysterious, almost philosophical relation which is forced upon us after we renounce the God's eye point of view, is described as an edge in the actors diagram.

Such an edge, in Distributed GLC, is always related to an oriented edge (arrow) in the GLC (or chemlambda) graph which is doing the actual computation. Therefore we see that arrows in GLC or chemlambda graphs (may) have more interpretations than being chemical bonds in (artificial) chemistry molecules.

Actually, this is very nice, but hard to grasp: there is no difference between CI and SP!

Now, according to the hypothesis from this post and from the previous one, the mechanism which is used by bacteria for graph rewrite is to grow pili.

The following image (done with the tools I have access to right now) explains more clearly how bacterial conjugation may be (graphic) beta reduction.


In the upper part of the figure we see the lambda abstraction node (red) and the application node (green) as encoded by crossings. They are strange crossings, see the post Zipper logic and knot diagrams. Here the crossings are represented with half of the upper passing thread erased.

Now, suppose that the lambda node is (or is managed by) a bacterial cell and that the application node is (managed by) another bacterial cell. The fact that they are in SP is represented in the first line under the blue separation line in the picture. At the left of the first row (under the blue horizontal line), SP is represented by the arrow which goes from the lambda node (of the bacterium at left) to the application node (of the bacterium at right). At the right of the first row, this SP arrow is represented as the upper arc which connects the two crossings.

Now the process of pattern recognition begins. In Nature, it is asymmetric: one of the bacterial cells grows a pilus. In this diagrammatic representation, things are symmetric (maybe a weakness of the description). The pilus growth is represented as the CLICK move.

This brings us to the last row of the image. Once the pattern is recognized (or in place) the graph reduction may happen by the ZIP move. In the crossing diagram this is represented by a R2 move, which itself is one of the ways to represent (graphic) beta moves.

Remark that in this process we have two arcs: the upper arc from the RHS crossing diagram (i.e. the arc which represents the SP) and the lower arc which appeared after the CLICK move (i.e. the pilus connecting the two bacteria).

After the ZIP move we get two (physical) pili; this corresponds to the last row in the diagram of bacterial conjugation. Let me reproduce it again here from the wiki source:



After the ZIP move the arc which represents SP is representing a pilus as well!


Click and zip with bacterial conjugation

Bacterial conjugation may be a tool for doing the CLICK and ZIP in the real world.  Alternatively, it may serve as inspiration for designing the behaviour 1 of a GLC actor in distributed GLC.

The description of bacterial conjugation, as taken from the linked wiki page:


Conjugation diagram 1- Donor cell produces pilus. 2- Pilus attaches to recipient cell and brings the two cells together. 3- The mobile plasmid is nicked and a single strand of DNA is then transferred to the recipient cell. 4- Both cells synthesize a complementary strand to produce a double stranded circular plasmid and also reproduce pili; both cells are now viable donors.

Step 2 looks like  a CLICK move from zipper logic:


Step 4 looks like a ZIP move:


Not convinced? Look then at the CLICK move as seen when zippers are made of crossing diagrams:


On the other side, the ZIP move is a form of graphic beta move.  Which is involved in the behaviour 1 of GLC actors.

Imagine that each bacterium is an actor. You have a pair of (bacteria/actors) which (are in proximity/connected in the actor diagram) and they proceed to (bacterial conjugation/behaviour 1). In the most general form, which actually involves up to 6 actors, the bacteria :a and :b interact like this:



(in this figure we see only two nodes, each one belonging to one actor.)  The case of bacterial conjugation is when there are only two actors, i.e. :a = :c = :f  and :b = :d = :e . Each of the new arrows which appeared after the graphic beta move could be seen as a pilus.

It is easy to describe, but the mechanism of bacterial conjugation is fascinating. Can it be harnessed for (this type of) computation?

UPDATE: searching for “plasmid computing”, I found Computing with DNA by operating on plasmids by T. Head, G. Rozenberg, R.S. Bladergroen, C.K.D. Breek, P.H.M. Lommerse, H.P. Spaink, BioSystems 57 (2000) 87-93.

They have a way to compute with plasmids. In this post it is argued that bacterial conjugation itself (regardless of the plasmid involved) can be used as the main graph rewrite for doing (a graphic version of) lambda calculus, basically.



Zipper logic (VI) latest version, detailed

Zipper logic is a graph rewrite system. It consists of a class of graphs, called zipper graphs, and a collection of moves (graph rewrites) acting on zipper graphs.

Let’s start by defining the zipper graphs. Such a graph is made by the basic ingredients described in the next two figures.

First there are two types of half-zippers and one type of zipper.  For any natural number n there is a (-n) half-zipper (first row), a (+n) half-zipper and a  (n) zipper.


At the right you see that these are just nodes with oriented arrows. At the left you see a more intuitive notation, which will be used further.

The numbering of the arrows indicates that there is an order on those arrows.

Besides the half-zippers and zippers, there are the already familiar nodes from chemlambda: (a) fanout, (b) fanin. To them we add (c) arrows, termination nodes and loops.


A zipper graph is formed by a finite number of those ingredients, which are connected according to the arrow orientations. Note that there might be arrows with one, or both ends free, and that a zipper graph does not have to be connected.

The zipper moves, now.  There are the TOWER moves, which serve to stack half-zippers on top of others.


There is the CLICK move, described in the next figure for a particular case. In general, the CLICK move creates a zipper from two opposed half-zippers, possibly also with a rest, which is a half-zipper. It is very intuitive.


You shall see later that CLICK is a very funny move, one which formalizes the identification of a pattern.

The ZIP move is the one which gives the name to zipper logic. It looks like the action of zipping or unzipping a zipper.


The composite CLICK + ZIP plays the role of the graphic beta move, but here is a more subtle thing: CLICK is like identifying the good pattern for the graphic beta move and ZIP is like actually applying the graphic beta move.
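The two-phase behaviour can be sketched in a toy model. This is my own illustrative sketch, not the formal zipper-graph definition (the names `click` and `zip_move` and the wire representation are invented): a (-n) half-zipper holds n pending input wires, a (+m) half-zipper m pending output wires; CLICK pairs them up, leaving the unmatched remainder as a half-zipper, and ZIP turns each matched pair into a direct connection.

```python
def click(minus, plus):
    """CLICK: pair a (-n) and a (+m) half-zipper into a zipper with
    min(n, m) teeth; the unmatched remainder is left as a half-zipper."""
    k = min(len(minus), len(plus))
    zipper = list(zip(minus[:k], plus[:k]))
    rest = ("-", minus[k:]) if len(minus) > k else ("+", plus[k:])
    return zipper, rest

def zip_move(zipper):
    """ZIP: zip up the zipper, turning each tooth into a direct wire
    connection between the matched ends."""
    return {a: b for a, b in zipper}

# a (-3) half-zipper meets a (+2) half-zipper
zipper, rest = click(["in1", "in2", "in3"], ["out1", "out2"])
print(zip_move(zipper))  # {'in1': 'out1', 'in2': 'out2'}
print(rest)              # ('-', ['in3'])  a (-1) half-zipper is left over
```

The split mirrors the text: `click` only identifies and aligns the pattern, while `zip_move` performs the actual reduction.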

Then you have the DIST moves, like in chemlambda, but for half-zippers:


And then there are the LOCAL PRUNING moves, some for half-zippers and other just like those in chemlambda.


Finally,  there are some moves (among them the very important FAN-IN move) which involve only the familiar nodes from chemlambda.


That’s it!

It looks very much like chemlambda, right? That is true, with the subtlety of CLICK added, which is exploited when we find models of the zipper logic outside chemlambda.


Microbiome OS

Your computer could be sitting alone and still be completely outnumbered, for your operating system is home to millions of tiny passengers – chemlambda molecules.

The programs making up the operating system of your computer consist of around ten million lines of code, but you harbour a hundred million artificial-life molecular beings. For every code line in your ancient windows OS, there are 100 virtual bacterial ones. This is your ‘microbiome’ OS and it has a huge impact on your social life, your ability to interact with the Internet of Things and more. The way you use your computer, in turn, affects them. Everything from the forums we visit to the way we use the Internet for our decentralized computations influences the species of bacteria that take up residence in our individual microbiome OS.


Text adapted from the article Microbiome: Your Body Houses 10x More Bacteria Than Cells, which I found by reading this G+ post by Lacerant Plainer.

This is a first example of a post which responds to the challenge from Alife vs AGI. For the convenience of the reader I reproduce it further:

In this post I want to propose a challenge. What I have in mind, rather vague but possibly fun, would be to develop through exchanges a “what if” world where, for example, not AI is the interesting thing when it comes to computers, but artificial biology. Not consciousness, but metabolism; not problem solving, but survival. Also related to the IoT, which is a bridge between two worlds. Now, the virtual world could be as alive as the real one. Alive in the Avida sense, in the sense that it might be like a jungle, with self-reproducing, metabolic artificial beings occupying all virtual niches, beings which are designed by humans for various purposes. The behaviour of these virtual creatures is not limited to the virtual, due to the IoT bridge. Think that if I can play a game in a virtual world (i.e. interact both ways with a virtual world), then why can’t a virtual creature interact with the real world? Humans and social manipulations included.

If you start to think about this possibility, then it looks a bit like this. OK, let’s write such autonomous, decentralized, self-sustained computations to achieve a purpose. It may be any purpose which can be achieved by computation, be it secure communications, money replacements, or low-level AI city management. What stops others from writing their own creatures, one, for example, just for the fun of writing the name Justin across half of the world, by building at the right GPS coordinates sticks with small mirrors on top, so that from orbit they all shine as the pixels of that name? Recall the IoT bridge and the many effects in the real world which can be achieved by really distributed, but cooperative, computations and human interactions. Next: why not write a virus to get rid of all these distributed jokes of programs which run low-level in all phones, antennas and fridges? A virus to kill those viruses. A super quick self-reproducer to occupy as much as possible of the cheap computing capabilities. A killer of it. And so on. A seed, like in Neal Stephenson, only that the seed is not real but virtual, and it does not work on nanotechnology, but on any technology connected to the net via IoT.

Stories? Comics? Fake news? Jokes? Should be fun!



Chemlambda, universality and self-multiplication

Together with Louis Kauffman, I submitted the following article:

M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication,   arXiv:1403.8046


The article abstract is:

We present chemlambda (or the chemical concrete machine), an artificial chemistry with the following properties: (a) is Turing complete, (b) has a model of decentralized, distributed computing associated to it, (c) works at the level of individual (artificial) molecules, subject of reversible, but otherwise deterministic interactions with a small number of enzymes, (d) encodes information in the geometrical structure of the molecules and not in their numbers, (e) all interactions are purely local in space and time.

This is part of a larger project to create computing, artificial chemistry and artificial life in a distributed context, using topological and graphical languages.

Please enjoy the nice text and the 21 figures!

In this post I want to explain in a few words what this larger project is, because it is something which is open to anybody to contribute to and play with.

We look at the real living world as something ruled by chemistry. Everywhere in the real world there are local chemical interactions, local in space and time. There is no global, absolute, point of view which is needed to give meaning to this alive world.  Viruses, bacteria and even prebiotic chemical entities form the scaffold of this world, until very recently, when the intelligent armchair philosophers appeared and invented what they call “semantics”. Before the meaning of objects, there was life.

Likewise, we may imagine a near future where the virtual world of the Internet is seamlessly interlinked with the real world, by means of the Internet of Things and artificial chemistry.

Usually we are presented with a future where artificial intelligences and rational machines give expert advice or make decisions based on statistics of real-life interactions between us humans or between us and the objects we manipulate. This future is one of gadgets, useful objects and virtual bubbles for the bayesian generic human. Marginally, in this future, we humans may chit-chat and ask corporations for better gadgets or for more useful objects. This is the future of cloud computing, that is, centralized distributed computing.

This future world does not look at all like the real world.

Because the real world is not centralized. Because individual entities which participate in the real world do live individual lives and have individual interactions.

Because we humans want to discuss and interact with others more than we want better gadgets.

We think then about a future of a virtual world based on decentralized computing with an artificial chemistry, a world where individual entities, real or virtual, interact by means of an artificial chemistry, instead of being babysat by statistically benevolent artificial intelligences.

Moreover, the Internet of Things, the bridge between the real and the virtual world, should be designed as a translation tool between real chemistry and artificial chemistry. Translation of what? Of  decentralized purely local computations.

This is the goal of the project, to see if such a future is possible.

It is a fun goal and there is much to learn and play with. It is by no means something which appeared from nowhere; instead it is a natural project, based on lots of bits of past and present research.


Take a better look at the knotted S combinator (zipper logic VI)

Continuing from  Knot diagrams of the SKI (zipper logic V) , here are some  more drawings of the S combinator which was described in the last post by means of crossings:




Seen like this, it looks very close to the drawings from section 5 of  arXiv:1103.6007.

I am not teasing you, but many things are now clear exactly because of all these detours. There is a lot to write and explain now, pretty much straightforward, and a day-after-day effort to produce something which describes well the end of the journey. When in fact the most mysterious, creative part is the journey.


Knot diagrams of the SKI (zipper logic V)

Continuing from  Zipper logic and knot diagrams, here are the  S,K,I combinators when expressed in this convention of the zipper logic:


Besides crossings (which satisfy at least the Reidemeister 2 move), there are also fanout nodes. There are associated DIST moves which self-reproduce the half-zippers as expressed with crossings.

Where do the DIST moves come from? Well, recall that there are at least two different ways to express crossings as macros in GLC or chemlambda: one with application and abstraction nodes, the other with fanout and dilation nodes.

This is in fact the point: I am interested to see if the emergent algebra sector of GLC, or the corresponding one in chemlambda, is universal, and when I am using crossings I am thinking secretly about dilations.

The DIST moves (which will be displayed in a future post) come from the DIST moves from the emergent algebra sector of chemlambda (read this post and links therein).

There is, though, a strange construct, namely the left-to-right arrow which has attached a stack of right-to-left arrows, and the associated CLICK move which connects these stacks of arrows.

Actually, these stacks look like half-zippers themselves and the CLICK move looks like (un)zipping a zipper.

So, are we back to square one?

No, because even if we replace those stacks by some other half-zippers and the CLICK move by unzipping, we still have the property that those constructs and moves, which are external to knot diagrams, are very localized.

Anyway, I can cheat by saying that I can do the CLICK move, if the crossings are expressed in the emergent algebra sector of chemlambda (therefore dilation nodes, fanout and fanin nodes), with the help of ELIM LOOPS and SWITCH.

But I am interested into simple, mindless ways to do this.



Why Distributed GLC is different from Programming with Chemical Reaction Networks

I use the occasion to bookmark the post at Azimuth Programming with Chemical Reaction Networks, most of all because of the beautiful bibliography which contains links to great articles which can be freely downloaded. Thank you John Baez for putting in one place such an essential list of articles.

Also, I want to explain very briefly why CRNs are not used in Distributed GLC.

Recall that Distributed GLC  is a distributed computing model which is based on an artificial chemistry called chemlambda, itself a variant (slightly different) of graphic lambda calculus, or GLC.

There are two stages of the computation:

  1. define the initial participants at the computation, each one called an “actor”. Each actor is in charge of a chemlambda molecule. Molecules of different actors may be connected, each such connection being interpreted as a connection between actors.  If we put together all molecules of all actors then we can glue them into one big molecule. Imagine this big molecule as a map of the world and actors as countries, each coloured with a different colour.  Neighbouring countries correspond to connected actors. This big molecule is a graph in the chemlambda formalism. The graph which has the actors as nodes and neighbouring relations as edges is called the “actors diagram” and is a different graph than the big molecule graph.
  2. Each actor has a name (like a mail address, or like the colour of a country). Each actor knows only the names of neighbouring actors. Moreover, each actor will behave only as a function of the molecule it has and according to the knowledge of its neighbouring actors’ behaviour. From this point, the proper part of the computation, each actor is by itself. So, from this point on we use the way of seeing of the Actor Model of Carl Hewitt, not the point of view of Process Algebra. (See Actor model and process calculi.) OK, each actor has 5 behaviours, most of them consisting in graph rewrites of its own molecule or between molecules of neighbouring actors. These graph rewrites are like chemical reactions between molecules and enzymes, one enzyme per graph rewrite. Finally, the connections between actors (may) change as a result of these graph rewrites.
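Stage 1 can be sketched in a few lines of code. This is my own minimal sketch, not the Distributed GLC specification (the class `Actor` and its methods are invented for illustration): each actor owns a fragment of the big molecule, and an edge between molecules of two actors is at the same time an edge between those actors in the actors diagram.

```python
class Actor:
    def __init__(self, name):
        self.name = name
        self.molecule = []        # this actor's part of the big molecule
        self.neighbours = set()   # names of actors it shares an edge with

    def add_edge(self, node, other_actor, other_node):
        # a cross-actor edge in the big molecule doubles as an edge
        # between the two actors in the actors diagram
        self.molecule.append((node, other_actor.name, other_node))
        self.neighbours.add(other_actor.name)
        other_actor.neighbours.add(self.name)

def actors_diagram(actors):
    """The actors diagram: actors as nodes, neighbour relations as edges.
    Note it is a different graph than the big molecule graph."""
    return {x.name: sorted(x.neighbours) for x in actors}

a, b, c = Actor(":a"), Actor(":b"), Actor(":c")
a.add_edge("lambda1", b, "app1")    # a lambda node of :a wired to an app node of :b
b.add_edge("app1", c, "fanout1")

print(actors_diagram([a, b, c]))
# {':a': [':b'], ':b': [':a', ':c'], ':c': [':b']}
```

In stage 2 each actor would then apply graph rewrites to its own `molecule`, or jointly with a neighbour, knowing nothing beyond its neighbours' names.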

That is the Distributed GLC model, very briefly.

It is different from Programming with CRNs for several reasons.

1.  Participants in the computation are individual molecules. This may be unrealistic for real chemistry and lab measurements of chemical reactions, but that is not the point, because the artificial chemistry chemlambda is designed to be used on the Internet. (However, see the research project on the single-molecule quantum computer.)

2. There is no explicit stochastic behaviour. Each actor in charge of its molecule behaves deterministically. (Or not: nothing stops the model from being modified by introducing some randomness into the behaviour of each actor, but that is not the point here.) There are no huge numbers of actors, nor any average behaviour over them.

That is because of point 1. (we stay at the level of individual molecules and their puppeteers, their actors) and also because we use the Actor Model style, and not Process Algebra.

So, there is an implicit randomness, coming from the fact that the computation is designed Actor Model style, i.e. it may work differently depending on the particular physical timing of the messages which are sent between actors.

3.  The computation is purely local. It is also decentralized. There is no corporate point of view which counts the number of identical molecules, or their proportion in a global-space, global-time solution. This is reasonable from the point of view of a distributed computation over the Internet.


All this being said, of course it would be interesting to see what happens with CRNs of reactions of molecules in chemlambda. It may be very instructive, but it would be a different model.

That is why Distributed GLC does not use the CRN point of view.


The first thread of the Moirai

I just realized that maybe +Carl Vilbrandt  put the artificial life thread of ideas in my head, with this old comment at the post Ancient Turing machines (I): the three Moirai:

Love the level of free association  between computation and Greek philosophy. Very creative.
In this myth computation = looping cycles of life. by the Greek goddess of Mnemosyne (one of the least rememberer of the gods requires the myth of the Moirai to recall how the logic of life / now formalized as Lisp works.
As to the vague questions:
1. Yes they seem to be the primary hackers of necessity.
2. Yes The emergent the time space of spindle of necessity can only be by the necessary computational facts of matter.
3. Of course at this scale of myth of wisdom it a was discreet and causal issue. Replication of them selfs would have been no problem.
Lisp broken can’t bring my self write in any other domain. So lisp is the language of life. With artifical computation resources science can at last define life and design creatures.

Thank you!

When I wrote that post, things were not as clear to me as now. Basically I was just trying to generate all graphs of GLC (in the newer version all molecules of the artificial chemistry called “chemlambda“) by using the three Moirai as a machine.

This thread is now a big dream and a project in the making, to unify the meatspace with the virtual space by using the IoT as a bridge between the real chemistry of the real world and the artificial chemistry of the virtual world.

Zipper logic and knot diagrams

In this post I want to show how to do zipper logic with knot diagrams. Otherwise said, I want to define zippers and their moves in the realm of knot diagrams.

“Knot diagrams” is a manner of speaking: in fact I shall use oriented tangle diagrams (i.e. the wires are oriented and there may be in or out wires), which moreover are only locally planar (i.e. we also admit “virtual crossings”, in the sense that the wires may cross without creating a 4-valent crossing node) and not, as usual, globally planar.

Here is the definition of the half zippers:


As you see, each half zipper has some arrows which are not numbered; that is because we don’t need this information: it can be deduced from the given numbering, just by following the arrows.

We have to define now the CLICK move. For the zippers from chemlambda that was easy, as the CLICK move is trivial there. No longer here:


The figure illustrates a CLICK move between a (-m)Z and a (+n)Z with m>n.  It’s clear how to define CLICK for the other cases m = n and m<n.

Finally, the ZIP move is nothing but a repeated application of the R2 move:


It works very nicely and it has a number of very interesting consequences, which will be presented in future posts.

For the moment, let me close by recalling the post Two halves of beta, two halves of chora.


Zipper logic (III). Universality

Continues from Zipper logic (II). Details . In this post we shall see that zipper logic is Turing universal.

The plan is the following. I shall define combinators in the zipper logic framework, as being made of two ingredients:

  • instead of a tree of application nodes we have a combination of + half zippers (i.e. there is a clear way to transform a binary tree of application nodes into a collection of + half zippers)
  • each combinator is then seen as being made of a combination of + half zippers (replacing the tree of application nodes) and the combinators S, K, I, seen also as particular combinations of + and – zippers.

1. Converting trees of application nodes into combinations of + zippers is straightforward.  Imagine such a tree with the root upwards and leaves downwards.  Each application node has then an exit arrow  and two input arrows, one at left, the other one at right.

Cut into two each right input arrow which is not a leaf or the root arrow.  We obtain a collection of + half zippers. Replace each tower of application nodes from this collection with the corresponding + half zipper.

Then glue back along the arrows which have been cut in two.
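The steps above can be sketched in code. The tree encoding and the function names below are my own assumptions, not from the post; the idea is that each maximal tower of application nodes linked through the left (function) input becomes one + half zipper:

```python
# A sketch (hypothetical encoding): an application tree as nested tuples
# ("app", left, right), with leaves as strings. Cutting each right input
# arrow which is not a leaf exposes the maximal left spines, i.e. the
# towers of application nodes; a tower of n nodes is one +nZ half zipper.

def half_zippers(tree, found=None):
    """Collect the + half zippers of an application tree, as
    (label, leaf at the bottom of the tower) pairs."""
    if found is None:
        found = []
    if isinstance(tree, str):
        return found
    node, n = tree, 0
    while not isinstance(node, str):   # walk down the left (function) input
        _, left, right = node
        half_zippers(right, found)     # the cut right arrows start new towers
        node, n = left, n + 1
    found.append(("+%dZ" % n, node))
    return found

# ((S K) K) x is a tower of 3 application nodes -> a single +3Z half zipper
tree = ("app", ("app", ("app", "S", "K"), "K"), "x")
print(half_zippers(tree))              # [('+3Z', 'S')]
```

Gluing back along the cut arrows is implicit here: each recursive call on a right input records the tower found on the other side of the cut.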

An example is given in the following figure.


2. The S, K, I combinators in zipper logic are defined in the next figure.


3. Combinators in zipper logic are by definition trees of application nodes translated into + half zippers, as explained at point 1, with leaves the S, K, I combinators defined at point 2.

4. Computation in zipper logic means the application of the moves (graph rewrites) defined in the previous post  Zipper logic (II). Details .

In particular the computation with a combinator in zipper logic means the reduction according to the graph rewrites of zipper logic of the respective combinator, as defined at point 3.

5. Zipper logic is Turing universal. In order to prove this we have to check the usual reductions of the S, K, I combinators.

Let A be a  combinator, then the zipper combinator which corresponds to IA reduces to the zipper combinator of A:


Let A, B be two combinators. Then the zipper combinator corresponding to the combinator (KA)B reduces as in the following figure.


The combinator (SK)K reduces to the combinator I, as in the next figure.


Now, let A, B, C be three combinators. We see then how the combinator ((SA)B)C reduces to (AC)(BC), when expressed as zipper combinators.


We see here a move called “FAN-OUT”. This move is a composite of DIST and FAN-IN, as in chemlambda. It remains to prove that any zipper combinator is indeed a multiplicator (i.e. that the move FAN-OUT makes sense when applied to a zipper combinator). The proof is the same as the one needed in a step of the proof that chemlambda is Turing universal. It is left to the reader, or for another time, if necessary.
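The four reductions just checked on zipper combinators can also be verified at the term level. Here is a small sketch of my own (a plain SKI rewriter on nested tuples, not the zipper-graph formalism):

```python
# Terms: strings ("S", "K", "I", variables) or application pairs (f, a).

def step(t):
    """One leftmost reduction step; returns (new_term, changed)."""
    if isinstance(t, str):
        return t, False
    f, a = t
    if f == "I":
        return a, True                          # I a -> a
    if isinstance(f, tuple):
        g, b = f
        if g == "K":
            return b, True                      # (K b) a -> b
        if isinstance(g, tuple) and g[0] == "S":
            x = g[1]
            return ((x, a), (b, a)), True       # ((S x) b) a -> (x a)(b a)
    nf, ch = step(f)                            # otherwise reduce inside
    if ch:
        return (nf, a), True
    na, ch = step(a)
    return (t if not ch else (f, na)), ch

def normalize(t, limit=100):
    for _ in range(limit):
        t, changed = step(t)
        if not changed:
            return t
    return t

print(normalize(("I", "A")))                    # -> A
print(normalize((("K", "A"), "B")))             # -> A
print(normalize(((("S", "A"), "B"), "C")))      # -> (A C)(B C)
print(normalize(((("S", "K"), "K"), "x")))      # SKK behaves like I -> x
```

Each of the four calls mirrors one of the figures above: IA, (KA)B, ((SA)B)C and the (SK)K instance of I.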


Question: why is CLICK needed? After all, I used it all the time together with ZIP.

We will see this next time, when I shall prove that tangle diagrams are Turing universal, via zipper logic.


Zipper logic (II). Details

Continuing from Zipper logic, let’s make the following notation for half zippers:


As you see, each half zipper has a leading strand, i.e. 1->1'' for the (-n)Z and 1''->1' for the (+n)Z.

The half zippers are just towers of nodes, therefore if we put one tower over another then we get a bigger tower, provided it is made of the same nodes. We turn this into two moves.


We make, from two half zippers with opposed signs, a zipper and a half zipper (that is the CLICK move), then we ZIP the zipper (this corresponds to the graphic beta move). This is explained in the next figure, for the particular case n<m.


The CLICK move and the (full) zippers are not necessary, but they help to visualize more clearly what is happening. Moreover, you shall see that there are other zipper constructs, and the introduction of both full zippers and the CLICK move helps to structure future explanations.


In the zipper logic formalism, the fanout node, the termination node and the fanin node are also used (so we see the zippers as if we are in chemlambda).

The fanout and fanin nodes satisfy the FAN-IN move.  The fanout satisfies CO-ASSOC and CO-COMM.


I mentioned in the previous post that half zippers are distributors. Indeed, they satisfy the following DIST moves when connected at the leading strand with a fanout:



The LOC PRUNING  moves for half zippers are:



In the next post we shall see that zipper logic is universal.


As given here, some of the moves are not local, because there is no restriction on the length of the half zippers where the moves apply. But this is not a serious problem because, as we shall see, it is enough to use half zippers and moves of bounded length (at least 3).


Zipper logic

As seen in GLC or chemlambda, combinatory logic is a matter of zipping, unzipping or self-multiplication of (half) zippers.

So, why not take this seriously, by asking if there is a way to express combinatory logic as a logic of zippers?

Besides the fun, there are advantages in using this for computing with space; but first let me explain the logic of zippers in full detail.

The starting point is the fact that the S, K, I combinators and the reductions they are involved in can be expressed through zippers.

I am going back to the post Combinators and zippers.

An n-zipper in GLC (in fact I am going to use chemlambda, in order to eliminate the GLOBAL FAN-OUT move) is a graph which has 2n nodes, looking like this:


Zippers are interesting because they behave like actual zippers. Indeed, we can unzip  a zipper by a graphic beta move:


Therefore, we can define a k-ZIP move, as a concatenation of k beta moves, which can be applied to a n-zipper, provided that k \leq n.

Like a real zipper, the n-zipper can be split into 2 half zippers, which are denoted by “-n Z” and “+n Z”.


Prepared like this, let’s think about combinators. In the SKI system a general combinator is expressed as a tree made by application nodes, which has as leaves the combinators S, K, and I.

Now, we can see a tree made by application nodes  as a combination  of + half zippers.

Also, the leaves, i.e. the S,K, I combinators are themselves made of half zippers and termination nodes or fanout nodes.

This is easy to see for the I and K combinator (images taken from the post  Combinators and zippers ):



The combinator S is made of:

  • one -3Z  half-zipper
  • one -2Z half-zipper
  • one -1Z half zipper
  • one fanout node

linked in a particular way:

Conclusion: any combinator expressed through S, K, I is made of:

  • a tree of + half zippers
  • with leaves made by -1Z, -2Z, -3Z half zippers,
  • connected perhaps to fanout nodes and termination nodes.

The reduction of a combinator is then expressed by:

  •  application of k-ZIP moves
  • LOC PRUNING moves (in relation to termination nodes and also in relation to CO-ASSOC moves, see further)
  • and DIST moves.

Let me explain. The application of k-ZIP moves is clear. The k-ZIP moves shorten the + half zippers which form the application tree.

We now place ourselves in chemlambda, in order to avoid GLOBAL FAN-OUT. Then, what about the moves related to a half zipper which is connected to the in arrow of a fanout node?

These are DIST moves. Indeed, remark that all half zippers are DISTRIBUTORS, because they are made by concatenations of lambda or application nodes.

Moreover, the fanout node is a distributor itself! Indeed, we may see the CO-ASSOC move as a DIST move, via the application of some FAN-IN and LOC PRUNING moves:


All in all this defines Zipper Logic, which is then Turing universal. In the next post I shall give all the details, but it’s straightforward.


Alife vs AGI

Artificial general intelligence is, of course, at the top of the minds of some of the best or most interesting researchers. In the post Important research avenues on my mind, Ben Goertzel writes:

1. AGI, obviously … creating robots and virtual-world robots that move toward human-level general intelligence


5. Build a massive graph database of all known info regarding all organisms, focused on longevity and associated issues, and set an AI to work mining patterns from it… I.e. what I originally wanted to do with my Biomind initiative, but didn’t have the $ for…

6. Automated language learning — use Google’s or Microsoft’s databases of text to automatically infer a model of human natural languages, to make a search engine that really understands stuff.  This has overlap with AGI but isn’t quite the same thing…

7. I want to say femtotech as I’m thinking about that a fair bit lately but it probably won’t yield fruit in the next few years…


9. Nanotech-using-molecular-bio-tools and synthetic biology seem to be going interesting places, but I don’t follow those fields that closely, so I hope you’re pinging someone else who knows more about them…

I believe that 9 is far more likely to be achieved sooner than 1. I will explain a bit later, after looking at the frame of mind which, I think, constrains this ordering.

AGI is the queen, the grail, something which almost everybody dreams of seeing. It is an old dream. Recent advances in cognition show that yes, we, Natural general intelligence beings, are kinds of robots with many, many processes going on in parallel in the background, all of them giving the feeling of reality. On top of all these processes are the ones related to consciousness and the high level functioning of the brain. It is admirable to try to model those, but it is naive, and coming from an old way of seeing things, to believe that the other processes are somehow not as interesting, or not really needed, or simply too mechanical, anyway, not something which is a challenge. The reality is that we don’t even have the right frame of mind to understand how to understand the functioning of those neglected, God-given processes.

So, that is why I believe that AGI is not realistic. Unless we concentrate on language, or on other really puny aspects of GI, but ones with lots of tradition.

Btw, have I told you that whatever I write, I am always happy to be contradicted?

Points 5 and 6 look indeed very probable. They will be done by corporations, that is sure. Somehow the same thing lies behind both, namely the belief in an essence of the pyramidal way of thinking, such that with enough means, knowledge will accumulate at the top of the pyramid. (For point 1, intelligence is the top; for 5 and 6, corporations are on top, of course.)

As regards point 7, that starts to be genuinely new, therefore less fashionable. The idea of a single-molecule quantum computer springs to mind. It should be better known. [See the comments at this G+ post.]

Several concepts are now under development to make a calculation using a single molecule:
1) to force a molecule to look like a classical electronic circuit but integrated inside the molecule
2) to divide the molecule into “qubits” in order to exploit the quantum engineering developed since several years around quantum computers.
3) to use intramolecular dynamical quantum behavior without dividing molecules into “qubits” leading to Hamiltonian quantum computer

Now, to point 9!

It can be clearly done by a combination of decentralized computing with artificial chemistry. 

In a future post I shall describe in detail, by using also previous posts from chorasimilarity, what the ingredients are and what the arguments in favour of this idea are.

In this post I want to propose a challenge. What I have in mind, rather vague but possibly fun, would be to develop through exchanges a “what if” world where, for example, not AI is the interesting thing when it comes to computers, but artificial biology. Not consciousness, but metabolism; not problem solving, but survival. Also related to the IoT, which is a bridge between two worlds. Now, the virtual world could be as alive as the real one. Alive in the Avida sense, in the sense that it might be like a jungle, with self-reproducing, metabolic artificial beings occupying all virtual niches, beings which are designed by humans, for various purposes. The behaviour of these virtual creatures is not limited to the virtual, due to the IoT bridge. Think: if I can play a game in a virtual world (i.e. interact both ways with a virtual world), then why couldn’t a virtual creature interact with the real world? Humans and social manipulations included.

If you start to think about this possibility, then it looks a bit like this. OK, let’s write such autonomous, decentralized, self-sustained computations to achieve a purpose. It may be any purpose which can be achieved by computation, be it secure communications, money replacements, or low level AI city management. What stops others from writing their own creatures, one, for example, just for the fun of it, for writing the name Justin across half of the world, by building at the right GPS coordinates sticks with small mirrors on top, so that from orbit they all shine as the pixels of that name? Recall the IoT bridge and the many effects in the real world which can be achieved by really distributed, but cooperative computations and human interactions. Next: why not write a virus to get rid of all these distributed jokes of programs which run low level in all phones, antennas and fridges? A virus to kill those viruses. A super quick self-reproducer to occupy as much as possible of the cheap computing capabilities. A killer of it. And so on. A seed, like in Neal Stephenson, only that the seed is not real, but virtual, and it does not work on nanotechnology, but on any technology connected to the net via the IoT.

Stories? Comics? Fake news? Jokes? Should be fun!

The price of publishing with arXiv

This is a very personal post. It is emotionally triggered by looking at this old question  Downsides of using the arXiv and by reading the recent The coming Calculus MOOC Revolution and the end of math research.

What do I think? That a more realistic reason for a possible end (read: shrinking) of math research comes from thinking that there are any downsides of using the arXiv. That there are any downsides of using an open peer review system. It comes from those who are moderately in favour of open research until they participate in a committee, or until it comes to protecting their own little church from strange ideas.

And from others, an army of good but not especially creative researchers, a high mediocracy (high because selected, however), who will probably sink research for a time, because in the long term a lot of mediocre research results add up to noise. But in the short term, this is a very good business: write many mediocre, correct articles, hide them behind a paywall and influence research policy to favour the number (and not the content) of those.

What I think is that it will happen exactly as it happened with the academic painters, a while ago.

You know that I’m right.

Now, because the net is not subtle enough, in order to show you that indeed these people are right from a social point of view, that there is a price for not behaving as they expect, indulge me while I explain the price I paid for using the arXiv as my principal means of publication.

The advantage: I had a lot of fun. I wrote articles which contain more than one idea, or which use more than one field of research. I wrote articles on subjects which genuinely interest me, or articles which contain more questions than answers. I wrote articles which were not especially designed to solve problems, but to open them. I changed fields once every 3-4 years.

The price: I was told that I don’t have enough published articles. I lost a lot of citations, either because the citation was done incorrectly, or because the databases (like ISI) don’t count them well (not that I care, really). Because I change fields (for those who know me, it’s clear that I don’t do this randomly, but because there are connections between fields) I seem to come from nowhere and go nowhere. Socially and professionally, it is very bad for one’s career to do what I did. Most of the articles I sent for publication (to legacy publishers) spent incredible amounts of time there, and most of the refusals were of the type “seems OK but maybe another journal” or “is OK but our journal …”. I am incredibly (i.e. the null hypothesis statistically incredible) unlucky at publishing in legacy journals.

But, let me stress this, I survived. And I still have lots of ideas, better than before, and I’m using dissemination tools (like this blog) and I am still having a lot of fun.

So, it’s your choice: recall why you have started to do research, what dreams you had. I don’t believe you that you dreamed, as a kid, to write a lot of ISI papers about a lot of arcane problems of others, in order to attract grant financing from bureaucrats who count what is your social influence.


A user interface for GLC

Before programming, it is better to play first!

Really, not a joke: a new programming environment needs a lot of motivation, and playing is a way to build some. Also, by playing we start to understand more, get all sorts of gut feelings and start to collaborate with others.

As a first step towards programming with distributed GLC, a sort of playful gui seems attainable. Should be open source, of course.

Waiting for your input!

Further on I describe what kind of gui I would like to have. But before that, just a few words.

1. Recall that we want to play with GLC and chemlambda at this moment, therefore the gui should be done with a “humans first, computers second” frame of mind.

2.  As we shall see, if this project starts to pick up momentum, there will be a stream of ideas about how to really program with distributed GLC.

3. But for the moment, forget about actors and asynchronous computation and let’s stick to GLC and chemlambda.

Here is how I see it, in the next figure. Which is, btw, probably not pretty. It is very important for it to be pretty and simple, something not obvious to achieve, but desired.


You can click on the figure to make it bigger.

The numbers in blue are for explanations.

So, what do we see? A window with some buttons.

The gui has some drawing related capabilities, like (following the blue numbers)

1)  select some nodes and arrows and form a group with them

2) deselect the group

3) magnify; or magnify the selection, or put the selection in front, with the rest in the background

4) about how to draw arrows: if you choose this then arrows can cross (but see the problems from 14) and 15) )  and they try to be as straight as possible

5) about how to draw arrows: if you choose this then arrows are like in a kind of 2.5 dimensions; also they are not straight (in the euclidean sense) but like electronic circuits (i.e. straight in some Manhattan distance)

6) connect two arrows (by selecting the button and clicking on them)

7) cut an arrow into two half arrows; delete arrow; delete node

Now, it’s not obvious how to well draw the arrows:

14) they should avoid the nodes, but they should also be as straight as possible, according to the choices made at 4) and 5)

15) the arrow > should be positioned far from nodes and other arrows

We should be able to click on a node of the graph and move it with the mouse, and the gui should keep the connectivities.

The gui also has some basic graphical bricks, which depend on the formalism we use:

13) in the figure the chemlambda is chosen, which offers the 4 nodes (or maybe even the dilation node), arrows and loops. These should be buttons, click on one node and move it into the main window. The gui should propose connectivity variants, based on proximity with other available nodes. This should work like in a chemistry drawing program.


  • if chemlambda is chosen at 13), then the list of moves, reductions, macros, i.e. 8) 9) 10) changes to be compatible with chemlambda
  • and if another choice is made, like GLC or tibit, then the gui should be able to convert the graph from the main window into the respective formalism!
  • in particular, if you don’t like the aspect of the nodes, then you should be free to change their look. For example, I’ve done this with chemlambda, where there are two notations used:


11) there is also the possibility to use cores. Cores are inputs/outputs from Distributed GLC, but let’s forget this for the moment. We will discuss this later.

12) this is a way to draw graphs which represent lambda calculus terms. The gui has a window where we write a lambda term, and the program converts it into a graph, by running the algorithm described here (for GLC, but it is the same for chemlambda).


Before continuing with the other buttons, let’s look a bit at the first figure. We see in the TEXT window (\lambda x . xx)(\lambda x . xx) and we see a graph in the main figure.

Are they related anyway? Yes, look at the post about the “metabolism of loops”. You can see that graph (but it is made now with the more recent drawing convention) in the figure which explains how that lambda calculus combinator reduces in chemlambda. I reproduce that figure here:


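Independently of the graph picture, one can check at the term level that this combinator reduces to itself. A tiny sketch (the term encoding is my own assumption; the post works with the chemlambda graph reduction instead):

```python
# Terms: variables as strings, ("lam", var, body) and ("app", f, a).

def subst(t, var, val):
    """Substitution; capture-avoiding enough for this closed example."""
    if isinstance(t, str):
        return val if t == var else t
    if t[0] == "lam":
        _, v, body = t
        return t if v == var else ("lam", v, subst(body, var, val))
    _, f, a = t
    return ("app", subst(f, var, val), subst(a, var, val))

def beta_step(t):
    """Reduce the top redex: (lam v. body) arg -> body[v := arg]."""
    _, f, a = t
    assert f[0] == "lam"
    return subst(f[2], f[1], a)

omega_half = ("lam", "x", ("app", "x", "x"))
omega = ("app", omega_half, omega_half)

print(beta_step(omega) == omega)   # True: it reduces to itself, forever
```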

Let’s go back to the buttons 8) 9) 10)

8) If you click this then you get the list of moves (from chemlambda, for example, if at 13) you picked chemlambda). You click on a move, then you see on the graph the places where you can apply the move (they shine, or they are bigger, or something), or maybe as you hover the mouse over the graph the places where this move can be applied are highlighted. Of course, this should work only for the “+” direction of the moves. In the opposite sense, it should work after you pick, for example, a pair of arrows (and then you can apply there, by clicking, a (beta-) move).

I picked this particular graph because it has the property that some of its nodes are part of several different patterns where different moves can apply. (You see this in the last figure, because the patterns and the possible moves are described there.)

9) Two ways to use this: either you click on a highlighted pattern (of a move, selected with 8)) and the gui does the move for you (which involves redrawing the graph according to the graphical constraints described previously). Or the gui proposes choices (if possible) among different patterns which overlap and then does the reduction for all of these at once. The chosen patterns should not overlap. The program does only one reduction step, not all (possible) reduction steps.

There should be the possibility to record a sequence of reductions, to make a kind of a movie with those.

10) Macros are special graphs, like the zipper. Practically you should be able to select a graph and save it as a macro, with a name. Likewise, you should be able to define and save macro moves, i.e. particular sequences of moves among particular graphs (or patterns).

Assembling the puzzle of computing with space (I)

A lot of material has accumulated, it is time to start assembling the puzzle.

Let’s look first what we have:

These are the pieces of the puzzle.


Today I want to show you that the extended beta move can be done in a formalism made of chemlambda with dilation nodes added, and with the local emergent algebra moves added.



Digital materialization and synthetic life

Digital materialization (DM) is not the name of a technology from Star Trek.  According to the wikipedia page

  DM can loosely be defined as two-way direct communication or conversion between matter and information that enables people to exactly describe, monitor, manipulate and create any arbitrary real object.

I linked to the Digital Materialization Group, here is a quote from their page.

DM systems possess the following attributes:

  • realistic – correct spatial mapping of matter to information
  • exact – exact language and/or methods for input from and output to matter
  • infinite – ability to operate at any scale and define infinite detail
  • symbolic – accessible to individuals for design, creation and modification

Such an approach can be applied not only to tangible objects but can include the conversion of things such as light and sound to/from information and matter. Systems to digitally materialize light and sound already largely exist now (e.g. photo editing, audio mixing, etc.) and have been quite effective – but the representation, control and creation of tangible matter is poorly supported by computational and digital systems.

My initial interest in DM came from possible interactions with the Unlimited Detail idea, see this post written some time ago.

Well, there is much more to this idea, if we think about life forms.
In the discussion section of this article by Craig Venter et al. we read:

This work provides a proof of principle for producing cells based on computer-designed genome sequences. DNA sequencing of a cellular genome allows storage of the genetic instructions for life as a digital file.

In  his book Life at the speed of light  Craig Venter writes (p. 6)

All living cells run on DNA software, which directs hundreds to thousands of protein robots. We have been digitizing life for decades, since we first figured out how to read the software of life by sequencing DNA. Now we can go in the other direction by starting with computerized digital code, designing a new form of life, chemically synthesizing its DNA, and then booting it up to produce the actual organism.

That is clearly a form of Digital Materialization.

Now, we have these two realms, virtual and real, and the two way bridge between them called DM.

It would be really nice if we had the same chemistry-ruled world:

  • an artificial version for the virtual one, in direct correspondence with those parts of
  • the real version (from the real world) which are relevant for the DM translation process.

This looks like a horribly complex goal to reach, because of the myriad concrete, real stumbling blocks, but hey, this is what math is for, right? To simplify, to abstract, to define, to understand.

[posted also here]


How to use chemlambda for understanding DNA manipulations

… or the converse: how to use DNA manipulations to understand chemlambda, this is a new thread starting with this post.

This is a very concrete, nice project, I already have some things written, but it is still in a very fluid form.

Everybody is invited to work with me on this. It would be very useful to collaborate with people who have knowledge about DNA and the enzymes involved in the processes around DNA.

So, if you want to contribute, then you can do it in several ways:

  • by dedicating a bit of your brain power to concrete parts of this
  • by sending me links to articles which you have previously  read and understood
  • by asking questions about concrete aspects of the project
  • by proposing alternative ideas, in a clear form
  • by criticizing the ideas from here.

I am not interested in just discussing it, I want to do it.

Therefore, if you think that there is this other project which does this and that with DNA and computation, please don’t mention it here unless you have clear explanations about the connections with this project.

Don’t use authority arguments and name dropping, please.


Now, if anybody is still interested in learning what this is about, after the frightening introduction, here is what I am thinking.

There is a full load of enzymes like this and that,  which cut, link, copy, etc. strings of DNA.  I want to develop a DNA-to-chemlambda dictionary which translates what happens in one world into the other.

This is rather easy to do. We need a translation of the arrows and the four nodes of chemlambda into some DNA form.

Like this one, for example:


Then we need a translation of the chemlambda moves (or some version of those, see later) into processes involving DNA.
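To make the shape of such a translation concrete, here is a minimal, hypothetical sketch (in Python) of what a chemlambda move looks like as a purely local graph rewrite. The node kinds and port names are my simplification, not the real chemlambda conventions; the point is only that a move is a rewiring of a few neighbouring edges, which is the kind of operation a DNA process would have to mimic.

```python
# A toy, hypothetical sketch of a chemlambda-style move as a local graph
# rewrite. Node kinds and port names are my simplification, chosen only
# to illustrate the rewiring.

class Node:
    def __init__(self, kind, name):
        self.kind = kind      # e.g. 'L' (lambda), 'A' (application), 'FREE'
        self.name = name
        self.ports = {}       # port label -> (other node, its port label)

def connect(n1, p1, n2, p2):
    """Create an edge between two ports."""
    n1.ports[p1] = (n2, p2)
    n2.ports[p2] = (n1, p1)

def beta_move(lam, app):
    """Simplified beta move: the L and A nodes of a redex disappear and
    two edges are rewired, body-to-output and argument-to-variable.
    Everything is local: only the four neighbouring ports are touched."""
    body, body_p = lam.ports['body']
    var,  var_p  = lam.ports['var']
    arg,  arg_p  = app.ports['arg']
    out,  out_p  = app.ports['out']
    connect(body, body_p, out, out_p)
    connect(arg,  arg_p,  var, var_p)

# the redex (lambda x. B) C, with the body B, the occurrence X of x,
# the argument C and the continuation K as free wire endpoints
B, X, C, K = (Node('FREE', n) for n in 'BXCK')
lam, app = Node('L', 'lam'), Node('A', 'app')
connect(lam, 'body', B, 'w'); connect(lam, 'var', X, 'w')
connect(app, 'arg',  C, 'w'); connect(app, 'out', K, 'w')
connect(lam, 'root', app, 'fun')   # the edge that makes this a redex

beta_move(lam, app)
```

After the move, the body B is wired to the continuation K and the argument C to the variable occurrence X; the two interior nodes play no further role.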

There is plenty of stuff in the DNA world to do the simple things from chemlambda. In turn, because chemlambda is universal, we get a very cheap way of defining DNA processes as computations.

Not as boolean logic computations. Forget about TRUE, FALSE and AND gates.  Think about translating DNA processes into something like lambda calculus.
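As a toy illustration of this viewpoint, here is a sketch in which DNA-style string manipulations are treated as composable functions, lambda-calculus style, instead of as boolean circuits. The operation names are mine and only crude stand-ins for real enzymes.

```python
# Toy sketch: DNA-style operations as composable functions, not gates.
# The operations are illustrative stand-ins, not models of real enzymes.

complement = {'A': 'T', 'T': 'A', 'C': 'G', 'G': 'C'}

def revcomp(s):
    """Reverse-complement: itself the composition of two simpler steps."""
    return ''.join(complement[b] for b in reversed(s))

def cut(site):
    """Return a function that cuts a strand at the first occurrence of
    `site` (a crude stand-in for a restriction enzyme)."""
    def f(s):
        i = s.find(site)
        return (s, '') if i < 0 else (s[:i], s[i:])
    return f

ligate = lambda pair: pair[0] + pair[1]   # rejoin two fragments

# composing operations, rather than wiring TRUE/FALSE through gates:
process = lambda s: ligate(cut('GAAT')(s))
result = process('ACGAATTC')
```

Cutting and then ligating is the identity on this strand, and the EcoRI-like site 'GAATTC' is its own reverse-complement; the interest is that such processes compose like terms, not like boolean circuits.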

I know that there is plenty of research about using DNA for computation, and there is also plenty of research about relations between lambda calculus and chemistry.

But I am not after some overarching theory which comprises everything DNA, chemistry and lambda calculus.

Instead, I am after a very concrete look at tiny parts of the whole huge field, based on a specific formalism of chemlambda.

It will of course turn out that there are many articles relevant for what will be found here and there will be a lot of overlap with research already done.

Partly, this is one of the reasons I am seeking collaborations around this: in order not to reinvent wheels all the time, due to my ignorance.


Who wins from failed peer reviews?

The recent retraction of 120 articles from non-OA journals, coming after the attack on OA by the John Bohannon experiment, is the subject of Predatory Publishers: Not Just OA (and who loses out?). The article asks:

Who Loses Out Under Different “Predator” Models?

and an answer is proposed.  Further I want to comment on this.

First, I remark that the results of the Bohannon experiment (which is biased because it was done only on a selected list of OA journals) show that the peer review process may be deeply flawed for some journals (i.e. those OA journals which accepted the articles sent by Bohannon) and for some articles at least (i.e. those articles sent by Bohannon which were accepted by the OA journals).

The implication of that experiment is that maybe there are other articles which were published by OA journals after a flawed peer review process.

On the other side, Cyril Labbé discovered 120 articles in some non-OA journals which were nonsense automatically generated by SCIgen. It is clear that the publication of these 120 articles shows that the peer review process (for those articles and for those journals) was flawed.

The author of the linked article suggests that the one who loses from the publication of flawed articles, in OA or non OA journals, is the one who pays! In the case of legacy publishers this is the reader. In the case of Gold OA publishers this is the author.

This is correct. The reason why the one who pays loses is that the one who pays is cheated by the flawed peer review. The author explains this very well.

But it is an incomplete view. Indeed, the author recognizes that the main service offered by the publishers is well done peer review. Before discussing who loses from the publication of flawed articles, let’s recognize that this is what the publisher really sells.

At least in a perfect world, because the other thing a publisher sells is vanity soothing. Indeed, let’s return to the pair of discoveries made by Bohannon and Labbé and see that, while in the case of the Bohannon experiment the flawed articles were made up for the purpose of the experiment, Labbé discovered articles written by researchers who tried to publish something for the sake of publishing.

So, maybe before asking who loses from flaws in the peer review, let’s ask who wins?

Obviously, unless there is a conspiracy which has been going on for some years, the researchers who submitted automatically generated articles to prestigious non-OA publishers did not want their papers to be well peer reviewed. They hoped their papers would pass this filter.

My conclusion is:

  • there are two things a publisher sells: peer review as a service and vanity
  • some Gold OA journals and some legacy journals turned out to have a flawed peer review service
  • indeed, the one who pays and does not receive the service loses
  • but also, the one who exploits the flaws of the badly done peer review service wins.

Obviously green OA will lead to fewer losses and open peer review will lead to fewer wins.

The true Internet of Things, decentralized computing and artificial chemistry

A thing is a discussion between several participants.  From the point of view of each participant, the discussion manifests as an interaction between the participant with the other participants, or with itself.

There is no need for a global timing of the interactions between participants involved in the discussion, therefore we talk about an asynchronous discussion.

Each participant is an autonomous entity. Therefore we talk about a decentralized discussion.

The thing is the discussion and the discussion is the thing.

When the discussion reaches an agreement, the agreement is an object. Objects are frozen discussions, frozen things.

In the true Internet of Things, the participants can be humans or virtual entities. The true internet of Things is the thing of all things, the discussion of all discussions. Therefore the true Internet of Things has to be asynchronous and decentralized.

The objects of the true Internet of Things are the objects of discussions. For example a cat.

Concentrating exclusively on objects is only a manifestation of the modern aversion to having a conversation. This aversion manifests in many ways (some of them extremely useful):

  • as a preference towards analysis, one of the tools of the scientific method
  • as the belief in semantics, as if there is a meaning which can be attached to an object, excluding any discussion about it
  • as externalization of discussions, like property rights which are protected by laws, like the use of the commons
  • as the belief in objective reality, which claims that the world is made by objects, thus neglecting the nature of objects as agreements reached (by humans) about some selected aspects of reality
  • as the preference towards using bottlenecks and pyramidal organization as a means to avoid discussions
  • as various philosophical currents, like pragmatism, which subordinates things (discussions) to their objects (although it recognizes the importance of the discussion itself, as long as it is carefully crippled so that it does not overthrow the object’s importance).

Though we need agreements, and we need to rely on objects (as evidence), there is no need to limit the future true Internet of Things to an Internet of Objects.


We already have something  called Internet of Things, or at least something which will become an Internet of Things, but it seems to be designed as an Internet of Objects. What is the difference? Read Notes for “Internet of things not Internet of objects”.

Besides humans, there will be the other participants in the IoT, in fact the underlying connective mesh which should support the true Internet of Things. My proposal is to use an artificial chemistry model mixed with the actor model, in order to have only the strengths of both models:

  1. it is decentralized,
  2. it does not need an overseeing controller,
  3. it works without needing a meaning or purpose, or in any other way being oriented towards problem solving,
  4. it does not need to halt,
  5. inputs, processing and output have the same nature (i.e. just chemical molecules and their proximity-based interactions).

without having the weaknesses:

  1. the global view of Chemical Reaction Networks,
  2. the generality of the behaviours of actors in the actor model, which forces that model to be seen as high level, organizing the way of thinking about particular computing tasks, instead of as a very low level, simple and concrete model.


With these explanations, please go and read again three older posts and a page, if you are interested in understanding more:


Open peer review as a service

The recent discussions about the creation of a new Gold OA journal (Royal Society Open Science) made me write this post. In the following there is a concentrated version of what I think about the legacy publishers, Gold OA publishers and open peer review as a service.

Note: the idea is to put in one place the various bits of this analysis, so that it is easy to read. The text is assembled from slightly edited parts of several posts from chorasimilarity.

(Available as a published google drive doc here.)

Open peer review as a service   

Scientific publishers are in some respects like Cinderella. They used to provide an immense service to the scientific world, by disseminating  new results and archiving old results into books. Before the internet era, like Cinderella at the ball, they were everybody’s darling.

Enters the net. At the last moment, Cinderella tries to run from this new, strange world.

Cinderella does not understand what happened so fast. She was used to scarcity (of economic goods), to the point that she believed everything would be like this all her life!

What to do now, Cinderella? Will you sell open access for gold?

But wait! Cinderella forgot something. Her lost shoe, the one she discarded when she ran out from the ball.

In the scientific publishers’ world, peer-review is the lost shoe. (As well, we may say that up to now researchers who write peer reviews are like Cinderella too: their work is completely unrewarded and neglected.)

In the internet era the author of a scientific research paper is free to share the results with the scientific world by archiving a preprint version of the paper in free access repositories. The author, moreover, HAS to do this, because the net offers a much better dissemination of results than any old-time publisher. In order for the author’s ideas to survive, making a research paper scarce by constructing pay-walls around it is clearly a very bad idea. The only thing which gold open access does better than green open access is that the authors pay the publisher for doing the peer review (while archived preprints are not peer-reviewed).

Let’s face it: the publisher cannot make the articles artificially scarce; it is a bad idea. What a publisher can do is let the articles be free and offer the peer-review service.

Like Cinderella’s lost shoe, at this moment the publisher throws away the peer reviews (written gratis by fellow researchers) and tries to sell the article which has acceptable peer-review reports.

Context. Peer-review is one of the pillars of the actual practice of publishing research. Yet the whole machine of traditional publication is going to suffer major modifications, most of them triggered by its perceived inadequacy with respect to the needs of researchers in this era of massive, cheap, abundant means of communication and organization. In particular, peer-review is going to suffer transformations of the same magnitude.

We are living in interesting times; we are all aware that the internet is changing our lives at least as much as the invention of the printing press changed the world in the past. With a difference: much faster. We have a unique chance to be part of this change for the better, in particular concerning the practices of communication of research.

In front of such a fast evolution of behaviours, it is natural for a traditionalistic attitude to appear, based on the argument that the slower we react, the better a solution we may find. This is, however, in my opinion at least, an attitude better left to institutions, to big, inadequate organizations, than to individuals.

Big institutions need big reaction times because information flows slowly through them, due to their principle of pyramidal organization, which is based on the creation of bottlenecks for information/decision, acting as filters. Individuals are different in the sense that for them, for us, massive, open, non-hierarchically organized access to communication is a plus.

The bottleneck hypothesis. Peer-review is, traditionally, one of those bottlenecks. Its purpose is to separate the professional from the unprofessional. The hypothesis that peer-review is a bottleneck explains several facts:

  • peer-review gives a stamp of authority to published research. Indeed, those articles which pass the bottleneck are professional, therefore more suitable for using them without questioning their content, or even without reading them in detail,
  • the unpublished research is assumed to be unprofessional, because it has not yet passed the peer-review bottleneck,
  • peer-reviewed publications give a professional status to their authors. Obviously, if you are the author of a publication which passed the peer-review bottleneck then you are a professional. The more professional publications you have, the more of a professional you are,
  • it is the fault of the author of the article if it does not pass the peer-review bottleneck. As in many other fields of life, recipes for success and lore appear, concerning means to write a professional article, how to enhance your chances to be accepted in the small community of professionals, as well as feelings of guilt caused by rejection,
  • the peer-review is anonymous by default, as a superior instance which extends gifts of authority or punishments of guilt upon the challengers,
  • once an article passes the bottleneck, it becomes much harder to contest its value. In the past it was almost impossible, because any professional communication had to pass through the filter. In the past, the infallibility of the bottleneck was a kind of self-fulfilling prophecy, with very few counterexamples, themselves known only to a small community of enlightened professionals.

This hypothesis also explains the fact that lately peer-review is subjected to critical scrutiny by professionals. In particular, the wave of detected plagiarisms in the class of peer-reviewed articles led to the questioning of the infallibility of the process. This is shattering the trust in the stamp of authority which is traditionally associated with it. It makes us suppose that the steep rise of retractions is a manifestation of an old problem which is now revealed by the increased visibility of the articles.

From a cooler point of view, if we see peer-review as designed to be a bottleneck in a traditionally pyramidal organization, it is therefore questionable whether peer-review as a bottleneck will survive.

Social role of peer-review. There are two other uses of peer-review which are going to survive and, moreover, are going to be the main reasons for its existence:

  • as a binder for communities of peers,
  • as a time-saver for the researchers.

I shall take them one-by-one.

On communities of peers. What is strange about the traditional peer-review is that although any professional is a peer, there is no community of peers. Each researcher does peer-reviewing, but the process is organized in such a manner that we are all alone.

To see this, think about the way things work: you receive a demand to review an article from an editor, based usually on your publication history, which qualifies you as a peer. You do your job anonymously, which has the advantage of letting you be openly critical with the work of your peer, the author. All communication flows through the editor, therefore the process is designed to be unfriendly to communication between peers. Hence, no community of peers.

However, most of the researchers who ever lived on Earth are alive today. The main barrier to the spread of ideas is a poor means of communication. If peer-review becomes open, it could then foster the appearance of dynamical communities of peers, dedicated to the same research subject.

As it is today, the traditional peer-review favours the contrary, namely the fragmentation of the community of researchers interested in the same subject into small clubs, which compete for scarce resources instead of collaborating. (As an example, think about a very specialized research subject which is taken hostage by one, or a few, such clubs which peer-review favourably only the members of the same club.)

Time-saver role of peer-review. From the sea of old and new articles, I cannot read all of them. I have to filter them somehow in order to narrow the quantity of data which I am going to process for doing my research.

The traditional way was to rely on the peer-review bottleneck, which is a kind of pre-defined, one size for all solution.

With the advent of communities of peers dedicated to narrow subjects, I can choose the filter which best serves my research interests. That is why, again, an open peer-review has obvious advantages. Moreover, such a peer-review should be perpetual, in the sense that, for example, reasons for questioning an article should be made public, even after the “publication” (whatever such a word will mean in the future). Say another researcher finds that an older article, which once passed peer-review, is flawed, for reasons the researcher presents. I could benefit from this information and use it as a filter: a custom, continually upgrading filter of my own, as a member of one of the communities of peers I belong to.

All the steps of the editorial process used by legacy publishers are obsolete. To see this, it is enough to ask “why?”.

  1. The author sends the article to the publisher (i.e. “submits” it). Why? Because in the old days the circulation and availability of research articles was done almost exclusively by the intermediary of the publishers. The author had to “submit” (to) the publisher in order for the article to enter through the processing pipe.
  2. The editor of the journal seeks reviewers based on hunches, friends’ advice, basically thin air. Why? Because, in the days when we could pretend we couldn’t search for every relevant bit of information, there was no other way to feed our curiosity but from the publishing pipe.
  3. There are 2 reviewers who write reports. (With the author, that makes 3 readers of the article, statistically more than 50% of the readers the article will have once published.) Why? Because the pyramidal way of organization was, before the net era, the most adapted. The editor, on top, delegates the work to reviewers, who call back the editor to inform him first, and not the author, about their opinion. The author worked, let’s say, for a year, and the statistically insignificant number of 2 other people form an opinion on that work in … hours? days? maybe a week of real work? No wonder, then, that what exits through the publishing pipe is biased towards immediate applications, conformity of ideas and glorified versions of school homework.
  4. The editor, based solely on the opinion of the 2 reviewers, decides what to do with the article. He informs the author, in a non-conversational way, about the decision. Why? Because, again, of the pyramidal way of thinking. The editor on top, the author at the bottom. In the old days this was justified by the fact that the editor had something to give to the author in exchange for his article: dissemination by the means of the industrialized press.
  5. The article is published, i.e. a finite number of physical copies are printed and sent to libraries and private individuals, in exchange for money. Why? Nothing more to discuss here, because this is the step most subjected to criticism by the OA movement.
  6. The reader chooses which of the published articles to read based on authority arguments. Why? Because there was no way to search, firsthand, for what the reader needs, i.e. research items of interest in a specific domain. There are two effects of this.

(a) The rise of the importance of the journal over that of the article.

(b) The transformation of research communication into vanity chasing.

Both effects were (again, statistically) enforced by poor science policy and by the private interests of those favoured by the system, not willing to  rock the boat which served them so well.

Given that the entire system is obsolete, what to do? It is, frankly, not our business, as researchers, to worry about the fate of legacy publishers, more than about, say, umbrella repairs specialists.

Does Gold OA sell the peer-review service? It is clear that the reader is not willing to pay for research publications, simply because the reader does not need the service which is classically provided by a publisher: dissemination of knowledge. Today the researcher who puts an article in an open repository achieves much better dissemination than legacy publishers with their old tricks.

Gold OA is the idea that if we can’t force the reader to pay, maybe we can try with the author. Let’s see what exactly is the service which Gold OA publishers offer to the author (in exchange for money).

1.  Is the author a customer of a Gold OA publisher?

I think it is.

2. What is the author paying for, as a customer?

I think the author pays for the peer-review service.

3. What does the Gold OA publisher offer for the money?

I think it offers only the peer-review service, because dissemination can be done by the author, by submitting to open repositories, for free. There are opinions that the Gold OA publisher offers much more, for example the service of assembling an editorial board, but who wants to buy an editorial board? No, the author pays for the peer-review process, which is managed by the editorial board, true, which is assembled by the publisher. So the end-product is the peer-review and the author pays for that.

4. Is there any other service sold to the author by the Gold OA publisher?

Almost 100% automated services, like formatting, citation-web services, or hosting the article, are very low-value services today.

However, it might be argued that the Gold OA publisher offers also the service of satisfying the author’s vanity, as the legacy publishers do.

Conclusion.  The only service that publishers may provide to the authors of research articles is the open, perpetual peer-review.  There is great potential here, but Gold OA sells this for way too much money.


What is new in distributed GLC?

We have seen that several parts or principles of distributed GLC are well anchored in previous, classical research.  There are three such ingredients:

There are several new things, which I shall try to list.

1.  It is a clear, mathematically well formulated model of computation. There is a preparation stage and a computation stage. In the preparation stage we define the “GLC actors”; in the computation stage we let them interact. Each GLC actor interacts with others, or with itself, according to 5 behaviours. (Not part of the model is the choice among behaviours, if several are possible at the same moment. The default is to require the actors to first interact with others (i.e. behaviours 1, 2, in this order) and, if no interaction is possible, to proceed with the internal behaviours 3, 4, in this order. As for behaviour 5, the interaction with external constructs, this is left to particular implementations.)
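The default behaviour order described above can be sketched as a tiny scheduler. This is a toy sketch: the class and method names are my assumptions for illustration; only the priority order (1, 2 before 3, 4, with 5 left to implementations) comes from the text.

```python
# Toy sketch of the default behaviour priority for a GLC actor.
# Names are illustrative assumptions; the priority order is from the post.

class GLCActor:
    def __init__(self, name, enabled):
        self.name = name
        self.enabled = set(enabled)  # behaviours currently possible
        self.log = []                # behaviours actually performed

    def step(self):
        """One step: interactions with others (behaviours 1, 2) are tried
        first, then the internal behaviours (3, 4); behaviour 5 (external
        constructs) is left to particular implementations."""
        for b in (1, 2, 3, 4):
            if b in self.enabled:
                self.log.append(b)
                return b
        return None   # quiescent, but the actor is not required to halt

# both an interaction (2) and an internal move (3) are possible:
actor = GLCActor('a', enabled={2, 3})
choice = actor.step()
```

With both behaviour 2 and behaviour 3 enabled, the interaction with others wins, reflecting the default order in the model.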

2.  It is compatible with the Church-Turing notion of computation. Indeed,  chemlambda (and GLC) are universal.

3. Evaluation is not needed during computation (i.e. in stage 2). This is the embodiment of the “no semantics” principle. The “no semantics” principle actually means something precise; it is a positive thing, not a negative one. Moreover, the dissociation between computation and evaluation is new in many ways.

4. It can be used for doing functional programming without the eta reduction. This is a more general form of functional programming, which in fact is so general that it does not use functions. That is because the notion of a function makes sense only in the presence of eta reduction.

5. It has no problem going, at least apparently, outside the Church-Turing notion of computation. This is not a vague statement, it is a fact, meaning that GLC and chemlambda have sectors (i.e. parts) which are used to represent lambda terms, but also sectors which represent other formalisms, like tangle diagrams, or, in the case of GLC, also emergent algebras (which are the most general embodiment of a space which has a very basic notion of differential calculus).


All these new things are also weaknesses of distributed GLC because they are, apparently at least, against some ideology.

But the very concrete formalism of distributed GLC should counter this.

I shall use the same numbering for enumerating the ideologies.

1.  Actors a la Hewitt vs Process Calculi. The GLC actors are like the Hewitt actors in this respect. But they are not as general as Hewitt actors, because they can’t behave arbitrarily. On the other hand, it is not very clear whether they are Hewitt actors, because there is no clear correspondence between what an actor can do and what a GLC actor can do.

This is an evolving discussion. It seems that people have great difficulty coping with distributed, purely local computing without jumping to the use of global notions of space and time. But, on the other hand, biologists may have an intuitive grasp of this (unfortunately, they are not very much in love with mathematics, but this is changing very fast).

2.   Distributed GLC as a programming language vs as a machine. Is it a computer architecture or a software architecture? Neither. Both. Here the biologists are almost surely lost, because many of them (excepting those who believe that chemistry can be used for lambda calculus computation) think in terms of logic gates when they consider computation.

The preparation stage, when the actors are defined, is essential. It resembles choosing the right initial condition in a computation using automata. But it is not the same, because there is no lattice, grid, or preferred topology of cells where the automaton performs.

The computation stage does not involve any collision-between-molecules mechanism, be it stochastic or deterministic. That is because the computation is purely local, which means in particular that (if well designed in the first stage) it evolves without needing this stochastic or lattice support. During the computation the states of the actors change and the graph of their interactions changes, in a way which is compatible with being asynchronous and distributed.

That is why those who work in artificial chemistry may feel lost here, because the model is not stochastic.

There is no Chemical Reaction Network which orchestrates the computation, simply because a CRN is a GLOBAL notion, so not really needed. This computation is concurrent, not parallel (because parallel needs a global simultaneity relation to make sense).

In fact there is only one molecule which is reduced, therefore distributed GLC looks more like an artificial one-molecule computer (see C. Joachim, Bonding More Atoms Together for a Single Molecule Computer). Only it is not a computer, but a program which reduces itself.

3.  The no semantics principle is against a strong ideology, of course. The fact that evaluation may not be needed for computation is outrageous (although it might cure the cognitive dissonance from functional programming concerning the “side effects”, see Another discussion about math, artificial chemistry and computation).

4.  Here we clash with functional programming, apparently. But I hope that just superficially, because actually functional programming is the best ally, see Extreme functional programming done with biological computers.

5.  Claims about going outside the Church-Turing notion of computation are very badly received. But when it comes to distributed, asynchronous computation, it’s much less clear. My position here is simply that there are very concrete ways to do geometric or differential-like “operations” without having to convert them first into a classical computational frame (and the onus is on the classical computation guys to prove that they can do it, which, as a geometer, I highly doubt, because they don’t understand or neglect space; but then the distributed asynchronous aspect comes and hits them when they expect it the least).


Conclusion: distributed GLC is great and it has big potential; come and use it. Everybody interested knows where to find us. Internet of things? Decentralized computing? Maybe cyber-security? You name it.

Moreover, there is a distinct possibility to use it not on the Internet, but in the real physical world.


A passage from Rodney Brooks’ “Intelligence without representation” applies to distributed GLC

… almost literally. I am always very glad to discover that some research subject to which I contribute is well anchored in the past. Otherwise said, it is always good for a researcher to learn that he’s on the shoulders of some giant; it gives faith that there is some value in the respective quest.

The following passage closely resembles some parts and principles of distributed GLC:

  • distributed
  • asynchronous
  • done by processing structure to structure (via graph rewrites)
  • purely local
  • this model of computation does not need or use any  evaluation procedure, nor in particular evaluation strategies. No names of variables, no values are used.
  • the model does not rely on signals passing through gates, nor on the sender-receiver setting of Information Theory.
  • no semantics.

Now, the passage from “Intelligence without representation” by Rodney Brooks.

It is only the observer of the Creature who imputes a central representation or central control. The Creature itself has none; it is a collection of competing behaviors.  Out of the local chaos of their interactions there emerges, in the eye of an observer, a coherent pattern of behavior. There is no central purposeful locus of control. Minsky [10] gives a similar account of how human behavior is generated.  [...]

… we are not claiming that chaos is a necessary ingredient of intelligent behavior.  Indeed, we advocate careful engineering of all the interactions within the system.  [...]
We do claim however, that there need be no  explicit representation of either the world or the intentions of the system to generate intelligent behaviors for a Creature. Without such explicit representations, and when viewed locally, the interactions may indeed seem chaotic and without purpose.
I claim there is more than this, however. Even at a local  level we do not have traditional AI representations. We never use tokens which have any semantics that can be attached to them. The best that can be said in our implementation is that one number is passed from a process to another. But it is only by looking at the state of both the first and second processes that that number can be given any interpretation at all. An extremist might say that we really do have representations, but that they are just implicit. With an appropriate mapping of the complete system and its state to another domain, we could define a representation that these numbers and topological  connections between processes somehow encode.

However we are not happy with calling such things a representation. They differ from standard  representations in too many ways.  There are no variables (e.g. see [1] for a more  thorough treatment of this) that need instantiation in reasoning processes. There are no rules which need to be selected through pattern matching. There are no choices to be made. To a large extent the state of the world determines the action of the Creature. Simon  [14] noted that the complexity of behavior of a  system was not necessarily inherent in the complexity of the creature, but Perhaps in the complexity of the environment. He made this  analysis in his description of an Ant wandering the beach, but ignored its implications in the next paragraph when he talked about humans. We hypothesize (following Agre and Chapman) that much of even human level activity is similarly a reflection of the world through very simple mechanisms without detailed representations.


Good news: Royal Society Open Science has what is needed


Royal Society Open Science will be the first of the Royal Society’s journals to cover the entire range of science and mathematics. It will provide a scalable publishing service, allowing the Society to publish all the high quality work it receives without the restrictions on scope, length or impact imposed by traditional journals. The cascade model will allow the Royal Society to make more efficient use of the precious resource of peer review and reduce the duplication of effort in needlessly repeated reviews of the same article.

The journal will have a number of distinguishing features:

• objective peer review (publishing all articles which are scientifically sound, leaving any judgement of importance or potential impact to the reader)
• it will offer open peer review as an option
• articles will embody open data principles
• each article will have a suite of article level metrics and encourage post-publication comments
• the Editorial team will consist entirely of practicing scientists and draw upon the expertise of the Royal Society’s Fellowship
• in addition to direct submissions, it will accept articles referred from other Royal Society journals

Looks great!  That is important news, for two reasons:

  • it has some key features: “objective peer review”, “open peer review as an option”, “post-publication comments”
  • it is handled by a learned society.

It “will launch officially later in 2014”.  I believe them.  (And if not, then another learned society should take the lead, because it’s just the right time.)

For me, as a mathematician, it is also important that it covers math.

After reading one more time, I realize that in the announcement there is nothing about the OA colour: green or gold?

What I hope is that in the worst case they will choose a PeerJ green (the one with the discreet, humanly pleasant shade of gold, see Bitcoin, figshare, dropbox, open peer-review and hyperbolic discounting). If not, they will anyway be the first, not the last, academic society (true?) which embraces an OA system with those important features mentioned.


UPDATE: Graham Steel asked, and quotes the answer: “The APC will be waived for the first year. After this it will be £1000”.

Disappointing!  I am a naive person.

So, I ask once more: What’s needed to make a PeerJ like math journal?


Bitcoin, figshare, dropbox, open peer-review and hyperbolic discounting

Thinking out loud about the subject  of models of OA publication

  1. which are also open peer-review friendly,
  2. which work in the real world,
  3. which offer an advantage to the researchers using them,
  4. which have  small costs for the initiators.

PeerJ is such an example; I want to understand why it works and find a way to emulate its success, as quickly as possible.

You may wonder what the difference is between 2 (works in the real world) and 3 (gives an advantage to the user). If it gives an advantage to the user, then it should work in real life, right? I don’t think so, because the behaviour of real people is far from rational.

A hypothesis for achieving 2 is to exploit hyperbolic discounting. I believe that one of the reasons PeerJ works is not only that it is cheaper than PLOS, but that it also exploits this human behaviour.

It motivates the users to  review and to submit and it also finances the site (buys the cloud time, etc).

How much of the problem 4 can be solved by using the trickle of money which comes from exploiting hyperbolic discounting? Some experiments can be made.
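The mechanism can be made concrete with a back-of-the-envelope sketch, assuming Mazur’s one-parameter hyperbolic model V = A/(1 + kD); the discount rate k, the time units and the dollar figures below are arbitrary, chosen only for illustration:

```python
def hyperbolic_value(amount, delay, k=1.0):
    """Mazur's hyperbolic discounting: the perceived present value
    of an `amount` paid (or received) after `delay` time units."""
    return amount / (1.0 + k * delay)

# A large APC paid far in the future "feels" smaller than a modest
# membership fee paid today, even when it is 10x larger:
perceived_apc = hyperbolic_value(1000, delay=12)      # ~76.9
perceived_membership = hyperbolic_value(99, delay=0)  # 99.0
assert perceived_apc < perceived_membership
```

A one-time membership granting future publishing privileges sits on the favourable side of this curve, which is consistent with the hypothesis that PeerJ’s pricing exploits it.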

What else? Let’s see, there is more which intrigues me:

  • the excellent figshare of Mark Hahnel. It’s a free repository, which provides a DOI and collects some citation and use data.
  • there is a possibility to make blogs on dropbox. I have to understand it well, but it seems that  offers this service, which is an interesting thing in many ways. For example, can one use a dropbox blog for sharing articles, making it easy to collect reactions to them (as comments), in parallel with using figshare for getting a DOI for the article and for depositing versions of the article?
  • tools like  for collecting twitter reactions to the articles (and possibly writing other tools like this one)
  • what is a review good for? A service which an open review could bring to the user is to connect the user with other people interested in the same thing. Thus, by collecting “social mentions” of the article, the author of the article might contact the interested people.
  • finally, coming back to the money subject (and hyperbolic discounting): if you think about it, there is some resemblance between the references of an article and the block chains of bitcoin. Could this be used?

I agree that these are very vague ideas, but it looks like there may be several sweet spots in this 4-dimensional space:

  • (behavioral pricing , citing as  block chain)
  • (stable links like DOI , free repository)
  • (editor independent blog as open article)
  • (APIs for collecting social mentions as reviews)
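The citing-as-block-chain resemblance from the last bullet can be sketched concretely. Below is an entirely hypothetical toy, not a proposal for an actual format: an article’s identifier is a hash of its content together with the identifiers of the articles it cites, so that, as in bitcoin’s block chain, changing anything upstream invalidates every downstream identifier.

```python
import hashlib
import json

def article_id(title, references):
    """Derive a tamper-evident identifier for an article from its
    content and the identifiers of the articles it cites."""
    payload = json.dumps({"title": title, "refs": sorted(references)})
    return hashlib.sha256(payload.encode()).hexdigest()

a = article_id("Sub-riemannian geometry from intrinsic viewpoint", [])
b = article_id("Graphic lambda calculus", [a])   # cites article a

# Editing the cited article changes its identifier, hence also the
# identifier of every article that cites it:
a2 = article_id("Sub-riemannian geometry (v2)", [])
assert article_id("Graphic lambda calculus", [a2]) != b
```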


What’s needed to make a PeerJ like math journal?

I want to see, or even participate in, the making of a PeerJ-like journal for mathematics. Is anybody interested in that? What is needed to start such a thing?

Here is the motivation: it works and it has open peer-review. It is not exactly green OA, but it is a valid model. You pay $99 to publish one article per year, up to $299 for an unlimited number of articles, for life. But one also has to review in order to keep these publishing privileges: one has to submit a review at least once per year (a review can even be a comment on an article). That’s a very clever mechanism which takes human nature into account :)

In my opinion, we mathematicians are in dire need of something like this!

Speaking for myself, I am bored of waiting for others to do what they suggested they would do.
(Only cricket noise until now, as a response to my questions here.)

Also, I believe that mathematicians form a rather big community today, and they deserve better publication models than the ones they have, free from ego battles and from who’s got the biggest citation count.

We do have the arXiv, which is the oldest (true?) and greatest math and physics repository ever.

But it looks like, after an early and very beneficial adoption of this invention of physicists, we are losing the pace.

Moreover, if there is any reason to mention this, I also think that such a PeerJ-like publication vehicle will not harm, in the long term, the interests of the mathematical learned societies.


The same post is here too.

The graphical moves of projective conical spaces (II)

Continues from  The graphical moves of projective conical spaces (I).

In this post we see the list of moves.

The colours X and O are added to show that the moves preserve the class of graphs PROJGRAPH.

The drawing convention is the following: columns of colours represent possible different choices of colouring. In order to read the choices correctly, one should take, for example, the first element from each column as one choice, then the second element from each column, and so on. When only one colour is indicated in some places, there is only one choice for the respective arrow. Finally, I have not added the symmetric choices obtained by replacing everywhere X by O and O by X.
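The reading convention can be stated mechanically: a move variant is obtained by taking the k-th entry of every column simultaneously, and the symmetric variants come from swapping X and O everywhere. A minimal sketch (the example columns below are made up, not read off the actual figure):

```python
# Hypothetical columns of colours for three arrows of a move;
# each column lists the admissible colours of one arrow.
columns = [["X", "O", "O"], ["O", "O", "X"], ["O", "X", "O"]]

# One choice = the k-th element of every column, i.e. a row of zip:
choices = list(zip(*columns))
assert len(choices) == 3        # e.g. the 3 versions of the PG move

# Symmetric choices: replace everywhere X by O and O by X.
swap = {"X": "O", "O": "X"}
symmetric = [tuple(swap[c] for c in choice) for choice in choices]
```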

1. The PG move.


As you see, there is only one PG move. There are 3 different choices of colours, which result in 3 versions of the PG move, as explained in the post A simple explanation with types of the hexagonal moves of projective spaces.

2. The DIST move.


This is the projective version of the “mystery” move which appeared in the posts

Look at  the chemlambda DIST moves to see that this move is in the same family.

3. The R1 move.


This is a projective version of the GLC move R1 (more precisely R1a). The name comes from the “Reidemeister 1” move, as seen through the lens of emergent algebras.

4. The R2 move.

This is a projective version of the GLC move R2. The name comes from the “Reidemeister 2” move.

5. The ext2 move.

This is a projective version of the GLC move ext2.

6. CO-COMM, CO-ASSOC and LOC PRUNING.  These are the usual  moves  associated to the fanout node. The LOC PRUNING move for the dilation node is also clear.


All these moves are local!


The graphical moves of projective conical spaces (I)

This post continues from A simple explanation with types of the hexagonal moves of projective spaces .  Here I put together all the story of projective conical spaces, seen as a graph rewriting system, in the same style as (the emergent algebra sector of) the graphic lambda calculus.

What you see here is part of the effort to show that there is no fundamental difference between geometry and computation.

Moreover, this graph rewriting system can be used, along the same lines as GLC and chemlambda, for:

  •  artificial chemistry
  • a model for distributed computing
  • or for thinking about an “ethereal” spatial substrate of the Internet of Things, realized as a very low level (in terms of resource needs) decentralized computation,

simply by adapting the Distributed GLC  model for this graph rewriting system, thus transforming the moves (like the hexagonal moves) into interactions between actors.


All in all,  this post (and the next one) completes the following list:


1. The set of “projective graphs” PROJGRAPH.  These graphs are made of a finite number of nodes and arrows, obtained by assembling:

  •  4-valent nodes called (projective) dilation nodes, with 3 arrows pointing to the node and one arrow pointing from the node. The set of 4 arrows is divided as

4 = 1 + 3

into 1 distinguished incoming arrow and the remaining 3 arrows (two incoming and one outgoing). Moreover, there is a cyclic order on those 3 arrows.

  • Each dilation node is decorated by a Greek letter like \varepsilon, \mu, ..., which denotes an element of a commutative group \Gamma. The group operation of \Gamma is denoted multiplicatively. Usual choices for \Gamma are the real numbers with addition, the integers with addition, or the positive numbers with multiplication. But any commutative group will do.
  • arrows which don’t point to, or don’t come from, any node are accepted,
  • as well as loops attached to no node.
  • there are also 3-valent nodes, called “fanout nodes”, with one incoming arrow and two outgoing arrows, along with a cyclic order of the arrows (thus we know which is the left outgoing arrow and which is the right outgoing arrow).
  • moreover, there is a 1-valent termination node, with only 1 incoming arrow.

Sounds like a mouthful? Let’s think of it like this: we can colour the arrows of the 4-valent dilation nodes with two colours, such that

  • both colours are used,
  • there are 2 more incoming arrows coloured like the outgoing arrow.

I shall call these colours “O” and “X”; think about them as being types, if you want. What matters is whether two colours are equal or different, not which colour is “O” and which is “X”.

From this collection of graphs we shall choose a sub-collection, called PROJGRAPH, of “projective graphs”, with the property that we can colour all the arrows of such a graph so that:

  • the 3 arrows of a fanout node are always coloured with the same colour (no matter which, “O” or “X”)


  • the 4 arrows of a 4-valent dilation node are coloured such that the special incoming arrow is coloured differently from the other 3 arrows.

With the colour indications, we can simplify the drawing of the 4-valent nodes, as indicated in the examples from this figure.


Thus, the condition that a graph (made of 4-valent and 3-valent nodes) is in PROJGRAPH is global: there is no a priori upper bound on the number of nodes and arrows which have to be checked by an algorithm that decides whether the graph is in PROJGRAPH.
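For small graphs, such a deciding algorithm can be sketched by brute force over all O/X colourings; the encoding of nodes and arrows below is a hypothetical choice, made only for illustration:

```python
from itertools import product

def in_projgraph(arrows, fanouts, dilations):
    """Check whether a graph admits an O/X colouring of its arrows
    satisfying the PROJGRAPH conditions.  `arrows` lists arrow names,
    `fanouts` is a list of 3-tuples of arrow names, `dilations` a list
    of 4-tuples (special_in, in1, in2, out)."""
    for colours in product("OX", repeat=len(arrows)):
        col = dict(zip(arrows, colours))
        # the 3 arrows of a fanout node carry the same colour:
        fan_ok = all(col[a] == col[b] == col[c] for a, b, c in fanouts)
        # the special incoming arrow of a dilation node is coloured
        # differently from the other 3 arrows:
        dil_ok = all(col[i1] == col[i2] == col[o] != col[sp]
                     for sp, i1, i2, o in dilations)
        if fan_ok and dil_ok:
            return True
    return False

# A lone dilation node with free arrow ends is always colourable:
assert in_projgraph(["s", "p", "q", "r"], [], [("s", "p", "q", "r")])
# A fanout forcing the special arrow to match the others rules it out:
assert not in_projgraph(["s", "p", "q", "r"],
                        [("s", "p", "q")], [("s", "p", "q", "r")])
```

The exponential search makes the global character of the condition visible: nothing local bounds how much of the graph must be inspected.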

In the next post we shall see the moves, which are all local.


No semantics principle in GEB book

A very interesting connection between the no semantics principle of Distributed GLC and the famous GEB book by Douglas R. Hofstadter has been made by Louis Kauffman, who indicated the preface of the 20th anniversary edition.

Indeed, I shall quote from the preface (boldface by me), hoping that the meaning of the quoted will not change much by being taken out of context.

… I felt sure I had spelled out my aims over and over in the text itself. Clearly, however, I didn’t do it sufficiently often, or
sufficiently clearly. But since now I’ve got the chance to do it once more – and in a prominent spot in the book, to boot – let me try one last time to say why I wrote this book, what it is about, and what its principal thesis is.

In a word, GEB is a very personal attempt to say how it is that animate beings can come out of inanimate matter. [...]

GEB approaches these questions by slowly building up an analogy that likens inanimate molecules to meaningless symbols, and further likens selves (or ‘I”s or “souls”, if you prefer – whatever it is that distinguishes animate from inanimate matter) to certain special swirly, twisty, vortex-like, and meaningful patterns that arise only in particular types of systems of meaningless symbols.

I have not read this book; instead I arrived, partially, at conclusions which are close to these, by trying to understand some articles written by Jan Koenderink, being mesmerized by the beautiful meme “Brain a geometry engine”. The last time I discussed this was in the post The front end visual system performs like a distributed GLC computation.

Questions about epijournals and the spnetwork

I start the post by asking you to prove me wrong. Episciences (with their epijournal concept) and The Selected Papers Network are the only new initiatives in new ways of publication and refereeing in mathematics (I deliberately ignore Gold OA).

It looks to me they are dead.

Compare with the appearance of new vehicles of research communication in (other) sciences, like PeerJ, which is almost green OA and which has a system of open peer-review!

Are mathematicians … too naive?

There is only one initiative in mathematics which is really interesting: the writing of the HoTT book.

I would be glad to be wrong, that is why I ask some questions about them.

1. Episciences.    Almost a year ago, on Feb 17 2013, I wrote the post  Episciences-Math, let’s talk about this , asking for a discussion about the almost opaque creation of epijournals.

What is new in this initiative? Nothing, besides the fact that some of the articles on arXiv will be refereed, which is a great thing in itself.

They have not started yet. In one of the comments, I am instructed to look, for discussions, at

In the post I wrote:

Finally, maybe I am paranoid, but from the start (I can document by giving links to previous comments) I saw the potential of this project as an excuse for more delay until real changes are done. I definitely don’t believe that your project is designed for that purpose, I am only afraid that your project might be used for that, for example by stifling any public discussion about new OA models in math publishing, because you know, there are these epijournals coming, let’s wait and see.

Here is what I found about this,  almost a year after: progress in 2014?

[Mark C. Wilson] I am surprised at the low speed of change in mathematical publishing since early 2012. The Episciences project is now advertised as starting in 2014, but I recall it being April 2013 originally. No explanation is given for the delay. Forum of Mathematics seems to have a few papers now, at least. SCOAP3 seems to be moving at a glacial pace.

Researchers in experimental fields have reasons to be concerned about changing peer review, but surely arXiv is good enough for most mathematicians. Yet it is very far from being universally used. Gowers’ latest idea (implemented by Scott Morrison) of cataloguing free versions of papers in “important” math journals on a wiki seems useful, and initial results do seem to show that some kind of arXiv overlay would suffice for most needs.

Staying in the traditional paradigm, in 2013 I helped completely revamp an existing electronic journal ( and it is now in pretty good shape. We could certainly scale up in number of submissions by a factor of 10 (not sure about 100) without any extra resources. I have had a few emails from Elsevier editors explaining how they get resources to help them do their job. I still remain completely unconvinced that free tools like OJS can’t duplicate this easily. Why is it so hard to get traction with editors, and get them to bargain hard with the “owners”?

[Benoit Kloeckner] Just about Episciences: it is true that the project has been delayed and that the communication about this has been scarce, to say the least. The reason for the delay has been the time needed to develop the software, which includes some unconventional feature (notably importation from arXiv and HaL of pdf and more importantly metadata). The development has really started later than expected and we chose not to rush into opening the project, in order to get a solid software. Things have really progressed now, even if it is not perceptible from the outside. The support of partners is strong, and I am confident the project will open this year, probably closer to now than December.

I thought it was already clear to everybody that “software” is a straw man; the real problem is psychological. Why does nobody try to make a variant of PeerJ for math, or another project which already works in other sciences?

2. Spnetwork.   Do you see a great activity related to the spnetwork project,  hailed by John Baez? I don’t, although I  wish to, because at the moment it was the only “game in [the mathematical] town”.

But maybe I am wrong, so I looked for usage statistics of the spnetwork.

Are there any, publicly available? I was not able to find them.

What I did was to log into the spnetwork and search for comments with “a” inside. There are 1578, from the start of the spnetwork. I looked for people with “a” in the name; there are 1422. By randomly clicking on their comments in the last 20 days, it appears that about 0 of them made any comment.


So, please prove me wrong. Or else, somebody start a PeerJ like site for math!


A simple explanation with types of the hexagonal moves of projective spaces

This post continues from the previous one, A beautiful move in projective spaces.

I call those “beautiful” moves hexagonal. So, how many hexagonal moves are there?

The answer is 3.

In the post Axioms for projective conical spaces (towards qubits II) I give 4 moves and I mention that I discard two other moves, because I don’t see their use in generalized projective geometry.

I was wrong: in the usual projective geometry these two moves are discarded because they can be deduced from the barycentric move (which turns the generalized, “non-commutative” projective geometry into the usual one, in the same way as it turns non-commutative affine geometry into the usual one). You really have to read the linked posts (and probably also the linked articles) in order to understand these precise statements. (So this is a kind of filter for those with a long attention span.)

The idea of using two types is natural: instead of using dual spaces, we use two types, say “a” and “x”, and the decoration rule of the 4-valent dilation nodes: two of the input arrows and the output arrow are decorated with the same type, and the remaining input arrow is decorated with the other type. [Added: the "remaining input arrow" is always the one which points to the center of the circle which denotes the dilation node.]
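The decoration rule can be written as a one-line predicate; enumerating all type assignments of a single node then confirms that exactly two decorations exist, swapped by the “a” ↔ “x” symmetry (the encoding is a hypothetical illustration):

```python
from itertools import product

def valid_dilation_types(special_in, in1, in2, out):
    """Two of the input arrows and the output arrow carry the same
    type; the remaining (special) input carries the other type."""
    return in1 == in2 == out and special_in != out

valid = [t for t in product("ax", repeat=4) if valid_dilation_types(*t)]
assert len(valid) == 2   # ("a","x","x","x") and ("x","a","a","a")
```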

By looking at the hexagonal moves from the last post, we see that there are only three ways of decorating the common part of all diagrams with types.  This gives 3 hexagonal moves.

This is explained in the next figure (click on it to make it bigger).


We see that some arrows are decorated with one type (arbitrarily called “x”) and other arrows are decorated, apparently, with columns of 3 types. In fact, each column should be read as 3 possibilities, which correspond to taking from each column the first, the second or the third element.

The choice corresponding to the first element of each column corresponds to the neglected hexagonal move, say (PG3). (Can you draw it? It is easy!)

The choice corresponding to the second element of each column corresponds to (PG2).

The choice corresponding to the third element of each column corresponds to (PG1).

Remarking that the rule of decoration with types is symmetric under the switch between the types “a” and “x”, this corresponds to the 6 moves of generalized projective geometry.

In conclusion, by selecting only those graphs (and moves) which can be decorated in the mentioned way with two types, we get the moves of generalized projective geometry.


A beautiful move in projective spaces

The post Axioms for projective conical spaces (towards qubits II) introduces a generalization of projective spaces to projective conical spaces. These are a kind of non-commutative version of projective spaces, in exactly the same sense in which affine conical spaces are a non-commutative generalization of affine spaces.

That post was written before the discovery of graphic lambda calculus. [UPDATE: no, I see that it was written after, but GLC was not used in that post.]

Now, the beautiful thing is that all the 4 axioms of projective conical spaces have the same form, if represented according to the same ideas as the ones of graphic lambda calculus.

There will be more about this, but I show you for the moment only how the first part of  (PG1)  looks like, in the original version and in the new version.


Here is the first part of (PG2) in old and new versions.



Is the Seed possible?

Is the Seed possible? Neal Stephenson, in the book The Diamond Age, presents the idea of the Seed, as opposed to the Feed.

The Feed is a hierarchical network of pipes and matter compilers (much like  an Internet of Things done not with electronics, but with nanotechnology, I’d say).

The Seed is a different technology. I selected some  paragraphs from the book, which describe the Seed idea.

“I’ve been working on something,” Hackworth said. Images of a nanotechnological system, something admirably compact and elegant, were flashing over his mind’s eye. It seemed to be very nice work, the kind of thing he could produce only when he was concentrating very hard for a long time. As, for example, a prisoner might do.
“What sort of thing exactly?” Napier asked, suddenly sounding rather tense.
“Can’t get a grip on it,” Hackworth finally said, shaking his
head helplessly. The detailed images of atoms and bonds had been replaced, in his mind’s eye, by a fat brown seed hanging in space, like something in a Magritte painting. A lush bifurcated curve on one end, like buttocks, converging to a nipplelike point on the other.

CryptNet’s true desire is the Seed—a technology that, in their diabolical scheme, will one day supplant the Feed, upon which our society and many others are founded. Protocol, to us, has brought prosperity and peace—to CryptNet, however, it is a contemptible system of oppression. They believe that information has an almost mystical power of free flow and self-replication, as water seeks its own level or sparks fly upward— and lacking any moral code, they confuse inevitability with Right. It is their view that one day, instead of Feeds terminating in matter compilers, we will have Seeds that, sown on the earth, will sprout up into houses, hamburgers, spaceships, and books—that the Seed
will develop inevitably from the Feed, and that upon it will be
founded a more highly evolved society.

… her dreams had been filled with seeds for the last several years, and that every story she had seen in her Primer had been replete with them: seeds that grew up into castles; dragon’s teeth that grew up into soldiers; seeds that sprouted into giant beanstalks leading to alternate universes in the clouds; and seeds, given to hospitable, barren couples by itinerant crones, that grew up into plants with bulging pods that contained happy, kicking babies.

Arriving at the center of the building site, he reached into his bag and drew out a great seed the size of an apple and pitched it into the soil. By the time this man had walked back to the spiral road, a tall shaft of gleaming crystal had arisen from the soil and grown far above their heads, gleaming in the sunlight, and branched out like a tree. By the time Princess Nell lost sight of it around the corner, the builder was puffing contentedly and looking at a crystalline vault that nearly covered the lot.

All you required to initiate the Seed project was the rational,
analytical mind of a nanotechnological engineer. I fit the bill
perfectly. You dropped me into the society of the Drummers like a seed into fertile soil, and my knowledge spread through them and permeated their collective mind—as their thoughts spread into my own unconscious. They became like an extension of my own brain.


Now, suppose the following.

We already have an Internet of Things, which would serve as an interface between the virtual world and the real world, so there is really not much difference between the two, in the specific sense that something in the former could easily produce effects in the latter.

Moreover, instead of nanotechnology, suppose that we are content with having, on the Net, an artificial chemistry which would mirror the real chemistry of the world, at least in its functioning principles:

  1. it works in a decentralized, distributed way,
  2. it does not need an overlooking controller, because all interactions are possible only when there is spatial and temporal proximity,
  3. it works without needing to have a meaning or purpose, or in any other way being oriented to problem solving,
  4. it does not need to halt,
  5. inputs, processing and outputs have the same nature (i.e. just chemical molecules and their proximity-based interactions).
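A toy simulation can illustrate these principles; the soup of named molecules on a line below is entirely hypothetical, and is of course not chemlambda itself:

```python
import random

def step(molecules, radius=1.0, rng=random):
    """One local event: pick a random pair of molecules; they react
    (here: merge) only if they are spatially close (principle 2).
    There is no goal and no halting condition (principles 3 and 4);
    inputs and outputs are again just molecules (principle 5)."""
    i, j = rng.sample(range(len(molecules)), 2)
    (a, xa), (b, xb) = molecules[i], molecules[j]
    if abs(xa - xb) <= radius:
        molecules[i] = (a + b, (xa + xb) / 2)
        del molecules[j]
    return molecules

soup = [("A", 0.0), ("B", 0.5), ("C", 5.0)]
random.seed(0)
for _ in range(10):            # no controller, only local encounters
    if len(soup) > 1:
        step(soup)
# "A" and "B" are close enough to react; "C" stays isolated.
```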

In this  world, I see a Seed as a dormant, inactive artificial chemical molecule.  When the Seed is planted (on the Net),

  1. it first grows into a decentralized, autonomous network (i.e. it starts to multiply, to create specialized parts, like a real seed which grows into a tree),
  2. then it starts computing (in the chemical sense: it starts to self-process its structure)
  3. and interacts with the real world (via the sensors and effectors available via the IoT) until it creates something in the real world.



  •  clearly, the artificial chemistry I am thinking about is chemlambda
  •  the principles of the sort of working of this artificial chemistry are those of the Distributed GLC


How to plant a Seed (I)

In  The Diamond Age there is the Feed and, towards the end, appears the Seed.

There are, not as many as I expected, but many places where this Seed idea of Neal Stephenson is discussed. Most of them discuss it in relation to the Chinese  Ti yong  way of life, following the context where the author embeds the idea.

Some compare the Seed idea with open source.

For me, the Seed idea becomes interesting when it is put together with distributed, decentralized computing. How to make a distributed Seed?

If you start thinking about this, it makes even more sense if you add one more ingredient: the Internet of Things.

Imagine a small, inactive, dormant, virtual thing (a Seed) which is planted somewhere in the fertile ground of  the  IoT. After that it becomes active, it grows, becomes a distributed, decentralized computation. Because it lives in the IoT it can have effects in the physical world, it can interact with all kinds of devices connected with the IoT, thus it can become a Seed in the sense of Neal Stephenson.

Chemlambda is a new kind of  artificial chemistry, which is intended to be used in distributed computing, more specifically in decentralized computing.  As a formalism it is a variant of graphic lambda calculus, aka GLC.  See the page  Distributed GLC for details of this project.

So, I am thinking about how to plant a chemlambda Seed. Concretely, what could pass for a Seed in chemlambda and in what precise sense can be planted?

In the next post I shall give technical details.

Another discussion about math, artificial chemistry and computation

I have to record this ongoing discussion from G+; it’s too interesting. I shall do it from my subjective viewpoint; feel free to comment on this, either here or in the original place.

(I did almost the same thing, i.e. saved here some of my comments from an older discussion, in the post Model of computation vs programming language in molecular computing. That recording was significant, for me at least, because I made those comments while thinking about the work on the GLC actors article, which was then in preparation.)

Further I shall only lightly edit the content of the discussion (for example by adding links).

It started from this post:

 “I will argue that it is a category mistake to regard mathematics as “physically real”.”  Very interesting post by Louis Kauffman: Mathematics and the Real.
Discussed about it here: Mathematics, things, objects and brains.
Then followed the comments.
Greg Egan’s “Permutation City” argues that a mathematical process is identical in each and every instance that runs it. If we presume we can model consciousness mathematically, it then means: two simulated minds are identical in experience, development, everything, when run with the same parameters on any machine (anywhere in spacetime).

Also, it shouldn’t matter how a state came to be; the instantaneous experience of the simulated mind is independent of its history (of course the mind’s memory ought to be identical, too). He then levitates further and proposes that it’s not relevant whether the simulation is run at all, because we may find all states of such a mind’s being represented, scattered in our universe… If I remember correctly, Egan later contemplated being embarrassed about this bold ontological proposal. You should be able to find him reasoning about the dust hypothesis in the accompanying material on his website. Update: I just saw that wikipedia’s take on the book has the connection to Max Tegmark in the first paragraph:
> [...] cited in a 2003 Scientific American article on multiverses by Max Tegmark.

+Refurio Anachro  thanks for the reference to Permutation city and to the dust hypothesis, will read and comment later. For the moment I have to state my working hypothesis: in order to understand basic workings of the brain (math processing included), one should pass any concept it is using through the filter
  • local not global
  • distributed not sequential
  • no external controller
  • no use of evaluation.

From this hypothesis, I believe that notions like “state”, “information”, “signal”, “bit”, are concepts which don’t pass this filter, which is why they are part of an ideology which impedes the understanding of many wonderful things which are discovered lately, somehow against this ideology. Again, Nature is a bitch, not a bit :)

That is why, instead of boasting against this ideology and jumping to consciousness (whose understanding will have to wait, I think, for a very distant future), I prefer to offer first an alternative (that’s GLC, chemlambda) which shows that it is indeed possible to do anything which can be done with these ways of thinking coming from the age of the invention of the telephone. And then more.

Afterwards, I prefer to wonder not about consciousness and its simulation, but about vision and other much more fundamental processes related to awareness. These are usually taken for granted, and they have the bad habit of contradicting any “bit”-based explanation given to date.


The past lives in a conceptual domain. Would one argue then that the past is not real?
+Peter Waaben  that’s easy to reply to, by way of analogy with The short history of the  rhino thing .
Is the past real? Is the rhinoceros horn on the back real? Durer put a horn on the rhino’s back because of past (ancient) descriptions of rhinos as elephant killers. The modus operandi of that horn was this: the rhino, as massive as the elephant but much shorter, would go under the elephant’s belly and rip it open with the dedicated horn. For centuries, in the minds of people, rhinos really had a horn on the back. (This has real implications, much like, for example, the idea that if you stay in the cold then you will catch a cold.) Moreover, and even more real, there are now real rhinos with a horn on the back, like for example Dali’s rhino.
I think we can safely say that the past is a thing, and any of this thing’s reifications are very real.
+Peter Waaben, that seems to be what the dust hypothesis suggests.
+Marius Buliga, I’m still digesting; could you rephrase “- no use of evaluation” for me? But yes, practical is good!
[My comment added here: see the posts



+Marius Buliga :

+Refurio Anachro  concretely, in the model of computation based on lambda calculus you have to add an evaluation strategy to make it work (for example lazy, eager, etc.).  The other model of computation, the Turing machine, is just a machine, in the sense that you have to build a whole architecture around it to use it. For the TM you use states, for example, and things work by knowing the state (and what’s under the reading head).  Even in pure functional programming, besides the need for an evaluation strategy, they live with this cognitive dissonance: on one side they rightfully say that they avoid the use of states of imperative programming, and on the other side they base their computation on evaluation of functions! That’s funny, especially if you think about the dual feeling which hides behind “pure functional programming has no side effects” (how elegant, but how do we get real effects from this?).
In distinction from that, in distributed GLC there is no evaluation needed for computation. There are several causes of this. First: there are no values in this computation. Second: everything is local and distributed. Third: you don’t have eta reduction (thus no functions!). Otherwise, it resembles pure functional programming, if you see the core-mask construction as the equivalent of the input-output monad (only you don’t have to bend backwards to keep both functions and no side effects in the model).
[My comment added here: see behaviour 5 of a GLC actor explained in this post.]
Among the effects is that it goes outside the lambda calculus (the condition to be a lambda graph is global), which simplifies a lot of things, like for example the elimination of currying and uncurrying.  Another effect is that it is also very much like automaton-style computation, only it relies neither on a predefined grid, nor on an extra, heavy handbook on how to use it as a computer.
On a more philosophical side, it shows that it is possible to do what the lambda calculus and the TM can do, but it also can do things without needing signals and bits and states as primitives. Coming back a bit to the comparison with pure functional programming, it solves the mentioned cognitive dissonance by saying that it takes into account the change of shape (pattern? like in Kauffman’s post) of the term during reduction (program execution), even if the evaluation of it is an invariant during the computation (no side effects of functional programming). Moreover, it does this by not working with functions.
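The point about evaluation strategies can be illustrated with a toy Python sketch (mine, added for illustration; it is ordinary lambda-calculus-style code, not the GLC formalism): under eager evaluation an argument is computed even when unused, while passing a thunk defers it.

```python
# A toy sketch (not GLC): the K combinator ignores its second argument,
# yet an eager call still evaluates that argument.
def K(x):
    return lambda y: x

def costly():
    raise RuntimeError("this value was never needed")

# Eager: K(1)(costly()) would raise, because Python evaluates the
# argument before the call happens.

# Lazy-style: wrap the argument in a thunk; K never forces it.
result = K(1)(lambda: costly())
print(result)  # 1
```

The choice between the two behaviours is exactly the extra "evaluation strategy" that has to be bolted onto the lambda calculus to make it compute.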
+Marius Buliga “there are no values in this computation” Not to disagree, but is there a distinction between GLC graphs that is represented by a collection of possible values? For example, topological objects can differ in their chromatic, Betti, genus, etc. numbers. These are not values like those we see in the states, signals and bits of a TM, but they are a type of value nonetheless.
+Stephen Paul King  yes, of course, you can stick values to them, but  the fact is that you can do without, you don’t need them for the sake of computation. The comparison you make with the invariants of topological objects is good! +Louis Kauffman  made this analogy between the normal form of a lambda term and such kinds of “values”.
I look forward for his comments about this!
+Refurio Anachro  thanks again for the Permutation City reference. Yes, it is clearly related to the budding Artificial Connectomes idea of GLC and chemlambda!
It is also related to interests in Unlimited Detail :) , great!
[My comment added here: see the following quote from the Permutation city wiki page

The Autoverse is an artificial life simulator based on a cellular automaton complex enough to represent the substratum of an artificial chemistry. It is deterministic, internally consistent and vaguely resembles real chemistry. Tiny environments, simulated in the Autoverse and filled with populations of a simple, designed lifeform, Autobacterium lamberti, are maintained by a community of enthusiasts obsessed with getting A. lamberti to evolve, something the Autoverse chemistry seems to make extremely difficult.

Related explorations go on in virtual realities (VR) which make extensive use of patchwork heuristics to crudely simulate immersive and convincing physical environments, albeit at a maximum speed of seventeen times slower than "real" time, limited by the optical crystal computing technology used at the time of the story. Larger VR environments, covering a greater internal volume in greater detail, are cost-prohibitive even though VR worlds are computed selectively for inhabitants, reducing redundancy and extraneous objects and places to the minimum details required to provide a convincing experience to those inhabitants; for example, a mirror not being looked at would be reduced to a reflection value, with details being "filled in" as necessary if its owner were to turn their model-of-a-head towards it.


But I keep my claim that that’s enough to understand for 100 years. Consciousness is far away. Recall that electricity first appeared as a kind of life fluid (from Volta to Frankenstein’s monster), but actually it has been used with tremendous success for other things.
I believe the same about artificial chemistry, computation, on one side, and consciousness on the other. (But ready to change this opinion if faced with enough evidence.)


Mathematics, things, objects and brains

This is about my understanding of the post Mathematics and the Real by Louis Kauffman.

I start from this quote:

One might hypothesize that any mathematical system will find natural realizations. This is not the same as saying that the mathematics itself is realized. The point of an abstraction is that it is not, as an abstraction, realized. The set { { }, { { } } } has 2 elements, but it is not the number 2. The number 2 is nowhere “in the world”.

Recall that there are things and objects. Objects are real, things are discussions. Mathematics is made of things. In Kauffman’s example the number 2 is a thing and the set { { }, { { } } } is an object of that thing.

Because an object is a reification of a thing. It is therefore real, but less interesting than the thing, because it is obtained by forgetting (much of) the discussion about it.

Reification is not a forgetful functor, though. There are interactions in both directions, from things to objects and from objects to things.

Indeed, in the rhino thing story, a living rhinoceros is brought to Europe. The sight of it was new. There were remnants of ancient discussions about this creature.

At the beginning that rhinoceros was not an object, not a thing. For us it is a thing though, and what I am writing about it is part of that thing.

From the discussion about that rhinoceros, a new thing emerged. A rhinoceros is an armoured beast which has a horn on its back which is used for killing elephants.

The rhino thing induced a wave of reifications: near the place where that rhino was first seen in Portugal, the Manueline Belém Tower was under construction at the time. “The tower was later decorated with gargoyles shaped as rhinoceros heads under its corbels.[11]” [wiki dixit]

Durer’s rhino, another reification of that discussion. And a vector of propagation of the discussion-thing. Yet another real effect, another  object which was created by the rhino thing is “Rinoceronte vestido con puntillas (1956) by Salvador Dalí in Puerto Banús, Marbella, Spain” [wiki dixit].

Let’s take another example. A discussion about the regulations on the sizes of cucumbers and carrots to be sold in the EU is a thing. This will produce a lot of reifications, in particular lots of correct-size cucumbers and carrots, and also algorithms for selecting them. And trash, and algorithms for disposing of that trash. And other discussion-things, like: is it moral to dump the unfit carrots in the trash instead of using them to feed somebody in need? Or the algorithm which states that when you go to the market, if you want to find the least poisoned vegetables, then you have to pick them among those which are not the right size.

The same with the number 2: it is a thing. One of its reifications is the set { { }, { { } } }. Once you start to discuss sets, though, you are back in the world of things.
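As an aside, the set from Kauffman’s example can be produced mechanically; here is a minimal Python sketch of the von Neumann encoding (0 = {}, n+1 = n ∪ {n}), offered only as an illustration of the reification:

```python
# Reifying numbers as sets built from the empty set (von Neumann style):
# 0 = {}, 1 = { {} }, 2 = { {}, {{}} } -- the set in Kauffman's example.
def succ(n):
    return n | frozenset([n])  # successor: n + 1 = n U {n}

zero = frozenset()
one = succ(zero)
two = succ(one)

print(len(two))  # 2: the set has two elements...
print(two == frozenset([zero, one]))  # True: ...but it is a set, not "the number 2"
```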

And so on.

I argue that one should understand from the outset that mathematics is distinct from the physical. Then it is possible to get on with the remarkable task of finding how mathematics fits with the physical, from the fact that we can represent numbers by rows of marks |  , ||, |||, ||||, |||||, ||||||, … (and note that whenever you do something concrete like this it only works for a while and then gets away from the abstraction living on as clear as ever, while the marks get hard to organize and count) to the intricate relationships of the representations of the symmetric groups with particle physics (bringing us back ’round to Littlewood and the Littlewood Richardson rule that appears to be the right abstraction behind elementary particle interactions).

However, note that   “the marks get hard to organize and count” shows only a limitation of the mark algorithm as an object, and there are two aspects of this:

  • to stir a discussion about this algorithm, thus to create a new thing
  • to recognize that such limitations are in fact limitations of our brains in isolation.

Because, I argue, brains (and their workings) are real.  Thoughts are objects, in the sense used in this post! When we think about the number 2, there is a reification of our thinking about the number 2 in the brain.

Because brains, and thoughts, are made of an immensely big number of chemical reactions and electromagnetic  interactions, there is no ghost in these machines.

Most of our brain’s working is “low level”, that is, we find it hard to account even for its existence; we have problems finding its meaning; we are very limited in contemplating it as a whole, like a self-reflecting mirror. We have to discuss it, to make it into a thing, and to contemplate instead derivative objects from this discussion.

However, following the path of this discussion, it may very well be that the brain-working thing can be understood as structure processing, with no need for external, high-level, semantic, information-based meaning.

After all, chemistry is structure processing.

A proof of principle argument for this is Distributed GLC.

The best part of Kauffman’s post, in my opinion, is, as it should be, the end of it:

The key is in the seeing of the pattern, not in the mechanical work of the computation. The work of the computation occurs in physicality. The seeing of the pattern, the understanding of its generality occurs in the conceptual domain.

… which says, to my mind at least, that computation (in the usual input-output-with-bits-in-between sense) is just one of the derivative objects of the discussion about how brains (and anything) work.

Closer to the brain-working thing, including the understanding of those thoughts about mathematics, is the discussion about “computation” as structure processing.

UPDATE: A discussion started in this G+ post.


Clocks, guns, propagators and distributors

Playing a bit with chemlambda, let’s define:

  • multipliers
  • propagators
  • two types of distributors

described in the first figure.


The blue arrows are compositions of moves from chemlambda. For instance, referring to  the picture from above,  a graph (or molecule) A is a multiplier if there is a definite finite sequence of moves in chemlambda which transforms the LHS of the first row into the RHS of the first row, and so on.

For example:

  • any combinator (molecule from chemlambda) is a multiplier; I proved this for the BCKW system in this post;
  • the bit is a propagator;
  • the application node is a distributor of the first kind, because of the first DIST move in chemlambda;
  • the abstraction node is a distributor of the second kind, because of the second DIST move in chemlambda.

Starting from those, we can build a lot of others.

If A \rightarrow  is  a multiplier and \rightarrow B \rightarrow  is a propagator then A \rightarrow B \rightarrow is a multiplier. That’s easy.

From a multiplier and a distributor of the first kind we can make a propagator, look:


From a distributor of the second kind we can make a multiplier.


We can make as well guns, which shoot graphs, like the guns from the Game of Life.  Here are two examples:


We can make clocks (which are also shooting like guns):


Funny! Possibilities are endless.


Tibit game with two players: trickster and webster

Here is a version of the tibit game with two players, called “trickster” and “webster”.

The webster is the first player. The trickster is the second player.

The webster manipulates (i.e. modifies) a “web”, which is a

  • trivalent graph
  • with oriented arrows
  • with a cyclic orientation of arrows around any node (i.e. locally planar)
  • possibly with free arrows, i.e. arrows with a free tail or a free tip.

Tokens.   The webster  has one type of token, called a “termination token”.  The trickster has two types of tokens, one yellow, another magenta.

Moves.  When his turn comes,  any player can do one of the moves listed further, or he may choose to pass.


Some of the webster’s moves are “reversible”, meaning that the webster may do them in both directions. Let’s say that the “+” direction is the one from left to right in the graphical moves, and it also means that the webster may put a termination token (but not take one). The “-” direction is from right to left in the graphical moves, and it also means that the webster may take a termination token (but not put one).


The loop rule.  It is possible that, after a move by one of the players, the web is no longer a trivalent graph, because a loop (an arrow which closes itself, without any node or token on it) appears. In such a case the loop is erased before the next move.


This is a collaborative game.

There are two webs, with tokens on them, the first called the “datum” and the second called the “goal”.

The players have to modify the datum into the goal, by playing collaboratively, in the following way.

The game has two parts.

Preparation.     The players start from a given web with given tokens placed on it (called the “datum”).  Further, the webster builds a web which contains the datum and the trickster places tokens on it, but nowhere in the datum.

Eventually they obtain a larger web which contains the datum.

Alternatively, the players may  choose an initial   web, with tokens on it, which contains the datum.

Play.   Now the webster can do only the “+” moves and the trickster can’t put any token on the web.  The players try to obtain a web, with tokens on it, which contains the goal by using their other moves.
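For concreteness, the web’s data could be sketched like this (a hypothetical Python encoding of my own, not part of the game’s definition): trivalent nodes with ports listed in cyclic order, and oriented arrows whose tail or tip may be free.

```python
# A hypothetical encoding of the "web" (names are mine, not the game's):
# trivalent nodes with ports in cyclic order, oriented arrows whose
# tail or tip may be free.
from dataclasses import dataclass, field

@dataclass
class Node:
    ports: list  # cyclic order around the node: ('in'|'out', arrow_id)

@dataclass
class Web:
    nodes: dict = field(default_factory=dict)   # node_id -> Node
    arrows: dict = field(default_factory=dict)  # arrow_id -> (tail, tip); None marks a free end

    def is_trivalent(self):
        # every node must carry exactly three ports
        return all(len(n.ports) == 3 for n in self.nodes.values())

w = Web()
w.nodes['n1'] = Node(ports=[('in', 'a'), ('out', 'b'), ('out', 'c')])
w.arrows['a'] = (None, 'n1')  # free tail
w.arrows['b'] = ('n1', None)  # free tip
w.arrows['c'] = ('n1', None)
print(w.is_trivalent())  # True
```

A move would then be a local rewrite on such a structure; the loop rule corresponds to deleting an arrow whose tail and tip coincide with no node between them.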


Tibbit game!

This is a new thread: does anybody want to contribute to making a multiplayer game which is also an exploration tool?

Explained here:

Tibbit game

Will update in the next few days, bookmark the link!

UPDATE:  “Tibit” is better, thanks Louis Kauffman!

Peer-review, is it good or bad?

I shall state my belief about this, along with my advice for you to make your own, informed, opinion:

  • Peer-review as a bottleneck on the road to legacy publication is BAD
  • Peer-review as an authority argument (I read the article because it is published in a peer-reviewed journal) is UNSCIENTIFIC
  • Open, perpetual peer-review offers a huge potential for scientific communication, thus is GOOD.
  • It is, though, the author’s option to choose to submit the article to public attention; this should not be mandatory.
  • Moreover, smart editors should jump on the possibility to exploit open peer-review, instead of the old way of throwing the peer-reviews into the wastebasket once the article is accepted or rejected, which is BAD.
  • Finally, there is NO OBLIGATION for youngsters to peer-review, contrary to the folklore that it is somehow their duty to the community to do this. No, this is only a perverse way to keep legacy publishing going, as long as the publishers use them only as an anonymous filter. On the contrary, youngsters, everybody honest in fact, should be encouraged and rewarded for using any of the abundant new means of communication for the benefit of research.

This post is motivated by Mike Taylor’s Why peer-review may be worth persisting with, despite everything and by comments at the post Two pieces of all too obvious propaganda.

See also the post Journal of uncalled advices (and links therein).

An experiment in open writing and open peer-review

I shall try the following experiment in open writing/open peer-review, which uses only available software and tools.

No technical  knowledge is needed to do this.

No new platform is needed for this.

The idea is the following. I take an article (written by me) and I copy-paste it as text + figures in a publicly shared google document with comments allowed.

On top of the document I mention the source (where the article is from), then I add a CC-BY licence.

This is all. If anybody wishes to comment on the article, it can be done precisely, by pointing to the controversial paragraphs.

Links are allowed in the comments, of course, therefore there is no limit to the quantity of data which can be put in such a comment.

There could be comment replies.

In conclusion, this is a very cheap way to do both a (limited) way of open writing and to allow open peer-review.

For the moment I started not with articles directly, but with edited content from this open notebook.  Until now I have made two “archives”

Even better would be to make a copy of the doc and put it on Figshare, to get a DOI. Then you stick the DOI link in the doc.

Is there an uncanny valley between reality and computing?

A very deep one, maybe.

I was looking for news about UD and euclideon. I found this pair of videos [source], the first by BetterReality and the second by The Farm 51 .

Now, suppose that we scanned the whole world (or a part of it) and we put the data in the cloud. Do we have a mirror of reality on the cloud now? No! Why not? The data, according to mainstream CS ideology, is the same: coordinates and tags (color, texture, etc.) in the cloud, the same as in reality.

Think about the IoT: we do have the objects, lots of them, in potentially unlimited detail. But there is still this uncanny valley between reality and computation.

We can’t use the data, because:

  • there is too much data (for our sequential machines? for our dice and slice ideology,  a manifestation of the cartesian disease ? )
  • there is not enough time (because we ask the impossible: to do, on one very limited PC, the work done by huge parts of reality? or because the data is useful only together with the methodology (based on an absolute, God’s eye view of reality, based on passive space as a receptacle), and the methodology is what stops us?)

I think that we can use the data (after reformatting it) and we can pass the uncanny valley between reality and computing. A way to do this supposes that:

  • we get rid of the absolute, passive space and time, get rid of global views (not because these don’t exist, but because this is a hypothesis we don’t need!)
  • we go beyond the Turing Machine and the Von Neumann architecture, and we seriously include the P2P, asynchronous, local, decentralized way of thinking in the model of computation (like CSP, or the Actor Model, or why not Distributed GLC?)

This is fully compatible with  the response given by Neil Gershenfeld to the question


(Thank you Stephen P. King for the G+ post which made me aware of that!)
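As a toy illustration of the second point above, here is an actor-style sketch in Python (illustrative only; none of this is the Distributed GLC implementation): each actor reacts only to messages in its local mailbox, with no shared state and no central controller.

```python
# A toy actor-style sketch (illustrative only, not Distributed GLC):
# an actor reacts to messages in its local mailbox; there is no shared
# state and no central controller.
import queue
import threading

def make_actor(behavior):
    inbox = queue.Queue()
    def run():
        while True:
            msg = inbox.get()
            if msg is None:  # stop signal
                break
            behavior(msg)
    t = threading.Thread(target=run)
    t.start()
    return inbox, t

results = []
inbox, t = make_actor(lambda msg: results.append(msg * 2))
for i in range(3):
    inbox.put(i)
inbox.put(None)
t.join()
print(results)  # [0, 2, 4]
```

Each actor here is local and asynchronous; scaling to many actors exchanging messages gives the flavour of the decentralized model of computation mentioned in the bullet.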

Public shared chemlambda archive

Let’s try this experiment.  I prepared and shared publicly the

Chemlambda archive

which is an all-in-one document about the building of chemlambda (or the chemical concrete machine), from the vague beginnings to the moment when the work on Distributed GLC started.

The sources are from this open notebook.

I hope it makes for interesting reading, but what I hope for more is that you will comment on it and

  • help to make it better
  • identify new ideas which have potential
  • improve the presentation
  • ask questions!


No extensionality, no semantics

Extensionality, seen as eta reduction, or eta conversion, is a move which can be described in GLC, but it is global, not local.

See this post for the description of the move. The move is called “ext1”. I borrow from that post the description of the move.

Move ext1.  If there is no oriented path from “2” to “1” outside the left hand side picture, then one may replace this picture by an arrow. Conversely, if there is no oriented path connecting “2” with “1”, then one may replace the arrow with the graph from the left hand side of the following picture:


Why is this move global? Because of the condition “there is no oriented path from ‘2’ to ‘1’ outside the left hand side picture”. This condition involves an unbounded number of nodes and arrows, therefore it is global.
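The globality of the side condition can be seen operationally: deciding “no oriented path from 2 to 1” is a reachability question, and answering it may require traversing the whole graph. A plain BFS sketch (added for illustration; not part of the formalism):

```python
# The side condition of ext1 is a reachability question: "no oriented
# path from 2 to 1". Deciding it may require traversing the whole
# graph, which is what makes the move global. A plain BFS sketch:
from collections import deque

def has_oriented_path(edges, src, dst):
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
    seen, todo = {src}, deque([src])
    while todo:
        v = todo.popleft()
        if v == dst:
            return True
        for w in adj.get(v, []):
            if w not in seen:
                seen.add(w)
                todo.append(w)
    return False

edges = [(2, 5), (5, 6), (6, 1)]  # an oriented path 2 -> 5 -> 6 -> 1
print(has_oriented_path(edges, 2, 1))  # True: ext1 may not be applied here
```

No bounded neighbourhood of the nodes “1” and “2” suffices: the path may pass through arbitrarily distant parts of the graph, so no local rewrite can check the condition.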

Further I shall make the usual comments:

  • the move ext1, by itself,  can be applied to any GLC graph, not only to those graphs which represent lambda terms. In this respect the move ext1 is treated like the graphic beta move, which, in GLC, can be applied to any graph (for example to graphs which are used to encode knot diagrams, or other examples of graphs which are outside the lambda calculus sector of GLC, but nevertheless are interesting),
  • if we restrict the move ext1 to lambda graphs then we have another problem: the condition “is a lambda graph” is global.

That is why the move ext1 is not considered to be part of GLC, nor will it be used in distributed GLC.

You may remark that   GLOBAL FAN-OUT is a global move, which is part of GLC.  The answer to the implicit question is that chemlambda, which is a purely local graph rewriting system, solves this. Even if chemlambda and GLC are different formalisms, when it comes to universality, then they behave the same on lambda graphs, with the bonus on the chemlambda side of being able to prove that we can replace the use of the GLOBAL FAN-OUT with a succession of moves from chemlambda.

This can’t be done for the move ext1.

And this is actually good, because it opens a very intriguing research path: how far can we go by using only locality?

That means in particular: how far can we go without semantics?


  • extensionality brings functions
  • functions bring types
  • types bring the type police
  • the type police brings semantics.

It looks like life (the basic workings of real lifeforms) goes very well:

  • without semantics
  • with self-duplication instead of GLOBAL FAN-OUT (or even instead of recursion).

In this respect distributed GLC is even more life-like than it looks at first sight.

This one looks like a bit, but wait, what’s the other one, a hobbit?

I shall use chemlambda in this post. To make it more readable for those who are used to GLC, I shall change the notations from chemlambda, as described in the first figure.


The node \phi  is the fan-in node, which replaces the dilation node from GLC.

There is a particular small graph which behaves a bit like a bit. Look:


The (composite) move PROP has been used before. Let me recall that PROP is needed for making the graph of the combinator W self-replicate. There is a whole discussion about whether it is reasonable to have all moves reversible. The conclusion, for chemlambda, is that if we want chemlambda to be Turing universal, then we need only the “+” moves, supplemented by the PROP+ move. Of course chemlambda is Turing universal if we can use all moves, but using PROP+ instead of using FAN-IN-  seems like a reasonable idea.

In this second figure we see that the “bit” from the left of the first graph, when grafted onto the “in” arrow of a fan-out node, comes out through the out arrows of the fan-out node. It behaves like a bit, only there is just an appearance of a “signal” which circulates through the “wires” (i.e. arrows).

There is another reason to call it a bit, namely that the same pair fan-out node — application node appears in the Church encoding of the naturals, as seen in GLC or chemlambda. More precisely, the natural 3, for example, has 3 such pairs, the natural n (when n is not 0) has n such pairs.  (see Figure 22, p. 14, arXiv:1312.4333)
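The counting of pairs mirrors the Church encoding, where the natural n is n-fold application; a quick Python sketch (lambda-calculus-style, not the chemlambda graphs themselves):

```python
# Church numeral n applies f exactly n times: n = \f.\x. f(...f(x)...),
# one application per application/fan-out pair counted in the text.
def church(n):
    if n == 0:
        return lambda f: lambda x: x
    return lambda f: lambda x: f(church(n - 1)(f)(x))

three = church(3)
print(three(lambda k: k + 1)(0))  # 3
```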

This “bit” circulates as well through an application node, as explained in the following figure:


A strange thing happens if we graft it to the right out arrow of a lambda abstraction node.


Wait! What’s that at the left out arrow … of the fan-out node? A co-bit? A hobbit?

First, let us notice that this time we used only “+” moves, which is very good. Then, it looks like we lost the lambda node, which transformed magically into a fan-out node, and the bit transformed into the hobbit.

Hobbit, or better co-bit? The next figure shows that a hobbit and a bit annihilate and produce a loop.

Is the hobbit a co-bit?


Laws of Form and Parmenides

This is a notebook about relations between Spencer-Brown’s book Laws of Form and Plato’s dialogue Parmenides.

It will be repeatedly updated and maybe, if productive, will transform into a page.

The motivation for starting it comes from Louis Kauffman’s post Is mathematics real? .


1.  one is infinitely many (Parmenides, source used) vs using the empty set to construct all numbers.

Introduction (142b—c) 
b Shall we return to the hypothesis and go over it again from the beginning, to see if some other result may appear? 

By all means. 

Then if unity is, we say, the consequences that follow for it must be agreed to, whatever they happen to be? Not so? 


Then examine from the beginning. If unity is, can it be, but not have a share of being? 

It cannot. 

Then the being of unity would not be the same as unity; otherwise, it would not be the being of it, nor would unity have a share of being; rather, to say that unity is would be like saying that unity is one. But as it is, the hypothesis is not what must follow if unity is unity, but what must follow if unity is. Not so? 


Because "is" signifies something other than "one"? 


So when someone says in short that unity is, that would mean that unity has a share of being? 

Of course. 

Then let us again state what will follow, if unity is. Consider: must not this hypothesis signify that unity, if it is of this sort, has parts? 

 How so? 

For the following reason: if being is said of unity, since it is, and if unity is said of being, since it is one, and if being and unity are not the same, but belong to that same thing we have hypothesized, namely, the unity which is, must it not, since it is one, be a whole of which its unity and its being become parts? 


Then shall we call each of those parts only a part, or must part be called part of whole? 

Part of whole. 

So what is one is a whole and has a part. 

Of course. 

What about each of the parts of the one which is, namely, its unity and its being? Would unity be lacking to the part which is, or being to the part which is one? 


So once again, each of the parts contains unity and being, and the least part also turns out to consist of two parts, and the same account is ever true: whatever becomes a part ever contains the two parts. For unity ever contains being, and being unity; so that they are ever necessarily becoming two and are never one.

Quite so. 

Then the unity which is would thus be unlimited in multitude? 

It seems so. 

Consider the matter still further. 

In what way? 

We say that unity has a share of being, because it is. 


And for this reason unity, since it is, appeared many. 


Then what about this: if in the mind we take unity itself, which we say has a share of being, just alone by itself, without that of which we say it has a share, will it appear to be only one, or will that very thing appear many as well? 
One, I should think. 

Let us see. Since unity is not being, but, as one, gets a share of being, the being of it must be one thing, and it must be another. 


Now, if its being is one thing and unity is another, unity is not different from its being by virtue of being one, nor is its being other than unity by virtue of being; but they are different from each other by virtue of the different and other. 

Of course. 

So difference is not the same as unity or being. 

Well then, if we were to pick out, say, being and difference, or being and unity, or unity and difference, would we not in each selection pick out some pair that is rightly called "both"? 

What do you mean? 

This: it is possible to mention being? 


And again to mention unity? 


Then each of two has been mentioned? 


But when I mention being and unity, do I not mention both? 

Yes, certainly. 

Again, if I mention being and difference, or difference and unity, and so generally, I in each case mean both? 


But for whatever is rightly called both, is it possible that they should be both but not two? 

It is not. 

But for whatever is two, is there any device by which each of two is not one? 


So since together they are pairs, each would also be one? 

It appears so. 

But if each of them is one, then when any one whatever is added to any couple whatever, does not the sum become three? 


Three is odd, and two even? 


What about this? If there are two things, must there not also be twice, and if three things, thrice, since it pertains to two to be twice one, and three, thrice one? 


But if there are two things and twice, must there not be twice two, and if three things and thrice, thrice three? 

Of course. 

What about this: if there are three things and twice, and two things and thrice, must there not also be twice three and thrice two? 

Yes, necessarily. 

So there will be even-times even numbers, odd-times odd numbers, even-times odd numbers, and odd-times even numbers. 


Then if this is so, do you think there is any number left which must not necessarily be? 

None whatever. 

So if unity is, number must also be.