How to simulate an enzyme with a quine in chemlambda (II)

Continues from part (I).

How to simulate a move with a quine. Let’s think.

The starting point is a mol file which describes the graph. Then there is a program which does the reductions. We should see the program (or the specific part of it which does a move) as the enzyme of that move. OK, but a program is something and a mol file is a different thing.

Recall though that chemlambda is Turing universal. That means in particular that for any particular mol file A.mol and any particular program P which reduces mol files there is a chemlambda molecule and a reduction algorithm which simulate how the program reduces the mol file. Kind of a bootstrapping, right?

Yes, but let’s say it again: given A.mol and P, there exists a B.mol such that the computation (P reduces B.mol) simulates the computation (P reduces A.mol).

In particular there is a part E.mol of the B.mol file (or more abstractly a subgraph of the molecule B) whose reductions in the “context” B simulate the part of the program P acting on A and doing a reduction.

That part is actually a molecule, just as A is, which can be thought of as being the enzyme of the move (which is implemented in P).

Then, instead of making a (theoretically possible) algorithm which generates the file B.mol from A.mol and P, we may do something simpler.

We would like to use a quine as E.mol.

That is because it respects the chemical analogy in the sense that the reduction of E.mol preserves the number of nodes (atoms).

For this we look again at what a move is, and also at what new primitive we need in order to make all this function.

A move is described by a pair of patterns (i.e. molecules), called the LEFT and RIGHT patterns, with the property that there is a pairing between the free ports of the LEFT pattern and those of the RIGHT pattern.

The move then is implemented by an algorithm which:

  • does a pattern matching for the LEFT pattern
  • then replaces the LEFT pattern by the RIGHT pattern.

(I should add that not every instance of the LEFT pattern in A.mol is replaced by a RIGHT pattern, because there may be conflicts created by the fact that a node of A.mol appears in two different LEFT patterns, so there is a need for a criterion which decides which move to do. I neglect this, you’ll see later why, but roughly I concentrate on the move, because the criterion (or priority choice) is a different problem.)
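
The two steps above can be sketched in Python on molecules stored as lists of (node type, ports…) tuples, in the spirit of the mol format; the port ordering I use in the beta LEFT and RIGHT patterns below is my own illustrative assumption, not necessarily the official chemlambda convention:

```python
def beta_candidates(mol):
    """Pattern matching: find (L, A) pairs forming the beta LEFT pattern."""
    lams = [n for n in mol if n[0] == "L"]
    apps = [n for n in mol if n[0] == "A"]
    # LEFT pattern: L a b c together with A c d e (they share the port c)
    return [(l, a) for l in lams for a in apps if l[3] == a[1]]

def apply_beta(mol, l, a):
    """Replacement: swap the LEFT pattern for the RIGHT one (two Arrows)."""
    _, x, y, c = l
    _, _, d, e = a
    rest = [n for n in mol if n is not l and n is not a]
    # the free ports of LEFT are rewired pairwise, as prescribed by the pairing
    return rest + [("Arrow", x, e), ("Arrow", d, y)]

mol = [("L", "1", "2", "3"), ("A", "3", "4", "5")]
l, a = beta_candidates(mol)[0]
print(apply_beta(mol, l, a))  # [('Arrow', '1', '5'), ('Arrow', '4', '2')]
```

On this two-node molecule the rewrite leaves just the two Arrow elements, which the COMB moves would then eliminate.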

Aha! We need the possibility to make pairings of edges, or even better (given the mol format) we need a way to make pairings of ports.

I shall use the notation a:b for such a pairing. Notice that:

  • we see such pairings when we say that x is of type a, i.e. x:a (recall that one can see types as pairings between edges of molecules and edges of other molecules which represent types from the Calculus of Constructions)
  • the pairing a:A, saying that the edge a belongs to the actor A, is a pairing (induces a pairing) between the edges of a molecule and the edges of an actor diagram in distributed GLC
  • the pairing a:b, where a is an edge of a molecule and b is an edge of another molecule which represents an emergent algebra term (see the thread on projective conical spaces), says that edge “a” is in place “b”
  • the pairing a:b can be the consequence of a communication; non-exclusively one may think of a:b as a consequence of c?(b) | c!(a)
  • in conclusion, moves are particular pairings coupled with an algorithm for associating a LEFT pattern to a RIGHT pattern, pretty much as typing is a pairing with another algorithm, or placing (i.e. emergent algebras and their reductions) is of the same kind; the algorithm which does the pairing is a matter of communication procedure and it can be implemented by process calculi, actor models or whatever you like best.
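
As a toy illustration of the communication reading of a:b, here is a sketch in Python where a plain FIFO queue stands in for the channel c of c?(b) | c!(a); all the names are made up for the example:

```python
from queue import Queue

c = Queue()        # the channel c
pairings = {}

def send(chan, a):
    """c!(a): put the port name a on the channel."""
    chan.put(a)

def receive(chan, b):
    """c?(b): take a name from the channel; the communication yields a:b."""
    a = chan.get()
    pairings[a] = b

send(c, "a")
receive(c, "b")
print(pairings)  # {'a': 'b'}
```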

So, let’s pick a small quine as E.mol, like a 9- or 10-quine which has the property that it has one LEFT pattern for (say) the beta move, and let’s do the following:

  • pair a LEFT pattern from A.mol with the same pattern (by pattern matching) from E.mol
  • reduce E.mol (bootstrapping) and get E’.mol, which is identical to E.mol up to renaming some of the port (i.e. edge) names
  • update the pairing (which is akin to the intended replacement of the LEFT pattern by the RIGHT pattern)
  • let the program in charge of A.mol rewire (apply the update)
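
The four steps can be sketched on port-name dictionaries; the port names and the renaming map below are illustrative assumptions, not the output of an actual quine reduction:

```python
# step 1: pair the free ports of a LEFT pattern in A.mol with the
# matching ones in the quine E.mol
pairing = {"a1": "e1", "a2": "e2"}

# step 2: reducing E.mol yields E'.mol, identical to E.mol up to a
# renaming of some port names
renaming = {"e1": "e4", "e2": "e3"}

# step 3: update the pairing by composing it with the renaming
updated = {a: renaming[e] for a, e in pairing.items()}

# step 4: the program in charge of A.mol rewires A according to `updated`
print(updated)  # {'a1': 'e4', 'a2': 'e3'}
```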

This is perhaps too short an explanation; details from A to Z will come next year.

Wish you the best for 2015!


How to simulate an enzyme with a quine in chemlambda (I)

I have discussed many times here chemlambda’s view of graph rewrites as invisible enzymes. Here is, I believe, the first picture describing this idea for the beta move (taken from this post)

Since that first proposal the formalism has changed; see this page which collects the updates, explanations and some useful links. However I shall soon lay down a detailed tutorial on the actual state of chemlambda.

Further I want to explain how we can make “move enzymes” visible in chemlambda. For this I shall use chemlambda quines and a kind of bootstrapping, along with a new primitive construct, which has appeared already several times before, denoted by “:”; wait a minute and let me take the ingredients one by one.

1. I start by stating again a very simple fact about what chemlambda is (and what it is not). Chemlambda is a graph rewrite system which uses an analogy with chemistry. The graphs are called “molecules” and the graph rewrites are seen as chemical reactions with invisible “move enzymes”. That’s about how far the analogy goes. The use of other concepts, such as chemical reaction networks, Petri nets, probabilities, etc, is not part of the analogy, because these are only external ingredients (or modules, or layers) which can be added over chemlambda in order to obtain a model of computation. Moreover, they are not, by far, the most interesting modules to add.

So, chemlambda is just that: a graph rewrite system. It is comparable with the Alchemy of Fontana and Buss, in the following precise sense: lambda calculus terms and reductions can be simulated by chemical molecules and reactions. But it is very different from Alchemy (and also from the CHAM of Berry and Boudol) because in chemlambda the abstraction and the application appear as nodes in the graph-molecules, while in Alchemy the application and abstraction have different statuses.

2. If you want to get a model of computation from chemlambda then you have to add new ingredients. My ultimate goal is an asynchronous, distributed, decentralized model of computation, sketched already in the article M. Buliga, L.H. Kauffman, GLC actors, artificial chemical connectomes, topological issues and knots, arXiv:1312.4333, as well as in many posts here, in the category distributed GLC.

But there is a lot of programming work to do, therefore it may be a good idea to take it slower and explore as many simpler models of computation based on chemlambda as possible.

The first such model is one which is sequential, using the mol files convention, quickly described by the following:

  • if no LEFT patterns of the moves BETA, DIST, FAN-IN and LOC-PR are present in the g-pattern, then stop, else
  • identify the LEFT patterns of the moves BETA, DIST, FAN-IN and LOC-PR. If there is a graphical element which is part of two distinct patterns then apply a PRIORITY CHOICE
  • apply the moves from LEFT to RIGHT
  • apply the COMB moves until no Arrow element can be eliminated
  • repeat.
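
As a partial sketch of this loop in Python, here is the COMB step made concrete, with molecules again as lists of (node type, ports…) tuples; the reading of Arrow a b as an edge segment to be glued away is my assumption about the convention, and the pattern matching for the other moves is left out:

```python
def comb(mol):
    """Apply COMB moves: eliminate Arrow elements by gluing their edges."""
    mol = list(mol)
    progress = True
    while progress:
        progress = False
        for arrow in [n for n in mol if n[0] == "Arrow"]:
            a, b = arrow[1], arrow[2]
            for i, node in enumerate(mol):
                if node is arrow:
                    continue
                if a in node[1:] or b in node[1:]:
                    # glue: rename one end of the Arrow onto the other
                    old, new = (a, b) if a in node[1:] else (b, a)
                    mol[i] = (node[0],) + tuple(new if p == old else p
                                                for p in node[1:])
                    mol.remove(arrow)
                    progress = True
                    break
            if progress:
                break
    return mol

# e.g. a rewrite has just produced two Arrows next to a fan-out node
mol = [("Arrow", "1", "5"), ("FO", "5", "6", "7"), ("Arrow", "4", "1")]
print(comb(mol))  # [('FO', '4', '6', '7')]
```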

This is the model implemented in the scripts from the chemlambda gui repo; for the mol files convention see this crude intro.

This makes a model of computation which is already very interesting. I just mention as examples the Ackermann function computation, and a no nonsense clarification of the mechanism behind the Y combinator: the Y combinator is just a gun shooting pairs of application and fanout nodes; the recursion is in fact done by self-multiplication of the molecules. The significant fact is that (some) molecules can self-multiply without any external management, only by the local graph rewrites, in this model of computation.

On the open questions side, concerning this model of computation, the most interesting seems to be: what kind of computation is this? Is it functional programming (just because it can do, but is not limited to, lambda calculus)? [It appears that not, despite superficial resemblances, because it can do untyped lambda beta (and not eta) calculus, therefore there is no notion of a function in chemlambda.] If not, what is it? Maybe the reduction strategy helps to classify it in one of the big categories. But the reduction strategy does not use any evaluation step. More concretely, for some particular examples of reductions of molecules which represent lambda terms (I choose these molecules in order to have a common ground for making comparisons, not because of a limitation of chemlambda to lambda calculus only), the reduction process sometimes resembles one based on call-by-need evaluation, other times not; see this post for some details.

All this (self-multiplication instead of recursion, reduction by something akin to signal transduction) already makes this simplest model of computation curiously alive. Alife.

3. Still in this simple model of computation there are some molecules, called chemlambda quines, which have the property that even if the reduction process goes on forever, at each step the molecule is globally the same. (Some of) these quines are derived from molecules called walkers. A walker is a molecule which stays the same inside a bigger one (imagine a walker on the road; even if the walker advances, by translation symmetry the road-and-walker ensemble stays the same). You just take a walker on a closed road and you get a quine, for example.

What is nice about quines is that they respect the analogy with chemical molecules and chemical reactions in the following sense: during the reduction process (chemical reaction) the number of atoms (nodes) is conserved.


A year review at chorasimilarity, second half

In parallel with the stuff about chemlambda, described in the first half, there was something else happening. I prepared some notes for a talk at the 4th Offtopicarium, in the form of a post here:

Notes for the “Internet of things not internet of objects”

The starting point is that what almost everybody describes as the Internet of Things is actually an Internet of Objects. We don’t want an Internet of Objects, because that would be only the usual accumulation of gadgetry and fads, i.e. only a very limited and uninspired construction. It is as imaginative as a big budget movie or as tasty as a fastfood menu.

Because reality is not objective. Reality is made of things, i.e. discussions between us humans about everything. When a certain agreement (or boredom, or faith, etc) is attained in the discussion, the said thing dies and dries into an object.

Discussions between humans thrive when individualities are respected and where there is a place which allows free mixing of ideas. A space. A theater.

Not the theater-in-a-box of perception, where the space is a scene, the dual of the homunculus, the king of fallacies. Because a scene is not a thing, but an object. On the scene the discussion is replaced by an ideology (see this or, from the scenographer point of view this).

Instead, a Greek theater under the sun could be a good starting point. As a machine which implements a theatrical distributed computing.

In order to do so, a necessary step is to separate computation from meaning. That would be only a small step forward after the separation of form from content principle. That’s a very short post, I reproduce most of it here:

One of the principles which make the net possible, as stated by Tim Berners-Lee,

separation of form from content: The principle that one should represent separately the essence of a document and the style with which it is presented.

Applied to decentralized computing, this means no semantics.

[One more confirmation of my impression that logic is something from the 21st century disguised in 19th century clothes.]

At this point the thread described here meets the one from the first half review: one of the discoveries of chemlambda, seen as artificial chemistry, is that it is possible to make this separation.


I stop here with the second half, there will be surely more halves to come 🙂


500: a year review at chorasimilarity, first half

Personal post triggered by the coincidence of the year’s end and a round number of posts here: 500.

I started this year with high hopes about the project described in the article GLC actors, artificial chemical connectomes, topological issues and knots. Louis Kauffman wrote in the introduction some unbelievably nice words about graphic lambda calculus:

Even more generally, the movement between graphs and algebra is part of the larger picture of the relationship of logical and mathematical formalisms with networks and systems that was begun by Claude Shannon in his ground-breaking discovery of the relationship of Boolean algebra and switching networks.

We believe that our graphical formulation of lambda calculus is on a par with these discoveries of Shannon. We hope that the broad impact of this proposal will be a world-wide change in the actual practice of distributed computing. Implemented successfully, this proposal has a potential impact on a par with the internet itself.

But in June we got news that the project “Secure Distributed Computing with Graphic Lambda Calculus” will not be funded by NSF, see my comments in this post. We got good reactions from the reviewers on the theoretical side (i.e. what is described in the GLC actors article), but fair criticism on the IT part of the project. Another mistake was the branding of the project as oriented towards security.

I should have followed my initial plan, namely to start by writing simple tools, programs, which ideally should also have some fun side, but which at least would allow one to start exploring the formalism in much more detail than what pen and paper allows. Instead of that, as concerns this project, there has been a waste of time from Jan to Jun, waiting for one puny source of funding before doing any programming work.

In parallel, a better trend appeared, one which I had not dreamed about two years ago: artificial life. During the summer of 2013 I thought it worthwhile to try to get rid of a weakness of the graphic lambda calculus, namely that it has one global graph rewrite, called GLOBAL FAN-OUT. That’s why I wrote the Chemical concrete machine article, describing a purely local graph rewrite formalism which was later renamed chemlambda. That was great, I even made an icon for it:


which is made of two lambda symbols, one being upside down, to suggest that linear writing conventions are obsolete. The lambdas are arranged into a DNA-like double spiral, to suggest connections with life. (Of course that means I entered the alife field, but everything about it was so fresh for me. Later I learned about Alchemy and other approaches, mixing either lambda calculus with alife, or rewriting systems with alife, but not both, and surely not the way it is done in chemlambda, as abundantly documented in this blog.)

Several (personal as well) novelties related to the article. One was that the article appeared on figshare too, not only on arXiv. This relates to another subject which I follow, namely what means of dissemination of research to use, seeing that publishing academic articles is no longer enough, unless you want your work to be used only as a kind of fertilizer for future data mining projects.

More about this later.

The initial purpose of chemlambda was to be aimed at biologists, chemists, somewhere there, but apparently, almost certainly, if a guy does computing then he will not do chemistry, and if he does chemistry then he has no idea about lambda calculus. I mean, there are exceptions, like being part of an already existing team which does both, which I’m not. Anyway, so initially I hoped for interactions with biochemists (I still do, looking for that rare bird who would dare to lower himself to talk with a geometer from a non-dominant country, motivated purely by the research content, and who would have the trivial idea to use his chemistry notions for something which is not in the scope of already existing projects).

With Louis Kauffman, we set out to write a kind of continuation of the GLC actors article, this time concentrated on chemlambda, as seen from our personal interests. The idea was to participate at the biggest event in the field, the ALIFE 14 conference. Which we did: Chemlambda, universality and self-multiplication.

Putting together the failure of the NSF project and the turn towards alife, is only natural that I set to write more explanations about the formalism, like this series  of 7  expository posts on chemlambda described with the help of g-patterns:

This was a bridge towards starting to write programs, that’s for later.

In parallel with other stuff, which I’ll explain in the second half.


Chemlambda task got bigger, explanation

I learned how to reproduce the type inference system of the Calculus of Constructions in a (slight modification of) chemlambda. This comes with the previous caveats concerning untyped lambda beta calculus and chemlambda:

  • (a) for any term (or type) there is a chemlambda molecule, but not for any chemlambda molecule there is a correspondent in CoC,
  • (b) the inference rules are implemented as (compositions of) graph rewrites, but not all graph rewrites have a correspondent in the inference rules,
  • (c) the reduction strategy (I think corresponding to the “tactics” from COQ) is an ingredient which has to be added in order to obtain a full model of computation,
  • (d) all graph rewrites are local (i.e. there is an a priori bound on the number of nodes and arrows of the left and right pattern of graph rewrites, as well as on the number of nodes and arrows needed to be taken into account in a condition, in case the move has the form: if condition then replace left pattern by right pattern),
  • (e) there is no correspondence between the reduction in chemlambda and reduction in CoC (in the sense that the reduction in chemlambda may pass by molecules which do not correspond to terms in CoC).

Some time is needed to eliminate possible bugs, but the task has become even bigger: a purely local version of COQ, based exclusively on graph rewrites, with “tactics” that never use values or name passing.

I have kept a distance, previously, from types because they seemed to me an added ingredient, not really needed, to the purity of a geometrical version of untyped lambda beta calculus. But slowly I understood that the initial try to mix emergent algebras (i.e. space as program) with lambda calculus, called “lambda-scale”, is a timid and a bit skew attempt at a type system.

Added over this is the fact that it’s hard, at least for me, to discern in the big literature on the subject what holds for lambda beta, but not for any other “superior” version, because almost everywhere, if you don’t read carefully, extensionality is mixed in the pot by default, like in all famously well known category theory treatments of the matter.

Now, I have been forced from the beginning to exclude extensionality, because it appears as a non-local graph rewrite. That’s it, I have not made it on purpose, it turns out to be so. Without extensionality, no functions, so no functional style. Without terms and variable passing there is no evaluation, no call (lazy, by need, or otherwise).

There is something else happening here. I looked and found a (surprising) kind of historical precedent for this situation. At the beginning of chemistry, or even before its rigorous establishment, there was a long and well-regarded functional period, when chemicals were identified with their function. (In popular thinking even now we speak or hear about the soothing quality of this herbal tea, say.) This was later called “vitalism”, and has been rejected on the grounds of the lack of enough explanatory power.

So I asked: regardless of the fashion of the moment, if you think that untyped lambda beta is Turing universal, then why use extensionality (or types, btw)? What does a geometrized (and enlarged, in the sense of (a, b, c, e) from the beginning of this post) version of untyped lambda beta give?

On the surface, if one takes a lazy look at my drawings from a year ago, it looks like known facts mixed with crazy statements. But if you care to look better and read it, then you see that this geometrical view is quite different, because it basically has no names, for the previously mentioned reasons.

Now the task is even bigger, so I have to settle for picking some proof of principle bits, which can be attained in finite time by myself.

I tried to go a bit into programming; of course it works, the limit being only my procrastination in front of so many possible choices.

Not quite: even with the small attempts contained in the chemlambda gui, a patch of awk, sh and js scripts, I am drowned in data.

The field is huge and will probably get bigger.


Question about public key cryptography with chemlambda

In this post Alice and Bob will communicate through #chemlambda computations.

Alice and Bob have each a mol file: Alice has A.mol and Bob has B.mol.

Step 1.  Alice partitions her file into A1.mol and A2.mol. She keeps A1.mol secret. The partition of the A.mol into two files creates some new free ports in A1.mol and A2.mol. Indeed, recall that free port variables are those which appear only once in the mol file. Thus every port variable which appears twice in A.mol, but only once in A1.mol and once in A2.mol, is not free in A.mol, but is free in both A1.mol and A2.mol. Every such variable corresponds to an edge which goes between a node from A1.mol and a node from A2.mol.

Let’s call these port variables “inner” port variables.
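
This free-port bookkeeping can be made concrete in a few lines of Python, assuming the mol convention that each line is a node type followed by its port variables; the example files below are made up, not Alice’s actual A.mol:

```python
from collections import Counter

def free_ports(mol_text):
    """Port variables occurring exactly once in a mol file are free."""
    counts = Counter(w for line in mol_text.splitlines()
                       for w in line.split()[1:])
    return {p for p, c in counts.items() if c == 1}

A1 = "L a b c\nFO c d e"
A2 = "A d e f"
A  = A1 + "\n" + A2

# inner ports: free in both halves, but not free in the whole file
inner = (free_ports(A1) & free_ports(A2)) - free_ports(A)
print(sorted(inner))  # ['d', 'e']
```

On this toy A.mol the inner ports are exactly d and e, the two edges cut by the partition.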



Alice renames all (or some of) the port variables from A2.mol and keeps the correspondence between the new names and the old names of the inner port variables.

Then she reduces only the A2.mol and she publishes it.


The published file, A2_red.mol, is a mol file. Alice also publishes the partition of free port variables into inner and outer public free ports.

Important remark. I see the reduction of A2.mol to the reduced A2_red.mol as an encryption of A2.mol. This is not a very accurate point of view, but the idea is that the reduction can be seen as a one way function which is easy to compute, but with an inverse which is hard to compute. More accurately, chemlambda is universal, therefore the reduction of A2.mol may correspond to any computation. The file A2_red.mol represents the result of the computation and A2.mol represents the program which has as result A2_red.mol.

Step 2. Bob takes the file A2_red.mol (which is public) and adds it to his own file B.mol. The file B.mol is secret. He connects the outer free ports of the file A2_red.mol to the free ports of the file B.mol, in a way which he keeps secret.


Then Bob reduces this mol file and obtains a file (A2_red-B)_red.mol. He sends this file to Alice.

Step 3. Alice uses her secret correspondence between public and private inner free port variables to connect her A1.mol with the mol file received from Bob.

She reduces this file.



What happened?

Let’s look at the whole picture at once.


We have a mol file which is partitioned into three parts A1, A2 and B. Then it is reduced in the following way.

1. First the A1 and B parts of the file are frozen and only the A2 is reduced.

2. Then the wave of reductions spreads, and now only A1 is frozen and reductions are allowed everywhere else.

3. Finally A1 is unfrozen and the reductions continue in the whole file.

This is what Alice gets after the step 3.


Question: what can Alice learn about B.mol by looking at the result?


UPDATE: I have been thinking about this since the summer of 2014, when Louis Kauffman described a Rope Ceremony in his visual style, using his characters Cookie and Parabel:


Trying to understand what’s going on, I made [my first] video. Here it is:


Rant about Jeff Hawkins’ “the sensory-motor model of the world is a learned representation of the thing itself”

I enjoyed very much the presentation given by Jeff Hawkins “Computing like the brain: the path to machine intelligence”


Around 8:40


This is something which could be read in parallel with the passage I commented on in the post The front end visual system performs like a distributed GLC computation.

I reproduce some parts

In the article by Kappers, A.M.L.; Koenderink, J.J.; Doorn, A.J. van, Basic Research Series (1992), pp. 1 – 23,

Local Operations: The Embodiment of Geometry

the authors introduce the notion of the “Front End Visual System”.

Let’s pass to the main part of interest: what does the front end do? Quotes from section 1, indexed by me with (a), …, (e):

  • (a) the front end is a “machine” in the sense of a syntactical transformer (or “signal processor”)
  • (b) there is no semantics (reference to the environment of the agent). The front end merely processes structure
  • (c) the front end is precategorical,  thus – in a way – the front end does not compute anything
  • (d) the front end operates in a bottom up fashion. Top down commands based upon semantical interpretations are not considered to be part of the front end proper
  • (e) the front end is a deterministic machine […]  all output depends causally on the (total) input from the immediate past.

Of course, today I would say “performs like a distributed chemlambda computation”, according to one of the strategies described here.

Around 17:00 (Sparse distributed representation): “You have to think about a neuron being a bit (active: a one, non active: a zero). You have to have many thousands before you have anything interesting [what about C. elegans?]

Each bit has semantic meaning. It has to be learned, this is not something that you assign to it, …

… so the representation of the thing itself is its semantic representation. It tells you what the thing is. ”

That is exactly what I call “no semantics”! But it is much better formulated as a positive thing.

Why is this a form of “no semantics”? Because as you can see the representation of the thing itself edits out the semantics, in the sense that “semantics” is redundant, appears only at the level of the explanation about how the brain works, not in the brain workings.

But what is the representation  of the thing itself? A chemical blizzard in the brain.

Let’s put together the two ingredients into one sentence:

  • the sensory-motor model of the world is a learned representation of the thing itself.

Remark that, as previously, there is too much here: “model” and “representation” sort of cancel one another, being just superfluous additions to the discourse. Not needed.

What is left: a local, decentralized, asynchronous, chemically based, never ending “computation”, which is as concrete as the thing (i.e. the world, the brain) itself.

I put “computation” in quotation marks because this is one of the sensible points: there should be a rigorous definition of what that “computation” means. Of course, the first step would be a fully rigorous mathematical proof of principle that such a “computation”, which satisfies the requirements listed in the previous paragraph, exists.

Then, it could be refined.

I claim that chemlambda is such a proof of principle. It satisfies the requirements.

I don’t claim that brains work based on a real chemistry instance of chemlambda.

Just a proof of principle.

But how much I would like to test this with people from the frontier mentioned by Jeff Hawkins at the beginning of the talk!

In the following some short thoughts from my point of view.

While playing with chemlambda with the strategy of reduction called “stupid” (i.e. the simplest one), I tested how it works on the very small part (of chemlambda) which simulates lambda calculus.

Lambda calculus, recall, is one of the two pillars of computation, along with the Turing machine.

In chemlambda, the lambda calculus appears as a sector, a small class of molecules and their reactions. Contrary to the Alchemy of Fontana and Buss, abstraction and application (operations from lambda calculus) are both concrete (atoms of molecules). The chemlambda artificial chemistry defines some very general, but very concrete local chemical interactions (local graph rewrites on the molecules), of which some (but not all) can be interpreted as lambda calculus reductions.

Contrary to Alchemy, the part which models lambda calculus is concerned only with untyped lambda calculus without extensionality, therefore chemical molecules are not identified with their function, nor do they have definite functions.

Moreover, the “no semantics” means concretely that most of the chemlambda molecules can’t be associated with a global meaning.

Finally, there are no “correct” molecules, everything resulted from the chemlambda reactions goes, there is no semantics police.

So from this point of view, this is very nature like!

Amazingly,  the chemical reductions of molecules which represent lambda terms reproduce lambda calculus computations! It is amazing because with no semantics control, with no variable passing or evaluation strategies, even if the intermediary molecules don’t represent lambda calculus terms, the computation goes well.

For example the famous Y combinator reduces first to a small molecule (two nodes and 6 port nodes), which does not have any meaning in lambda calculus, and then becomes just a gun shooting “application” and “fanout” atoms (a pair which I call a “bit”). The functioning of the Y combinator is not at all sophisticated and mysterious; it is instead fueled by the self-multiplication of the molecules (realized by unsupervised local chemical reactions), which then react with the bits and have as effect exactly what the Y combinator does.

The best example I have is the illustration of the computation of the Ackermann function (recall: a recursive but not primitive recursive function!)

What is nice in this example is that it works without the Y combinator, even if it’s a game of recursion.

But this is a choice, because actually, for many computations which try to reproduce lambda calculus reductions, the “stupid” strategy used with chemlambda is a bit too exuberant if the Y combinator is used as in lambda calculus (or functional programming).

The main reason is the lack of extensionality: there are no functions, so the usual functional programming techniques and designs are not the best idea. There are shorter ways in chemlambda, which employ better than FP the “representation of the thing itself is its own semantic interpretation”.

One of those techniques is to use, instead of long linear and sequential lambda terms (designed as a composition of functions), another architecture, one of neurons.

For me, when I think about a neural net and neural computation, I tend to see the neurons and synapses as loci of chemical activity. Then I just forget about these bags of chemicals and I see a chemical connectome sort of thing; actually I see a huge molecule undergoing chemical reactions with itself, but in such a way that its spatial extension (in the neural net), physically embodied by neurons and synapses and perhaps glial cells and whatnot, is manifested in the reductions themselves.

In this sense, the neural architecture way of using the Y combinator efficiently in chemlambda is to embed it into a neuron (bag of chemical reactions), like sketched in the following simple experiment

Now, instead of a sequential call of duplication and application (which is the way the Y combinator is used in lambda calculus), imagine a well designed network of neurons which in very few steps builds a (huge, distributed) molecule (instead of a perhaps very big number of sequential steps), which in its turn reduces itself in very few steps as well, and then this chemical connectome ends in a quine state, i.e. in a sort of dynamic equilibrium (reactions are happening all the time but they combine in such a way that the reductions compensate themselves into a static image).

Notice that the end of the short movie about the neuron is a quine.

For chemlambda quines see this post.

In conclusion, there are chances that with this massively parallel (actually a bad name for decentralized, local) architecture of a neural net, seen in its chemical working, chemlambda really can do not only any computer science computation, but also anything a neural net can do.


Beggar Access: use DRM for academics, enjoy watching them fight for the copyright

#BeggarAccess is Ross Mounce’s name for the flavour of OA proposed by Nature, see

This is a move which some people expected for a while. It is about the fact that who has the copyright is less relevant than who can enforce digital rights management.

A while ago I got interested in that (see the post )

and made a list of moves by publishers which may be seen as related to the one made by Nature.

So it’s a trend. It looks like academic publishers just discovered what everybody else discovered before: the power of DRM.

I guess that people listening to music and watching movies form so big a population that we can say they define the average level of human intelligence.

These moves show that publishers (and your average academic boss) believe that you, researcher, are more stupid than this average.

There’s no contradiction: everybody writes academic articles, nobody reads them

In academia we write highly technical pieces of text called “articles” which appear in publications misleadingly called “journals”.

The authors work incredibly hard to write them. Compared to a regular office job, to have a subject for an academic article and to develop it into a piece of nice research, that’s extremely hard. People doing that should have a reason for wasting their minds for such a small material reward.

They may be like hounds with extremely developed smell sense. They have to hunt, they have to detect the prey. And after the hunt they are OK with a small bit of the prey to chew, while the hunter, with no particularly good nose, takes the most of the prey.

Academic articles and academic journals are from another planet compared with regular articles in regular journals. Nobody reads them.

Nobody reads them. When they do, it is because they seem to be supporting some trivial idea which can be communicated in a regular journal article.

Almost nobody reads them. The peers read them. For any article there are about a handful of other people in the world, beside the author, who may think that it is not a complete loss of time to read it. Who think it is their duty to claim they are going to read it. And who, in most cases, unless it is trivial from their point of view, don’t read it.

People, even in academia, are still monkeys. Emotions and leaders, authority and the pack order are very important for them as well. They only claim that’s not so, because they are scientists.