# let x=A in B in chemlambda

The beta reduction from lambda calculus is easily understood as the command

let x=B in A

It corresponds to the term (Lx.A)B, which reduces to A[x=B], i.e. to the term A where all free instances of x have been replaced by B (by a substitution algorithm which is left to be specified; there are many choices among them!).
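As a hedged illustration, here is a minimal, capture-naive substitution on a toy term representation (the tuple encoding and names are my own, not from the post); it deliberately leaves out the renaming of bound variables, i.e. exactly the part of the algorithm left to be specified:

```python
# Toy lambda terms as nested tuples (hypothetical encoding):
#   ("var", x) | ("lam", x, body) | ("app", f, a)

def subst(term, x, B):
    """Replace free occurrences of variable x in term by B (capture-naive)."""
    kind = term[0]
    if kind == "var":
        return B if term[1] == x else term
    if kind == "lam":
        # x is rebound here, so stop: inner occurrences of x are not free
        if term[1] == x:
            return term
        return ("lam", term[1], subst(term[2], x, B))
    return ("app", subst(term[1], x, B), subst(term[2], x, B))

def beta(term):
    """One beta step at the root: (Lx.A) B  -->  A[x=B]."""
    if term[0] == "app" and term[1][0] == "lam":
        _, (_, x, A), B = term
        return subst(A, x, B)
    return term

# (Lx.x) y  -->  y
print(beta(("app", ("lam", "x", ("var", "x")), ("var", "y"))))  # ('var', 'y')
```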

In Christopher P. Wadsworth, Semantics and Pragmatics of the Lambda Calculus, DPhil thesis, Oxford, 1971, and later in John Lamping, An algorithm for optimal lambda calculus reduction, POPL '90: Proceedings of the 17th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pp. 16-30, there are proposals to replace the beta rule by a graph rewrite on the syntactic tree of the term.

These proposals opened many research paths, related to call-by-name evaluation strategies, and of course leading to Interaction Nets.

The beta rule, as seen on the syntactic tree of a term, is a simple graph rewrite, but its context of application is complex if one tries to stay in the realm of syntactic trees (or that of graphs associated to lambda terms).

This is one example of blindness caused by semantics. Of course it is very difficult to conceive strategies of application of this local rule (it involves only two nodes and 5 edges) so that the graph after the rewrite has a global property (i.e. represents a lambda term).

But a whole world is missed this way!

In chemlambda the rewrite is called BETA or A-L.
See the list of rewrites:
http://chorasimilarity.github.io/chemlambda-gui/dynamic/moves.html

In mol notation this rewrite is

L 1 2 c, A c 3 4 --> Arrow 1 4, Arrow 3 2

(and then it is the turn of the COMB rewrite to eliminate the Arrow elements, if possible)
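The rewrite above can be sketched on a list of mol entries; this is my own toy encoding (the actual chemlambda scripts in the repo work differently):

```python
def beta_rewrite(mol):
    """Apply one BETA rewrite:  L a b c, A c d e  -->  Arrow a e, Arrow d b.
    mol is a list of entries such as ("L", "1", "2", "c")."""
    for i, l in enumerate(mol):
        if l[0] != "L":
            continue
        for j, a in enumerate(mol):
            # L's third port meets A's first port: the two nodes interact
            if a[0] == "A" and a[1] == l[3]:
                rest = [e for k, e in enumerate(mol) if k not in (i, j)]
                return rest + [("Arrow", l[1], a[3]), ("Arrow", a[2], l[2])]
    return mol

# L 1 2 c, A c 3 4  -->  Arrow 1 4, Arrow 3 2
print(beta_rewrite([("L", "1", "2", "c"), ("A", "c", "3", "4")]))
```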

As 1, 2, 3, 4, c are only port variables in chemlambda, let's use others:

L A x in, A in B let --> Arrow A let, Arrow B x

So if we interpret Arrow (via the COMB rewrite) as meaning “=”, we get to the conclusion that

let x=B in A

is in chemlambda

L A x in

A in B let

Nice. It is almost verbatim the same thing. Remark that the command "let" appears as a port variable too.

Visually the rewrite/command is this:

As you see this is a Wadsworth-Lamping kind of graph rewrite, with the distinctions that:
(a) x, A, B are only port variables, not terms
(b) there is no constraint for x to be linked to A

The price is that even if we start with a graph which is related to a lambda term, after performing such careless rewrites we get out of the realm of lambda terms.

But there is a more subtle difference: the nodes of the graphs are not gates and the edges are not wires which carry signals.

The reduction works well for many fundamental examples,
http://chorasimilarity.github.io/chemlambda-gui/dynamic/demos.html
by a simple combination of the beta rewrite and those rewrites from the DIST family I wrote about in the post about duplicating trees.
https://chorasimilarity.wordpress.com/2015/05/04/the-illustrated-shuffle-trick-used-for-tree-duplication/

So we get out of lambda calculus; what's wrong with that? Nothing, actually. It turns out that the world outside lambda calculus, but inside chemlambda, has very interesting features. Nobody has explored them, which is why it is so hard to discuss them without falling into one's preconceptions (it has to be functional programming, it has to be lambda calculus, it has to be a language, it has to have a global semantics).
___________________________________________

# Nothing vague in the “no semantics” point of view

I'm a supporter of "no semantics" and I'll try to convince you that there is nothing vague in it.

Take any formalism. To any term built from this formalism there is an associated syntactic tree. Now, look at the syntactic tree and forget about the formalism. Because it is a tree, no matter how you choose to decorate its leaves, you can progress from the leaves to the root by decorating each edge. At each node of the tree you follow a decoration rule which says: take the decorations of the input edges and use them to decorate the output edge. If you suppose that the formalism is one which uses operations of bounded arity, then you can say the following: strictly by following rules of decoration which are local (you need to know only at most N edge decorations in order to decorate another edge) you can arrive at decorating all the tree. All the graph! And the meaning of the graph has something to do with this decoration. Actually the formalism turns out to be not about graphs (trees), but about static decorations which appear at the root of the syntactic tree.
But, you see, these static decorations are global effects of local rules of decoration. Here enters the semantic police. Thou shall accept only trees whose roots accept decorations from a given language. Hard problems ensue, which are heavily loaded with semantics.
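A minimal sketch of this local decoration scheme, with a toy arithmetic formalism of my own choosing (the names are hypothetical, not from the post):

```python
# Local decoration of a syntactic tree: each node combines the decorations of
# its input edges into the decoration of its output edge.
def decorate(tree, leaf_decoration):
    """tree: ("leaf", name) or (op, subtree, subtree); returns the root decoration."""
    if tree[0] == "leaf":
        return leaf_decoration[tree[1]]
    op, left, right = tree
    a, b = decorate(left, leaf_decoration), decorate(right, leaf_decoration)
    return a + b if op == "+" else a * b   # the local rule at each node

t = ("+", ("leaf", "x"), ("*", ("leaf", "y"), ("leaf", "z")))
# No matter how the leaves are decorated, the local rules propagate to the root:
print(decorate(t, {"x": 1, "y": 2, "z": 3}))   # 7
```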
Now, let’s pass from trees to other graphs.
The same phenomenon (a static global decoration emerging from local rules of decoration) holds for any DAG (directed acyclic graph). It is telling that people LOVE DAGs, so much so that they go to the extreme of excluding other graphs from their thinking. These are the ones who put everything in a functional frame.
Nothing wrong with this!
Decorated graphs have a long tradition in mathematics; think for example of knot theory.
In knot theory the knot diagram is a graph (with 4-valent nodes) which surely is not acyclic! However, one of the fundamental objects associated to a knot is the algebraic object called the "quandle", which is generated by the edges of the graph, with certain relations coming from the nodes. It is of course a very hard, semantically fully loaded problem to try to identify the knot from the associated quandle.
The difference from the syntactic trees is that the graph does not admit a static global decoration, generically. That is why the associated algebraic object, the quandle, is generated by (and not equal to) the set of edges.

There are beautiful problems related to the global objects generated by local rules. They are also difficult, because of the global aspect. It is perhaps as difficult to find an algorithm which builds an isomorphism between two graphs which have the same associated family of decorations as it is to find a decentralized algorithm for graph reduction of a distributed syntactic tree.

But these kinds of problems do not cover all the interesting problems.

What if this global semantic point of view makes things harder than they really are?

Just suppose you are a genius who found such an algorithm, by amazing, mind bending mathematical insights.

Your brilliant algorithm, because it is an algorithm, can be executed by a Turing Machine.

But Turing machines are purely local. The head of the machine has only local access to the tape at any given moment (forget about indirection; I'll come back to this in a moment). The number of states of the machine is finite and the number of rules is finite.
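To make the locality concrete, here is a minimal Turing machine sketch (my own toy example, not from the post): the head reads one cell, consults a finite rule table, writes one cell, and moves one step. This machine scans right over 1s and appends a 1 (unary successor).

```python
RULES = {  # (state, symbol) -> (write, move, next_state): a finite rule table
    ("scan", "1"): ("1", +1, "scan"),
    ("scan", "_"): ("1", +1, "halt"),
}

def run(tape):
    tape = dict(enumerate(tape))  # sparse tape; "_" is the blank symbol
    state, head = "scan", 0
    while state != "halt":
        symbol = tape.get(head, "_")               # local read of one cell
        write, move, state = RULES[(state, symbol)]
        tape[head] = write                          # local write of one cell
        head += move                                # local move of one step
    return "".join(tape[i] for i in sorted(tape))

print(run("111"))   # 1111
```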

This means that the brilliant work served to edit out the global from the problem!

If you are not content with TM, because of indirection, then look no further than to chemlambda (if you wish combined with TM, like in
http://chorasimilarity.github.io/chemlambda-gui/dynamic/turingchem.html , if you love TM ) which is definitely local and Turing universal. It works by the brilliant algorithm: do all the rewrites which you can do, nevermind the global meaning of those.

Oh, wait, what about a living cell: does it have a way to manage the semantics of the correct global chemical reaction networks which ARE the cell?

What about a brain, made of many neural cells, glial cells and whatnot? By the homunculus fallacy, it can't have static, external, globally selected functions and terms (aka semantics).

On the other side, of course that the researcher who studies the cell, or the brain, or the mathematician who finds the brilliant algorithm, they are all using heavy semantic machinery.

TO TELL THE STORY!

Not that the cell or the brain need the story in order for them to live.

In the animated gif there is a chemlambda molecule called the 28 quine, which satisfies the definition of life in the sense that it randomly replenishes its atoms, while approximately keeping its global shape (thus it has a metabolism). It does this under the algorithm: do all the rewrites you can do, but you can do a rewrite only if a random coin flip accepts it.
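The algorithm can be sketched generically; below, a toy string-rewrite system stands in for the chemlambda graph rewrites (my own illustration, not the scripts from the repo):

```python
import random

# Stochastic reduction loop: find an applicable rewrite, apply it only if a
# coin flip accepts it, never mind any global meaning.
RULES = [("ab", "b"), ("ba", "a")]

def step(word, rng):
    for lhs, rhs in RULES:
        i = word.find(lhs)
        if i >= 0 and rng.random() < 0.5:          # the coin flip
            return word[:i] + rhs + word[i + len(lhs):]
    return word

def reduce(word, seed=0):
    rng = random.Random(seed)
    while any(lhs in word for lhs, _ in RULES):    # some rewrite still applies
        word = step(word, rng)
    return word

print(reduce("abab"))
```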

Most of the atoms of the molecule are related to operations (application and abstraction) from lambda calculus.

I modified a script a bit (sorry, this one is not in the repo) so that, whenever possible, the edges of this graph which MAY be part of a syntactic tree of a lambda term turn GOLD while the others are dark grey.

They mean nothing, there's no semantics, because for once the golden graphs are not DAGs, and because the computation consists of rewrites of graphs which don't preserve well the "correct" decorations from before the rewrite.

There's no semantics, but there are still some interesting questions to explore, the main one being: how does life work?

http://chorasimilarity.github.io/chemlambda-gui/dynamic/28_syn.html

Dear Marius,
There is no such thing as no-semantics. Every system that YOU deal with is described by you and observed by you with some language that you use. At the very least the system is interpreted in terms of its own actions and this is semantics. But your point is well-taken about not using more semantic overlay than is needed for any given situation. And certainly there are systems like our biology that do not use the higher level descriptions that we have managed to observe. In doing mathematics it is often the case that one must find the least semantics and just the right syntax to explore a given problem. Then work freely and see what comes.
Then describe what happened and as a result see more. The description reenters the syntactic space and becomes ‘uninterpreted’ by which I mean  open to other interactions and interpretations. It is very important! One cannot work at just one level. You will notice that I am arguing both for and against your position!
Best,
Lou Kauffman
Dear Louis,
Thanks! It looks like we agree in some respects: "And certainly there are systems like our biology that do not use the higher level descriptions that we have managed to observe." Not in others; this is the basis of any interesting dialogue.
Related to the earlier "no semantics" g+ post [*], here is a passage from Rodney Brooks, "Intelligence without representation":

“It is only the observer of the Creature who imputes a central representation or central control. The Creature itself has none; it is a collection of competing behaviors.  Out of the local chaos of their interactions there emerges, in the eye of an observer, a coherent pattern of behavior. There is no central purposeful locus of control. Minsky [10] gives a similar account of how human behavior is generated.  […]
… we are not claiming that chaos is a necessary ingredient of intelligent behavior.  Indeed, we advocate careful engineering of all the interactions within the system.  […]
We do claim however, that there need be no  explicit representation of either the world or the intentions of the system to generate intelligent behaviors for a Creature. Without such explicit representations, and when viewed locally, the interactions may indeed seem chaotic and without purpose.
I claim there is more than this, however. Even at a local  level we do not have traditional AI representations. We never use tokens which have any semantics that can be attached to them. The best that can be said in our implementation is that one number is passed from a process to another. But it is only by looking at the state of both the first and second processes that that number can be given any interpretation at all. An extremist might say that we really do have representations, but that they are just implicit. With an appropriate mapping of the complete system and its state to another domain, we could define a representation that these numbers and topological  connections between processes somehow encode.
However we are not happy with calling such things a representation. They differ from standard  representations in too many ways.  There are no variables (e.g. see [1] for a more  thorough treatment of this) that need instantiation in reasoning processes. There are no rules which need to be selected through pattern matching. There are no choices to be made. To a large extent the state of the world determines the action of the Creature. Simon  [14] noted that the complexity of behavior of a  system was not necessarily inherent in the complexity of the creature, but Perhaps in the complexity of the environment. He made this  analysis in his description of an Ant wandering the beach, but ignored its implications in the next paragraph when he talked about humans. We hypothesize (following Agre and Chapman) that much of even human level activity is similarly a reflection of the world through very simple mechanisms without detailed representations.”

This also brings to mind this quote from the end of the Vehicle 3 section of V. Braitenberg's book Vehicles: Experiments in Synthetic Psychology:

"But, you will say, this is ridiculous: knowledge implies a flow of information from the environment into a living being or at least into something like a living being. There was no such transmission of information here. We were just playing with sensors, motors and connections: the properties that happened to emerge may look like knowledge but really are not. We should be careful with such words."

Louis Kauffman's reply to this post:
Dear Marius,
It is interesting that some people (yourself it would seem) get comfort from the thought that there is no central pattern.
Cookie and Parabel and sentient text strings, always coming in and out of nothing at all.
Well guys, what do you think about the statement of Minsky?

Cookie. Well this is an interesting text string. It asserts that there is no central locus of control. I can assert the same thing! In fact I have just done so in these strings of mine.
The strings themselves are just adjacencies of little possible distinctions, and only "add up" under the work of an observer.
Parabel. But Cookie, who or what is this observer?
Cookie. Oh you taught me all about that Parabel. The observer is imaginary, just a reference for our text strings so that things work out grammatically. The observer is a fill-in.
We make all these otherwise empty references.
Parabel. I am not satisfied with that. Are you saying that all this texture of strings of text is occurring without any observation? No interpreter, no observer?
Cookie. Just us Parabel and we are not observers, we are text strings. We are just concatenations of little distinctions falling into possible patterns that could be interpreted by an observer if there were such an entity as an observer?
Parabel. Are you saying that we observe ourselves without there being an observer? Are you saying that there is observation without observation?
Cookie. Sure. We are just these strings. Any notion that we can actually read or observe is just a literary fantasy.
Parabel. You mean that while there may be an illusion of a ‘reader of this page’ it can be seen that the ‘reader’ is just more text string, more construction from nothing?
Cookie. Exactly. The reader is an illusion and we are illusory as well.
Parabel. I am not!
Parabel. This goes too far. I think that Minsky is saying that observers can observe, yes. But they do not have control.
Cookie. Observers seem to have a little control. They can look here or here or here …
Parabel. Yes, but no ultimate control. An observer is just a kind of reference that points to its own processes. This sentence observes itself.
Cookie. So you say that observation is just self-reference occurring in the text strings?
Parabel. That is all it amounts to. Of course the illusion is generated by a peculiar distinction that occurs where part of the text string is divided away and named the “observer” and “it” seems to be ‘reading’ the other part of the text. The part that reads often has a complex description that makes it ‘look’ like it is not just another text string.
Cookie. Even text strings is just a way of putting it. We are expressions in imaginary distinctions emanated from nothing at all and returning to nothing at all. We are what distinctions would be if there could be distinctions.
Parabel. Well that says very little.
Cookie. Actually there is very little to say.
Parabel. I don’t get this ‘local chaos’ stuff. Minsky is just talking about the inchoate realm before distinctions are drawn.
Parabel. Are you becoming inchoate?
Parabel. Y
Parabel.

Best,
Lou

Dear Louis, I see that the Minsky reference in the beginning of the quote triggered a reaction. But recall that Minsky appears in a quote by Brooks, which itself appears in a post by Marius, which is a follow-up of an older post. That's where my interest is. This post only gathers evidence that what I call "no semantics" is an idea which is, essentially, not new.
So let me go back to the main idea, which is that there are positive advances which can be made under the constraint to never use global notions, semantics being one of them.
As for the story about Cookie and Parabel, why is it framed in a text-strings universe, and why does it discuss a "central locus of control"? I can easily imagine Cookie and Parabel having a discussion before writing was invented, say in a cave which much later will be discovered by modern humans at Lascaux.
I don't believe that there is a central locus of control. I do believe that semantics is a means to tell the story, any story, as if there is a central locus of control. There is no "central" and there is very little "control".
This is not a negative stance, it is a call for understanding life phenomena from points of view which are not ideologically loaded by "control" and "central". I am amazed by the variety, beauty and vastness of life, and I feel limited by the semantics point of view. I see in a string of text thousands of years of cultural conventions taken for granted; I can't forget that a string of text becomes so to me only after a massive processing which "semantics" people take for granted as well, and that during this discussion most of me is doing far less trivial stuff, like collaborating and fighting with billions of other beings in my gut, breathing, seeing, hearing, moving my fingers. I don't forget that the string of text is recreated by my brain 5 times per second.
And what is an “illusion”?
A third post
In the last post https://plus.google.com/+MariusBuliga/posts/K28auYf69iy I gave two quotes, one from Brooks, "Intelligence without representation" (where he quotes Minsky en passant, but which contains much more than this brief Minsky quote), and the other from Braitenberg, "Vehicles: Experiments in Synthetic Psychology".
Here is another quote, from a reputed cognitive science specialist, who convinced me about the need for a no-semantics point of view with his article "The brain a geometry engine".
The following quote is by Jan Koenderink “Visual awareness”
http://www.gestaltrevision.be/pdfs/koenderink/Awareness.pdf

“What does it mean to be “visually aware”? One thing, due to Franz Brentano (1838-1917), is that all awareness is awareness of something. […]
The mainstream account of what happens in such a generic case is this: the scene in front of you really exists (as a physical object) even in the absence of awareness. Moreover, it causes your awareness. In this (currently dominant) view the awareness is a visual representation of the scene in front of you. To the degree that this representation happens to be isomorphic with the scene in front of you the awareness is veridical. The goal of visual awareness is to present you with veridical representations. Biological evolution optimizes veridicality, because veridicality implies fitness.  Human visual awareness is generally close to veridical. Animals (perhaps with exception of the higher primates) do not approach this level, as shown by ethological studies.
JUST FOR THE RECORD these silly and incoherent notions are not something I ascribe to!
But it neatly sums up the mainstream view of the matter as I read it.
The mainstream account is incoherent, and may actually be regarded as unscientific. Notice that it implies an externalist and objectivist God’s Eye view (the scene really exists and physics tells how), that it evidently misinterprets evolution (for fitness does not imply veridicality at all), and that it is embarrassing in its anthropocentricity. All this should appear to you as in the worst of taste if you call yourself a scientist.”  [p. 2-3]

[Remark: all these quotes appear in previous posts at chorasimilarity]

_______________________________________________________________

# A year review at chorasimilarity, second half

In parallel with the stuff about chemlambda, described in the first half, there was something else happening. I prepared some notes for a talk at the 4th Offtopicarium, in the form of a post here:

Notes for the “Internet of things not internet of objects”

The starting point is that what almost everybody describes as the Internet of Things is actually an Internet of Objects. We don’t want an Internet of Objects, because that would be only the usual accumulation of gadgetry and fads, i.e. only a very limited and uninspired construction. It is as imaginative as a big budget movie or as tasty as a fastfood menu.

Because reality is not objective. Reality is made of things, i.e. discussions between us humans about everything. When a certain agreement (or boredom, or faith, etc.) is attained in the discussion, the said thing dies and dries into an object.

Discussions between humans thrive when individualities are respected and where there is a place which allows free mixing of ideas. A space. A theater.

Not the theater-in-a-box of perception, where the space is a scene, the dual of the homunculus, the king of fallacies. Because a scene is not a thing, but an object. On the scene the discussion is replaced by an ideology (see this or, from the scenographer point of view this).

Instead, a Greek theater under the sun could be a good starting point. As a machine which implements a theatrical distributed computing.

In order to do so, a necessary step is to separate computation from meaning. That would be only a small step forward after the separation of form from content principle. That’s a very short post, I reproduce most of it here:

One of the principles which make the net possible, as stated by Tim Berners-Lee,

separation of form from content: The principle that one should represent separately the essence of a document and the style with which it is presented.

Applied to decentralized computing, this means no semantics.

[One more confirmation of my impression that logic is something from the 21st century disguised in 19th century clothes.]

At this point the thread described here meets the one from the first half review: one of the discoveries of chemlambda, seen as artificial chemistry, is that it is possible to make this separation.

__________________________________________________________________

I stop here with the second half, there will be surely more halves to come 🙂

___________________________________________________________________

# Rant about Jeff Hawkins “the sensory-motor model of the world is a learned representation of the thing itself”

I enjoyed very much the presentation given by Jeff Hawkins “Computing like the brain: the path to machine intelligence”

Around 8:40

This is something which could be read in parallel with the passage I commented in the post   The front end visual system performs like a distributed GLC computation.

I reproduce some parts

In the article by Kappers, A.M.L.; Koenderink, J.J.; Doorn, A.J. van, Basic Research Series (1992), pp. 1 – 23,

Local Operations: The Embodiment of Geometry

the authors introduce the notion of  the  “Front End Visual System” .

Let's pass to the main part of interest: what does the front end do? Quotes from section 1, indexed by me with (a), ..., (e):

• (a) the front end is a “machine” in the sense of a syntactical transformer (or “signal processor”)
• (b) there is no semantics (reference to the environment of the agent). The front end merely processes structure
• (c) the front end is precategorical,  thus – in a way – the front end does not compute anything
• (d) the front end operates in a bottom up fashion. Top down commands based upon semantical interpretations are not considered to be part of the front end proper
• (e) the front end is a deterministic machine […]  all output depends causally on the (total) input from the immediate past.
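Points (a), (b) and (e) can be illustrated by any purely local, deterministic operator which "merely processes structure"; a discrete Laplacian over a signal is a minimal stand-in of my own choosing (nothing from the article is implied):

```python
# A purely local "syntactic transformer": each output sample depends
# deterministically on a small neighborhood of the input, with no reference
# to what the signal means.
def local_laplacian(signal):
    return [signal[i - 1] - 2 * signal[i] + signal[i + 1]
            for i in range(1, len(signal) - 1)]

print(local_laplacian([0, 0, 1, 0, 0]))   # [1, -2, 1]
```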

Of course, today I would say “performs like a distributed chemlambda computation”, according to one of the strategies described here.

Around 17:00 (Sparse distributed representation): "You have to think about a neuron being a bit (active: a one, non-active: a zero). You have to have many thousands before you have anything interesting [what about C. elegans?]

Each bit has semantic meaning. It has to be learned, this is not something that you assign to it, …

… so the representation of the thing itself is its semantic representation. It tells you what the thing is. ”

That is exactly what I call "no semantics"! But it is much better formulated, as a positive thing.

Why is this a form of "no semantics"? Because, as you can see, the representation of the thing itself edits out the semantics, in the sense that "semantics" is redundant: it appears only at the level of the explanation about how the brain works, not in the brain's workings.

But what is the representation  of the thing itself? A chemical blizzard in the brain.

Let’s put together the two ingredients into one sentence:

• the sensory-motor model of the world is a learned representation of the thing itself.

Remark that, as previously, there is too much here: "model" and "representation" sort of cancel one another, being just superfluous additions to the discourse. Not needed.

What is left: a local, decentralized, asynchronous, chemically based, never-ending "computation", which is as concrete as the thing (i.e. the world, the brain) itself.

I put "computation" in quote marks because this is one of the sensible points: there should be a rigorous definition of what that "computation" means. Of course, the first step would be a fully rigorous mathematical proof of principle that such a "computation", which satisfies the requirements listed in the previous paragraph, exists.

Then, it could be refined.

I claim that chemlambda is such a proof of principle. It satisfies the requirements.

I don’t claim that brains work based on a real chemistry instance of chemlambda.

Just a proof of principle.

But how much I would like to test this with people from the frontier mentioned by Jeff Hawkins at the beginning of the talk!

In the following some short thoughts from my point of view.

While playing with chemlambda with the strategy of reduction called “stupid” (i.e. the simplest one), I tested how it works on the very small part (of chemlambda) which simulates lambda calculus.

Lambda calculus, recall, is one of the two pillars of computation, along with the Turing machine.

In chemlambda, the lambda calculus appears as a sector, a small class of molecules and their reactions. Contrary to the Alchemy of Fontana and Buss, abstraction and application (operations from lambda calculus) are both concrete (atoms of molecules). The chemlambda artificial chemistry defines some very general, but very concrete local chemical interactions (local graph rewrites on the molecules) and some (but not all) can be interpreted as lambda calculus reductions.

Contrary to Alchemy, the part which models lambda calculus is concerned only with untyped lambda calculus without extensionality; therefore chemical molecules are not identified with their function, nor do they have definite functions.

Moreover, the “no semantics” means concretely that most of the chemlambda molecules can’t be associated to a global meaning.

Finally, there are no "correct" molecules: everything that results from the chemlambda reactions goes, there is no semantics police.

So from this point of view, this is very nature like!

Amazingly, the chemical reductions of molecules which represent lambda terms reproduce lambda calculus computations! It is amazing because, with no semantic control, with no variable passing or evaluation strategies, even if the intermediary molecules don't represent lambda calculus terms, the computation goes well.

For example, the famous Y combinator reduces first to a small molecule (two nodes and 6 port nodes) which does not have any meaning in lambda calculus, and then becomes just a gun shooting "application" and "fanout" atoms (a pair which I call a "bit"). The functioning of the Y combinator is not at all sophisticated and mysterious; it is instead fueled by the self-multiplication of the molecules (realized by unsupervised local chemical reactions), which then react with the bits and have as effect exactly what the Y combinator does.

The best example I have is the illustration of the computation of the Ackermann function (recall: a recursive but not primitive recursive function!)

What is nice in this example is that it works without the Y combinator, even if it’s a game of recursion.
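For reference, the Ackermann function itself is short to state; a plain recursive sketch in Python (just the textbook definition, not the chemlambda reduction):

```python
import sys

sys.setrecursionlimit(100000)  # Ackermann recursion gets deep quickly

def ack(m, n):
    """Ackermann function: total and recursive, but not primitive recursive."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

print(ack(2, 3))   # 9
print(ack(3, 3))   # 61
```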

But this is a choice, because actually, for many computations which try to reproduce lambda calculus reductions, the “stupid” strategy used with chemlambda is a bit too exuberant if the Y combinator is used as in lambda calculus (or functional programming).

The main reason is the lack of extensionality: there are no functions, so the usual functional programming techniques and designs are not the best idea. There are shorter ways in chemlambda, which employ better than FP the idea that the representation of the thing itself is its own semantic interpretation.

One of those techniques is to use, instead of long linear and sequential lambda terms (designed as a composition of functions), another architecture: one of neurons.

For me, when I think about a neural net and neural computation, I tend to see the neurons and synapses as loci of chemical activity. Then I just forget about these bags of chemicals and I see a chemical connectome sort of thing; actually I see a huge molecule undergoing chemical reactions with itself, but in such a way that its spatial extension (in the neural net), physically embodied by neurons and synapses and perhaps glial cells and whatnot, is manifested in the reductions themselves.

In this sense, the neural architecture way of using the Y combinator efficiently in chemlambda is to embed it into a neuron (a bag of chemical reactions), as sketched in the following simple experiment.

Now, instead of a sequential call of duplication and application (which is the way the Y combinator is used in lambda calculus), imagine a well designed network of neurons which in very few steps builds a (huge, distributed) molecule (instead of a perhaps very big number of sequential steps), which in its turn reduces itself in very few steps as well; and then this chemical connectome ends in a quine state, i.e. in a sort of dynamic equilibrium (reactions are happening all the time but they combine in such a way that the reductions compensate each other into a static image).

Notice that the end of the short movie about the neuron is a quine.

For chemlambda quines see this post.

In conclusion, there are chances that this massively parallel (a bad name, actually, for decentralized and local) architecture of a neural net, seen through its chemical workings, can do not only any computer science computation, but also anything a neural net can do.

_________________________________________________________________

# Why process calculi are old industrial revolution thinking: the example with the apple pies

I have strong ideological arguments against process calculi, exactly because of the parallel composition. I think that parallel composition is not realistic, because there is no meaning in the “parallel” unless you have a God view over the distributed computation.

This is a very brute argument, but I can make it detailed (and I did it, here and there in this open notebook).

In my opinion we are still in the process of letting go of the old ideas of the industrial revolution. The main idea which we need to exorcise out of every corner of the mind is that there is a benevolent (or not) dictator who organizes the process of the world (be it a factory, a government, a ministry, or a school class) in a way which is easy to lead because it has well placed bottlenecks which give a global meaning to the process.

Concretely, the very successful idea of organizing stuff, which comes from the industrial revolution, is that one has to abstract over the individuals, the subjects, then to stream the interactions between them (the individuals abstracted into functions) by creating a hierarchy of bottlenecks. The advantage is that the structure gives a meaning to what is happening.

A meaning is simply like a hash table.

The power of this system of organization is tremendous.  It led to the creation of the modern states, as well as to the creation of economic and ideological systems, like capitalism and communism, which are both alike in the way they treat individuals as abstractions.

This kind of organisation pervades everything, in very concrete and punctual ways, so much so that the material structure which holds together our society (like, in particular, the server-client structure of the net, as a random example, though less so some of the net protocols) has grown the way it is not only because there are some universal laws and invisible hands which constrain it, but also because this structure is the sum of a myriad of components which have been designed in this way and not in another because of the industrial revolution ideology of control and abstraction.

The power of the industrial revolution’s main idea is that you can take any naturally occurring process (like apples growing in trees and people picking them and making pies), structure it in a meaningful way, and transform it into a viral process (an apple pie making factory). You just have to abstract apples and people into resources and synchronize the various parts, to define the inputs and outputs, and then optimize your control over it; then you can make 10^9 evaluations of the abstract notion of “apple pie” and put them on the shelves of the supermarket, instead of the 10^3 individual apple pies grandmothers used to make in their kitchens.

Now, in the factory of apple pies, the notion of parallel processes makes perfect sense. Contrary to that, in the real world with trees and apples and grandmothers with their ovens, P | Q makes sense only in retrospect.

If you were God then you could look from far above at all these grandmas and see lots of P | Q. But the grandmas don’t need the parallel composition  to make their delicious apple pies. Moreover, the way of life is that generally there is no need for a centralized control, no need for a meaning. Viruses and cells don’t know they are viruses and cells. They work very well without knowing they do some tasks inside an environment.

The life of one cell may be in parallel with the life of another from God’s point of view, but this relation is certainly not a part of, nor needed for, these life processes to function.

The big question for me is: how to replicate this by techne? It is clearly possible, as proved by the world we live in. It looks to me very promising to try to work under these self-imposed constraints: no meaning, no parallel composition in particular, no abstraction, no levels. It is surprising that chemlambda works at all already.

_____________________________________________

# Microbes take over and then destroy the HAL 9000 prototype

Today was a big day for the AI specialists and their brainchild, the HAL 9000. Finally, the decision was made to open the isolation bubble which separated the most sophisticated artificial intelligence from the Net. They expected that somehow their brainchild would survive unharmed when exposed to the extremely dynamic medium of decentralized, artificial-life-based computation we all use every day.

As the video made by Jonathan Eisen shows, in about 9 seconds after the super-intelligence was taken out of the quarantine and relayed to the Net, “microbes take over and then destroy the” HAL 9000 prototype.

After the experiment, one of the creators of the HAL 9000 told us: “Maybe we concentrated too much on higher level aspects of the mind. We aim for understanding intelligence and rational behaviour, but perhaps we should learn this lesson from Nature, namely that real life is a wonderful, complex tangle of local, low level interactions, and that rational mind is a very fragile epiphenomenon. We tend to take for granted the infrastructure of life which runs in the background.”

“I was expecting this result” said a Net designer. “The strong point of the Alife decentralized functioning of the Net is exactly this: as the microbes, the Net needs no semantics to function. This is what keeps us free from the All Seeing Eye, corporation clouds based Net which was the rule some years ago. This is what gives everybody the trust to use the Net.”

___________________________________________

This is another post which responds to the challenge from Alife vs AGI. You are welcome to suggest another one or to make your own.

___________________________________________

# What is new in distributed GLC?

We have seen that several parts or principles of distributed GLC are well anchored in previous, classical research.  There are three such ingredients:

There are several new things, which I shall try to list.

1.  It is a clear, mathematically well formulated model of computation. There is a preparation stage and a computation stage. In the preparation stage we define the “GLC actors”; in the computation stage we let them interact. Each GLC actor interacts with others, or with itself, according to 5 behaviours. (Not part of the model is the choice among behaviours, if several are possible at the same moment. The default is to require the actors to first interact with others (i.e. behaviours 1, 2, in this order) and, if no interaction is possible, to proceed with the internal behaviours 3, 4, in this order. As for behaviour 5, the interaction with external constructs, this is left to particular implementations.)

2.  It is compatible with the Church-Turing notion of computation. Indeed,  chemlambda (and GLC) are universal.

3. The evaluation is not needed during computation (i.e. in stage 2). This is the embodiment of the “no semantics” principle. The “no semantics” principle actually means something precise; it is a positive thing, not a negative one. Moreover, the dissociation between computation and evaluation is new in many ways.

4. It can be used for doing functional programming without the eta reduction. This is a more general form of functional programming, which in fact is so general that it does not use functions. That is because the notion of a function makes sense only in the presence of eta reduction.

5. It has no problem going, at least apparently, outside the Church-Turing notion of computation. This is not a vague statement, it is a fact, meaning that GLC and chemlambda have sectors (i.e. parts) which are used to represent lambda terms, but also sectors which represent other formalisms, like tangle diagrams, or, in the case of GLC, also emergent algebras (which are the most general embodiment of a space which has a very basic notion of differential calculus).
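The default choice among behaviours described in point 1 can be sketched in code. This is a minimal sketch under assumptions of mine: the method names and the `DemoActor` class are hypothetical stand-ins for the 5 behaviours, not part of the model, and behaviour 5 (interaction with external constructs) is omitted since the model leaves it to implementations.

```python
# Hypothetical sketch of the default behaviour choice for a GLC actor:
# try interactions with others (behaviours 1, 2) first, then the
# internal behaviours (3, 4). Method names are illustrative only.

class DemoActor:
    """A toy actor where only internal behaviour 3 is currently possible."""
    def interact_other_1(self): return False   # behaviour 1: not possible now
    def interact_other_2(self): return False   # behaviour 2: not possible now
    def internal_3(self):       return True    # behaviour 3: fires
    def internal_4(self):       return True    # behaviour 4: never reached here

def step(actor):
    """Try the behaviours in the default priority order; report which fired."""
    for behaviour in (actor.interact_other_1, actor.interact_other_2,
                      actor.internal_3, actor.internal_4):
        if behaviour():          # a behaviour returns True when it applies
            return behaviour.__name__
    return None                  # no behaviour possible: the actor stays idle
```

For the toy actor above, `step` tries behaviours 1 and 2, finds them impossible, and fires behaviour 3 without ever reaching behaviour 4.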

__________________________________________

All these new things are also weaknesses of distributed GLC because they are, apparently at least, against some ideology.

But the very concrete formalism of distributed GLC should counter this.

I shall use the same numbering for enumerating the ideologies.

1.  Actors a la Hewitt vs Process Calculi. The GLC actors are like the Hewitt actors in this respect. But they are not as general as Hewitt actors, because they can’t behave in arbitrary ways. On the other side, it is not very clear if they are Hewitt actors, because there is no clear correspondence between what a Hewitt actor can do and what a GLC actor can do.

This is an evolving discussion. It seems that people have very big problems coping with distributed, purely local computing without jumping to the use of global notions of space and time. But, on the other side, biologists may have an intuitive grasp of this (unfortunately, they are not very much in love with mathematics, but this is changing very fast).

2.  Distributed GLC is a programming language vs it is a machine. Is it a computer architecture or a software architecture? Neither. Both. Here the biologists are almost surely lost, because many of them (excepting those who believe that chemistry can be used for lambda calculus computation) think in terms of logic gates when they consider computation.

The preparation stage, when the actors are defined, is essential. It resembles choosing the right initial condition in a computation using automata. But it is not the same, because there is no lattice, grid, or preferred topology of cells where the automaton performs.

The computation stage does not involve any molecule-collision mechanism, be it stochastic or deterministic. That is because the computation is purely local, which means in particular that (if well designed in the first stage) it evolves without needing this stochastic or lattice support. During the computation the states of the actors change, and the graph of their interactions changes, in a way which is compatible with being asynchronous and distributed.

That is why those working in artificial chemistry may feel lost here, because the model is not stochastic.

There is no chemical reaction network which orchestrates the computation, simply because a CRN is a GLOBAL notion, so it is not really needed. This computation is concurrent, not parallel (because parallel needs a global simultaneity relation to make sense).

In fact there is only one molecule which is reduced; therefore distributed GLC looks more like an artificial one-molecule computer (see C. Joachim, Bonding more atoms together for a single molecule computer). Only it is not a computer, but a program which reduces itself.

3.  The no semantics principle is against a strong ideology, of course. The fact that evaluation may not be needed for computation is outrageous (although it might cure the cognitive dissonance from functional programming concerning the “side effects”; see Another discussion about math, artificial chemistry and computation).

4.  Here we clash with functional programming, apparently. But I hope only superficially, because actually functional programming is the best ally, see Extreme functional programming done with biological computers.

5.  Claims about going outside the Church-Turing notion of computation are very badly received. But when it comes to distributed, asynchronous computation, it’s much less clear. My position here is simply that there are very concrete ways to do geometric or differential-like “operations” without having to convert them first into a classical computational frame (and the onus is on the classical computation guys to prove that they can do it, which, as a geometer, I highly doubt, because they don’t understand, or neglect, space; and then the distributed asynchronous aspect comes and hits them when they expect it least).

______________________________________________

Conclusion:  distributed GLC is great and it has a big potential; come and use it. Everybody interested knows where to find us. Internet of things? Decentralized computing? Maybe cyber-security? You name it.

Moreover, there is a distinct possibility to use it not on the Internet, but in the real physical world.

______________________________________________

# A passage from Rodney Brooks’ “Intelligence without representation” applies to distributed GLC

… almost literally. I am always very glad to discover that some research subject where I contribute is well anchored in the past. Otherwise said, it is always good for a researcher to learn that he’s on the shoulders of some giant; it gives faith that there is some value in the respective quest.

The following passage resembles a lot some parts and principles of distributed GLC:

• distributed
• asynchronous
• done by processing structure to structure (via graph rewrites)
• purely local
• this model of computation does not need or use any  evaluation procedure, nor in particular evaluation strategies. No names of variables, no values are used.
• the model does not rely on signals passing through gates, nor on the sender-receiver setting of Information Theory.
• no semantics.

Now, the passage from “Intelligence without representation” by Rodney Brooks.

It is only the observer of the Creature who imputes a central representation or central control. The Creature itself has none; it is a collection of competing behaviors.  Out of the local chaos of their interactions there emerges, in the eye of an observer, a coherent pattern of behavior. There is no central purposeful locus of control. Minsky [10] gives a similar account of how human behavior is generated.  […]

… we are not claiming that chaos is a necessary ingredient of intelligent behavior.  Indeed, we advocate careful engineering of all the interactions within the system.  […]
We do claim however, that there need be no  explicit representation of either the world or the intentions of the system to generate intelligent behaviors for a Creature. Without such explicit representations, and when viewed locally, the interactions may indeed seem chaotic and without purpose.
I claim there is more than this, however. Even at a local  level we do not have traditional AI representations. We never use tokens which have any semantics that can be attached to them. The best that can be said in our implementation is that one number is passed from a process to another. But it is only by looking at the state of both the first and second processes that that number can be given any interpretation at all. An extremist might say that we really do have representations, but that they are just implicit. With an appropriate mapping of the complete system and its state to another domain, we could define a representation that these numbers and topological  connections between processes somehow encode.

However we are not happy with calling such things a representation. They differ from standard representations in too many ways. There are no variables (e.g. see [1] for a more thorough treatment of this) that need instantiation in reasoning processes. There are no rules which need to be selected through pattern matching. There are no choices to be made. To a large extent the state of the world determines the action of the Creature. Simon [14] noted that the complexity of behavior of a system was not necessarily inherent in the complexity of the creature, but perhaps in the complexity of the environment. He made this analysis in his description of an ant wandering the beach, but ignored its implications in the next paragraph when he talked about humans. We hypothesize (following Agre and Chapman) that much of even human level activity is similarly a reflection of the world through very simple mechanisms without detailed representations.

________________________________________________

# No semantics principle in GEB book

A very interesting connection between the no semantics principle of Distributed GLC and the famous GEB book by Douglas R. Hofstadter has been made by Louis Kauffman, who indicated the preface of the 20th anniversary edition.

Indeed, I shall quote from the preface (boldface by me), hoping that the meaning of the quoted will not change much by being taken out of context.

… I felt sure I had spelled out my aims over and over in the text itself. Clearly, however, I didn’t do it sufficiently often, or sufficiently clearly. But since now I’ve got the chance to do it once more – and in a prominent spot in the book, to boot – let me try one last time to say why I wrote this book, what it is about, and what its principal thesis is.

In a word, GEB is a very personal attempt to say how it is that animate beings can come out of inanimate matter. […]

GEB approaches these questions by slowly building up an analogy that likens inanimate molecules to meaningless symbols, and further likens selves (or ‘I”s or “souls”, if you prefer – whatever it is that distinguishes animate from inanimate matter) to certain special swirly, twisty, vortex-like, and meaningful patterns that arise only in particular types of systems of meaningless symbols.

I have not read this book; instead I arrived, partially, at conclusions close to these by trying to understand some articles written by Jan Koenderink, being mesmerized by the beautiful meme “Brain a geometry engine”. Last time I discussed this in the post The front end visual system performs like a distributed GLC computation.

# Another discussion about math, artificial chemistry and computation

I have to record this ongoing discussion from G+, it’s too interesting. I shall do it from my subjective viewpoint; feel free to comment on this, either here or in the original place.

(I did almost the same, i.e. saved here some of my comments from an older discussion, in the post Model of computation vs programming language in molecular computing. That recording was significant, for me at least, because I made those comments while thinking about the work on the GLC actors article, which was then in preparation.)

Further I shall only lightly edit the content of the discussion (for example by adding links).

It started from this post:

“I will argue that it is a category mistake to regard mathematics as “physically real”.”  Very interesting post by Louis Kauffman: Mathematics and the Real.
Discussed about it here: Mathematics, things, objects and brains.
Greg Egan‘s “Permutation City” argues that a mathematical process itself is identical in each and any instance that runs it. If we presume we can model consciousness mathematically, it then means: two simulated minds are identical in experience, development, everything, when run with the same parameters on any machine (anywhere in spacetime).

Also, it shouldn’t matter how a state came to be; the instantaneous experience for the simulated mind is independent of its history (of course the mind’s memory ought to be identical, too). He then levitates further and proposes that it’s not relevant whether the simulation is run at all, because we may find all states of such a mind’s being represented scattered in our universe… If I remember correctly, Egan later contemplated being embarrassed about this bold ontological proposal. You should be able to find him reasoning about the dust hypothesis in the accompanying material on his website. Update: I just saw that wikipedia’s take on the book has the connection to Max Tegmark in the first paragraph:
> […] cited in a 2003 Scientific American article on multiverses by Max Tegmark.
http://en.wikipedia.org/wiki/Permutation_City

___________________________
+Refurio Anachro  thanks for the reference to Permutation City and to the dust hypothesis, will read and comment later. For the moment I have to state my working hypothesis: in order to understand basic workings of the brain (math processing included), one should pass any concept it is using through the filter
• local not global
• distributed not sequential
• no external controller
• no use of evaluation.

From this hypothesis, I believe that notions like “state”, “information”, “signal”, “bit”, are concepts which don’t pass this filter, which is why they are part of an ideology which impedes the understanding of many wonderful things which are discovered lately, somehow against this ideology. Again, Nature is a bitch, not a bit 🙂

That is why, instead of boasting against this ideology and jumping to consciousness (which I think is something whose understanding will wait until sometime very far in the future), I prefer to offer first an alternative (that’s GLC, chemlambda) which shows that it is indeed possible to do anything which can be done with these ways of thinking coming from the age of the invention of the telephone. And then more.

After that, I prefer to wonder not about consciousness and its simulation, but instead about vision and other much more fundamental processes related to awareness. These are taken for granted, usually, and they have the bad habit of contradicting any “bit”-based explanation given up to date.

___________________________

the past lives in a conceptual domain
would one argue then that the past
is not real
___________________________
+Peter Waaben  that’s easy to reply to, by way of analogy with The short history of the rhino thing.
Is the past real? Is the rhinoceros horn on the back real? Durer put a horn on the rhino’s back because of past (ancient) descriptions of rhinos as elephant killers. The modus operandi of that horn was that the rhino, as massive as the elephant but much shorter, would go under the elephant’s belly and rip it open with the dedicated horn. For centuries, in the minds of people, rhinos really had a horn on the back. (This has real implications, like, for example, the idea that if you stay in the cold then you will catch a cold.) Moreover, and even more real, there are now real rhinos with a horn on the back, like for example Dali’s rhino.
I think we can safely say that the past is a thing, and any of this thing’s reifications is very real.
___________________________
+Peter Waaben, that seems what the dust hypothesis suggests.
+Marius Buliga, i’m still digesting, could you rephrase “- no use of evaluation” for me? But yes, practical is good!
___________________________
[My comment added here: see the posts

]

___________________________

+Refurio Anachro  concretely, in the model of computation based on lambda calculus you have to add an evaluation strategy to make it work (for example, lazy, eager, etc.). The other model of computation, the Turing machine, is just a machine, in the sense that you have to build a whole architecture around it to use it. For the TM you use states, for example, and things work by knowing the state (and what’s under the reading head). Even in pure functional programming, besides the need for an evaluation strategy, they live with this cognitive dissonance: on one side they rightfully say that they avoid the use of states of imperative programming, and on the other side they base their computation on evaluation of functions! That’s funny, especially if you think about the dual feeling which hides behind “pure functional programming has no side effects” (how elegant, but how do we get real effects from this?).
In distinction from that, in distributed GLC there is no evaluation needed for computation. There are several causes of this. First, there are no values in this computation. Second, everything is local and distributed. Third, you don’t have eta reduction (thus no functions!). Otherwise, it resembles pure functional programming if you see the core-mask construction as the equivalent of the input-output monad (only that you don’t have to bend backwards to keep both functions and no side effects in the model).
[My comment added here: see behaviour 5 of a GLC actor explained in this post.
]
Among the effects is that it goes outside the lambda calculus (the condition to be a lambda graph is global), which simplifies a lot of things, like for example the elimination of currying and uncurrying. Another effect is that it is also very much like an automaton kind of computation, only that it does not rely on a predefined grid, nor on an extra, heavy handbook of how to use it as a computer.
On a more philosophical side, it shows that it is possible to do what the lambda calculus and the TM can do, but also to do things without needing signals, bits and states as primitives. Coming back a bit to the comparison with pure functional programming, it solves the mentioned cognitive dissonance by saying that it takes into account the change of shape (pattern? like in Kauffman’s post) of the term during reduction (program execution), even if the evaluation of it is an invariant during the computation (the no side effects of functional programming). Moreover, it does this without working with functions.
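As a concrete illustration of reduction without evaluation: in chemlambda’s mol notation the BETA rewrite reads L 1 2 c , A c 3 4 → Arrow 1 4 , Arrow 3 2, and COMB then eliminates the Arrow elements. Here is a toy sketch of these two rewrites; the tuple encoding, the helper names, and the use of FRIN/FROUT markers for free ports are simplifying assumptions of this sketch, not the chemlambda scripts themselves.

```python
def beta(mol):
    """Apply one BETA rewrite: L a x c , A c b d  -->  Arrow a d , Arrow b x."""
    for i, (t1, p1) in enumerate(mol):
        if t1 != "L":
            continue
        for j, (t2, p2) in enumerate(mol):
            if t2 == "A" and p2[0] == p1[2]:   # the L out port feeds the A left port
                a, x, _ = p1
                _, b, d = p2
                rest = [n for k, n in enumerate(mol) if k not in (i, j)]
                return rest + [("Arrow", [a, d]), ("Arrow", [b, x])]
    return mol

def comb(mol):
    """COMB: eliminate each Arrow element by gluing its two ports together."""
    while True:
        arrows = [(k, p) for k, (t, p) in enumerate(mol) if t == "Arrow"]
        if not arrows:
            return mol
        k, (a, b) = arrows[0]
        mol = [(t, [a if q == b else q for q in p])
               for idx, (t, p) in enumerate(mol) if idx != k]

# (Lx.x)B as a mol fragment: the identity lambda applied to a free input B.
term = [("L", ["a", "a", "c"]),      # lambda node: body port = bound var port = "a"
        ("A", ["c", "b", "out"]),    # application of the lambda to port "b"
        ("FRIN", ["b"]),             # free input (stands for B)
        ("FROUT", ["out"])]          # free output of the whole term
reduced = comb(beta(term))           # free input wired straight to the free output
```

After BETA and COMB the free input is wired directly to the free output, i.e. (Lx.x)B reduces to B, with no names of variables and no values evaluated along the way.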
___________________________
+Marius Buliga “there are no values in this computation” Not to disagree, but is there a distinction between GLC graphs that is represented by a collection of possible values? For example, topological objects can differ in their chromatic, Betti, genus, etc. numbers. These are not values like those we see in the states, signals and bits of a TM, but they are a type of value nonetheless.
___________________________
+Stephen Paul King  yes, of course, you can stick values to them, but the fact is that you can do without; you don’t need them for the sake of computation. The comparison you make with the invariants of topological objects is good! +Louis Kauffman made this analogy between the normal form of a lambda term and such kinds of “values”.
___________________________
+Refurio Anachro  thanks again for the Permutation City reference. Yes, it is clearly related to the budding Artificial Connectomes idea of GLC and chemlambda!
It is also related to interests in Unlimited Detail 🙂, great!
[My comment added here: see the following quote from the Permutation city wiki page

The Autoverse is an artificial life simulator based on a cellular automaton complex enough to represent the substratum of an artificial chemistry. It is deterministic, internally consistent and vaguely resembles real chemistry. Tiny environments, simulated in the Autoverse and filled with populations of a simple, designed lifeform, Autobacterium lamberti, are maintained by a community of enthusiasts obsessed with getting A. lamberti to evolve, something the Autoverse chemistry seems to make extremely difficult.

Related explorations go on in virtual realities (VR) which make extensive use of patchwork heuristics to crudely simulate immersive and convincing physical environments, albeit at a maximum speed of seventeen times slower than “real” time, limited by the optical crystal computing technology used at the time of the story. Larger VR environments, covering a greater internal volume in greater detail, are cost-prohibitive even though VR worlds are computed selectively for inhabitants, reducing redundancy and extraneous objects and places to the minimum details required to provide a convincing experience to those inhabitants; for example, a mirror not being looked at would be reduced to a reflection value, with details being “filled in” as necessary if its owner were to turn their model-of-a-head towards it.

]

But I keep my claim that that’s enough to understand for 100 years. Consciousness is far away. Recall that at first electricity appeared as a kind of life fluid (from Volta to the Frankenstein monster), but actually it has been used with tremendous success for other things.
I believe the same about artificial chemistry, computation, on one side, and consciousness on the other. (But ready to change this opinion if faced with enough evidence.)

___________________________

# No extensionality, no semantics

Extensionality, seen as eta reduction, or eta conversion, is a move which can be described in GLC, but it is global, not local.

See this post for the description of the move. The move is called “ext1”. I borrow from that post the description of the move.

Move ext1.  If there is no oriented path from “2″ to “1″ outside the left hand side picture then one may replace this picture by an arrow. Conversely, if there is no oriented path connecting “2″ with “1″ then one may replace the arrow with the graph from the left hand side of the following picture:

Why is this move global? Because of the condition “there is no oriented path from “2″ to “1″ outside the left hand side picture”.  This condition involves an unbounded number of nodes and arrows, therefore it is global.
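To see this concretely: deciding the side condition is a reachability question over the whole graph, not over the two nodes of the rewrite site. A toy sketch (the adjacency-list encoding and the function names are assumptions of mine, not part of GLC):

```python
# Sketch of why the ext1 side condition is global: checking "no oriented
# path from port 2 to port 1 outside the pattern" is graph reachability,
# which may have to visit an unbounded number of nodes and arrows.

from collections import deque

def reachable(graph, src, dst):
    """Breadth-first search for an oriented path from src to dst."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

def ext1_applicable(graph):
    # the move is allowed only when "2" cannot reach "1" in the rest of the graph
    return not reachable(graph, "2", "1")

# a chain 2 -> a -> b -> 1 anywhere in the graph, however far away, blocks the move
g = {"2": ["a"], "a": ["b"], "b": ["1"]}
```

The blocking chain can pass through arbitrarily many nodes, which is exactly why no bounded, local pattern match can decide the condition.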

Further I shall make the usual comments:

• the move ext1, by itself,  can be applied to any GLC graph, not only to those graphs which represent lambda terms. In this respect the move ext1 is treated like the graphic beta move, which, in GLC, can be applied to any graph (for example to graphs which are used to encode knot diagrams, or other examples of graphs which are outside the lambda calculus sector of GLC, but nevertheless are interesting),
• if we restrict the move ext1 to lambda graphs then we have another problem: the condition “is a lambda graph” is global.

That is why the move ext1 is not considered to be part of GLC, nor will it be used in distributed GLC.

You may remark that GLOBAL FAN-OUT is a global move which is part of GLC. The answer to the implicit question is that chemlambda, which is a purely local graph rewriting system, solves this. Even if chemlambda and GLC are different formalisms, when it comes to universality they behave the same on lambda graphs, with the bonus on the chemlambda side of being able to prove that the use of GLOBAL FAN-OUT can be replaced by a succession of chemlambda moves.

This can’t be done for the move ext1.

And this is actually good, because it opens a very intriguing research path: how far can we go by using only locality?

That means in particular: how far can we go without semantics?

Because:

• extensionality brings functions
• functions bring types
• types bring the type police
• the type police brings semantics.

It looks like life (the basic workings of real lifeforms) goes very well:

• without semantics

In this respect distributed GLC is even more life-like than it looks at first sight.

# Computation with GLC actors: pure syntax, no semantics

The notion of computation from   Distributed GLC has the following properties:

• distributed
• asynchronous
• done by processing structure to structure (via graph rewrites)
• purely local
• uses an analogy with molecules and chemical reactions, but the spatial relations between the interacting molecules are replaced by actors interactions in a variant of the Actor Model.

Therefore:

• the model is not sequential, although it may do sequential computation, if constrained; however, the model does not need the idea of a sequential computation,
•  synchronization is not needed for the model, in fact any synchronization hypothesis would only cripple the power of the model,
• this model of computation does not need or use any  evaluation procedure, nor in particular evaluation strategies; the model does not rely on signals passing through gates, nor on the sender-receiver setting of Information Theory.  No names of variables, no values are used. Because the eta reduction is not part of the model,  no functions are used, needed, or make sense.
• this model of computation is not based on a  global picture. There is no global state, no global space,  no global time, no semantics.

More details:

The nodes of the distributed computation are the actors. The connectivity between the actors is not externally controlled or imposed. Instead the connectivity is the result, and changes because of the actors’ interactions.

Actors are not controlled externally, they are autonomous. The interactions between actors are purely local.

Any interaction with an external device, database, or program is done via a GLC actor.
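To make the actor picture concrete, here is a toy sketch in which each actor owns a fragment of a mol graph, and a BETA rewrite whose two nodes sit in different actors is carried out by an interaction involving exactly those two actors. The class and method names are my invention, and direct method calls stand in for the asynchronous messages of the real model; see arXiv:1312.4333 for the actual GLC actors formalism.

```python
# Toy sketch of two GLC-style actors performing one local interaction.
# Names and the interaction mechanism are invented for illustration;
# direct method calls stand in for asynchronous message passing.

class Actor:
    def __init__(self, name, nodes):
        self.name = name
        self.nodes = nodes  # the graph fragment this actor owns

    def step(self, others):
        # If we own an L node whose out-port matches the function port
        # of an A node owned by another actor, do BETA across the
        # actor boundary; only these two actors are involved.
        for n in list(self.nodes):
            if n[0] != "L":
                continue
            for other in others:
                for m in list(other.nodes):
                    if m[0] == "A" and m[1] == n[3]:
                        # L a x c , A c b out --> Arrow a out , Arrow b x
                        self.nodes.remove(n)
                        other.nodes.remove(m)
                        self.nodes.append(("Arrow", n[1], m[3]))
                        other.nodes.append(("Arrow", m[2], n[2]))
                        return True
        return False

a1 = Actor("a1", [("L", "1", "2", "c")])
a2 = Actor("a2", [("A", "c", "3", "4")])
a1.step([a2])
print(a1.nodes, a2.nodes)  # [('Arrow', '1', '4')] [('Arrow', '3', '2')]
```

No actor ever inspects the global graph: the rewrite is decided and performed using only the two fragments involved.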

_________________________________

# The front end visual system performs like a distributed GLC computation

In this post I want to explain why the Distributed GLC  model of computation can be seen as a proof of principle that it is possible to describe rigorously some complex functioning of the brain as computation.

If you are not aware that this is a problem, then please learn that whether what brains do is computation is very controversial. On one side there are rigorous notions of computation (expressed in terms of Turing machines, or in terms of lambda calculus, for example) which are used with full competence in CS. On the other side, in (some parts of) neuroscience the word “computation” is used in a non-rigorous sense, not because the neuroscience specialists are incapable of understanding computation in the rigorous CS sense, but because in real brains matters are far more complex than in paper computers. Nevertheless, (some) CS specialists believe (without much real evidence) that brains compute in the CS sense, and (some) neuroscience specialists believe that their vague notions of computation deserve to bear this name, even if they do not look like computation in the rigorous CS sense.

OK, I shall concentrate on a particular example which I think is extremely interesting.

In the article by Kappers, A.M.L.; Koenderink, J.J.; Doorn, A.J. van, Basic Research Series (1992), pp. 1-23,

Local Operations: The Embodiment of Geometry

the authors introduce the notion of the “Front End Visual System”. From section 1, quotes indexed by me with (1), (2), (3):

(1) “Vision […] is sustained by a dense, hierarchically nested and heterarchically juxtaposed tangle of cyclical processes.”

(2) In this chapter we focus upon the interface between the light field and those parts of the brain nearest to the transduction stage. We call this the “visual front end”.

(3) Of course, the exact limits of the interface are essentially arbitrary, but nevertheless the notion of such an interface is valuable.

• (2) is the definition of the front end
• (3) is a guard against a possible entry path of the homunculus in the brain
• (1) has this very nice expression, “dense tangle of cyclical processes”; I will come back to it!

Let’s pass to the main part of interest: what does the front end do? Quotes from section 1, indexed by me with (a), …, (e):

• (a) the front end is a “machine” in the sense of a syntactical transformer (or “signal processor”)
• (b) there is no semantics (reference to the environment of the agent). The front end merely processes structure
• (c) the front end is precategorical,  thus – in a way – the front end does not compute anything
• (d) the front end operates in a bottom up fashion. Top down commands based upon semantical interpretations are not considered to be part of the front end proper
• (e) the front end is a deterministic machine […]  all output depends causally on the (total) input from the immediate past.

Comments and reformulations, indexed by (I), … (IV)

• (I) the front end is a syntactical transformer, it processes structure [from (a), (b)]
• (II) there is no semantics [from (b)]; semantical interpretations are not part of the front end [from (d)]
• (III) the front end does not compute, in the sense that there is no categorical-like, diagram-chasing type of computing [not formulated in terms of signals processed by gates?] [from (c)]
• (IV) there is a clear mechanism, based on something like a “dense tangle of cyclical processes” which processes the total input (from the light field) from the immediate past [from (e) and (1)]

These (I)-(IV) are exactly the specifications of a distributed computation with GLC actors, namely:

• a distributed, asynchronous, rigorously defined computation
• based on local graph rewrites, which are purely syntactic transformers, a counterpart of both the “dense tangle of cyclical processes” and of “processes structure”
• there is no semantics, because there are no names or values which decorate the arrows of the GLC graphs, nor do they travel through the nodes of such graphs. There is no evaluation procedure needed for the computation with GLC actors
• the computation with GLC actors is done starting from an initial graph (structure), which may also use external constructs (the cores are equivalents of the light field which triggers chemical reactions in the retina, which are then processed by the front end)

This is no coincidence! One of the reasons for building GLC was exactly to make sense of the front end visual system.

In conclusion:

• yes, there is a way to rigorously describe what the front end does as computation in the CS sense, although
• this notion of computation has some unique features: no evaluation; graph-reduction based, asynchronous, distributed, purely local. No semantics needed, no global notions or global controllers, neither in space nor in time.

# Distributed GLC, discussion (II)

Continues from Distributed GLC, discussion (I), and contains further notes about the distributed computing model with GLC actors from GLC actors, artificial chemical connectomes, topological issues and knots, arXiv:1312.4333, written with Louis Kauffman.

The first part of this post concerns more explanations of the fact that we don’t need to use signal passing through gates in this model. This has been explained in Distributed GLC, discussion (I); here I want to insist by saying that an implication of this is that evaluation (in the sense used for expressions, terms, etc., in lambda calculus) is not needed for computation!

This has been already mentioned in an older post  I-don’t-always advantage of the chemical concrete machine . I borrow from it some parts:

Indeed, usually a FAN-OUT gate is something which has a variable as an input and two copies of it as an output. That is why FAN-OUT gates are not available in every model of computation, for example in quantum computing, where arbitrary states cannot be copied.

But if you don’t use variables (names) and there’s nothing circulating through the wires of your computer model, then you can use the FAN-OUT gate with impunity, on the condition that you have something which replaces the FAN-OUT behaviour without its bad sides. Consider graph rewriting systems for your new computer.

This is done in the chemical concrete machine, with the help of DIST enzymes and associated moves (chemical reactions). (“DIST” comes from distributivity.)

Btw, maybe it is good to write a few words about the common points and the differences between GLC and chemlambda.

• they are both graph rewriting systems
• the graphs they are using are the same, based on 4 trivalent nodes and one univalent one, only the drawing convention is different
• they are Turing universal, because both contain combinatory logic and untyped lambda calculus (without eta reduction!)
• but lambda calculus is just one part of the things they can both do!
• most of the graph rewrites (moves) are the same
• but there are some moves which are different: the FAN-IN move from chemlambda applies to a node which is called here “fan-in”, replacing the emergent algebra moves which apply to the node $\varepsilon$ from GLC. Also the GLOBAL FAN-OUT move from GLC is replaced by DIST moves in chemlambda
• therefore chemlambda has only local moves, while GLC has also some global moves
• moreover, the replacement of the GLOBAL FAN-OUT by DIST moves does make them different formalisms.
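To see why the DIST moves keep chemlambda purely local, here is a sketch of a DIST-style rewrite for an A node feeding a fan-out: one application touches exactly two nodes and duplicates the A node one step at a time, where GLC's GLOBAL FAN-OUT would copy a whole subgraph at once. The port ordering on the right-hand side is my guess for illustration; the real rules are on the chemlambda moves page linked at the head of this post.

```python
# Sketch of a DIST-style local move (right-hand-side port ordering is
# an assumption for illustration; see the chemlambda moves page):
#   A 1 2 c , FO c 3 4  -->  FO 1 p q , FO 2 r s , A p r 3 , A q s 4
# It replaces two nodes by four, duplicating the A node locally,
# where GLC's GLOBAL FAN-OUT would copy a whole subgraph at once.
from itertools import count

fresh = count(100)  # supply of fresh port names

def dist_a_fo(mol):
    for a in mol:
        if a[0] != "A":
            continue
        for fo in mol:
            if fo[0] == "FO" and fo[1] == a[3]:
                p, q, r, s = (str(next(fresh)) for _ in range(4))
                rest = [n for n in mol if n is not a and n is not fo]
                return rest + [("FO", a[1], p, q),
                               ("FO", a[2], r, s),
                               ("A", p, r, fo[2]),
                               ("A", q, s, fo[3])]
    return mol

g = dist_a_fo([("A", "1", "2", "c"), ("FO", "c", "3", "4")])
print(len(g))  # 4 nodes, all produced by one rewrite of two nodes
```

Repeated applications of such moves push the fan-out through a structure node by node, which is how the global copying effect is recovered from purely local steps.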

I hope that by now it is clear that lambda calculus is one sector (i.e. one part) of GLC and chemlambda and that, explicitly, both GLC and chemlambda can do other things than lambda calculus.

Mind you that I am not claiming that lambda calculus can’t do everything that GLC or chemlambda can (actually I simply think that this is a badly formulated statement). What is proved is that GLC and chemlambda can be applied to several formalisms; the contrary implications are still to be proved (but this is part of a larger discussion, like the one comparing the Actor Model of Hewitt with the Turing Machine, and so on, which I don’t want to enter into).

Now I am going to borrow something from an older post, in order to show you in a very short way that indeed, GLC and chemlambda are different.

The following pictures are taken from the post Metabolism of loops (a chemical reaction network in the chemical concrete machine)

With the drawing conventions from chemlambda, but using only the GLC moves, we see in the upper side of this figure the graph associated to the combinator $\Omega = (\lambda x . (xx)) (\lambda x . (xx))$. In lambda calculus, we may go on forever trying to reduce this. The figure shows the same: we turn in place after the application of a graphic beta move followed by a GLOBAL FAN-OUT.

This is the situation in GLC. Now let’s do the same in chemlambda:

There are multiple ways to perform the moves, and some of them can be done in parallel. There is no effort here to recast this into a GLC actors version; this is simply a drawing which tries to convey what can happen by applying moves, starting from the graph of $\Omega$.

But, rather amazingly, there is a way to get out of the loop of moves: look at the lower part of the figure. Because of the DIST and FAN-IN moves, we went out of the lambda calculus sector. We can “reduce” the $\Omega$ combinator to a pair of loops and a weird fan-out node with one exit arrow connected to its input arrow.

______________________

# Model of computation vs programming language in molecular computing

An interesting discussion (?) started in the comments of this John Baez G+ post; it concerns differences between the “model of computation” and “programming language” denominations (in that post, in particular, for Petri nets).

I reproduce here what I think are the relevant bits for this discussion and later, after possibly several updates, I shall try to write something useful by using these bits.

1. The Turing machine and lambda calculus are both models of computation, but lambda calculus is also a programming language, while the Turing machine is not.

2. Zachariah Hargis makes the point that this model of computation vs programming language distinction is related to the one made by Leibniz between calculus ratiocinator and lingua characteristica. (Among other references; note to self to explore further.)

3. Chemical reaction networks (CRNs) are one fundamental ingredient of molecular computing, no matter what formalization of CRNs is used. Don’t believe that all the “computational part” of a CRN is a Petri net (because it is often very important which molecules and reactions are concretely involved in the CRN, not only the abstract number of species and the reaction rates between them).

4. Petri nets, as used in chemistry, are a tool for getting quantitative information from CRNs, once the CRNs are designed by other means. Petri nets might be useful for thinking about CRNs, but they are not necessary for designing CRNs.

5. CRNs which are designed by using, or in some sense embody, lambda calculus are a much more interesting path towards a “programming language”, and a classical one (Fontana and Buss, Algorithmic Chemistry), than the more studied engineering-style implementation of imperative programming and the forced copy-paste of TM thinking into bio-computing.

6. Among the specifications for a “programming language for chemistry” are the following:

• (a) geometrical (in particular no use of currying, along with more obvious features such as being parallel and asynchronous, the latter being already achievable by many well-known, non-exotic programming languages),
• (b) purely syntactic (in particular no names management, nor input-output signalling),
• (c) (maybe an aspect of (b), but not sure)  no use of evaluation in computation (strategy).

(The chemical concrete machine satisfies these requirements and moreover it contains lambda calculus, thus showing that such a “language” is possible. However, the chemical concrete machine is based on a made-up, artificial chemistry, thus it provides only a proof of principle for the existence of such a “language”. Or is it? Biochemists’ help is needed to identify, or search for, real chemical reactions which could be used for implementing the chemical concrete machine in reality.)

7. The problem of tacit axioms in the history of the Church-Turing thesis might be especially relevant for biochemists, and it could also be mentioned as an argument in favour of making a distinction between “model of computation” and “programming language”: any model of computation uses some tacit axioms, while a programming language does not (such tacit axioms are used only at the semantic level, when making (human) sense of the results and the ways of functioning of the programming language). For biochemists, not using such tacit axioms is a must when they try to find scientifically valid explanations. CS does well when ignoring these, in most cases.

___________________________