# Local machines

Suppose there is a deep conjecture which haunts the imagination of a part of the mathematical community. By the common work of many, maybe even spread over several centuries and continents, slowly a solution emerges and the conjecture becomes a theorem. Beautiful, or at least horrendously complex, theoretical machinery is invented and put to the task. Populations of family members experience extreme boredom when faced with the answers to the question “what are you thinking about?”. Many others express a moderate curiosity about the weird preoccupations of those mathematicians, some, say, obsessed with knots or zippers or other childish activities. Finally, a constructive solution is found. This is very, very rare and much sought after, mind you, because once we have a constructive solution we may run it on a computer. So we do, perhaps for the immense benefit of the finance industry.

Now here is the weird part. No matter what programming discipline is used, no matter what the programmers’ preferences and beliefs are, the computer which runs the program is a local machine, which functions without any appeal to meaning.

Let me stop for a moment to explain what a local machine is. These things are well known, but maybe it is better to have them clearly in front of our eyes. Whatever happens in a computer is only a physically local modification of its state. If we look at the Turing machine (I’ll not argue about the fact that computers are not exactly TMs; let’s take this as a simplification which does not affect the main point), then we can just as well describe it as a stateless Turing machine, simply by putting the states of the machine on the tape and reformulating the behaviour of the machine as a family of rewrite rules acting on local portions of the tape. This is entirely possible, well known, and it has the advantage of working even if we add nothing more to the story: no moving heads, no indirection, no ingredient other than the fact that these rewrites are done randomly. Believe it or not (if not then read

Turing machines, chemlambda style
http://chorasimilarity.github.io/chemlambda-gui/dynamic/turingchem.html

for an example), but that is a computer, regardless of the technological complexities involved in actually making one.
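As a toy illustration of this point (entirely my own sketch, much simpler than the construction at the link above): the machine state lives on the tape as just another symbol, each transition becomes a rule rewriting a two-symbol window, and the rules are tried at random positions until nothing fires.

```python
import random

# A Turing machine recast as a stateless rewrite system: the machine
# state sits on the tape as just another symbol, and each transition
# becomes a rule that rewrites a two-symbol window of the tape.
# (The machine below is my own toy example: it walks right, flipping
# 1s to 0s, then halts on the blank "_".)
RULES = {
    ("q0", "1"): ("0", "q0"),    # over a 1: write 0, "move" right
    ("q0", "_"): ("HALT", "_"),  # blank reached: halt
}

def step(tape):
    """Try the rules at positions visited in random order;
    return True if some rewrite fired."""
    positions = list(range(len(tape) - 1))
    random.shuffle(positions)        # rewrites are attempted randomly
    for i in positions:
        if (tape[i], tape[i + 1]) in RULES:
            tape[i], tape[i + 1] = RULES[(tape[i], tape[i + 1])]
            return True
    return False                     # no more reaction possible

def run(tape):
    tape = list(tape)
    while step(tape):
        pass
    return tape

print(run(["q0", "1", "1", "1", "_"]))  # ['0', '0', '0', 'HALT', '_']
```

Because the state symbol occurs only once on the tape, at most one window matches at any moment, so the random order of attempts does not change the final tape: the computation reaches its answer with no global supervision at all.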

(this is an animation showing a harmonious interaction between a chemical molecule derived from a lambda term, in the upper part of the image, and a Turing machine whose tape is visible in the lower part of the image)

Let’s get back to the algorithmic form of the solution of the mathematical problem. On the theoretical side there is plenty of high-level meaning, discovered by a vast social collaboration.

But the algorithm run by the computer, in the concrete form in which it is run, edits out any such meaning. It is a well prepared initial tape (say “intelligently designed”, I hope you have taken your daily dose of humour), which is then stupidly, randomly, locally rewritten until there is no more reaction possible. That gives the answer.

If it is possible to advance a bit, even under this severe constraint of ignoring global semantics, then maybe we will find genuinely new stuff, which is not visible under all those decorations called “intelligent” or high-level.

# Replication, 4 to 9

In the artificial chemistry chemlambda there exist molecules which can replicate; they have a metabolism and they may even die. They are called chemlambda quines, but a convenient shorter name is: microbes.
In this video you see 4 microbes which replicate in complex ways. They are based on a simpler microbe whose life can be seen live (as a suite of d3.js animations) at [1].
The video was done by screencasting the evolution of the molecule 5_16_quine_bubbles_hyb.mol and with the script quiner_experia, all available at the chemlambda GitHub repository [2].

[1] The birth and metabolism of a chemlambda quine. (browsers recommended: safari, chrome/chromium)
chorasimilarity.github.io/chemlambda-gui/dynamic/A_L_eggshell.html

# 500: a year review at chorasimilarity, first half

Personal post triggered by the coincidence of the year’s end and a round number of posts here: 500.

I started this year with high hopes about the project described in the article GLC actors, artificial chemical connectomes, topological issues and knots. Louis Kauffman wrote in the introduction some unbelievably nice words about graphic lambda calculus:

Even more generally, the movement between graphs and algebra is part of the larger picture of the relationship of logical and mathematical formalisms with networks and systems that was begun by Claude Shannon in his ground-breaking discovery of the relationship of Boolean algebra and switching networks.

We believe that our graphical formulation of lambda calculus is on a par with these discoveries of Shannon. We hope that the broad impact of this proposal will be a world-wide change in the actual practice of distributed computing. Implemented successfully, this proposal has a potential impact on a par with the internet itself.

But in June we got news that the project “Secure Distributed Computing with Graphic Lambda Calculus” will not be funded by the NSF; see my comments in this post. We got good reactions from the reviewers on the theoretical side (i.e. what is described in the GLC actors article), but fair criticism of the IT part of the project. Another mistake was the branding of the project as oriented towards security.

I should have followed my initial plan, namely to start by writing simple tools, programs which ideally would also have some fun side, but which at least would allow the exploration of the formalism in much more detail than pen and paper allows. Instead of that, as far as this project is concerned, time was wasted from January to June, waiting for one puny source of funding before doing any programming work.

In parallel, a better trend appeared, one which I had not dreamed about two years earlier: artificial life. During the summer of 2013 I thought it was worth trying to get rid of a weakness of graphic lambda calculus, namely that it has one global graph rewrite, called GLOBAL FAN-OUT. That’s why I wrote the Chemical concrete machine article, describing a purely local graph rewrite formalism which was later renamed chemlambda. That was great, I even made an icon for it:

which is made of two lambda symbols, one of them upside down, to suggest that linear writing conventions are obsolete. The lambdas are arranged into a DNA-like double spiral, to suggest connections with life. (Of course that means I entered the alife field, but everything about it was so fresh for me. Later I learned about Alchemy and other approaches, mixing either lambda calculus with alife, or rewriting systems with alife, but not both, and surely not the way it is done in chemlambda, as abundantly documented on this blog.)

There were several novelties (personal ones as well) related to the article. One was that the article appeared on Figshare too, not only on arXiv. This relates to another subject which I follow, namely which means of dissemination of research to use, seeing that publishing academic articles is no longer enough, unless you want your work to be used only as a kind of fertilizer for future data mining projects.

The initial purpose of chemlambda was to be aimed at biologists, chemists, somewhere there, but apparently, almost certainly, if someone does computing then they will not do chemistry, and if they do chemistry then they have no idea about lambda calculus. I mean, there are exceptions, like being part of an already existing team which does both, which I am not. Anyway, initially I hoped for interactions with biochemists (I still do, looking for that rare bird who would dare to lower himself to talk with a geometer from a non-dominant country, motivated purely by the research content, and who would have the trivial idea of using his chemistry notions for something which is not in the scope of already existing projects).

Louis Kauffman and I set out to write a kind of continuation of the GLC actors article, this time concentrated on chemlambda, as seen from our personal interests. The idea was to participate in the biggest event in the field, the ALIFE 14 conference. Which we did: Chemlambda, universality and self-multiplication.

Putting together the failure of the NSF project and the turn towards alife, it was only natural that I set out to write more explanations of the formalism, like this series of 7 expository posts on chemlambda described with the help of g-patterns:

This was a bridge towards starting to write programs, that’s for later.

In parallel with other stuff, which I’ll explain in the second half.

_________________________________________________

# Walker eating bits and a comment on the social side of research

This post has two parts: the first part presents an experiment and the second part is a comment on the social side of research today.

Part 1: walker eating bits.  In this post I introduced the walker, which has been also mentioned in the previous post.

I made several experiments with the walker. I shall start by describing the most recent one, and then I shall recall the earlier ones (i.e. with links to posts and videos).

I use the chemlambda gui which is available for download from here.

What I did: first I took the walker and made it walk on a trail which is generated on the fly by a pair of A-FOE nodes. I knew previously that such an A-FOE pair generates a trail of A and FO nodes, because this is the basic behaviour of the Y combinator in chemlambda. See the illustration of this (but in an older version, which uses only one type of fanout node, the FO) here. Part of it was described in the pen-and-paper version in the ALIFE14 article with Louis Kauffman.

OK, if you want to see how the walker walks on the trail, then you first have to download the gui and then use it with the file walker.mol.

Then I modified the experiment in order to feed the walker with a bit.

A bit is a pair of A-FO nodes, which has the property that it is a propagator. See here the illustration of this fact.

For this I had to modify the mol file, which I did. The new mol file is walker_eating_bit.mol.

The purpose of the experiment is to see what happens when the walker is fed with a bit. Will it preserve its shape and spill out a residue on the trail? Or will it change and degenerate to a molecule which is no longer able to walk?

The answer is shown in the following two screenshots. The first one presents the initial molecule described by the walker_eating_bit.mol.

At the extreme right you see the pair A-FOE which is generating the trail (A is the green big node with two smaller yellow ones and a small blue one and the FOE is the big yellow node with two smaller blue ones and a small yellow one). If you feel lost in the notation, then look a bit at the description in the visual tutorial.

In the middle you see the walker molecule. At the right there is the end of the trail. The walker walks from left to right, but because the trail is generated from right to left, this is seen as if the walker stays in place and the trail at its left gets longer and longer.

OK. Now, I added the bit, I said. The bit is that other pair of two green nodes, at the right of the figure, immediately at the left of the A-FOE pair from the extreme right.

The walker is going to eat this pair. What happens?

I spare you the details and I show you the result of 8 reduction steps in the next figure.

You see that the walker has already passed over the bit, processed it and spat it out as an A-FOE pair. Then the walker continued to walk some more steps, keeping its initial shape.

GREAT! The walker has a metabolism.

Previous experiments.  If you take the walker on the trail and you glue the ends of the trail together, then you get a walker tchoo-tchooing along a circular trail. But wait: due to symmetries, this looks like a molecule which stays the same after each reduction step. Meaning it is a chemlambda quine. I called such a quine an ouroboros. In the post Quines in chemlambda you can see such an ouroboros, obtained from a walker which walks on a circular train track made of only one pair.
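A drastically simplified sketch of the ouroboros idea (everything in it is my own illustrative choice, not the actual chemlambda machinery): represent the state as the circular trail read off starting from the walker’s position, and let one hypothetical reduction step advance the walker past one A-FO pair. A quine is then a state that a reduction step maps to an identical-looking state.

```python
# Toy model of the ouroboros (illustrative only, not the chemlambda
# engine).  A state is the circular trail read off starting from the
# walker's position; one hypothetical reduction step advances the
# walker past one A-FO pair, i.e. rotates the reading by two nodes.
def step(state):
    return state[2:] + state[:2]

def is_quine(state):
    # a quine looks exactly the same after one reduction step
    return step(state) == state

print(is_quine(("A", "FO", "A", "FO")))  # True: a regular circular trail
print(is_quine(("A", "FO", "FO", "A")))  # False: an irregular one
```

The point of the toy: on a perfectly regular circular track the walker’s motion is invisible, which is exactly why the glued-up walker behaves as a quine.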

I previously fed the walker with a node L and a termination node T; see this post for a pen-and-paper description and this video for a more visual one, where the train track is made circular as described previously.

That’s it with the first part.

Part 2: the telling silence of the expert. The expert is no lamb in academia. He or she fiercely protects the small field of expertise where he or she is king or queen. As you know if you read this open notebook, I have the habit of changing research fields from time to time. This time, I entered the radar of artificial chemistry and graph rewriting systems, with an interest in computation. Naturally I tried to consult as many experts in these fields as possible. Here is the one and only contribution from the category theory church.  Yes, simply having a better theory does not trump racism and turf protection.  But fortunately not everything can be solved by good PR only. As it becomes more and more clear, the promotion of mediocrity in academia, consistently pursued since the ’70s, has devastating effects on academia itself. Now we have become producers of standardised units of research, and the rest is just the monkey business about who’s on top. Gone is the trust in science, gone are the funds, but hey, for some the establishment will still provide a good retirement.

The positive side of this big story, of which I only offer concrete, punctual examples, is that the avalanche facilitated by the open science movement (due to the existence of the net) will change academia forever. Not into a perfect realm, because there are no such items in the real world catalogue. But the production of scientific research in the old ways of churches and you-scratch-my-back-and-I’ll-scratch-yours is now exposed to more eyes than before, and soon enough we shall witness a phenomenon similar to the one that happened more than 100 years ago in art, where ossified academic art sank into oblivion and an explosion of creativity ensued, simply because academic painting was exhibited alongside alternative works (mixed with garbage, but much more creative) in the historical impressionist revolution.

______________________________________________

# List of Ayes/Noes of artificial chemistry chemlambda

List of noes

• distributed (no unique place, no external passive space)
• asynchronous (no unique time, no external global time)
• decentralized (no unique boss, no external acyclic hierarchy)
• no semantics (no unique meaning, no signal propagation, no values)
• no functions (not vitalism)
• no probability

List of ayes

__________________________________________________________

# Autodesk releases SeaWater (another WHAT IF post)

[ This is another  WHAT IF  post  which  responds to the challenge formulated in  Alife vs AGI.  You are welcome to suggest another one or to make your own.]

The following is a picture of a random splash of sea water, magnified 25 times [source]

It could just as well be a representation of the state of the IoT in a small neighbourhood of yours, according to the press release describing SeaWater, the new product of Autodesk.

“SeaWater is a design tool for the artificial life based decentralized Internet of Things. Each of the tiny plankton beings which appear in the picture is actually a program, technically called a GLC actor. Each plankton being has its own umwelt, its own representation of the medium which surrounds it. Spatially close beings in the picture share the same surroundings and thus they can interact. Likewise, the tiny GLC actors interact locally with one another, not in real space, but on the Net. There is no real space in the Net; instead, SeaWater represents actors closer together when they do interact.

SeaWater is a tool for Net designers. We humans are visual beings. A lot of our primate brain’s power can be harnessed for designing the alife decentralized computing which forms the basis of the Internet of Things.

It improves greatly on the primitive tools which give things like this picture [source]

Context. Recall that the IoT is only a bridge between two worlds: the real one, where life is ruled by real chemistry, and the artificial one, based on some variant of an artificial chemistry, such as chemlambda.

As Andrew Hessel points out, life is a programming language (chemically based), and so is the virtual world. They are the same, sharing the same principles of computation. The IoT is a translation tool which unites these worlds and lets them be one.

This is the far-reaching goal. But in the meantime we have to learn how to design this. Imagine that we may import real beings, say microbes, into our own unique Microbiome OS.  There is no fundamental difference between synthetic life, artificial life and real life, at least at this bottom level.

Instead of aiming for human or superhuman artificial intelligence, the alife decentralized computing community wants to build a world where humans are not treated like Bayesian units by pyramidal centralized constructs.  There is an immense computing power already at the bottom of alife, where synthetic biology offers many valuable lessons.

______________________________

# Alife vs AGI

Artificial general intelligence is, of course, at the top of the mind of some of the best or most interesting researchers. In the post Important research avenues on my mind, Ben Goertzel writes:

1. AGI, obviously … creating robots and virtual-world robots that move toward human-level general intelligence

….

5. Build a massive graph database of all known info regarding all organisms, focused on longevity and associated issues, and set an AI to work mining patterns from it… I.e. what I originally wanted to do with my Biomind initiative, but didn’t have the $ for…

6. Automated language learning — use Google’s or Microsoft’s databases of text to automatically infer a model of human natural languages, to make a search engine that really understands stuff.  This has overlap with AGI but isn’t quite the same thing…

7. I want to say femtotech as I’m thinking about that a fair bit lately but it probably won’t yield fruit in the next few years…

….

9. Nanotech-using-molecular-bio-tools and synthetic biology seem to be going interesting places, but I don’t follow those fields that closely, so I hope you’re pinging someone else who knows more about them…

I believe that 9 is far more likely to be achieved sooner than 1. I will explain why a bit later, after looking at the frame of mind which, I think, constrains this ordering.

AGI is the queen, the grail, something which almost everybody dreams of seeing. It is an old dream. Recent advances in cognition show that yes, we, natural general intelligence beings, are kind of robots with many, many processes going on in parallel in the background, all of them giving the feeling of reality. On top of all these processes are the ones related to consciousness and the high-level functioning of the brain. It is admirable to try to model those, but it is naive, and comes from an old way of seeing things, to believe that the other processes are somehow not as interesting, or not really needed, or simply too mechanical, anyway, not something which is a challenge. The reality is that we now know we don’t even have the right frame of mind to understand the functioning of those neglected, God-given processes.

So, that is why I believe that AGI is not realistic. Unless we concentrate on language, or other really puny aspects of GI, but ones with long traditions.

Btw, have I told you that whatever I write, I am always happy to be contradicted?

Points 5 and 6 look very probable indeed. They will be done by corporations, that is sure. Somehow the same thing lies behind both, namely the essence of the pyramidal way of thinking: with enough means, knowledge will accumulate at the top of the pyramid. (For point 1, intelligence is the top; for 5 and 6, corporations are on top, of course.)

As regards point 7, that starts to be genuinely new, and therefore less fashionable. The idea of a single-molecule quantum computer springs to mind. It should be better known. [See the comments at this G+ post.]

Several concepts are now under development for making a calculation with a single molecule:
1) forcing a molecule to look like a classical electronic circuit, but integrated inside the molecule;
2) dividing the molecule into “qubits” in order to exploit the quantum engineering developed over several years around quantum computers;
3) using intramolecular quantum dynamical behaviour without dividing the molecule into “qubits”, leading to a Hamiltonian quantum computer.

Now, to point 9!

It can be clearly done by a combination of decentralized computing with artificial chemistry.

In a future post I shall describe in detail, using also previous posts from chorasimilarity, what the ingredients are and what the arguments in favour of this idea are.

In this post I want to propose a challenge.  What I have in mind, rather vague but possibly fun, would be to develop through exchanges a “what if” world where, for example, not AI is the interesting thing about computers, but artificial biology. Not consciousness, but metabolism; not problem solving, but survival. It is also related to the IoT, which is a bridge between two worlds. Now, the virtual world could be as alive as the real one. Alive in the Avida sense, in the sense that it might be like a jungle, with self-reproducing, metabolic artificial beings occupying all virtual niches, beings designed by humans, for various purposes. The behaviour of these virtual creatures is not limited to the virtual, due to the IoT bridge.  Think: if I can play a game in a virtual world (i.e. interact both ways with a virtual world), then why can’t a virtual creature interact with the real world? Humans and social manipulations included.

If you start to think about this possibility, then it looks a bit like this. OK, let’s write such autonomous, decentralized, self-sustained computations to achieve a purpose. It may be any purpose which can be achieved by computation, be it secure communications, money replacements, or low-level AI city management. What stops others from writing their own creatures? One, for example, just for the fun of it, writes the name Justin across half of the world by building, at the right GPS coordinates, sticks with small mirrors on top, so that from orbit they all shine as the pixels of that name.  Recall the IoT bridge and the many effects in the real world which can be achieved by really distributed, but cooperative, computations and human interactions. Next: why not write a virus to get rid of all these distributed jokes of programs which run at a low level in all phones, antennas and fridges? A virus to kill those viruses. A super quick self-replicator to occupy as much as possible of the cheap computing capabilities. A killer of that. And so on. A seed, like in Neal Stephenson, only that the seed is not real but virtual, and it works not on nanotechnology but on any technology connected to the net via the IoT.

Stories? Comics? Fake news? Jokes? Should be fun!
_______________________________________________