Tag Archives: distributed computing

As a handful of slime mold. That’s how smart the whole net is.

I record here, to be sure that it is read, an explanatory message I wrote today.

Before or after that, you may want to check the new chemlambda demos, or look at the older ones further down the page, in case you have not noticed them. There are also additions on the moves page. Most related to this post is the vision page.

“First, I have lots of examples which are hand drawn. Some of them appeared for theoretical reasons, but the bad thing used to be that it was too complex to follow on paper (or to be sure I followed correctly) what was happening. From the moment I started to use the mol file notation (aka g-patterns), it was as if I suddenly became able to follow in much more depth. For example, that’s how I saw the “walker”, the “ouroboros” and finally the first quine, by reducing the graph associated with the predecessor (as written in lambda calculus). Finally, I am now able to see what happens dynamically.
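For readers who have not met the notation: a mol file is plain text, one node per line, each line being a node type followed by the labels of the edges attached to its ports. As a hedged illustration (the port order within a line is an assumption of this example, not necessarily the official convention), the graph of the self-application λx.(x x) could be written:

```
L 5 1 2      # lambda node: body edge 5, variable edge 1, term edge 2 (assumed order)
FO 1 3 4     # fanout: shares the variable edge 1 as edges 3 and 4
A 4 3 5      # application node: function 4, argument 3, output 5
FROUT 2      # the root of the term is a free out edge, capped by the script
```

(The # annotations are for the reader and are not part of the format.)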
However, all these examples are far from where I would like to be now; they correspond only to the introduction to the matter. But with every “technical” improvement, new things appear. For example, I was able to write a program which generates, one by one, each of the roughly 1000 initial graphs made of 2 nodes, and to follow their reduction behaviour for a while, then study the interesting specimens. That is how I found all kinds of strange guys: left or right propagators, switchers, back propagators, all kinds of periodic graphs (with period > 1).
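A minimal sketch of such a generator, assuming only that the listed node types are 3-valent and that a mol line is a node type followed by port labels (exact valences and port orientations are left aside); it over-generates relabelled duplicates, which a canonical form would have to filter out:

```python
from itertools import combinations_with_replacement, product

NODE_PORTS = {"A": 3, "L": 3, "FI": 3, "FO": 3, "FOE": 3}  # assumed 3-valent node types

def two_node_graphs():
    """Enumerate candidate 2-node mol files. Each port gets an edge label;
    a label used twice is an internal edge, a label used once stays free."""
    graphs = []
    for t1, t2 in combinations_with_replacement(sorted(NODE_PORTS), 2):
        n = NODE_PORTS[t1] + NODE_PORTS[t2]
        for labels in product(range(1, n + 1), repeat=n):
            if any(labels.count(x) > 2 for x in labels):
                continue  # an edge joins at most two ports
            graphs.append(" ".join([t1] + [str(x) for x in labels[:NODE_PORTS[t1]]])
                          + "\n"
                          + " ".join([t2] + [str(x) for x in labels[NODE_PORTS[t1]:]]))
    return graphs

print(len(two_node_graphs()))  # over-counts: many entries are relabelings of each other
```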
So my main inputs are mol files of graphs which I draw by hand, or which I identify as interesting inside bigger mol files. This is a limitation.
Another input would be a program which turns a lambda term into a mol file. The algorithm is here.
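The linked algorithm is the authoritative one; what follows is only a hedged Python reconstruction, assuming that an abstraction becomes an L node, an application an A node, a variable used several times is shared through a chain of FO nodes, and an unused bound variable is capped with a T node (port order per line as in the illustration above):

```python
from dataclasses import dataclass

@dataclass
class Var: name: str
@dataclass
class Lam: var: str; body: object
@dataclass
class App: fun: object; arg: object

def count(t, name):
    """Number of occurrences of the bound variable `name` in term t."""
    if isinstance(t, Var): return 1 if t.name == name else 0
    if isinstance(t, App): return count(t.fun, name) + count(t.arg, name)
    return 0 if t.var == name else count(t.body, name)

def to_mol(term):
    lines, ctr = [], [0]
    def fresh():
        ctr[0] += 1
        return str(ctr[0])
    def go(t, env):
        # env maps a bound variable to the stack of edges carrying its copies
        if isinstance(t, Var):
            return env[t.name].pop()
        if isinstance(t, App):
            f, a, out = go(t.fun, env), go(t.arg, env), fresh()
            lines.append(f"A {f} {a} {out}")
            return out
        v, out, uses = fresh(), fresh(), count(t.body, t.var)
        if uses == 0:
            occ = []
            lines.append(f"T {v}")            # unused variable: terminate its edge
        elif uses == 1:
            occ = [v]
        else:
            occ, src = [], v
            for _ in range(uses - 1):         # a chain of FO nodes shares the edge
                left, rest = fresh(), fresh()
                lines.append(f"FO {src} {left} {rest}")
                occ.append(left)
                src = rest
            occ.append(src)
        body = go(t.body, {**env, t.var: occ})
        lines.append(f"L {body} {v} {out}")   # assumed port order: body, variable, term
        return out
    lines.append(f"FROUT {go(term, {})}")     # cap the root as a free out edge
    return lines

print("\n".join(to_mol(Lam("x", App(Var("x"), Var("x"))))))  # the graph shown above, lines in another order
```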
Open problem: pick a simplified programming language based on lambda calculus and then write a “compiler”, better said a parser, which turns it into a mol file. Then run the reduction with your favourite algorithm, for the moment the “stupid” deterministic and the “stupid” random ones.
While this seems feasible, it is a hard project for my programming capabilities (I’d say more because of my bad character: if I don’t succeed fast, then I procrastinate in that direction).
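To make the “stupid” algorithms concrete, here is a hedged Python sketch of the random one, implementing only the beta move under the port conventions assumed above (an L node whose term edge feeds the function port of an A node is replaced, together with the A node, by two Arrow nodes); the deterministic variant just takes the first match instead of a random one:

```python
import random

def parse(mol_text):
    return [line.split() for line in mol_text.strip().splitlines() if line.strip()]

def beta_matches(mol):
    """Pairs (i, j) such that the term edge mol[i][3] of the L node mol[i]
    is the function edge mol[j][1] of the A node mol[j]."""
    return [(i, j) for i, m in enumerate(mol) if m[0] == "L"
                   for j, n in enumerate(mol) if n[0] == "A" and n[1] == m[3]]

def beta(mol, i, j):
    (_, body, var, _), (_, _, arg, out) = mol[i], mol[j]
    rest = [m for k, m in enumerate(mol) if k not in (i, j)]
    # the body edge is wired to the application's output,
    # the argument edge is wired to the variable edge
    return rest + [["Arrow", body, out], ["Arrow", arg, var]]

def reduce_mol(mol, randomized=True, max_steps=100):
    for _ in range(max_steps):
        hits = beta_matches(mol)
        if not hits:
            break
        mol = beta(mol, *(random.choice(hits) if randomized else hits[0]))
    return mol

EXAMPLE = """\
L 1 1 2
A 2 3 4
FRIN 3
FROUT 4
"""  # the identity applied to a free input

print(reduce_mol(parse(EXAMPLE)))  # the Arrows chain the free input 3 to the output 4
```

The real scripts also apply the other moves (DIST, FAN-IN, COMB and so on) and eliminate the Arrow nodes; this sketch stops at beta.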
It is tricky to see how a program written in that hypothetical simple language reduces as a graph, for two reasons:
  1. it has to be pure lambda-beta, but not eta (so functions do not behave quite as expected, because of the lack of eta)
  2. the reductions in chemlambda do not parallel any reduction strategy I know of for lambda calculus.
Point (2) makes things interesting to explore as a side project; let me explain. So there is an algorithm for turning a lambda term into a mol file. But from that point on, the reductions of the graph represented by the mol file give other graphs which typically do not correspond to lambda terms. However, magically, if the initial lambda term can be reduced to a lambda term where no further reductions are possible, then the final graph from the chemlambda reduction does correspond to that term. (I don’t have a proof of that: even if I can prove that, when restricted to lambda terms written only with combinators, chemlambda can parallel the beta reduction, reduce the basis combinators and fan out a term, it does not follow that the previous statement is true for any term, exactly because during chemlambda reductions one gets out of the realm of lambda terms.)
In other cases, like that of the Y combinator, a different phenomenon happens. Recall that in chemlambda there is no variable or term which is transmitted from here to there; there is nothing properly corresponding to evaluation. Within chemlambda, though, the behaviour of the Y combinator is crystal clear. The graph obtained from Y as a lambda term, connected to an A node (application node), starts to reduce even if there is nothing connected to the other leg of the A node. This graph reduces in chemlambda to just a pair of nodes, A and FO, which then starts to shoot pairs of A and FOE nodes (there are two fanout nodes, FO and FOE). The Y is just a gun.
Then, if something else is connected to the other leg of the application node I mentioned, the following phenomenon happens. The other graph starts to self-multiply (due to the FOE node of the pair shot by Y) and, simultaneously, the part of it which has already self-multiplied starts to react with the application node A of the pair.
This makes the use of Y spectacular but also problematic, due to its exuberance. That is why, for example, even if the lambda term for the Ackermann function reduces correctly in chemlambda (or see this 90 min film), I have not yet found a lambda term for the factorial which does the same (the behaviour of Y has to be tamed: it produces too many opportunities to self-multiply and it does not stop, although there are solutions for that, using quines).
So it is not straightforward that mol files obtained from lambda terms reduce as expected: because of the mystery of the reduction strategy, because the intermediary steps go outside lambda calculus, and because graphs corresponding to terms which are simply trashed in lambda calculus continue to reduce in chemlambda.
On the other hand, one can always modify the mol file, or pre-reduce it and use it as such, or even change the strategy of the algorithm represented by the lambda term. For example, recall that because of the lack of eta, the strategy based on defining functions and then calling them, or in general just currying stuff (for any reason other than future easy self-multiplication), is a limitation (of lambda calculus).
These lambda terms are written by smart humans who structure everything according to the principle: turn everything into a function, then use the function when needed. Looking at what happens in chemlambda (and presumably in a processor), most of this functional book-keeping is beautiful for the programmer, but a limitation from the point of view of the possible alternative graphs which do the same as the functionally designed one, only more easily.
This is of course connected to chemistry. We may stare at biochemical reactions, well known in detail, without being able to make sense of them, because things happen there by shortcuts discovered by evolution.
Finally, the main advantage of a mol file is that any collection of lines from a mol file is also a mol file. The free edges (those which are capped by the script with FRIN (i.e. free in) and FROUT (i.e. free out) nodes) are free as a property of the mol file where they reside. So if you split a mol file into several pieces and send them to several users, the pieces are still able to communicate, by knowing that this FRIN of user A is connected (by a channel? by an ID?) to that FROUT of user B. But this is for a future step, which would take things closer to real distributed computing.
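A hedged sketch of that property, with an assumed in/out pattern per node type (the real scripts know the true orientations): take any subset of lines, count the edge labels, and cap each label that occurs only once with FRIN or FROUT, depending on whether its single occurrence is at an in or an out port.

```python
from collections import Counter

# Assumed in/out pattern of the ports of each node type, in line order
DIR = {"L": "ioo", "A": "iio", "FI": "iio", "FO": "ioo", "FOE": "ioo",
       "T": "i", "Arrow": "io", "FRIN": "o", "FROUT": "i"}

def cap_free_edges(mol):
    """Any collection of mol lines is a mol file: cap each edge label that
    occurs only once with FRIN (its source is missing) or FROUT (its
    target is missing)."""
    seen = Counter(edge for line in mol for edge in line[1:])
    capped = list(mol)
    for line in mol:
        for port, edge in enumerate(line[1:]):
            if seen[edge] == 1:
                capped.append(["FRIN" if DIR[line[0]][port] == "i" else "FROUT", edge])
    return capped

piece = [["A", "2", "3", "4"]]   # a single line cut out of a bigger mol file
print(cap_free_edges(piece))     # the A line plus FRIN 2, FRIN 3, FROUT 4
```

Splitting is then just partitioning the lines: each piece, capped this way, is a valid mol file, and the bookkeeping for distribution is a table pairing a FRIN of one piece with the corresponding FROUT of another.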
And to really close: if we put on every computer linked to the internet a molecule, then the whole net would be as complex (counting the number of molecules) as a mouse. But that’s not fair, because the net is not as structured as a mouse. Say, as a handful of slime mold. That’s how smart the whole net is. Now, if we could make it smarter than that, by something like a very small API and a mol file as big as a cookie…”
_________________________________________________________________________

FAQ: chemlambda in real and virtual worlds

Q1. Is chemlambda different from previous models, like the algorithmic chemistry of Fontana and Buss, or the CHAM of Berry and Boudol?

A1. Yes. In chemlambda we work with certain graphs made of nodes (atoms) and bonds (arrows); call such a graph a molecule. Then:

  • (A) We work with individual molecules, not with populations of molecules. The molecules encode information in their shape, not in their number.
  • (B) Differently from algorithmic chemistry, application and abstraction are atoms of the molecules.
  • (C) There are no variables in chemlambda and there is no need to introduce one species of molecules per variable, like in the previous models.
  • (D) Chemlambda and its associated computing model (distributed GLC) work well in a decentralized world: there is no need for a global space or a global time for the computation.

There are a number of more technical differences, including (non-exhaustively):

  • (E) Molecules are not identified with their functions. Technically, chemlambda rejects eta reduction, so even those molecules which represent lambda terms are not identified with their function (as happens when we use eta reduction). This calls for an “extreme” functional programming style.
  • (F) Only a small part of the chemlambda molecules correspond to lambda terms (there is a lambda calculus “sector”).
  • (G) There is no global semantics.

__________________________________________________
Q2. Is chemlambda a kind of computing with chemical reaction networks (CRNs)?

A2. No. Superficially there are resemblances, and indeed one may imagine CRNs based on chemlambda, but this is not what we are doing.

__________________________________________________

Q3. Why do you think chemlambda has something to tell about the real or even about the living world?

A3. Because the real world, in its fundamental workings, does not seem to care about the 1D language-based constructs which we cherish in our explanations. The real, and especially the living, world seems to be based on local, asynchronous interactions which are much more like signal transduction and much less like information passing. (See How is different signal transduction from information theory?)
Everything happens locally, in nontrivial but physical ways, leading to emergent complex behaviours. Nature does not care about coordinates and names of things or abstractions, unless they are somehow physically embodied. This is the way chemlambda functions.

__________________________________________________
Q4. Why do you think chemlambda has something to say about the virtual  world of the Net?

A4. Because it puts the accent on alife instead of AI, on decentralization instead of pyramidal constructs. A microbial-ecology-like internet is a much more realistic hope than one based on benevolent pyramidal AI constructs (be they clouds or other corporate constructs). And because real and virtual worlds are consistent only locally.

__________________________________________________
Q5. What about the Internet of Things?

A5. We hope to give the IoT the role of a bridge which unites the two kinds of computation, real and virtual, under the same chemistry.

__________________________________________________
Q6. What would happen in your dream world?

A6. There is already some (fake) news about it here: what if

__________________________________________________


Autodesk releases SeaWater (another WHAT IF post)

[This is another WHAT IF post which responds to the challenge formulated in Alife vs AGI. You are welcome to suggest another one or to make your own.]

The following is a picture of a random splash of sea water, magnified 25 times [source]

[image: scoop of sea water, magnified]

Equally, it could be just a representation of the state of the IoT in a small neighbourhood of yours, according to the press release describing SeaWater, the new product of Autodesk.

“SeaWater is a design tool for the artificial-life-based decentralized Internet of Things. Each of the tiny plankton beings which appear in the picture is actually a program, technically called a GLC actor. Each plankton being has its own umwelt, its own representation of the medium which surrounds it. Spatially close beings in the picture share the same surroundings, and thus they can interact. Likewise, the tiny GLC actors interact locally with one another, not in real space, but on the Net. There is no real space on the Net; instead, SeaWater represents actors closer together when they do interact.

SeaWater is a tool for Net designers. We humans are visual beings. A lot of our primate brain’s powers can be harnessed for designing the alife decentralized computing which forms the basis of the Internet of Things.

It greatly improves on the primitive tools which give things like this picture [source]

[image: ack_6]


Context. Recall that the IoT is only a bridge between two worlds: the real one, where life is ruled by real chemistry, and the artificial one, based on some variant of an artificial chemistry, aka chemlambda.

As Andrew Hessel points out, life is a (chemically based) programming language, and so is the virtual world. They are the same, sharing the same principles of computation. The IoT is a translation tool which unites these worlds and lets them be one.

This is the far-reaching goal. But in the meantime we have to learn how to design this. Imagine that we could import real beings, say microbes, into our own unique Microbiome OS. There is no fundamental difference between synthetic life, artificial life and real life, at least at this bottom level.

Instead of aiming for human or superhuman artificial intelligence, the alife decentralized computing community wants to build a world where humans are not treated like Bayesian units by pyramidal centralized constructs. There is an immense computing power already at the bottom of alife, where synthetic biology offers many valuable lessons.

______________________________

UPDATE. This is real: Autodesk Builds Its Own Virus, as the Software Giant Develops Design Tools for Life Itself.

Why Distributed GLC is different from Programming with Chemical Reaction Networks

I use the occasion to bookmark the Azimuth post Programming with Chemical Reaction Networks, most of all because of its beautiful bibliography, which contains links to great articles that can be freely downloaded. Thank you John Baez for putting such an essential list of articles in one place.

Also, I want to explain very briefly why CRNs are not used in Distributed GLC.

Recall that Distributed GLC is a distributed computing model based on an artificial chemistry called chemlambda, itself a (slightly different) variant of graphic lambda calculus, or GLC.

There are two stages of the computation:

  1. Define the initial participants in the computation, each one called an “actor”. Each actor is in charge of a chemlambda molecule. Molecules of different actors may be connected, each such connection being interpreted as a connection between the actors. If we put together all molecules of all actors, then we can glue them into one big molecule. Imagine this big molecule as a map of the world and the actors as countries, each coloured with a different colour. Neighbouring countries correspond to connected actors. This big molecule is a graph in the chemlambda formalism. The graph which has the actors as nodes and the neighbouring relations as edges is called the “actors diagram”, and it is a different graph from the big molecule graph.
  2. Each actor has a name (like a mail address, or like the colour of a country). Each actor knows only the names of the neighbouring actors. Moreover, each actor behaves only as a function of the molecule it has and of what it knows about its neighbouring actors’ behaviour. From this point on, the proper part of the computation, each actor is on its own. So from here we use the point of view of the Actor Model of Carl Hewitt, not that of Process Algebra. (See Actor model and process calculi.) Each actor has 5 behaviours, most of them consisting of graph rewrites of its own molecule or between molecules of neighbouring actors. These graph rewrites are like chemical reactions between molecules and enzymes, one enzyme per graph rewrite. Finally, the connections between actors (may) change as a result of these graph rewrites. (A minimal sketch of such an actor follows this list.)
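As a minimal sketch of the data each actor holds (Python, with hypothetical field names; neighbours are known by name only, which is the whole point of locality):

```python
from dataclasses import dataclass, field

@dataclass
class GLCActor:
    name: str                                     # like a mail address, or the colour of a country
    molecule: list                                # the mol lines this actor is in charge of
    neighbours: set = field(default_factory=set)  # names of neighbouring actors, nothing more

    def step(self, inbox):
        """One round of the computation: graph rewrites internal to the own
        molecule, or rewrites negotiated with a neighbour via messages; the
        5 behaviours of the model would be dispatched here (omitted)."""
        ...
```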

That is the Distributed GLC model, very briefly.

It is different from programming with CRNs for several reasons.

1. Participants in the computation are individual molecules. This may be unrealistic for real chemistry and lab measurements of chemical reactions, but that is not the point, because the artificial chemistry chemlambda is designed to be used on the Internet. (However, see the research project on a single-molecule quantum computer.)

2. There is no explicit stochastic behaviour. Each actor in charge of its molecule behaves deterministically. (Or not: nothing stops the model from being modified by introducing some randomness into the behaviour of each actor, but that is not the point here.) There are no huge numbers of actors, nor any averaged behaviour of such numbers.

That is because of point 1 (we stay at the level of individual molecules and their puppeteers, the actors) and also because we use the Actor Model style, not Process Algebra.

So there is an implicit randomness, coming from the fact that the computation is designed Actor Model style, i.e. such that it may work differently depending on the particular physical timing of the messages sent between actors.

3. The computation is purely local. It is also decentralized. There is no corporate point of view counting the number of identical molecules, or their proportion, in a global-space, global-time solution. This is reasonable from the point of view of a distributed computation over the Internet.

__________________________________

All this being said, of course it would be interesting to see what happens with CRNs of reactions of chemlambda molecules. It may be very instructive, but it would be a different model.

That is why Distributed GLC does not use the CRN point of view.

__________________________________

What is new in distributed GLC?

We have seen that several parts or principles of distributed GLC are well anchored in previous, classical research; there are three such ingredients.

There are several new things, which I shall try to list.

1. It is a clear, mathematically well formulated model of computation. There is a preparation stage and a computation stage. In the preparation stage we define the “GLC actors”; in the computation stage we let them interact. Each GLC actor interacts with others, or with itself, according to 5 behaviours. (Not part of the model is the choice among behaviours, if several are possible at the same moment. The default is to impose on the actors to first interact with others (i.e. behaviours 1, 2, in this order) and, if no interaction is possible, to proceed with the internal behaviours 3, 4, in this order. As for behaviour 5, the interaction with external constructs, this is left to particular implementations. A sketch of this default appears below.)
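A hedged sketch of that default, with hypothetical behaviour functions that report whether they managed to apply:

```python
def default_step(actor, behaviours):
    """behaviours = [b1, b2, b3, b4]: first the interactions with other
    actors (behaviours 1, 2, in this order), then the internal behaviours
    (3, 4, in this order); behaviour 5, the interaction with external
    constructs, is left to particular implementations."""
    for b in behaviours:
        if b(actor):       # a behaviour returns True when it applied
            return True
    return False           # nothing applicable: the actor stays put
```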

2. It is compatible with the Church-Turing notion of computation. Indeed, chemlambda (and GLC) are universal.

3. Evaluation is not needed during the computation (i.e. in stage 2). This is the embodiment of the “no semantics” principle. The “no semantics” principle actually means something precise; it is a positive thing, not a negative one. Moreover, the dissociation between computation and evaluation is new in many ways.

4. It can be used for doing functional programming without the eta reduction. This is a more general form of functional programming, in fact so general that it does not use functions. That is because the notion of a function makes sense only in the presence of eta reduction.

5. It has no problem going, at least apparently, outside the Church-Turing notion of computation. This is not a vague statement; it is a fact, meaning that GLC and chemlambda have sectors (i.e. parts) which are used to represent lambda terms, but also sectors which represent other formalisms, like tangle diagrams or, in the case of GLC, also emergent algebras (which are the most general embodiment of a space having a very basic notion of differential calculus).

__________________________________________

All these new things are also weaknesses of distributed GLC because they are, apparently at least, against some ideology.

But the very concrete formalism of distributed GLC should counter this.

I shall use the same numbering for enumerating the ideologies.

1. Actors à la Hewitt vs Process Calculi. The GLC actors are like the Hewitt actors in this respect. But they are not as general as Hewitt actors, because they cannot behave just anyhow. On the other hand, it is not very clear whether they are Hewitt actors, because there is no precise correspondence between what an actor can do and what a GLC actor can do.

This is an evolving discussion. It seems that people have great difficulty coping with distributed, purely local computing without jumping to the use of global notions of space and time. On the other hand, biologists may have an intuitive grasp of this (unfortunately, they are not very much in love with mathematics, but this is changing fast).

2. Distributed GLC is a programming language vs a machine. Is it a computer architecture or a software architecture? Neither. Both. Here the biologists are almost surely lost, because many of them (except those who believe that chemistry can be used for lambda calculus computation) think in terms of logic gates when they consider computation.

The preparation stage, when the actors are defined, is essential. It resembles choosing the right initial condition in a computation with automata. But it is not the same, because there is no lattice, grid, or preferred topology of cells on which the automaton runs.

The computation stage does not involve any molecule-collision mechanism, be it stochastic or deterministic. That is because the computation is purely local, which means in particular that (if well designed in the first stage) it evolves without needing stochastic or lattice support. During the computation the states of the actors change, and the graph of their interactions changes, in a way which is compatible with being asynchronous and distributed.

That is why those working in artificial chemistry may feel lost here: the model is not stochastic.

There is no chemical reaction network which orchestrates the computation, simply because a CRN is a GLOBAL notion, and so not really needed. This computation is concurrent, not parallel (because parallelism needs a global simultaneity relation to make sense).

In fact there is only one molecule which is reduced, therefore distributed GLC looks more like an artificial one-molecule computer (see C. Joachim, Bonding More Atoms Together for a Single Molecule Computer). Only it is not a computer, but a program which reduces itself.

3. The no semantics principle is against a strong ideology, of course. The fact that evaluation may not be needed for computation is outrageous (although it might cure the cognitive dissonance in functional programming concerning “side effects”; see Another discussion about math, artificial chemistry and computation).

4. Here we clash with functional programming, apparently. But I hope only superficially, because actually functional programming is the best ally; see Extreme functional programming done with biological computers.

5. Claims about going outside the Church-Turing notion of computation are very badly received. But when it comes to distributed, asynchronous computation, things are much less clear. My position here is simply that there are very concrete ways to do geometric or differential-like “operations” without having to convert them first into a classical computational frame (and the onus is on the classical computation guys to prove that they can do it, which, as a geometer, I highly doubt, because they don’t understand, or neglect, space; and then the distributed asynchronous aspect comes and hits them when they expect it least).

______________________________________________

Conclusion: distributed GLC is great and has big potential; come and use it. Everybody interested knows where to find us. Internet of Things? Decentralized computing? Maybe cyber-security? You name it.

Moreover, there is a distinct possibility to use it not on the Internet, but in the real physical world.

______________________________________________

Is the Seed possible?

Is the Seed possible? Neal Stephenson, in the book The Diamond Age, presents the idea of the Seed, as opposed to the Feed.

The Feed is a hierarchical network of pipes and matter compilers (much like an Internet of Things done not with electronics, but with nanotechnology, I’d say).

The Seed is a different technology. I selected some paragraphs from the book which describe the Seed idea.

“I’ve been working on something,” Hackworth said. Images of a nanotechnological system, something admirably compact and elegant, were flashing over his mind’s eye. It seemed to be very nice work, the kind of thing he could produce only when he was concentrating very hard for a long time. As, for example, a prisoner might do.
“What sort of thing exactly?” Napier asked, suddenly sounding rather tense.
“Can’t get a grip on it,” Hackworth finally said, shaking his head helplessly. The detailed images of atoms and bonds had been replaced, in his mind’s eye, by a fat brown seed hanging in space, like something in a Magritte painting. A lush bifurcated curve on one end, like buttocks, converging to a nipplelike point on the other.

CryptNet’s true desire is the Seed—a technology that, in their diabolical scheme, will one day supplant the Feed, upon which our society and many others are founded. Protocol, to us, has brought prosperity and peace—to CryptNet, however, it is a contemptible system of oppression. They believe that information has an almost mystical power of free flow and self-replication, as water seeks its own level or sparks fly upward—and lacking any moral code, they confuse inevitability with Right. It is their view that one day, instead of Feeds terminating in matter compilers, we will have Seeds that, sown on the earth, will sprout up into houses, hamburgers, spaceships, and books—that the Seed will develop inevitably from the Feed, and that upon it will be founded a more highly evolved society.

… her dreams had been filled with seeds for the last several years, and that every story she had seen in her Primer had been replete with them: seeds that grew up into castles; dragon’s teeth that grew up into soldiers; seeds that sprouted into giant beanstalks leading to alternate universes in the clouds; and seeds, given to hospitable, barren couples by itinerant crones, that grew up into plants with bulging pods that contained happy, kicking babies.

Arriving at the center of the building site, he reached into his bag and drew out a great seed the size of an apple and pitched it into the soil. By the time this man had walked back to the spiral road, a tall shaft of gleaming crystal had arisen from the soil and grown far above their heads, gleaming in the sunlight, and branched out like a tree. By the time Princess Nell lost sight of it around the corner, the builder was puffing contentedly and looking at a crystalline vault that nearly covered the lot.

All you required to initiate the Seed project was the rational, analytical mind of a nanotechnological engineer. I fit the bill perfectly. You dropped me into the society of the Drummers like a seed into fertile soil, and my knowledge spread through them and permeated their collective mind—as their thoughts spread into my own unconscious. They became like an extension of my own brain.

_________________________

Now, suppose the following.

We already have an Internet of Things, which would serve as an interface between the virtual world and the real world, so there is really not much difference between the two, in the specific sense that something in the former could easily produce effects in the latter.

Moreover, instead of nanotechnology, suppose that we are content with having, on the Net, an artificial chemistry which mirrors the real chemistry of the world, at least in its functioning principles:

  1.   it works in a decentralized, distributed way
  2. does not need an overlooking controller, because all interactions are possible only when there is spatial and temporal proximity
  3. it works without needing to have a meaning, a purpose, or to be in any other way oriented to problem solving
  4. does not need to halt
  5. inputs, processing and output have the same nature (i.e. just chemical molecules and their proximity-based interactions).

In this world, I see a Seed as a dormant, inactive artificial chemical molecule. When the Seed is planted (on the Net),

  1. it first grows into a decentralized, autonomous network (i.e. it starts to multiply, to create specialized parts, like a real seed which grows into a tree),
  2. then it starts computing (in the chemical sense: it starts to self-process its structure),
  3. and interacts with the real world (via the sensors and effectors available via the IoT) until it creates something in the real world.

_____________________

Notes:

  • clearly, the artificial chemistry I am thinking about is chemlambda
  • the working principles of this artificial chemistry are those of Distributed GLC

___________________________

How to plant a Seed (I)

In The Diamond Age there is the Feed and, towards the end, the Seed appears.

There are many places (though not as many as I expected) where this Seed idea of Neal Stephenson is discussed. Most of them discuss it in relation to the Chinese Ti yong way of life, following the context in which the author embeds the idea.

Some compare the Seed idea with open source.

For me, the Seed idea becomes interesting when it is put together with distributed, decentralized computing. How to make a distributed Seed?

If you start thinking about this, it makes even more sense if you add one more ingredient: the Internet of Things.

Imagine a small, inactive, dormant, virtual thing (a Seed) which is planted somewhere in the fertile ground of the IoT. After that it becomes active; it grows into a distributed, decentralized computation. Because it lives in the IoT, it can have effects in the physical world; it can interact with all kinds of devices connected to the IoT, and thus it can become a Seed in the sense of Neal Stephenson.

Chemlambda is a new kind of artificial chemistry, intended to be used in distributed computing, more specifically in decentralized computing. As a formalism it is a variant of graphic lambda calculus, aka GLC. See the page Distributed GLC for details of this project.

So I am thinking about how to plant a chemlambda Seed. Concretely, what could pass for a Seed in chemlambda, and in what precise sense can it be planted?

In the next post I shall give technical details.