Category Archives: IoT

Pharma meets the Internet of Things

Pharma meets the Internet of Things: some commented references for this future trend. Use them to understand it.

[0] After the IoT comes Gaia

There are two realms of computation which should, and will, become one: information technology and biochemistry.

General stuff

The notion of computation is now well known: we speak about what is computable and about various models of computation (i.e. how we compute), which have always turned out to be equivalent in the sense that they give the same class of computable things (that’s the content of the Church-Turing thesis).

It is interesting though how we compute, not only what is computable.

In IT perhaps the biggest (and socially relevant) problem is decentralized asynchronous computing. Until now there is no really working model of computation which is:
– local in space (decentralized)
– local in time (asynchronous)
– with no pre-imposed hierarchy or external authority which forces coherence

In biochemistry, it is known that we, like anything living, are molecular assemblies which work:
– local in space (all chemical interactions are local)
– local in time (there is no external clock which synchronizes the reactions)
– random (everything happens without any external control)

Useful links for an aerial view on molecular computing, seen as the biochemistry side of computation:


Some history and details are provided. Quote from the end of the section “Biochemistry-based information technology”:

“Other experiments have shown that basic computations may be executed using a number of different building blocks (for example, simple molecular “machines” that use a combination of DNA and protein-based enzymes). By harnessing the power of molecules, new forms of information-processing technology are possible that are evolvable, self-replicating, self-repairing, and responsive. The possible applications of this emerging technology will have an impact on many areas, including intelligent medical diagnostics and drug delivery, tissue engineering, energy, and the environment.”


A detailed historical view (written in 2000) of the efforts towards “molecular electronics”. Mind that it is not the same subject as [1], because the effort here is to use biochemistry to mimic silicon computers. While [1] also contains such efforts (building logical gates with DNA, etc.), DNA computing also proposes a more general view: building structure from structure, as nature does.


Two easy to read articles about real applications of molecular computing:
– “Microscopic machine mimics the ribosome, forms molecular assembly line”
– “Biological computer can decrypt images stored in DNA”


Article about Craig Venter from 2016, found by looking for “Craig Venter Illumina”. Other informative searches would be “Digital biological converter” or anything “Craig Venter”


Interesting talk by an interesting researcher Lee Cronin

[6] The Molecular Programming Project

Worth browsing in detail to see the various trends and results.

Sitting in the middle, between biochemistry and IT:

[1] Algorithmic Chemistry (Alchemy) of Fontana and Buss

Walter Fontana today:

[2] The Chemical Abstract Machine by Berry and Boudol

[3] Molecular Computers (by me, part of an Open Science project; see also my homepage and the chemlambda github page)

On the IT side there’s a beautiful research field, starting of course with lambda calculus by Church. Later on this evolved in the direction of rewriting systems, then graph rewriting systems. I can’t even begin to list all that has been done in this direction, other than:

[1] Y. Lafont, Interaction Combinators

but see as well Alchemy, which uses lambda calculus!

However, it would be misleading to reduce everything to lambda calculus. I came to the conclusion that lambda calculus and Turing machines are only two among the vast possibilities, and not very important ones. My experience with chemlambda shows that the most relevant mechanism revolves around the triple of nodes FI, FO, FOE and their rewrites. Lambda calculus is obtained by the addition of a pair of A (application) and L (lambda) nodes, along with the standard compatible moves. One might as well use nodes related to a Turing machine instead, as explained in

Everything works just the same. The center, what makes things work, is not related to Logic or Computation as they are usually considered. More later.
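To make the node vocabulary concrete, here is a minimal sketch in Python. Everything below is a hypothetical simplification for illustration, not the official chemlambda conventions: a node is a pair (kind, ports), two ports are wired when they carry the same edge name, and the beta move eliminates an L node whose third port meets the first port of an A node, reconnecting the four remaining edges pairwise.

```python
# A node is (kind, ports); two ports are wired when they carry the same
# edge name. Kinds: 'L' (lambda), 'A' (application), 'FI'/'FO'/'FOE'
# (the fan-in / fan-out family). The port order is a convention chosen
# for this sketch only.

def beta_rewrite(molecule):
    """Apply one beta move, if possible: an L node whose third port is
    wired to the first port of an A node disappears together with that
    A node, and the four remaining edges are reconnected pairwise.
    Purely local: only the two pattern nodes are inspected."""
    for i, (kind_l, (a, b, c)) in enumerate(molecule):
        if kind_l != 'L':
            continue
        for j, (kind_a, (c2, d, e)) in enumerate(molecule):
            if kind_a != 'A' or c2 != c:
                continue
            wiring = {b: d, a: e}   # argument flows to the bound variable,
                                    # the lambda's input to the output
            return [(k, [wiring.get(p, p) for p in ports])
                    for t, (k, ports) in enumerate(molecule)
                    if t not in (i, j)]
    return molecule
```

Note that nothing here checks that the graph "means" a lambda term; the rewrite fires on shape alone, which is the point.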

Mol language and chemlambda gui instead of html and web browsers gives a new Net service?

The WWW is an Internet system, based on the following ingredients:

  • web pages (written in HTML)
  • a (web) browser
  • a web server (because of the choice of client-server architecture)

Tim Berners-Lee wrote those programs. Then the WWW appeared and exploded.

The force behind this explosion comes from the separation of the system into independent parts. Anybody can write a web page, anybody who has the browser program can navigate the web, anybody who wants to make a web server needs basically nothing more than the program for that (and the previously existing infrastructure).

In principle it works because of the lack of control over the structure and functioning.

It works because of the separation of form from content, among other clever separations.

It is so successful, it is right under our noses, but apparently very few people think about the applications of the WWW ideas in other parts of the culture.

Separation of form from content means that you have to acknowledge that meaning is not what rules the world. Semantics has only a local, very fragile existence; you can’t go too far if you build on semantics.

Leave the meaning to the user: let the web client build his meaning from the web pages he can access via his browser. He can access and get the info because the meaning has been separated from the form.

How about another Net service, like the WWW, but which does something different, which goes to the roots of computation?

It would need:

  • artificial molecules instead of web pages; these are files written in a fictional language called “Mol”
  • a gui for the chemlambda artificial chemistry, instead of a web browser; one should think about it as a Mol compiler & gui
  • a chemical server which makes chemical soups, or broths, leaving the reduction algorithm to the users

This Mol language is an idea which holds some potential, but which needs a lot of pondering, because the “language” idea has bad effects on computation.
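To fix ideas, here is a guess at what a Mol file and its reader could look like. This is a sketch under my own assumptions: one node per line, the node type first, then the names of the edges at its ports (in the spirit of the mol files in the chemlambda repository, but the example molecule and the grammar details are invented for illustration).

```python
# An invented example molecule: three nodes, wired together wherever
# two ports carry the same edge name (a, b, c, d, e are edge names).
MOL_EXAMPLE = """
L a b c
A c d e
FO b a d
"""

def parse_mol(text):
    """Read a mol-style text into a list of (node_type, ports)."""
    molecule = []
    for line in text.splitlines():
        line = line.split('#')[0].strip()   # tolerate comments and blanks
        if line:
            node_type, *ports = line.split()
            molecule.append((node_type, ports))
    return molecule
```

A browser-like gui would then lay out this node list as a graph and animate its rewrites, just as a web browser renders an HTML file.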



Updates on the program about artificial chemistry and decentralized computing

Let me pass directly to the matter.

UPDATE: now there are demos in D3 for some of the things described here.

Where I’m now. I have a development of the initial chemlambda formalism (introduced in the Chemical concrete machine) which can be coupled with various algorithms for performing reductions, ranging from the most stupid

  • on one machine, run a program which advances in sequential steps, such that each step begins by finding all possible graph rewrites on a given molecule, then chooses, according to a criterion called “priority choice”, a maximal collection of graph rewrites which can be done simultaneously; after the application of all graph rewrites there is a second stage when the “COMB” moves are applied, which eliminate the supplementary Arrow elements

to more and more intelligent

  • distribute the stupid strategy to several machines, but keep a synchronous global control over it
  • use one or more machines which maintain channels of communication between them (which poses synchronization problems), i.e. use process algebra style models of computation over the chemlambda formalism
  • on one machine, use a Chemical Reaction Network model of computation by starting the computation with a multiset of molecules and then proceeding as in the stupid strategy, but with a random ingredient added, like for example choosing randomly a subset of the possible graph rewrites, or randomly allowing moves performed in the opposite direction. Produce probabilistic results about the distribution of the numbers of molecules which appear. This is of course an extremely expensive model, but because the stupid strategy is very cheap in terms of IT resources, maybe it works.
  • the same as before, but on many machines: a CRN style on each machine, a process algebra style between machines.
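As an illustration, the stupid strategy can be sketched in a few lines of Python. All names and data shapes here are my own assumptions for the sketch: a molecule is a list of (kind, ports) nodes, a candidate rewrite is a dict carrying the indices of the nodes it touches and a function applying it, and COMB removes the Arrow elements by identifying their two edges.

```python
def comb(molecule):
    """COMB stage: an ('Arrow', [a, b]) element only says that edges a
    and b are the same edge, so drop it and rename b to a everywhere,
    following chains Arrow a b, Arrow b c, ..."""
    rename = {ports[1]: ports[0] for kind, ports in molecule if kind == 'Arrow'}
    def resolve(edge):
        while edge in rename:
            edge = rename[edge]
        return edge
    return [(kind, [resolve(p) for p in ports])
            for kind, ports in molecule if kind != 'Arrow']

def step(molecule, find_rewrites, priority):
    """One sequential step of the stupid strategy: list every possible
    rewrite, greedily keep a maximal collection touching pairwise
    disjoint nodes (scanned in the order given by the priority choice),
    apply them all, then run the COMB cleanup."""
    chosen, used = [], set()
    for rw in sorted(find_rewrites(molecule), key=priority):
        if used.isdisjoint(rw['nodes']):
            chosen.append(rw)
            used |= set(rw['nodes'])
    for rw in chosen:
        molecule = rw['apply'](molecule)
    return comb(molecule)
```

The priority choice enters only as the sort key; swapping it for a random shuffle would give the random variant mentioned above.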

I claim that none of these models adds anything really interesting over the stupid model. They are like fashionable ornamentation over a bad, but popular design.

None of these additions use in a significant way the advantages of chemlambda, which are:

  • it is a purely local graph rewrite system
  • there are no correct molecules (graphs), nor wrong or illegal ones
  • the graphs are almost never DAGs, nor do they represent a flowchart of a computation
  • hence there is no global “meaning” associated to them
  • the formalism does not work by passing values from here to there (so why should one think of coupling chemlambda with something adapted to the sender-wire-receiver paradigm?)
  • the molecules encode a (local) meaning not by their number, but by their shape (therefore why would one want to use CRNs for these?)
  • the molecules are not identified with their function, so from the point of view of chemlambda it does not matter if you use a functional programming paradigm or an imperative one

I hold that on the contrary, chemlambda is really close to some of Nature’s workings:

  • Nature does not need meaning to work; this meaning is simply a human point of view, a hash table which simplifies our explanations. To believe that viruses and cells have tasks, or functions, or that they have a representation of themselves, or that they need one, is a sterile and confusing, but pervasive belief.
  • Nature is purely local, in the sense that everything happens by a chain (or a net, or whatever other analogy our weak minds, hungry for meaning, propose) of localized interactions (btw, this has little to do with the problem of what space is, and more to do with the one of what a degree of freedom is)
  • Nature does not use functions; functionalism is an old and outdated idea which fell into oblivion a long time ago in chemistry, but it is still used everywhere in the hard sciences, especially after the formalization à la Bourbaki (and other blinds), which significantly was incapable of touching more natural fields like geometry
  • Nature rarely uses sender-wire-receiver settings; these are, I suppose, the scars of WW2, when IT started to take shape.
  • Nature does not work by passing values, or numbers, or bits; we use these abstractions for understanding, and we build our computers like this
  • Nature does not have or need a global point of view; we do, in our explanations.

However, there is a social side of research which makes it interesting to pursue the exploration of these familiar models in relation to chemlambda. People believe these models are interesting, so it would be good to see exactly how chemlambda looks with these ornaments on top.

Now let’s pass to really interesting models.

Why put randomness by hand into the model when there is enough randomness in the real world?

Instead of CRNs and process algebra (with its famous parallel composition operation, which is of course just another manifestation of a global point of view), let us just try to understand how the world looks from the point of view of an individual molecule.

Forget about soups of multisets of anonymous molecules, let’s be individualistic. What happens with one molecule?

Well, it enters into chemical interactions with other molecules, depending on where it is relative to the others (oops, global pov), depending on many external sources of randomness. From time to time the molecule enters into interaction with another one and, when it does, the chemical reaction is like a graph rewrite on the molecule and the other, which may happen with some randomness as well but, more importantly, happens in certain definite ways, depending on the molecules’ chemical composition and shape.

That is more or less all, from the chemical point of view.

OK then, let’s ignore the randomness, because in the real world there are plenty of sources of randomness anyway, and let’s try to make a model of one individual molecule which behaves in some ways (does some graph rewrites on the complex formed by itself and the other molecule(s) from the chemical reaction).

In other words, let’s make an actor molecule.

Mind that from the point of view of one molecule there is no global state of the world.
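A minimal sketch of such an actor molecule, under my own assumptions (the class and message shape below are invented for illustration, not the API of the GLC actors article): each actor owns only its own subgraph and the names of a few neighbours, reacts to messages asynchronously, and never sees any global state.

```python
import random

class MoleculeActor:
    """One molecule seen as an actor: it owns a piece of graph, knows
    only the names of the actors sharing a cut edge with it, and has no
    access to a global state or a global clock."""
    def __init__(self, name, subgraph, neighbours):
        self.name = name
        self.subgraph = subgraph      # the nodes this actor owns
        self.neighbours = neighbours  # names of adjacent actors
        self.mailbox = []             # asynchronous, unordered messages

    def receive(self, message):
        self.mailbox.append(message)

    def react(self, send):
        """Handle one randomly chosen pending message. As a toy
        'reaction' the actor just absorbs the subgraph it was sent; a
        real actor would attempt a graph rewrite across the shared edge
        and send back the rewired half."""
        if not self.mailbox:
            return
        message = self.mailbox.pop(random.randrange(len(self.mailbox)))
        self.subgraph = self.subgraph + message['subgraph']
        for other in self.neighbours:
            send(other, {'from': self.name, 'subgraph': []})
```

Run on many machines, with send implemented over the network, this needs no synchronization: reactions happen whenever messages happen to arrive, which is exactly the local-in-space, local-in-time, random regime described above.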

This is what is proposed in section 3 of the article GLC actors, artificial chemical connectomes, topological issues and knots. Not for chemlambda, but for GLC, its distant parent.

What is the relevance? You can use this for several purposes.

  • at once, this gives a decentralized computing system based on artificial chemistry which is very close to Nature’s way, therefore
  • it is good to try it on the Net
  • and it is good to try it in the real world (on the condition of identifying real chemical reactions which are like the chemlambda graph rewrites, something I believe is true)
  • and moreover it is good to try it as an ingredient in the future IoT, where we can import the amazing idea of Craig Venter to send a printer of lifeforms to Mars and then send by radio any DNA encoding from Earth to Mars. Why not do the same entirely on Earth? Imagine: the real world has a chemistry, the virtual one has chemlambda, therefore we can pass smoothly from one to the other because they are based on the same principles. The technology behind the IoT would then be, in one sense, a giant, worldwide distributed Venter Printer, coupled with a worldwide sensor (phones, cameras, fridges and whatnot) which converts the data acquired from the real world into the chemlambda format of the virtual one.

That’s the first batch of uses. There are others, maybe less ambitious but more easily attainable.

  • would you want to make a realistic neural network? We may try it by making a fine-grained neuron and distributing it over the network. Indeed, each real neuron is the host of a collection of chemical reactions, from the synaptic cleft, to the soma, to the next synaptic cleft.
  • too ambitious, maybe, so let’s restrict to something simpler: can we do Turing neurons in chemlambda? Sure, done already.

That is more of a principle of organizing computations, and a path to pursue.

  • or maybe we want to have a worldwide distributed universal computer. It will not be very fast, but the purpose is not to be fast, but to be distributed. I call this “the Soup”. Imagine: everybody who wants to be part of it just downloads a collection of scripts and puts them somewhere which has an URI (like a web page). There is no sneaky behaviour there, no net bots or other freaky ideas which would transgress any privacy. Each participant will become like an actor molecule (or maybe 10, or 100, depending on the burden of the scripts on the computer). Anybody could launch a designed molecule into the Soup, starting a network of reactions with the other molecules from the Soup (i.e. with the ones which are stored as states of the actors on other computers). The communication between computers will be decentralized and will not make much sense to an eavesdropper anyway, which brings me to other possible applications
  • the first one is something like a distributed homomorphic encryption service. Big words, but what it really means is that the Soup could offer this service by default. Some said that this would be a form of obfuscation, because you take a program (like a lambda term) and then you execute it in this weird chemlambda. But this is not at all correct because, recall, chemlambda has only little to do with lambda terms (a thing which I cannot stress enough), there is no correct or illegal molecule (differently from any other formalism which superficially resembles chemlambda, like GOI or ZX), and there is no global meaning, nor passing of values which could then be intercepted. No, the “encryption” works like in Nature, where, for example, the workings of a living cell are “encrypted” by default, by the fact that they carry no meaning; they are purely local and decentralized.

The list of possible applications forked; let me go back to the previous branch.

  • it is possible that not neurons, which are big chemical entities, but much smaller ones, like bio molecules, may be better understood. Maybe some chemlambda style workings are relevant at the genetic level. More concretely, perhaps one can identify real molecules, or use DNA and enzymes, to do chemlambda in reality. Better still, and more realistic I think, maybe chemlambda is only a proof of principle that it is possible to understand life processes at the level of molecules, not in the usual way. The usual way consists largely of trying to make sense, attribute tasks and functions, and do probabilistic calculi on huge quantities of data collected in real chemistry. This new way would consist of the exploration of molecules as embodied abstractions, embodied programs… Oh, it does not sound original enough, let me try again. Compare with the Alchemy of Fontana and Buss. In that amazing research program they propose that molecules are like lambda terms, reactions are like the application operation, and active chemical sites are like abstraction. In chemlambda the application and abstraction are not operations, but atoms or small parts of molecules. And the molecules are not only lambda terms, but much more varied. Chemical reactions are graph rewrites. Therefore, even if we restrict to molecules which have to do with lambda terms, we see that in chemlambda application and abstraction are embodied, they are made of matter, be it atoms or molecules. Chemical reactions are like reductions in lambda calculus or, more generally, they are graph rewrites. In Alchemy the function of the molecule is the normal form of the lambda term it represents. In chemlambda there is no function, in the sense that there may actually be several, or none, depending on the interaction with other molecules. I believe that this view is closer to Nature than the classical, pervasive one, and that it might help the understanding of the inner workings of bio molecules.

What needs to be done. Many things:

  • if we use chemlambda with a model of choice, ranging from stupid to intelligent, there are nevertheless new things to learn if you want to program with it. Only by looking at the exploration of the stupid model, started with the scripts made by this mathematician (see the github repo), does one get a feeling of being overwhelmed in front of a new continent. There are many questions to be answered, like: what is the exact rigorous relation between recursion and self-reproduction, as seen in chemlambda? how to better program without passing values, as chemlambda proposes? how to geometrize this sort of computation, given that, when translated to chemlambda, many programs are made mostly of currying and uncurrying and other blind procedures which are really not necessary in this model? what is the exact relation between the various priority choices and the various evaluation strategies in usual programming? how to program without functions, i.e. without extensionality?

CS professionals are needed in order to answer such questions.

  • visualizations of graphs (molecules) and their rewrites are not needed for the formalism to work, but they are helpful for learning how to use it. Like it or not, most of our brains process non-linguistic stuff, and as you know an image is worth a 1000 words. Finding good ways to visualize chemlambda and the various models helps in learning to use it, and it offers as well a bridge towards less IT-sophisticated researchers, like chemists or biologists.

Help me build a better chemlambda gui. Step by step, according to the opportunistic needs of the explorers; there is no blueprint to execute here.

  • as for the decentralized computing fork of the project, this is not hard to do in principle, or at least this is how it appears to this mathematician. However, in practice some ways of doing it are certainly better than others.

Again, CS guys are needed here. But leave at the door, please, process calculi, CRNs and other ornaments, and after that look at the body under the dress. Does the dress fall well over that body and, if not, what is to be done? Ornaments are only a cheap way to trick the customer.

  • for the real chemistry branch, real chemists are needed. This is way outside my competence, but it may be helpful for you, the biochemist. If so, then I have something to learn as well, and maybe you’ll see that a mathematician is useful as much more than only a source of equations and correct probabilistic calculus.

Real chemists, with labs and knowledge about this, are needed here! Let’s discuss less about making molecules do boolean algebra and more about making them into embodied programs.

What do I need for the program? Money, of course. Funds. Brain time. Code. Proofs. Experiments.


Why process calculi are old industrial revolution thinking: the example with the apple pies

I have strong ideological arguments against process calculi, exactly because of the parallel composition. I think that parallel composition is not realistic, because there is no meaning in the “parallel” unless you have a God’s view over the distributed computation.

This is a very brute argument, but I can make it detailed (and I did it, here and there in this open notebook).

In my opinion we are still in the process of letting go of the old ideas of the industrial revolution. The main idea which we need to exorcise out of every corner of the mind is that there is a benevolent (or not) dictator who organizes the processes of the world (be it a factory, a government, a ministry, or a school class) in a way which is easy to lead because it has well-placed bottlenecks which give a global meaning to the process.

Concretely, the very successful idea of organizing stuff which comes from the industrial revolution is that one has to abstract over the individuals, the subjects, then to stream the interactions between them (the individuals abstracted into functions) by creating a hierarchy of bottlenecks. The advantage is that the structure gives a meaning to what is happening.

A meaning is simply like a hash table.

The power of this system of organization is tremendous.  It led to the creation of the modern states, as well as to the creation of economic and ideological systems, like capitalism and communism, which are both alike in the way they treat individuals as abstractions.

This kind of organisation pervades everything, in very concrete and punctual ways, so much so that the material structure which holds together our society (like, in particular, the client-server structure of the net, as a random example, but less so some of the net protocols) has grown the way it is not only because there are some universal laws and invisible hands which constrain it, but also because this structure is an addition of a myriad of components which have been designed in this way and not in another because of the industrial revolution ideology of control and abstraction.

The power of the industrial revolution’s main idea is that you can take any naturally occurring process (like apples growing in trees and people picking them and making pies) and structure it in a meaningful way and transform it into a viral process (an apple pie making factory). You just have to abstract apples and people into resources and synchronize the various parts, to define the inputs and outputs, and then optimize your control over it, and then you can make 10^9 evaluations of the abstract notion of “apple pie” and put them on the shelves of the supermarket, instead of the 10^3 individual apple pies grandmothers used to make in their kitchens.

Now, in the factory of apple pies, the notion of parallel processes makes perfect sense. Contrary to that, in the real world with trees and apples and grandmothers with their ovens, P | Q makes sense only in retrospect.

If you were God then you could look from far above at all these grandmas and see lots of P | Q. But the grandmas don’t need the parallel composition to make their delicious apple pies. Moreover, the way of life is that generally there is no need for centralized control, no need for a meaning. Viruses and cells don’t know they are viruses and cells. They work very well without knowing that they perform some tasks inside an environment.

The life of one cell may be in parallel with the life of another from God’s point of view, but this relation is certainly not a part of, nor a need for, these life processes to function.
The big question for me is: how to replicate this by techne? It is clearly possible, as proved by the world we live in. It looks to me very promising to try to work under these self-imposed constraints: no meaning, no parallel composition in particular, no abstraction, no levels. It is surprising that chemlambda works at all already.



Visual tutorial for “the soup”

I started here a visual tutorial for chemlambda and its gui in the making. I call it a tutorial for the “soup” because it is about a soup of molecules. A living soup.

Hope that in the near future it will become THE SOUP. The distributed soup. The decentralized living soup.

Bookmark the page because content will be added on a daily basis!


List of Ayes/Noes of artificial chemistry chemlambda

List of noes

  • distributed (no unique place, no external passive space)
  • asynchronous (no unique time, no external global time)
  • decentralized (no unique boss, no external acyclic hierarchy)
  • no semantics (no unique meaning, no signal propagation, no values)
  • no functions (not vitalism)
  • no probability


List of ayes