
Do you know these guys?

We need CS collaborators for our project.  The project is about a computation model (Distributed GLC) which is:

  • decentralized
  • asynchronous
  • based on message passing
  • Actor Model style
  • using a Graph Rewriting System called chemlambda.

Here is the portrait of the ideal collaborator:

  • is a big fan of functional programming, but not a fanatic
  • understands the difference between the Actor Model and process calculi
  • has some understanding of real world asynchronous computation, on the Internet
  • is at ease with graph rewriting systems, say he has read Functional programming and parallel graph rewriting


Oh, and can discuss across the disciplinary border with quick-learning mathematicians.

ALTERNATIVELY:

  • has money to fund us and
  • a supply of docile programmers who do what we need first, then optimize the programs after they see by example that, for some obscure reasons, they really work.


If interested, please call and let’s make stuff that counts!

______________________________________________________________

Past vitalism in extreme functional programming

In vitalism everything is a function, just like in lambda calculus with eta reduction.

[Image: opium]

With chemlambda we are somewhere corresponding to The Sceptical Chymist: or Chymico-Physical Doubts and Paradoxes, by Boyle.

Recall that chemlambda (and GLC) rejects extensionality, because the eta reduction is a global move, not a local one.
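To see concretely what local vs global means here, consider a small sketch in Haskell (my own illustration, not part of the GLC or chemlambda formalism): the beta rule fires by inspecting only the top constructors of a term, while the eta rule’s side condition, “x not free in f”, forces a traversal of the whole subterm.

    import Data.List (delete, nub)

    -- Untyped lambda terms.
    data Term = Var String | Lam String Term | App Term Term
      deriving (Eq, Show)

    -- Free variables: note that this walks the WHOLE subterm.
    freeVars :: Term -> [String]
    freeVars (Var x)   = [x]
    freeVars (Lam x t) = delete x (nub (freeVars t))
    freeVars (App f a) = nub (freeVars f ++ freeVars a)

    -- Naive substitution; this sketch assumes all bound names are
    -- distinct, so no capture-avoidance machinery is needed.
    subst :: String -> Term -> Term -> Term
    subst x s (Var y)   = if x == y then s else Var y
    subst x s (Lam y t) = if x == y then Lam y t else Lam y (subst x s t)
    subst x s (App f a) = App (subst x s f) (subst x s a)

    -- Beta: a LOCAL move, it pattern-matches two constructors.
    betaStep :: Term -> Maybe Term
    betaStep (App (Lam x body) arg) = Just (subst x arg body)
    betaStep _                      = Nothing

    -- Eta: the side condition forces the GLOBAL check freeVars f.
    etaStep :: Term -> Maybe Term
    etaStep (Lam x (App f (Var y)))
      | x == y && x `notElem` freeVars f = Just f
    etaStep _ = Nothing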

Therefore, chemlambda and GLC are not functional, properly speaking. Which is good, actually, for a variety of reasons, roughly the same as the reasons why chemistry is better than vitalism.

In another post, this way of seeing was called “extreme functional programming”.

I hope this might help the understanding of the distributed GLC model of computation and of the reasons behind the choices made in the model.

___________________________________

Notes:

1. Criticism of vitalism supports the view that functional programming with extensionality is a reenactment of vitalism.

Vitalism has sometimes been criticized as begging the question by inventing a name. Molière had famously parodied this fallacy in Le Malade imaginaire, where a quack “answers” the question of “Why does opium cause sleep?” with “Because of its soporific power.”[29] Thomas Henry Huxley compared vitalism to stating that water is the way it is because of its “aquosity”.[30] His grandson Julian Huxley in 1926 compared “vital force” or élan vital to explaining a railroad locomotive’s operation by its élan locomotif (“locomotive force”). [quote from the wiki source on vitalism]

2. From the wiki source:

The Sceptical Chymist: or Chymico-Physical Doubts & Paradoxes is the title of Robert Boyle’s masterpiece of scientific literature, published in London in 1661. In the form of a dialogue, the Sceptical Chymist presented Boyle’s hypothesis that matter consisted of atoms and clusters of atoms in motion and that every phenomenon was the result of collisions of particles in motion. For these reasons Robert Boyle has been called the founder of modern chemistry by J. R. Partington.[1]


3. Chemlambda is based on a chemical metaphor. The initial name of the formalism was “chemical concrete machine”, in contrast with the “chemical abstract machine” of Berry and Boudol.

___________________________________


What is new in distributed GLC?

We have seen that several parts or principles of distributed GLC are well anchored in previous, classical research. There are three such ingredients.

There are also several new things, which I shall try to list.

1.  It is a clear, mathematically well formulated model of computation. There is a preparation stage and a computation stage. In the preparation stage we define the "GLC actors"; in the computation stage we let them interact. Each GLC actor interacts with others, or with itself, according to 5 behaviours. (The choice among behaviours, when several are possible at the same moment, is not part of the model. The default is to require the actors to first interact with others (i.e. behaviours 1 and 2, in this order) and, if no interaction is possible, to proceed with the internal behaviours 3 and 4, in this order. As for behaviour 5, the interaction with external constructs, this is left to particular implementations.)
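As a toy illustration of this default priority, here is a minimal sketch in Haskell. The Behaviour and Actor types, and their names, are hypothetical stand-ins of mine; the model itself fixes only the order in which behaviours are tried.

    -- The five behaviours of a GLC actor, as opaque labels.
    data Behaviour = Interact1 | Interact2 | Internal3 | Internal4 | External5
      deriving (Eq, Show)

    -- A stand-in for an actor; a real GLC actor carries a piece of a
    -- graph plus the names of the actors it is connected to.
    newtype Actor = Actor { canDo :: Behaviour -> Bool }

    -- Default policy: interactions with others first (1, then 2), then
    -- internal moves (3, then 4). Behaviour 5 (external constructs) is
    -- left to particular implementations, so it is never chosen here.
    nextBehaviour :: Actor -> Maybe Behaviour
    nextBehaviour actor =
      case filter (canDo actor) [Interact1, Interact2, Internal3, Internal4] of
        (b:_) -> Just b
        []    -> Nothing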

2.  It is compatible with the Church-Turing notion of computation. Indeed,  chemlambda (and GLC) are universal.

3. Evaluation is not needed during computation (i.e. in stage 2). This is the embodiment of the "no semantics" principle. The "no semantics" principle actually means something precise; it is a positive thing, not a negative one. Moreover, the dissociation between computation and evaluation is new in many ways.

4. It can be used for doing functional programming without the eta reduction. This is a more general form of functional programming, so general, in fact, that it does not use functions. That is because the notion of a function makes sense only in the presence of eta reduction.

5. It has no problem going, at least apparently, outside the Church-Turing notion of computation. This is not a vague statement, it is a fact: GLC and chemlambda have sectors (i.e. parts) which are used to represent lambda terms, but also sectors which represent other formalisms, like tangle diagrams, or, in the case of GLC, also emergent algebras (which are the most general embodiment of a space with a very basic notion of differential calculus).

__________________________________________

All these new things are also weaknesses of distributed GLC because they are, apparently at least, against some ideology.

But the very concrete formalism of distributed GLC should counter this.

I shall use the same numbering for enumerating the ideologies.

1.  Actors a la Hewitt vs process calculi. The GLC actors are like the Hewitt actors in this respect. But they are not as general as Hewitt actors, because they cannot behave arbitrarily. On the other side, it is not very clear whether they are Hewitt actors, because there is no precise correspondence between what a Hewitt actor can do and what a GLC actor can do.

This is an evolving discussion. It seems that people have great difficulty coping with distributed, purely local computing without jumping to global notions of space and time. But, on the other side, biologists may have an intuitive grasp of this (unfortunately, they are not very much in love with mathematics, but this is changing fast).

2.  Is distributed GLC a programming language or a machine? Is it a computer architecture or a software architecture? Neither. Both. Here the biologists are almost surely lost, because many of them (excepting those who believe that chemistry can be used for lambda calculus computation) think in terms of logic gates when they consider computation.

The preparation stage, when the actors are defined, is essential. It resembles choosing the right initial condition in a computation using automata. But it is not the same, because there is no lattice, grid, or preferred topology of cells where the automaton runs.

The computation stage does not involve any molecule-collision mechanism, be it stochastic or deterministic. That is because the computation is purely local, which means in particular that (if well designed in the first stage) it evolves without needing stochastic or lattice support. During the computation the states of the actors change, and the graph of their interactions changes, in a way which is compatible with being asynchronous and distributed.

That is why those who work in artificial chemistry may feel lost here, because the model is not stochastic.

There is no chemical reaction network (CRN) which orchestrates the computation, simply because a CRN is a GLOBAL notion, so it is not really needed. This computation is concurrent, not parallel (because "parallel" needs a global simultaneity relation to make sense).

In fact there is only one molecule which is reduced; therefore distributed GLC looks more like an artificial one-molecule computer (see C. Joachim, Bonding more atoms together for a single molecule computer). Only it is not a computer, but a program which reduces itself.

3.  The no-semantics principle is against a strong ideology, of course. The fact that evaluation may not be needed for computation is outrageous (although it might cure the cognitive dissonance in functional programming concerning "side effects"; see Another discussion about math, artificial chemistry and computation).

4.  Here we clash with functional programming, apparently. But I hope only superficially, because actually functional programming is the best ally; see Extreme functional programming done with biological computers.

5.  Claims about going outside the Church-Turing notion of computation are very badly received. But when it comes to distributed, asynchronous computation, the situation is much less clear. My position here is simply that there are very concrete ways to do geometric or differential-like “operations” without having to convert them first into a classical computational frame (and the onus is on the classical-computation people to prove that they can do it, which, as a geometer, I highly doubt, because they misunderstand or neglect space; and then the distributed, asynchronous aspect comes and hits them when they least expect it).

______________________________________________

Conclusion: distributed GLC is great and it has big potential; come and use it. Everybody interested knows where to find us. Internet of things? Decentralized computing? Maybe cyber-security? You name it.

Moreover, there is a distinct possibility of using it not on the Internet, but in the real physical world.

______________________________________________

Extreme functional programming done with biological computers

After a very interesting hangout with Eugenio Battaglia and Gerd Moe-Behrens this idea emerged: to use functional programming for biological computing. It is not straightforward to explain, therefore I use this post in order to clarify a bit my understanding of the problem.

As a background for this description, consider that I am a mathematician, which is a mixed blessing, because even if some ideas seem easy in mathematical terms, they turn out to be hard to explain without explicit recourse to mathematics. On the other side, I am ignorant concerning chemistry and biology, therefore handicapped in discussions with specialists outside my narrow mathematical world.

Further I shall describe things from my point of view, within the mentioned constraints which shape my understanding. I am not sure if I gave the right description of the gist of the discussion.

1. Apparently, as Gerd Moe-Behrens explains in The biological microprocessor, or how to build a computer with biological parts, there is a trend to use the architecture of a silicon computer for building a biological computer. In particular, this means trying to build boolean logic gates, or memory. It is an important observation that whatever happens in a living cell which resembles computation (in a vaguer sense than Turing computation) is not necessarily implemented as in a silicon computer. Instead, synthetic biology tries to import concepts from silicon computers into the bio realm, mainly because these concepts are already familiar. Hence the accent on bits and boolean logic, although not exclusively (other studies concern, for example, membrane computing, or neural networks, or continuously evolving dynamical systems).

2. On the other side, functional programming is one of the main programming paradigms, a very elegant one, receiving more and more attention in the competition with the better known imperative programming. However, the evaluation of boolean circuits, which is taken as a role model in many studies of biological computing, is more natural in the imperative programming paradigm. Functional programming has its roots in the lambda calculus, one of the two pillars of computation, along with the Turing machine (of equivalent power, but with a different philosophy behind it).

3. In 1994, Fontana and Buss, in a pair of articles, introduced the idea that lambda calculus is a kind of natural formalization of the bare bones of chemistry. Molecules are seen as lambda terms, the collision of molecules is seen as the application operation in lambda calculus, and the abstraction operation from lambda calculus “captures the role of functional groups embedded in the reactants” [source, p. 11].
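To make the metaphor concrete, here is a toy rendering in Haskell, reusing the Term type and betaStep from the sketch earlier on this page. The name collide and the leftmost-outermost strategy are my own choices, not Fontana and Buss’s.

    -- Collision of two molecules = application, then beta reduction.
    -- A real artificial chemistry would bound the number of steps,
    -- since normalization need not terminate.
    collide :: Term -> Term -> Term
    collide a b = normalize (App a b)
      where
        normalize t = maybe t normalize (step t)
        -- leftmost-outermost reduction, built on betaStep from above
        step t@(App f x) = case betaStep t of
          Just t' -> Just t'
          Nothing -> case step f of
            Just f' -> Just (App f' x)
            Nothing -> App f <$> step x
        step (Lam v b') = Lam v <$> step b'
        step (Var _)    = Nothing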

4. By the previous points, it is only natural to try to use the functional programming paradigm for biological computing. This means in particular no longer chasing logic gates and boolean circuits.

5. It is interesting to notice that Fontana and Buss use lambda calculus with eta reduction, i.e. terms (or molecules) are identified with functions, in the sense that the molecule A is identified with the function B ↦ AB. Functional programming, as far as I understand, also uses this eta reduction, being more interested in types.

6. Independently, I built graphic lambda calculus, which is even more powerful than lambda calculus. One of the interesting parts of graphic lambda calculus is that it makes clear that the beta reduction (the main reduction move in lambda calculus) is local in nature (read: “simple to implement in reality”), while the eta reduction, as innocuous as it might look to our powerful brains, is in fact a global reduction move (here you can see the eta reduction in graphic lambda calculus). Therefore, I believe that “extreme” functional programming (i.e. without extensionality) is more likely to be implemented in reality (i.e. outside a silicon computer).

7. In graphic lambda calculus we work with graphs, some of them corresponding to lambda terms. We could identify them with chemical molecules, as is done in algorithmic chemistry. But there are some differences between algorithmic chemistry and my chemical concrete machine proposal, namely that the application and the abstraction operations from lambda calculus are, in the concrete machine, ingredients of molecules, while moves (reductions) are identified with enzymes, so that reductions are certain chemical reactions.
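As a rough sketch of what “moves as enzymes” might mean, here is a graphic-beta-like move on a toy port-graph encoding in Haskell. The node kinds, port names, and rewiring convention below are my simplifications for illustration; the actual chemical concrete machine has more node types (fan-out, for instance) and its own precise definition of the move.

    import qualified Data.Map.Strict as Map

    type NodeId   = Int
    data PortName = Func | Arg | Out | VarP | Body deriving (Eq, Ord, Show)
    type End      = (NodeId, PortName)
    data Kind     = A | L deriving (Eq, Show)  -- application, abstraction

    -- A molecule: node ingredients plus bonds between their ports.
    data Molecule = Molecule
      { kinds :: Map.Map NodeId Kind
      , wires :: Map.Map End End  -- each bond is stored in both directions
      }

    -- The beta move as a local rewrite: if the Out port of a lambda
    -- node l is bonded to the Func port of an application node a,
    -- delete both nodes and rewire (Body of l <-> Out of a) and
    -- (Arg of a <-> VarP of l). Like an enzyme, the move inspects only
    -- these two nodes, never the whole molecule. (Assumes the four
    -- remaining ports bond to other nodes.)
    betaMove :: NodeId -> NodeId -> Molecule -> Maybe Molecule
    betaMove l a m = do
      L <- Map.lookup l (kinds m)
      A <- Map.lookup a (kinds m)
      target <- Map.lookup (l, Out) (wires m)
      if target /= (a, Func)
        then Nothing
        else do
          bodyEnd <- Map.lookup (l, Body) (wires m)
          outEnd  <- Map.lookup (a, Out)  (wires m)
          argEnd  <- Map.lookup (a, Arg)  (wires m)
          varEnd  <- Map.lookup (l, VarP) (wires m)
          let bond (x, y) w = Map.insert x y (Map.insert y x w)
              -- drop every bond touching the two deleted nodes
              pruned = Map.filterWithKey
                         (\(n, _) (n', _) ->
                            n /= l && n /= a && n' /= l && n' /= a)
                         (wires m)
          Just m { kinds = Map.delete l (Map.delete a (kinds m))
                 , wires = bond (argEnd, varEnd)
                             (bond (bodyEnd, outEnd) pruned) }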

8. This leads us to the last point: it is intriguing to investigate whether “extreme” functional programming (i.e. functional programming without extensionality) could be implemented in biological computing. The advantage of this extreme functional programming paradigm is that the implementation might be much simpler than mimicking a silicon computer, because lambda calculus already looks like a kind of chemistry. Another advantage, coming from the fact that the chemical concrete machine uses the graphic lambda calculus formalism, is that geometric operations (like bonding two molecules together, or performing reduction steps in parallel) are simpler to describe in graphic lambda calculus than in the usual lambda calculus. One such operation is sketched below.
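For instance, “bonding two molecules together” is just a disjoint union of graphs plus one new wire, which is short to express with the toy Molecule encoding from the previous sketch (assuming the two molecules use disjoint NodeId ranges, e.g. after renumbering):

    -- Bond molecule m1 to molecule m2 by wiring port p of m1 to
    -- port q of m2.
    bondAt :: End -> End -> Molecule -> Molecule -> Molecule
    bondAt p q m1 m2 = Molecule
      { kinds = Map.union (kinds m1) (kinds m2)
      , wires = Map.insert p q (Map.insert q p
                  (Map.union (wires m1) (wires m2))) }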

The disadvantage is that it is hard to understand lambda calculus without extensionality. It is already hard to understand functional programming for someone with an imperative programming mindframe. Without extensionality, it is even harder not to fall into habits of thinking which might be misleading. I know this, because two years ago all this lambda calculus was totally opaque to me. Why not use equality? Why not use functions? Is this mathematics or some crazy invention of logicians? Now I know better, of course. But for somebody who is used to boolean logic and (like me) to functional analysis, it looks crazy at first. If you go further, then you start to love it.