…looks like an apt name for several threads from this blog, which are now converging to a common point.
To make this clearer, here is a modification of the figure which appears in the post Theatron as an eye.
The red decorations changed. Let’s see.
- DESIGNERS are the new programmers. They no longer write programs; instead, they design well-behaved distributed GLC computations. They deserve the front seats for observing the actors.
- ACTORS occupy their rightful place in the Greek theater: the orchestra. They are, of course, GLC actors, released into the wild after the preparation stage, effected by the designers. Each actor is an autonomous, reactive entity which lives by exchanging very short, highly schematized messages with other actors (like the script an actor receives from the director-designer; it is not very important what the actor says, only that it is communicating a repertoire of basic — emotional, in real theater — universal messages). There are many actors in the Greek theater (assimilated here with the members of the chorus, to be clear); they are almost indiscernible from one another, they dress the same, they look the same, they have a very limited means of expression, but together they form the powerful chorus. Like neurons in a brain, they come in a limited variety, but they “compute” in a distributed and asynchronous way, without exchanging messages which need external competences to make sense.
- the USERS of this distributed computation, aka theatrical performance, are the beneficiaries of the show.
- ACTORS CREATION: new actors enter through the PARODOS; this is related to the fact that the creation of new actors is a process which looks like the binary fission of cells. This is implemented in chemlambda; look, for example, at how the B, C, K, W combinators multiply in this post.
- CORES: behind the skene is what should not be visible to the public or the actors. This is an interface with other computing devices, like, for example, the coloured rectangles representing various stacks in the post A machine for computing the Ackermann function in graphic lambda calculus. Recall that each core has to be embedded in a mask (which IS the actor communicating with that core); in other words, masks decorate the skene.
- Finally, all this happens in the NET. Worldwide, circled by the sun [analemma].
A GLC neuron was first defined here, in the freedom sector of the GLC. I shall use the term “neuron”, in a more restrictive sense than previously, as being a GLC actor (example here) which has only one exit half-arrow, possibly continued by a tree of fan-out nodes, which are seen as the axon of the neuron.
I don’t know how to constrain an AM computation with GLC actors so that they are all neurons at the preparation stage and so that they remain neurons during the computation.
Instead, I wonder if there is any chance to model real neurons this way. That is why I try to obtain a prediction concerning a rewiring pattern. If this model of computation is pertinent for real neural networks, then we should see this rewiring pattern in the wild.
Let’s contemplate the following chain of reductions, as seen from the GLC actors point of view. I shall use in fact the chemlambda formalism, but I shall change a bit the notation from the coloured nodes to nodes marked by letters, as in GLC. The fan-in node will be denoted by the letter . Recall that the fan-in node from chemlambda replaces the dilation node from GLC (actually we reuse the dilation node of GLC and change it into a fan-in node by replacing the emergent algebra moves with the FAN-IN and DIST moves of chemlambda).
In red, we see the actor diagram, it does not change during this internal interaction.
The next step involves a graphic beta move:
The next step is a name change interaction between the actors and , namely gives a fan-out node to .
That’s it. What does this have to do with neurons?
We may see the fan-out node from the initial configuration, belonging to the actor , as the axon of . Further, we may identify actors and , and also actors and . In this way, the whole sequence of three interactions can be rewritten like this:
Notice how I changed the convention of drawing the actors diagram:
- I took care to indicate the orientation of the links between actors
- the fan-out node from the initial configuration was externalized, to look more like an axon, and likewise for the final configuration.
This gives a prediction: if neurons can be modelled as GLC actors, then we should observe in reality the following change of wiring between neurons.
It involves the rewiring of 5 neurons! Are there somewhere in the neuroscience literature patterns of wiring of 5 neurons like the ones from this figure? Is there any evidence concerning a rewiring like in this figure?
What is a thing? And what is that “thing” thing from the “Internet of things”? It is not, I think, what it is supposed to be.
Here is a depiction of a thing [source]:
A thing is an assembly, a communication based entity. I learned this by reading another excellent article by Kenneth Olwig: “Heidegger, Latour and The Reification of Things: The Inversion and Spatial Enclosure of the Substantive Landscape of Things – the Lake District Case”, Geografiska Annaler: Series B, 2013, Swedish Society for Anthropology and Geography. Quote from the first page:
Thing has thus undergone a process by which things went from being substantive judicially founded meetings in which knowing people assembled (as in parliaments) to discuss, and thereby constitute matters of common concern, or common things that matter, to becoming physical objects, or things as matter. [...]
The word reification is here used to mean: ‘to regard (something abstract) as a material or concrete thing’ (Merriam-Webster 1994, reification). The modern meaning of things and landscape, it will be argued, can only be grasped by understanding the way it is the outcome of a process of revolutionary inversion [...], or turning inside–out, by which the meaning of common things has been spatialized, enclosed, unified, individualized, privatized, materialized and reified as a constituent of the mental landscape of modernity.
Let’s go now to Kevin Ashton’s That ‘Internet of things’ thing; quote:
We’re physical, and so is our environment. Our economy, society and survival aren’t based on ideas or information—they’re based on things. You can’t eat bits, burn them to stay warm or put them in your gas tank. Ideas and information are important, but things matter much more. Yet today’s information technology is so dependent on data originated by people that our computers know more about ideas than things.
Seriously! It’s done. An actor-like model, which uses graphic lambda calculus for doing distributed computing (i.e. parallel, asynchronous, no controller). No evaluation needed, everything is local. There is no signal circulating far through the wires, only small local exchanges. In this respect it resembles what happens in chemical reaction networks, but it goes much further than this.
Looks perfectly feasible, needs angel investor.
UPDATE: Let me add a few more details about this. The awesome thing is not in the actor model side alone, nor in graphic lambda calculus or chemlambda alone. It is in the combination of those. Graphic lambda calculus brings the possibility to do distributed computing (and even far away from the usual logic realm, into the field of computing with space, see A me for space). The space which is needed by graphic lambda calculus is, of course, the network. But the network lacks an essential ingredient of space: the proximity relations. This is where the actor model is useful.
Secondly, it is well known that computation in the usual sense needs two ingredients: reduction and evaluation. They appear in one form or another in any computation model I am aware of, except the one of graphic lambda calculus + actors! Here, only local reduction moves are used (if we rely purely on chemlambda instead of graphic lambda calculus), after an initial stage of preparation of the computation.
This has implications for understanding how brains “compute”. Indeed, there are two things (at least, but let’s simplify) which happen in a brain: one is (electrical) signal transmission (chemically mediated at the synapses), the other is neural network rewiring (as a form of “learning”). The computation model I propose may be used in two places:
- in the understanding of the computational aspects of the chemical connectome of a brain (by looking for real embodiments of the chemlambda formalism, i.e. by identifying real chemical molecules which interact like the molecules of chemlambda)
- in the understanding of the rewiring mechanism. Graphic lambda calculus + actors proposes that the rewiring alone is a form of computation, one related to pure local reduction, while the signal transmissions are related more to evaluation aspects. In this respect, neurons (or maybe families of neurons nurtured by one glial cell) are actors and the physical neural network is like an actors diagram.
Continues from Actors for the Ackermann machine. In this post we see the first interaction between the actors.
Notations: each actor has an address and a stack .
Let the play begin. The cast is the following:
At the beginning of the performance, the actors are in the following configuration
Only with can interact (i.e. the only moves in GLC which may happen between actors are those between the mentioned ones). Their interaction is a form of pure communication (via the graphic beta move). Two such moves are possible, in succession. This is described in the next figure, along with the changes in the actors configuration.
The performance is thrilling: the actor is almost exhausted after forcing the actor to drop down his mask. In the process lost his friends and (with his buddy ) in favour of . Only is still taking the side of . What will happen next?
At this stage, no other interaction is possible without revealing what is really thinking (what’s in his stack ). The first act is over.
Continues from A machine for computing the Ackermann function in graphic lambda calculus, with a preliminary analysis of the proposed Ackermann machine in GLC from the Actor Model point of view.
Very brief notes:
- see Hewitt Actor Model, lambda calculus and graphic lambda calculus for the actor notation (or decoration)
- binary fission is an apt name for the replacement of the GLOBAL FAN-OUT from GLC by the DIST moves from chemlambda (because eventually I am interested in using the purely local formalism of chemlambda for the Ackermann machine).
- “mask” is a new name instead of the “pacman” name used in the first post on the Ackermann machine.
- “stack” might correspond to the “mailbox” in the actor model, but it’s internal to the actor and interacts only with the mask.
All this has a working hypothesis basis; I’m thinking out loud – btw, if you look then you can also talk here. Everything is subject to change; remember this is an open notebook (which can also be cited as explained at the right side of the post).
Recently I became aware of the research line started by Wadsworth (Christopher P. Wadsworth, Semantics and Pragmatics of the Lambda Calculus, DPhil thesis, Oxford, 1971) and then Lamping (John Lamping, An algorithm for optimal lambda calculus reduction, POPL ’90 Proceedings of the 17th ACM SIGPLAN-SIGACT symposium on Principles of programming languages, p. 16-30) on lambda graphs and sharing graphs (see for example this article by Stefano Guerrini). I see in Guerrini’s article, at page 5, a figure which is almost identical to the graphic beta move (and even more: when I really understand what a “mux” is, maybe some of the moves from the middle of page 5 will turn out to be related to the distributivity moves from the chemical concrete machine).
There are several differences between GLC and Wadsworth-Lamping kind of lambda graphs (research line denoted by “WL” further):
- the graphs used in GLC are all oriented, made of trivalent or univalent nodes (the termination node), while those from WL are a mix of oriented and non-oriented nodes,
- GLC does not use any names, neither of variables nor of terms, while in WL names are used everywhere. This looks like a minor difference but it is in fact very important, and here is why. If we live by the “no names” rule then forget about evaluations; in particular, forget about evaluation strategies and about using the result of an evaluation for deciding the next computational step. Moreover, forget about representing the state of a computation by a “value”, whatever value might mean. Does it start to look strange? It gets even weirder: forget about thinking in terms of flowcharts with nodes which represent operations (like application and lambda abstraction), which have inputs and outputs, like wires which allow propagation of signals. After all, forget about this signal idea. Can you? GLC can.
- GLC has among its nodes a “fan-out” node which satisfies moves (graph rewrites) called CO-ASSOC and CO-COMM, as a replacement for the need to use names in WL; thus the only place in GLC which resembles “sharing” from WL is in the use of the GLOBAL FAN-OUT move (see however how this move is eliminated in the fully local version of GLC, called the chemical concrete machine),
Let’s now pass to bigger differences:
- there is a sector of GLC which is made of graphs resembling the lambda graphs from WL; however, no effort is made in GLC to work only within this sector, while a great deal of effort in the WL approach consists in finding ways to select those “valid” lambda graphs. In GLC there is no need to restrict to well constructed lambda graphs.
- this gives an advantage which GLC has over WL, namely that GLC also has other sectors, different from the one of lambda graphs, where the graphic beta move can be applied outside lambda calculus. One of these sectors is particularly interesting: the sector containing knot and tangle diagrams, which allows GLC to interact with Kauffman’s Knot Logic and Knot Automata. Thus GLC is a formalism which can be used for lambda calculus but also for other types of nonstandard computing models, like Kauffman’s Knot Logic.
Finally, the most important difference comes from the motivation of GLC, which is the need to have a visual representation of certain finite differential computations in spaces with dilations and emergent algebras. This is possible in another sector of GLC, called the “emergent algebra sector” and it is completely out of reach of WL, see
- Dictionary from emergent algebra to graphic lambda calculus (I)
- Dictionary from emergent algebra to graphic lambda calculus (II)
- Dictionary from emergent algebra to graphic lambda calculus (III)
This leads to a rigorous definition of what “computing with space” might be. It can also have very concrete applications, evoked here:
Otherwise, there are many things for a geometer like me to learn from previous work. There are certainly many geometric things to be learned by CS experts as well, because it is more and more clear that computation needs some notion of space, besides the boringly trivial one-dimensional one.
In the posts WWW with Metabolism and The chemical connectome of the internet, the idea was evoked of using the chemical concrete machine not on biological, wet networks (although I still find this line of research very promising), but on the www. I am very happy to see that efforts concerning the use of chemical programming and bio-inspired models of programming already exist!
Tell me more about this, if you read this and you are an expert! (And, of course, excuse my ignorance, I am a geometer, not a CS expert, therefore I shall mention your previous work here, as I learn about it.)
Fraglets is an extremely promising direction: here is the fraglets site and here is a fascinating article by Christian F. Tschudin. The title of the article is “Fraglets – a Metabolistic Execution Model for Communication Protocols” and the abstract reads:
In this paper we introduce a molecular biology inspired execution model for computer communications. The goal is to lay the ground for automatic network adaption and optimization processes as well as the synthesis and evolution of protocol implementations. Our execution model is based on the unification of code and data, featuring a single unit called “fraglets ” that are operands as well as operators. We have built a simulator and started to program classical communication tasks with fraglets that show metabolistic pathways patterns like the ones found in biological cells. We give an example of a fraglet implementation for a non-trivial flow–control–with–reordering protocol and briefly discuss how to search the fraglet program space with genetic algorithms.
I like very much two things here: “unification of code and data” and the “metabolistic” word in the title.
Lidia Yamamoto is another researcher who worked in the (finished?) BIONETS collaboration; see Biochemically inspired emergent computation, co-authored with Thomas Meyer, and also, with Christian Tschudin, “A metabolic approach to protocol resilience” (here, from p. 191).
Of course, Banatre is the first who introduced the concept of “chemical programming”, and Berry with Boudol introduced the CHAM, or chemical abstract machine. (Recall that the “chemical concrete machine” denomination points to the CHAM, but it is “concrete” because it really works with “concrete” molecules, involved in “concrete” chemical reactions, without using any names (or name management), and without the need for evaluation in the computation.)
1. For Unlimited Detail (UD), euclideon or non-euclideon like, go see the blog of bcmpinc, until now the only contributor here able to give a UD solution (see also the UD projects page, where his solution is listed, alongside other proposals which were not visibly successful). Let me know, please, from time to time, if really new things arrive. I keep my pov, namely that bcmpinc’s solution is non-euclideon, because the main thing of the euclideon UD is in the way it manages realtime: by first producing a reformatting of the database (before any rendering, without any time constraints), which is then exploited for realtime rendering. My primary interest in UD lies in the database reformatting.
2. I posted less than usual lately, because of many discussions around graphic lambda calculus (GLC) and the chemical concrete machine (chemlambda). I am grateful for all these discussions and perspectives and I shall report here, some time, about this. The main idea is that we might be onto a great thing or, as usual in research, not. I believe we are; even more: there is something genuinely new in this pure syntax approach, which might be (a proof of principle for) a solution to the great mystery of how it is possible for groups of simple (compared to us) cells — neurons — to implement arbitrarily high level semantics by purely physical and chemical phenomena. I am very far, in real life, from being a socialite, therefore these extremely interesting discussions take a bit from my motivation to write here.
3. The series on the vision theater is not over yet; in fact there is a very intriguing development of point 2, namely that there may be nontrivial links between distributed computing (as a purely local phenomenon) and the homunculus fallacy. This would also be a natural continuation from, and reformulation of, the post Theatron as an eye.
From the Trillion Sensor Summit, this document, here is a list of interesting subjects related to the Internet of Things and the main subject of this blog: computing with space. (See this link for an explanation of usefulness of the graphic lambda calculus for this.)
- Internet of Things, connecting devices around us through new network architecture to enable low latency control. (Janusz Bryzek, Fairchild Semiconductor, Mancef)
- CeNSE, Central Nervous System for the Earth, building global environment monitoring. (Janusz Bryzek, Fairchild Semiconductor, Mancef)
- Smart cities (Susumu Kaminaga, SPP Technologies, Japan for the Growth Strategy, Ajith Amerasekera, TI, Roberto De Nuccio, ST Microelectronics)
- Biologic nanoprocessor (Luke Lee, UC Berkeley)
- Molecular diagnostics systems (Luke Lee, UC Berkeley)
- Brain-machine interfaces (Jan Rabaey, UC at Berkeley)
- Interaction swarms (Jan Rabaey, UC at Berkeley)
- Cyber security (Jan Rabaey, UC at Berkeley)
- Distributed network computing (Matthew Hopcroft, HP)
- Logistic processes: autarkic labeling, condition monitoring, position monitoring (Stephan Guttowski, Fraunhofer)
- Ambient sensing platform (Steve Nasiri, Nasiri Ventures)
- National level usage: global warming, health infrastructure, river flow monitor, health epidemics, smart grid sensors, forest fire, ozone, cyber security (Sandhiprakash Bhide, Intel)
- Context sensing (Kevin Shaw, Sensor Platforms)
In Actor Model of Computation: Scalable Robust Information Systems, arXiv:1008.1459, Hewitt writes, at p. 7, that “lambda calculus cannot implement concurrency”. At p. 13-14 he writes:
… the semantics of the lambda calculus were expressed using variable substitution in which the values of parameters were substituted into the body of an invoked lambda expression. The substitution model is unsuitable for concurrency because it does not allow the capability of sharing of changing resources.
Correct. However, the graphic lambda calculus does not have this problem because reductions don’t need evaluations. See The graphic beta move, with details to understand this better.
All the arguments listed further also apply verbatim to graphic lambda calculus! (Hewitt, loc. cit., p. 14)
Inspired by the lambda calculus, the interpreter for the programming language Lisp [McCarthy et. al. 1962] made use of a data structure called an environment so that the values of parameters did not have to be substituted into the body of an invoked lambda expression.
Note that in the definition in ActorScript [Hewitt 2011] of the lambda-calculus below:
- All operations are local.
- The definition is modular in that each lambda calculus programming language construct is an Actor.
- The definition is easily extensible since it is easy to add additional programming language constructs.
- The definition is easily operationalized into efficient concurrent implementations.
- The definition easily fits into more general concurrent computational frameworks for many-core and distributed computation.
That Actors which behave like mathematical functions exactly correspond with those definable in the lambda calculus provides an intuitive justification for the rules of the lambda calculus:
- Lambda identifiers: each identifier is bound to the address of an Actor. The rules for free and bound identifiers correspond to the Actor rules for addresses.
- Beta reduction: each beta reduction corresponds to an Actor receiving a message. Instead of performing substitution, an Actor receives addresses of its arguments.
Further I try to understand this, as a particular case of the Chemical actors sketch.
We have Actors, we have addresses and we have lambda identifiers (it’s about usual lambda calculus, not the better graphic lambda calculus). The idea, as far as I understand, is to “bind” lambda identifiers to the address of an Actor. The second idea is that “each beta reduction corresponds to an Actor receiving a message”.
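Before passing to graphs, Hewitt’s two suggestions can be toy-modelled in a few lines of ordinary code. This is only a sketch under my own assumptions (the term encoding and the heap of addressed actors are invented for the illustration): identifiers are bound to addresses, and beta reduction hands the closure the address of its argument, with no substitution into the body.

```python
# Toy heap of "actors": each address holds an actor (here, a value or closure).
heap = {}
counter = [0]

def alloc(obj):
    counter[0] += 1
    heap[counter[0]] = obj
    return counter[0]

# terms: ('lit', n) | ('var', name) | ('lam', name, body) | ('app', fun, arg)
def evaluate(term, env):
    tag = term[0]
    if tag == 'lit':
        return term[1]
    if tag == 'var':
        # a lambda identifier is bound to an address, not to a value
        return heap[env[term[1]]]
    if tag == 'lam':
        return ('closure', term[1], term[2], env)
    if tag == 'app':
        fun = evaluate(term[1], env)
        # beta reduction = the closure "receives" the argument's address,
        # instead of the argument being substituted into its body
        addr = alloc(evaluate(term[2], env))
        _, name, body, cenv = fun
        return evaluate(body, dict(cenv, **{name: addr}))

# ((λx. λy. x) 1) 2  →  1
prog = ('app',
        ('app', ('lam', 'x', ('lam', 'y', ('var', 'x'))), ('lit', 1)),
        ('lit', 2))
```

This is essentially the Lisp environment trick mentioned in the quote above, reread in actor vocabulary.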
I want a way to identify Actors with graphs in or with molecules in , if I wish to use chemlambda. Actually, after some thinking, it is better to think about Actors as if they are a sort of decoration of the arrows of a graph, almost like a decoration with types. (I shall not use the epsilon gates further, just concentrating on the pure logic side.)
We shall have actors A, B, etc., with actor names :A, :B. There are other names, like , which is a placeholder for indetermination (that’s just a name; if you don’t like this one then change it to , or whatever); there are also the :talk and :listen names. These are not names of actors.
By using these rules we can decorate all half-arrows (meaning that we may have arrows with two decorations). For concreteness, look at the example of the combinator K.
I shall take another Actor named A and make it interact with the Actor K, in order to mimic the couple of reductions.
Here “1” and “2” mean anything you want, from “they can be names of actors” to “I don’t care what follows next”.
Now, they are in position to communicate:
What happens next? The :listen:talk decoration indicates that a graphic beta reduction can be applied (cf. Hewitt’s 2nd suggestion). But we need a rule for decorations with names before and after the graphic beta move. Here is what I propose:
We apply this for our example:
Is that nice or what? The actor K lost one of its “lambda identifiers”, the actor A also simplified a bit, but they still talk to one another:
Remark that graphic lambda calculus easily did what Hewitt wants. This example is not showing any concurrency, though, because there is no concurrency involved here, properly speaking. I only wanted to implement Hewitt’s two suggestions with graphic lambda calculus. But take examples where concurrency does appear, like here, to see that everything works really nicely. This will be explained in another post.
This is a repost of a very informative discussion: Hewitt, Meijer and Szyperski: The Actor Model (everything you wanted to know, but were afraid to ask). Enjoy!
The following is a note to myself. There are so many interesting things which could be relevant for chemical actors, but among these I only mention the moment when Hewitt draws an arbiter, as a box with two inputs and two outputs. Look at the Ackermann machine, or look at Pair of synapses, one controlling the other. From the video I see that actors are the primitive units of computation and addresses are the only real capabilities. Moreover, there are these arbiters; I really need to think about whether chemlambda plus arbiters are sufficient for this. Time will tell if this is reasonable or stupid.
Thinking out loud about a variant of the actor model (chapter 3 here) which uses graphic lambda calculus or the chemical concrete machine. The goal is to arrive at a good definition of an Artificial Chemical Connectome.
Have remarks? Happy to read them!
A chemical actor is the following structure:
- a graph (or a molecule )
- with a unique ID name
- with a numbering (tagging) of a (possibly empty) part of its free arrows
- with a behaviour, to be specified further.
A communication is:
- a graph (or a molecule)
- with a source ID and a target ID
- with a part of its free arrows tagged with tags compatible with (i.e. the same as) the ones from the graph of the source ID
- with another part of its free arrows tagged with tags compatible with the ones from the graph of the target ID
The actor with the target ID receives a communication from the actor with the source ID and becomes:
At this point the actor with the target ID exhibits the following behaviour:
- it performs one or several graph rewrites (only the (+) unidirectional moves), possibly in a given order, which involve at least one arrow between A and B
- following a given algorithm, it splits into a new actor and a communication B’ which has as target arrows the ones from the lower part of the previous figure (but with another target ID)
- or it creates a new actor by using only (-) moves
Remark: the numbers could be uniformly bounded by 2, or 4, or 6, according to the user’s wishes. Take a look at the Ackermann machine for inspiration.
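As a sanity check on the definition above, here is a direct transcription into code. This is only a sketch: the concrete representation of graphs, tags, and the compatibility test are my own assumptions, not part of the formalism.

```python
from dataclasses import dataclass

@dataclass
class ChemicalActor:
    graph: frozenset      # the graph (or molecule), e.g. a set of links
    actor_id: str         # unique ID name
    tags: dict            # numbering (tagging) of part of its free arrows

@dataclass
class Communication:
    graph: frozenset      # a graph (or molecule)
    source_id: str
    target_id: str
    source_tags: dict     # free arrows matching the source actor's tags
    target_tags: dict     # free arrows matching the target actor's tags

# a communication is well-addressed w.r.t. its source when its source tags
# are among the tags of the source actor (compatible = the same, here)
def compatible(comm, source):
    return (comm.source_id == source.actor_id
            and set(comm.source_tags) <= set(source.tags))

a = ChemicalActor(frozenset(), 'A', {1: 'arrow1'})
c = Communication(frozenset(), 'A', 'B', {1: 'arrow1'}, {})
```

The behaviour (rewrites, splitting, creation) would then be methods on ChemicalActor, one per item of the list above.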
Just published the Chemical concrete machine on figshare. Should be cited as
This article introduces an artificial, algorithmic chemistry based on local graph rewrites applied to a class of trivalent graphs named molecules. The graph rewrites are seen as chemical reactions with enzymes. The enzymes are tagged with the name of the respective move. This artificial chemistry is Turing complete, because it allows the implementation of untyped lambda calculus with beta reduction (but without eta reduction). This opens the possibility to use this artificial chemistry for constructing artificial chemical connectomes.
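The “reactions with enzymes” idea can be illustrated with a toy soup (the soup, the molecule strings and the enzyme naming are invented for the illustration, they are not the chemlambda molecules): a rewrite fires only when the enzyme tagged with the move’s name is present, and the reaction consumes that enzyme.

```python
from collections import Counter

# soup: a multiset of molecules and enzymes; an enzyme tagged 'enz:MOVE'
# enables exactly one application of the rewrite named MOVE
def react(soup, molecule, move, rewrite):
    enzyme = 'enz:' + move
    if soup[enzyme] > 0 and soup[molecule] > 0:
        soup[enzyme] -= 1       # the enzyme is consumed by the reaction
        soup[molecule] -= 1
        soup[rewrite(molecule)] += 1
        return True
    return False

soup = Counter({'(\\x.x) y': 1, 'enz:BETA': 1})
react(soup, '(\\x.x) y', 'BETA', lambda m: 'y')
```

With no 'enz:BETA' left in the soup, a second beta rewrite cannot fire: the enzymes meter the computation.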
A simpler computation than the one with the Ackermann machine concerns the Y combinator. Seen as a chemical reaction network, it looks like this for graphic lambda calculus. In the figure “A” is any graph in which has only one exit arrow, for example a combinator graph.
One might prefer not to use the GLOBAL FAN-OUT move. That’s possible, by passing to the chemical concrete machine formalism. The chemical reaction network is a bit different. (Notice the move PROP+, which is a composite move defined in the post Chemical concrete machine, detailed (VI).)
Lots of interesting things happen; among them we notice the appearance of stacks of “elementary units” from the last post. So in the chemical concrete machine version of the behaviour of the Y combinator, the machine counts the number of times A should be replicated, if known (that’s a kind of lazy evaluation, if evaluation made sense in this formalism).
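For comparison, in an ordinary eager language the same self-replication has to be paid for with an evaluation strategy: the classical Y combinator loops forever under strict evaluation, so one uses its eta-expanded variant Z. A sketch in Python (this is just the plain lambda calculus side, nothing here is part of chemlambda):

```python
# Z = λf.(λx. f(λv. x x v)) (λx. f(λv. x x v)),
# the strict-evaluation variant of the Y fixed-point combinator
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Z replicates its argument on demand, much as the machine above
# copies A only as many times as needed
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
```

The eta-expansion in Z is exactly the delaying trick that the rewrite-only formalism does not need.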
The Ackermann function is a computable function which is not primitive recursive. It can be expressed in untyped lambda beta calculus (which is the version that I use).
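For reference, here is the standard recursive definition of the two-argument version; its explosive growth (even ack(4, 1) is impractical to compute this way) is what makes it a good stress test for the machine.

```python
def ack(m, n):
    # A(0, n) = n + 1
    if m == 0:
        return n + 1
    # A(m, 0) = A(m - 1, 1)
    if n == 0:
        return ack(m - 1, 1)
    # A(m, n) = A(m - 1, A(m, n - 1))
    return ack(m - 1, ack(m, n - 1))
```

For example, ack(1, n) = n + 2, ack(2, n) = 2n + 3 and ack(3, n) = 2^(n+3) - 3.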
Let’s see how it looks in graphic lambda calculus. I shall explain how its computation unfolds in at least one more post. In this one I shall concentrate on defining a sort of machine which does the computation of this function.
Very briefly, I introduce some macros (i.e. notations) for the relevant pieces which are used in this Ackermann machine.
These pacman macros are used for defining the Ackermann machine:
Some explanations and remarks are in order.
1. As you see, the Ackermann machine looks like a pacman munching pacmans, some of them controlled by other pacmans.
2. The Ackermann machine as presented here is not the graph obtained from the lambda calculus term which represents the Ackermann function (explanations in a future post).
3. The coloured rectangles can be any graph in with two entries and two exits. Depending on what these graphs are, the Ackermann machine does different things, one of them being the computation of the Ackermann function. The fact that such graphs are taken with two entries and two exits is reminiscent of the modelling of B-type neural networks with graphic lambda calculus, where synapses are also such graphs (this needs more explanation, left for later).
4. In particular, the coloured rectangles can be chosen in a way related to the Church encoding of the numerals. Indeed, look at the following figure, which contains examples of graphs which can be used as the coloured rectangles in the Ackermann machine.
It’s clear what they are: a kind of stack which accumulates copies of the same “elementary unit”, or which is otherwise empty (i.e. the first graph). I use these to obtain the Church numerals:
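The same “stack of n copies of an elementary unit” picture is the usual Church encoding of numerals; a quick sketch in plain lambda terms, written in Python only for convenience:

```python
# the Church numeral n applies f to x exactly n times: n = λf.λx. f(...(f(x)))
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

# decode a numeral by taking f = (+1) and x = 0,
# i.e. by counting the copies of the elementary unit in the stack
def to_int(n):
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
```

Here succ plays the role of pushing one more copy of the elementary unit onto the stack.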
5. The same can be done with the chemical concrete machine, of course. This observation is relevant because the way the machine functions (by local graph rewrites), when expressed in the chemical concrete machine formalism, will look like a chemical reaction network.
An interesting discussion (?) started in the comments of this John Baez G+ post concerns differences between “model of computation” and “programming language” denominations (in that post, in particular, for Petri nets).
I reproduce here what I think that are relevant bits for this discussion and later, after possibly several updates, I shall try to write something useful by using these bits.
1. Turing machine and lambda calculus are both models of computation, but lambda calculus is also a programming language and Turing machine is not.
2. Zachariah Hargis compares this model-of-computation vs programming-language distinction to the one made by Leibniz between calculus ratiocinator and lingua characteristica. (Among other references; note to self to explore further.)
3. Chemical reaction networks (CRNs) are a fundamental ingredient of molecular computing, no matter which formalization of CRNs is used. Don’t believe that the whole “computational part” of a CRN is a Petri net (because it is often very important which molecules and reactions are concretely involved in the CRN, not only the abstract number of species and the reaction rates between them).
4. Petri nets, as used in chemistry, are a tool for getting quantitative information out of CRNs, once the CRNs are designed by other means. Petri nets may be useful for thinking about CRNs, but they are not necessary for designing CRNs.
5. CRNs which are designed by using, or which in some sense embody, lambda calculus are a much more interesting path towards a “programming language”, and a classical one (Fontana and Buss, Algorithmic chemistry), than the more studied engineering-style implementation of imperative programming and the forced copy-paste of Turing-machine thinking into bio-computing.
6. Among the specifications for a “programming language for chemistry” are the following:
- (a) geometrical (in particular no use of currying, along with more obvious features such as being parallel and asynchronous, the latter being achievable already by many well-known, non-exotic programming languages),
- (b) purely syntactic (in particular no name management, nor input-output signalling),
- (c) (maybe an aspect of (b), but not sure) no use of evaluation in computation (strategy).
(The chemical concrete machine satisfies these requirements and moreover it contains lambda calculus, thus showing that such a “language” is possible. However, the chemical concrete machine is based on a made-up, artificial chemistry, thus it provides only a proof of principle for the existence of such a “language”. Or does it? Biochemists’ help is needed to identify, or search for, real chemical reactions which could be used to implement the chemical concrete machine in reality.)
7. The problem of tacit axioms in the history of the Church-Turing thesis might be especially relevant for biochemists, and it could also be mentioned as an argument in favour of making a distinction between “model of computation” and “programming language”: any model of computation uses some tacit axioms, while a programming language does not (tacit axioms enter only at the semantic level, when making (human) sense of the results and of the way the programming language functions). For biochemists, not using such tacit axioms is a must when they try to find scientifically valid explanations. CS does well by ignoring them, in most cases.
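To illustrate point 4 above, here is a minimal sketch of how a Petri-net-like description of a CRN yields quantitative information: species counts (places) plus reaction stoichiometry (transitions), stepped with the reaction-selection part of Gillespie’s stochastic algorithm (the time advance is omitted). The species, reactions and rate constants below are invented, purely for illustration:

```python
import random

# A Petri-net-like description of a toy CRN: places are species counts,
# transitions are reactions with stoichiometry and a rate constant.
state = {"A": 100, "B": 100, "C": 0}
reactions = [
    # (reactants, products, rate constant)
    ({"A": 1, "B": 1}, {"C": 1}, 0.01),   # A + B -> C
    ({"C": 1}, {"A": 1, "B": 1}, 0.1),    # C -> A + B
]

def propensity(reactants, state, k):
    # mass-action propensity: k times the number of reactant combinations
    a = k
    for s, n in reactants.items():
        for i in range(n):
            a *= max(state[s] - i, 0)
    return a

def gillespie_step(state, reactions, rng=random):
    # pick one reaction with probability proportional to its propensity
    props = [propensity(r, state, k) for r, _, k in reactions]
    if sum(props) == 0:
        return False
    reactants, products, _ = rng.choices(reactions, weights=props)[0]
    for s, n in reactants.items():
        state[s] -= n
    for s, n in products.items():
        state[s] = state.get(s, 0) + n
    return True

random.seed(0)
for _ in range(500):
    if not gillespie_step(state, reactions):
        break
print(state)  # counts after 500 reaction events
```

Note the quantitative invariant the net enforces: every firing conserves A + C and B + C, which is exactly the kind of information a Petri net view gives you without saying anything about what the molecules concretely are.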
My primary interest is “computing with space”. Some remarks by Allen Francom make me believe that it might be a way to give a “space” to the internet of things by using graphic lambda calculus. By that I mean the following: the internet of things is a huge collection of objects which are somewhere on Earth, each carrying a very small processor. The objects are in physical space, but they change their places all the time, and their processors are tiny, so one had better use tiny programs and simple executions for synchronizing their relative places with respect to one another.
That is a kind of “computing with space” and it fits well with the following problem with e-books raised by Mark Changizi, who says that e-books lack space, giving his (physical) library as an example. He knows that book A is the one close to the calculus book he uses when he needs some formulae, and that’s the way he is going to find book A. In a virtual library, by contrast, there is no space; the relative position of an e-book with respect to others is irrelevant, because when you click a link it is as if you teleport (ignoring space) from one place to another. My interpretation of this is: the way Changizi finds a book in his library is different from the way a robot Changizi would find the book. The robot Changizi would need the coordinates of book A with respect to a corner of the bookshelf, or some information like this, and then it would go and fetch the book. On the contrary, the human Changizi(’s brain) acquired a geometric competence by accumulating simple bits of knowledge about his library, namely that book A is near the calculus book, the calculus book is used when he needs formulae, and so on.
As you know, the set of things $T$ is the same as the trivial groupoid over $T$, with arrows being pairs of things $(a,b)$ and with composition of arrows $(a,b)(b,c) = (a,c)$. Now, let us define first what is the “space” around one object $a$ from $T$. (The same works for each object.)
For defining this you take a commutative group (it could be $(0,+\infty)$ with multiplication, or it could be the free commutative group over an alphabet containing for example “NEAR” and “FAR”, etc). Call this group the “scale group”. Then to any pair of objects $(a,b)$ and any scale $\varepsilon$, associate a “virtual pair at scale $\varepsilon$”, which is like a term in lambda calculus constructed from the variable names “a” and “b”, only that it does not satisfy the lambda calculus definition, but the emergent algebra definition. Of course you can see such terms as graphs in graphic lambda calculus made by the epsilon gates, along with the emergent algebra moves from graphic lambda calculus, as in
- Dictionary from emergent algebra to graphic lambda calculus (I)
- Dictionary from emergent algebra to graphic lambda calculus (II)
- Dictionary from emergent algebra to graphic lambda calculus (III)
Moreover, you can mix them with lambda calculus terms, as it is possible in graphic lambda calculus.
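As a toy illustration of the constructions above, here is a sketch (with invented names, and without the emergent algebra moves themselves) of the trivial groupoid over a set of things, and of a free commutative group over the alphabet {NEAR, FAR} used as a scale group:

```python
from collections import Counter

# The trivial groupoid over a set T: arrows are ordered pairs (a, b),
# and (a, b) composes with (b, c) to give (a, c).
def compose(arrow1, arrow2):
    a, b = arrow1
    b2, c = arrow2
    if b != b2:
        raise ValueError("arrows not composable")
    return (a, c)

# A free commutative group over {"NEAR", "FAR"} as a toy "scale group":
# elements are generators with integer exponents (a Counter), and the
# group operation adds exponents, dropping the zero ones.
def scale_mul(e1, e2):
    out = Counter(e1)
    for g, n in e2.items():
        out[g] += n
        if out[g] == 0:
            del out[g]
    return out

T = {"book_A", "calculus_book", "shelf_corner"}
print(compose(("book_A", "calculus_book"), ("calculus_book", "shelf_corner")))
# -> ('book_A', 'shelf_corner')

NEAR = Counter({"NEAR": 1})
assert scale_mul(NEAR, Counter({"NEAR": -1})) == Counter()  # inverses exist

# a "virtual pair at scale eps" is then just a pair of things tagged by
# a scale-group element; the emergent algebra structure lives in the
# moves acting on such pairs, which are not modelled here
virtual_pair = (("book_A", "calculus_book"), NEAR)
```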
In conclusion, I am very interested in practical applications of this, and I can at least formulate a list of needs, some of them with an IT flavour, others applied math. As for the pure math needs, they are pure fun and play. All this together would lead to really interesting things. One thing to keep in mind is that this would also endow the internet of things with a synthetic chemical connectome, as an extension of The chemical connectome of the internet.
Via Mark Changizi, I arrived at this post at It’s OK to be smart, which has as a conclusion: [for PhD students] “… faculty jobs ARE the alternative career”. From there I went to The Tenure Games, which a lot of people know. The problem is big, one hears about particular cases all the time, and a natural question is: is academia facing a real problem, or is it just about people who can’t stand the heat of fierce competition?
I think there is a more important side question: is academia delivering? It should, because it selects the best of the best, right?
Don’t just believe my suggestion to the contrary. Let’s see another, older academia in action, more than a century ago: [source]
The most famous art competition for students was the Prix de Rome. The winner of the Prix de Rome was awarded a fellowship to study at the Academie francaise’s school at the Villa Medici in Rome for up to five years. To compete, an artist had to be of French nationality, male, under 30 years of age, and single. He had to have met the entrance requirements of the Ecole and have the support of a well-known art teacher. The competition was grueling, involving several stages before the final one, in which 10 competitors were sequestered in studios for 72 days to paint their final history paintings. The winner was essentially assured a successful professional career.
The ultimate achievement for the professional artist was election to membership in the Academie francaise and the right to be known as an academician.
All this led, a hundred years later, to the decoration of chocolate boxes. Art followed another, much more creative path.
This is part of the same problem as academic publishing; what is intriguing is that the same parallel works.
In the following figure you see a chemical reaction network in the chemical concrete machine which involves only “+” enzymes.
- the molecule from the left upper corner corresponds to the combinator $\Omega = (\lambda x.xx)(\lambda x.xx)$,
- for each molecule I marked the available reaction sites, with dashed red closed curves, along with the name in red of the enzyme which is interested in the respective reaction site,
- for the chemical reactions, figured with blue arrows, I put the name of the move which corresponds to the respective chemical reaction, with an added “+” to indicate that the reaction is unidirectional, as if only “+” enzymes are available (that is, I wrote “β+” instead of “β” and so on; see the notations used in the chemical concrete machine for details),
- whenever there are non-overlapping reaction sites, we can perform the moves (reactions) in parallel,
- but in some places we have overlapping reaction sites, like in the case of the molecule from the 3rd row, left, where two reaction sites overlap.
- In case of overlapping reaction sites there are multiple possibilities, which produce branches in the chemical reaction network,
- I have not used any elimination of loops,
- several molecules from the figure don’t correspond to lambda calculus terms, thus what is figured is not a representation of the usual fact that the combinator $\Omega$ does not have a normal form (one which cannot be reduced further),
- in the middle of the figure we see a loop-producing cycle, hence the name,
- curiously, by going outside lambda calculus, we can reduce $\Omega$ (at the left upper corner) to a “normal” form (right lower corner), i.e. a loop and a molecule which does not have any “+” reaction sites. (The green fork node of that molecule is a fan-out node and the molecule is like a fan-out gate with one output curling back to the input of the fan-out, but remember that no signals circulate through the arrows :) )
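The branching caused by overlapping reaction sites can be shown with a toy rewriting system. This is only a stand-in (strings and made-up rules, not actual chemlambda molecules), but it illustrates how unidirectional local rewrites with overlapping sites make a reaction network branch:

```python
# A toy stand-in for unidirectional ("+"-enzyme style) local rewrites
# with overlapping reaction sites. The rules and "molecules" are
# invented; the point is only that overlapping sites produce branches.
rules = [("ab", "c"), ("ba", "d")]  # two one-way rewrites

def reaction_sites(mol):
    # all (position, rule index) pairs where a rule applies
    sites = []
    for i, (lhs, _) in enumerate(rules):
        start = 0
        while (pos := mol.find(lhs, start)) != -1:
            sites.append((pos, i))
            start = pos + 1
    return sites

def apply_site(mol, site):
    pos, i = site
    lhs, rhs = rules[i]
    return mol[:pos] + rhs + mol[pos + len(lhs):]

def all_reductions(mol, seen=None):
    # explore every branch of the (finite, here) reaction network
    seen = seen if seen is not None else set()
    if mol in seen:
        return seen
    seen.add(mol)
    for site in reaction_sites(mol):
        all_reductions(apply_site(mol, site), seen)
    return seen

# "aba" has two overlapping sites: "ab" at 0 and "ba" at 1 -- the
# network branches into "ca" and "ad", which cannot react further.
print(sorted(all_reductions("aba")))  # ['aba', 'ad', 'ca']
```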
UPDATE: For clarity, here is the usual cycle of reductions of the combinator $\Omega$, seen in graphic lambda calculus (but with the drawing conventions of the chemical concrete machine):
(A similar figure appears after the proof of Theorem 3.1 arXiv:1305.5786v2 [cs.LO] . )
In graphic lambda calculus we have the GLOBAL FAN-OUT move, which, as the name says, is a global move. In the chemical concrete machine we have only local moves. Here is how the same cycle is achieved in the chemical concrete machine.
You see a move (reaction) labelled “(multiple) CO-ASSOC”. The reason is that we need several CO-ASSOC moves to pass from the molecule at the right of the 2nd row to the one in the 3rd row. Alternatively, the same can be done with a SWITCH move, which is a succession of FAN-IN(+), CO-COMM(+) and FAN-IN(-), therefore unpleasant as well. Moreover, if we invent a SWITCH enzyme (i.e. if we accept SWITCH as a new move, which does not modify the whole chemical machine, because SWITCH is a consequence of the other moves), then we have an explosion of places where the SWITCH enzyme could act.
In conclusion, the usual endless reduction of $\Omega$ in lambda calculus is possible, but highly unlikely, in the chemical concrete machine: it happens only in the presence of CO-ASSOC or SWITCH enzymes, and moreover under tight control of the places where these enzymes act.
On the upper part of the stele of Hammurabi’s code of laws we see Shamash giving to Hammurabi the sumerian royal insignia [source]:
Another important concept in Sumerian theology, was that of me. The me were universal decrees of divine authority. They are the invocations that spread arts, crafts, and civilization. The me were assembled by Enlil in Ekur and given to Enki to guard and impart to the world, beginning with Eridu, his center of worship. From there, he guards the me and imparts them on the people. He directs the me towards Ur and Meluhha and Dilmun, organizing the world with his decrees. Later, Inanna comes to Enki and complains at having been given too little power from his decrees. In a different text, she gets Enki drunk and he grants her more powers, arts, crafts, and attributes – a total of ninety-four me. Inanna parts company with Enki to deliver the me to her cult center at Erech. Enki recovers his wits and tries to recover the me from her, but she arrives safely in Erech with them. (Kramer & Maier 1989: pp. 38-68)
A me for space. A program for using space.
This is a good introduction to the main subject of this post. My main interest lies in understanding “computing with space”, which is a (more general?) form of computing (than the one covered by lambda calculus), based on some variant of graphic lambda calculus. In this formalism we see trivalent graphs, subjected to graph rewrites. Some of these graphs can be associated to lambda calculus terms, i.e. to programs. Other graphs can be associated to emergent algebras, which, I claim, are the natural house of computing with space. They are programs for using space, they are mes for space, which you (or your brain, hypothetically) may use without appeal to geometric expertise, as Koenderink claims, in The brain a geometry engine, that the visual brain does.
This is a rather different image of what space is than the one offered by physicists. Think about space in terms of universal programs for using it, instead of about space as made of points, or as a smooth or fractal passive receptacle.
The graphic lambda calculus has to be modified in certain ways, which I shall explain in this and later posts, as it happened with the chemical concrete machine, which is another modification of graphic lambda calculus, Turing universal, which expresses programs (lambda calculus terms) as molecules and graph rewrites as enzymes.
The strangest, but possibly the most powerful, feature of these graphic languages is that they don’t use names and they don’t need evaluations in order to perform reductions (i.e. program executions). This feature makes them good candidates for use as models of brain mechanisms, because in no biological brain will you find physical implementations of names, alphabets or evaluations in the CS sense.
Enough with the controversial claims. Concretely, the graphic formalism which I want to develop is already sketched in the following posts:
and in the series
- Dictionary from emergent algebra to graphic lambda calculus (I)
- Dictionary from emergent algebra to graphic lambda calculus (II)
- Dictionary from emergent algebra to graphic lambda calculus (III)
The elementary gates, or nodes, of the formalism I am looking for will be (expressed in terms of graphic lambda calculus, but really taken afterwards as fundamental, not derived)
Add to these the fan-in and fan-out gates from the chemical concrete machine.
The main move is the beta move (see a proof of this move in the graphic lambda calculus formalism in the post Better than extended beta move).
In few words, the “me for space” which I propose is a deformation of the chemical concrete machine formalism with a scale parameter.
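For readers who want something concrete, here is a schematic sketch of the graphic beta move as pure edge rewiring, on a made-up port-graph representation (the port names are my own convention; this is an illustration, not the formalism itself):

```python
# A schematic sketch of the graphic beta move as edge rewiring. In this
# invented representation, a lambda node carries edges (var, body, out)
# and an application node carries edges (func, arg, out). When a
# lambda's "out" edge is the application's "func" edge, the beta move
# deletes both nodes and reconnects:
#   lambda.body -> application.out,  application.arg -> lambda.var
# No names and no evaluation are involved, only rewiring.

def graphic_beta(lam, app):
    assert lam["out"] == app["func"], "nodes not connected for beta"
    # return the two reconnections produced by the move
    return [(lam["body"], app["out"]), (app["arg"], lam["var"])]

lam = {"var": "e1", "body": "e2", "out": "e"}
app = {"func": "e", "arg": "e3", "out": "e4"}
print(graphic_beta(lam, app))  # [('e2', 'e4'), ('e3', 'e1')]
```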
This is a continuation of the post WWW with Metabolism. That post, in a nutshell, proposes to endow the internet connectome with a chemical connectome, by using an artificial chemistry for this artificial network, namely the graphic form of the lambda calculus formalism given by the chemical concrete machine (please go read the mentioned post for details).
I’m going to do a weird, but maybe instructive, thing and pretend that it has already happened. Why? Because I intend to argue that the internet with a chemical connectome might be a good laboratory for testing ideas about real brains.
The name “chemical connectome” is taken from the article The BRAIN Initiative: Toward a Chemical Connectome. It means the global chemical reaction network associated with the “connectome”, which is the physical neurons-and-synapses network. Quoting from the article:
Focusing solely on neuronal electrical activity is shortsighted if the goal is to understand information processing in brains. Brain function integrally entails complex synaptic chemistries. [...] New tools fuel fundamentally new conceptualizations. By and large, today’s approaches for sensing neurotransmitters are not able to approach chemical neurotransmission at the scales that will ultimately be important for uncovering emergent properties. Likewise, most methods are not highly multiplexed such that the interplay of multiple chemical transmitters can be unraveled. In addition to measuring electrical activity at hundreds or thousands of neurons simultaneously, we need to advocate for a large-scale multidisciplinary effort aimed at in vivo nanoscale chemical sensing.
Imagine then that we have already succeeded in implementing a (logical, artificial, human-made) chemical connectome over the internet, by using CS tools and exploiting the relative simplicity of the ingredients of the internet (connected Von Neumann machines which exchange data through known formats and programming languages). We could explore this chemical connectome and we could play with it (because we defined it), even if, as in the case of the brain connectome, we can’t possibly know the www network in all details. So we would be in a situation which is extremely complex, like a brain, but which allows for very powerful experimentation, because it is still human-made, according to human rules.
The article Who’s afraid of peer-review? by John Bohannon appeared in Science. There have been many reactions to this article already; I’ll add mine.
I shall politely pretend that the article is not a piece of propaganda against OA. (Remark, ironically, that in order to enhance its dissemination, Science did not hide it behind a paywall.)
Then, it’s like a gun, which can be used by anybody for shooting in whatever direction they like. Pick your line:
- Gold OA journals are afraid of peer review (because Bohannon uses a list made up exclusively of Gold OA journals)
- Traditional peer review is a joke, should be improved by making it more open, and perpetual, (cf Michael Eisen)
- DOAJ is a joke (because Bohannon used DOAJ as one of the sources for building his list and some of the DOAJ listed journals accepted the flawed articles)
- DOAJ is not a joke (not all DOAJ listed journals from Bohannon list accepted the flawed articles)
- Beall’s list is good (a lot of predatory publishers from Beall’s list accepted the flawed articles)
- Beall’s list is not entirely good (however, some of the publishers listed by Beall did not accept the flawed articles)
- Study flawed, pay-back by Science after the arsenic life story (cf Mike Taylor)
- All OA journals are bad quality (yes, and salami-slicing publishing is ethical, provided it is published by legacy publishers)
- We should trust only ISI journals (sure, boss, but I like DORA)
What if, when faced with spin, we instead declared that anybody has the right to form their own opinion, based on the abundant evidence available? Instead of looking at DOAJ through an authority lens, why not acknowledge that DOAJ is a useful tool, not an authority argument. The same goes for Beall’s list. They don’t have to show us perfect lists, they just help us with some information. We have to use our brains, even if the legacy publishers don’t like that. We don’t have to believe what somebody says based only on the authority of that person. For example, look at Peter Suber’s post: you don’t have to believe him. It’s up to you to read Bohannon’s article, then Suber’s reaction, or Eisen’s, or Taylor’s, whatever you want to look at, and then form your own opinion.
It goes the same when it comes to legacy publishers, to ISI lists, to research articles. Read them, use as much information as you can gather, and form your own opinion. Don’t rely on authority; there’s something better these days, called free access to information. If you are lazy and don’t want to make up your own mind, well, it’s not my problem, because, like never before in history, information is not scarce.
… they know surprisingly much, depending on the choice of definition of “neural knowledge”. The concrete definition which I adopt is the following: the knowledge of a neuron at a given moment is the collection (multiset) of molecules it contains. The knowledge of a synapse is the collection of molecules present in the respective axon, dendrite and synaptic cleft.
I take the following hypotheses for a wet neural network:
- the neural network is physically described as a graph with nodes being the neurons and arrows being the axon-dendrite synapses. The network is built from two ingredients: neurons and synapses. Each synapse involves three parts: an axon (associated to a neuron), a synaptic cleft (associated to a local environment) and a dendrite (associated to a neuron).
- Each of the two ingredients of a neural network, i.e. neurons and synapses, as described previously, function by associated chemical reaction networks, involving the knowledge of the respective ingredients.
- (the most simplifying hypothesis) all molecules from the knowledge of a neuron, or of a synapse, are of two kinds: elements of the set of molecules, or enzyme names, from the chemical concrete machine.
The last hypothesis seems to introduce knowledge with a more semantic flavour by the backdoor. That is because, as explained in arXiv:1309.6914, some molecules (i.e. trivalent graphs from the chemical concrete machine formalism) represent lambda calculus terms. Terms are programs, and moreover the chemical concrete machine is Turing universal, therefore we end up with a rather big chunk of semantic knowledge in a neuron’s lap. I intend to show you this is not the case: in fact a neuron, or a synapse, does not have (or need) this kind of knowledge.
Before giving this explanation, I shall explain in just a bit more detail how a wet neural network which satisfies those hypotheses works. A physical neuron’s behaviour is ruled by the chemistry of its metabolic pathways. By the third hypothesis these metabolic pathways can be seen as graph rewrites of the molecules (more about this later). As an effect of its metabolism, the neuron has an electrical activity which in turn alters the behaviour of the other ingredient, the synapse. In the synapse act other chemical reaction networks, which are amenable, again by the third hypothesis, to computations with the chemical concrete machine. As an effect of the action of these metabolic pathways, a neuron communicates with another neuron. In the process the knowledge of each neuron (i.e. its collection of molecules) is modified, and the same is true of a synapse.
As concerns chemical reactions between molecules, in the chemical concrete machine formalism only one type of reaction is admissible, namely the reaction between a molecule and an enzyme. Recall that if (some of the) molecules are like lambda calculus terms, then (some of the) enzymes are like names of reduction steps, and the chemical reaction between a molecule and an enzyme is assimilated to the respective reduction step applied to the respective lambda calculus term.
But, in the post SPLICE and SWITCH for tangle diagrams in the chemical concrete machine I proved that in the chemical concrete machine formalism there is a local move, called SWITCH
which is the result of 3 chemical reactions with enzymes, as follows:
Therefore the chemical concrete machine formalism with the SWITCH move added is equivalent to the original formalism. So we can safely add the SWITCH move to the formalism and use it for defining chemical reactions between molecules (maybe also by adding an enzyme, or more, for the SWITCH move; let’s call them SWITCH enzymes). This mechanism gives chemical reactions between molecules of the form $A + B + switch \rightarrow A' + B' + GARBAGE$,
where $A$ and $B$ are molecules such that, by taking an arrow from $A$ and another arrow from $B$, we may apply the enzyme and produce the SWITCH move for this pair of arrows, which results in the new molecules $A'$ and $B'$ (and possibly some GARBAGE, such as loops).
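Under the hypotheses above, such a reaction acts on a neuron’s knowledge seen as a multiset of molecules. Here is a minimal sketch (all molecule and enzyme names are invented for illustration):

```python
from collections import Counter

# A neuron's "knowledge" as a multiset of molecules and enzymes.
# The names are invented; the reaction below has the shape
#   A + B + switch -> A' + B' (+ loop garbage),
# a SWITCH enzyme consuming two molecules and producing two new ones.
knowledge = Counter({"mol_A": 2, "mol_B": 1, "switch": 3})

def react(knowledge, reactants, products):
    # fire one reaction if all reactants are present: consume, then produce
    if any(knowledge[r] < n for r, n in reactants.items()):
        return False
    for r, n in reactants.items():
        knowledge[r] -= n
    for p, n in products.items():
        knowledge[p] += n
    return True

fired = react(knowledge,
              {"mol_A": 1, "mol_B": 1, "switch": 1},
              {"mol_A'": 1, "mol_B'": 1, "loop": 1})
print(fired, dict(knowledge))
```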
In conclusion, for this part concerning possible chemical reactions between molecules, we have enough raw material for constructing any chemical reaction network we like. Let me pass to the semantic knowledge part.
Semantic knowledge of molecules. This is related to evaluation and it is maybe the least understood part of the chemical concrete machine. As background, see the post Example: decorations of S,K,I combinators in simply typed graphic lambda calculus, where the same phenomenon is explained (without any relation to chemical metaphors) for the parent of the chemical concrete machine, graphic lambda calculus.
Let us consider the following rules of decorations with names and types:
If we consider decorations of combinator molecules, then we obtain the right type and identification of the corresponding combinator, like in the following example.
For combinator molecules, the “semantic knowledge”, i.e. the identification of the lambda calculus term from the associated molecule, is possible.
In general, though, this is not possible. Consider for example a 2-zipper molecule.
We obtain the decoration as a nested expression, which is enough for performing two beta reductions without knowing what the decorating terms mean (without the need to evaluate them). This is equivalent to the property of zippers to allow only a certain sequence of graphic beta moves (in this case two such moves).
Here is the tricky part: if we look at the decorating term, then all that we can write after the beta reductions is only formal, with all the possible problems related to variable name clashes and the order of substitutions. We can write this reduction, but we don’t get anything from it; it still needs further information about the relations between the variables and the terms.
However, the graphic beta reductions can be done without any further complication, because they don’t involve any names, neither of variables nor of terms.
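The variable name clash mentioned above can be shown in a few lines: a deliberately naive, name-based substitution captures a free variable, which is exactly the kind of accident the nameless graphic beta move cannot have. This is a toy illustration, not part of the formalism:

```python
# Naive, name-based substitution in lambda terms, to show the variable
# capture problem that the (nameless) graphic beta move avoids.
# Terms: ("var", x) | ("lam", x, body) | ("app", f, a)

def subst(term, x, value):
    # WRONG on purpose: substitutes under binders without renaming
    kind = term[0]
    if kind == "var":
        return value if term[1] == x else term
    if kind == "lam":
        if term[1] == x:
            return term  # x is rebound here, stop
        return ("lam", term[1], subst(term[2], x, value))
    return ("app", subst(term[1], x, value), subst(term[2], x, value))

# Substituting the FREE variable y for x inside lam y. x:
# the free y is captured by the binder, and the constant function
# wrongly becomes the identity.
body = ("lam", "y", ("var", "x"))
print(subst(body, "x", ("var", "y")))  # ('lam', 'y', ('var', 'y'))
```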
Remark that the decoration is made such that:
- the type decorations of arrows are left unchanged after any move
- the term or variable decorations (names, elsewhere “places”) change globally.
We indicate this global change like in the following figure, which is the result of the sequence of the two possible moves.
Therefore, after the first graphic beta reduction, we write a new decoration to indicate that it is obtained globally (i.e. with respect to the whole graph of which the 2-zipper is a part) and that it replaces the older one. After the second graphic beta reduction we use the same notation.
But such indications are even misleading if, for example, there is a path made of arrows outside the 2-zipper which connects the arrows bearing the two decorations. In order to have a better notation, we should then iterate the replacement, which gives rise to a notation for a potentially infinite process of modifying the decoration. So, once we use graphs (molecules) which do not correspond to combinators (or to lambda calculus terms), we are in big trouble if we try to reduce the graphic beta move to term rewriting, or to reductions in lambda calculus.
In conclusion for this part: the decorations considered here, which add a semantic layer to the pure syntax of graph rewriting, cannot be used as a replacement for the graphic molecules, nor should chemical reactions be equated with reductions, except in the cases when we have access to the whole molecule and, moreover, the whole molecule is one which represents a combinator.
So, in this sense, the syntactic knowledge of the neuron, consisting of the list of its molecules, is not enough for deducing the semantic knowledge, which is global, coming from the simultaneous decoration of the whole chemical reaction network associated to the whole neural network.
Knot diagrams can be implemented in the chemical concrete machine. This result has been already touched in the post Local FAN-IN eliminates GLOBAL FAN-OUT (II). Here I shall give a short and clear exposition.
1. Oriented knot diagrams. I shall describe the larger collection of oriented tangle diagrams. A knot is a proper embedding of an oriented circle in 3D space. A tangle is a proper embedding of a finite collection of oriented 1D segments (arcs) and circles in 3D space. A tangle diagram represents a projection of a tangle in general position on a plane (i.e. such that different pieces of arcs or circles do not project onto tangent curves in the plane), along with supplementary information at the crossings of the projections (i.e. when the projections of two pieces of arcs cross, there is extra information about which piece passes over). In this description, a tangle diagram is a 4-valent, locally planar graph, made of two elementary nodes, called “crossings” (usually tangle diagrams are defined as globally planar graphs made of crossings), and of loops.
2. Oriented Reidemeister moves. Two (globally planar) tangle diagrams represent the same 3D tangle, up to regular deformations in 3D, if and only if we can pass from one tangle diagram to the other by a finite sequence of oriented Reidemeister moves (I use here, as previously, the names from Michael Polyak, “Minimal generating sets of Reidemeister moves”, except that I use the letter “R” instead of the letter “Ω”).
These are 16 local graph rewrites (local moves) acting on the collection of tangle diagrams.
3. My purpose now is to give an implementation of tangle diagrams (not especially globally planar ones, but the more general locally planar ones) in the chemical concrete machine formalism. For this I have to define the elementary nodes of tangle diagrams as molecules in the chemical concrete machine formalism. This is easy: the loops are the same as in the chemical concrete machine, the crossings are defined as in the following figure:
Each oriented tangle diagram is translated into a molecule from the chemical concrete machine, by using this definition of crossings. As a consequence, each oriented Reidemeister move becomes a local move in the chemical concrete machine, simply by translating the LHS and RHS tangle diagrams into molecules.
4. I have to prove that these translations of the oriented Reidemeister moves are compatible with the moves from the chemical concrete machine in the following sense: each translation of a Reidemeister move can be obtained as a finite chain of moves from the chemical concrete machine.
It is easier to proceed the other way around, namely to ground the reasoning in the universe of oriented tangle diagrams. I shall translate back some moves from the chemical concrete machine, seen as acting on oriented tangle diagrams. See this post at mathoverflow for context; it is basically almost all we need for the proof, supplemented only by the proof of the SWITCH move, as you shall see.
The graphic beta move in the chemical concrete machine translates as a SPLICE move (it has two variants) over oriented tangle diagrams. The elimination of loops becomes a LOOP move. These moves are described in the next figure.
You can find in the mathoverflow post a proof that all oriented Reidemeister moves, with the exception of R2c, R2d, R3a, R3h, are a consequence of a finite number of SPLICE and LOOP moves.
It is left to prove that R2c, R2d, R3a, R3h can be expressed by (translations of) some chains of chemical concrete machine moves. It turns out that we can reduce all these moves to the move R2d by using SPLICE and LOOP, therefore the only thing left is to look at R2d.
By SPLICE and LOOP, the left hand side of R2d becomes:
So R2d is equivalent with the following SWITCH move:
Finally, in the post Local FAN-IN eliminates GLOBAL FAN-OUT (II) there is a proof that the SWITCH move can be obtained from CO-COMM and FAN-IN moves. The next figure shows this.
This concludes the proof that R2d can be expressed by a chain of moves from the chemical concrete machine.
While I was trying to convince biochemists (I’m still trying) to use the Chemical concrete machine for a variety of goals, from bio-computing to understanding brains, Stephen Paul King came with an awesome suggestion, which evolved into the following idea:
The WWW is an artificial, human-made network and the Chemical concrete machine (chemlambda) is artificial, human-made, computing friendly chemistry. Let’s use the chemical concrete machine to awake the net by giving it a metabolism.
Together with Louis Kauffman, we are trying to make some fine mathematics with real world implications out of it. Care to join? Then send me or Stephen a message.
Here is a list of arguments in favor of this idea:
- it is much simpler to use a made-up, simplified chemistry on a network much simpler than a brain
- both the WWW and the chemical concrete machine (which is Turing universal) belong to the same (computational) universe
- in silico experiments with WWW + chemlambda correspond to in vivo experiments with wet neural networks
- it is scalable
- may have lots of real life CS applications
- it’s mathematically friendly, come on pure mathematicians, you are needed
- it’s based on lambda calculus, so it’s already incredibly cool, as adepts of functional programming might confirm.
where you can see two lambdas arranged into a double helix. It’s better than this [source]:
UPDATE: see the more recent post Fraglets, bionets, and the www with metabolism for relevant research already done on the www with metabolism, which could be very useful.
I’m working on a draft of the Chemical concrete machine paper, and I look forward to receiving your input on it.
As you may see, the last part (section 4) is not yet finished, but it will be, tomorrow, along with any correction or suggestion I receive. There is not yet an acknowledgements section, but one will be added soon, hopefully mentioning all the persons who helped me with these ideas. Also, of course, the bibliography is not in final form.
As you see, it is still in an intermediate form between the paper style and the web tutorial style; I have not decided yet whether it is better to behave as if the net does not exist, or to use a more colloquial form of exposition.
I welcome and ask for your corrections, suggestions, whatever.
UPDATE: I think that’s a stable version, appeared as arXiv:1309.6914
I’m talking about math related subjects. Why do we need to learn to count? Why is it not obvious how to count up to 5, for example? Why is it not obvious how to add 2 and 3? Why do we need to learn to evaluate a boolean expression? Which part of it is not obvious from the definition? Why do we need to exercise? If, in lambda calculus, the beta reduction is defined as $(\lambda x . M) N \to M[x := N]$, then why is it not instantaneously obvious that if $M \to M'$ then for any term $N$ we can reduce $M N$ to $M' N$?
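To make the beta reduction question concrete, here is a minimal runnable sketch (the tuple encoding, the naive capture-ignoring substitution and the leftmost strategy are my own illustrative choices, not part of any formalism discussed here):

```python
# Toy lambda terms as nested tuples:
#   ('var', name), ('lam', name, body), ('app', fun, arg).

def subst(term, name, value):
    """Naive substitution term[name := value]; assumes bound names never
    clash with free names (a real implementation needs alpha-renaming)."""
    kind = term[0]
    if kind == 'var':
        return value if term[1] == name else term
    if kind == 'lam':
        if term[1] == name:            # name is shadowed, stop here
            return term
        return ('lam', term[1], subst(term[2], name, value))
    return ('app', subst(term[1], name, value), subst(term[2], name, value))

def beta_step(term):
    """One leftmost beta step: (lambda x. M) N -> M[x := N], applied also
    under applications, so M N -> M' N whenever M -> M'."""
    if term[0] == 'app':
        f, a = term[1], term[2]
        if f[0] == 'lam':                   # redex at the head
            return subst(f[2], f[1], a)
        return ('app', beta_step(f), a)     # compatibility with application
    return term

identity = ('lam', 'x', ('var', 'x'))
t = ('app', ('app', identity, ('var', 'y')), ('var', 'z'))
print(beta_step(t))   # ('app', ('var', 'y'), ('var', 'z'))
```

Even this dumb sketch needs machinery the one-line definition never mentions: a term format, a traversal order, a shadowing check.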
If you don’t like examples from mathematics learning, then I am puzzled, because mathematician = learner. The word comes from an ancient Greek Pythagorean sect, the branch of people dedicated to learning.
Learning is a process which needs time to unfold. What happens in our brains during this time? It must be that we rely too much, without acknowledging this in mathematical approaches, on a collection of simpler, not goal oriented brain activities, like for example pattern recognition. Wait, what IS a pattern? And what does “recognition” mean?
Let’s take it another way. I shall make the dumbest hypothesis. When we learn a new (for us) mathematical definition, we need to transform it into “brain language”. We need to “compile” it into something which happens physically in the brain. (Btw, I am completely skeptical that whatever happens in the brain can be described by a language. Main argument: vision, which occupies a large part of the brain activity of every creature which possesses it.) So, the mathematical definition (say, of the truth value of a boolean expression) has to be translated into something else, comprising a procedure to parse the expression, passing through all sorts of pattern recognition and all kinds of high level human brain activity. Or even better, we write a program for solving this problem. The computer “learns”, i.e. compiles the program, and then it works as nicely as a mechanical clock. For humans, not so much, because we may forget how to do it, then recall some ideas “behind the definition”, we say, and reconstruct the procedure from scratch, helped by memories of the day we first learned it, past mistakes and common sense reasoning.
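The “program” alluded to above, as a minimal sketch (the nested-tuple expression format is a made-up encoding of mine): even a dumb evaluator of boolean expressions needs things the textbook definition never states, like an explicit expression format, a traversal order, and an environment for names.

```python
def evaluate(expr, env):
    """Evaluate a boolean expression given as nested tuples,
    e.g. ('and', 'p', ('not', 'q')), against an environment of names."""
    if isinstance(expr, str):          # a variable name
        return env[expr]
    op, *args = expr
    if op == 'not':
        return not evaluate(args[0], env)
    vals = [evaluate(a, env) for a in args]
    return all(vals) if op == 'and' else any(vals)   # 'and' / 'or'

e = ('and', 'p', ('not', ('or', 'q', 'r')))
print(evaluate(e, {'p': True, 'q': False, 'r': False}))   # True
```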
Brains, let’s take this as part of the dumbest hypothesis, are just networks of neurons, which send signals to one another and which modify themselves and their connectivity according to some simple rules, based on mechanical clockwork (physics) and chemistry rules, satisfied by any part of the physical universe, not only by the brain. So, whatever happens when we learn a mathematical concept, we transform it into some very simple thing in the brain. Forget about categories, truth values, group operations, definitions and theorems. No, there must be something which mimics a pattern of behaviours associated with the implicit expectations of people writing definitions, theorems, problems and exercises. Should we write a program for any of the items mentioned in the questions from the first paragraph, we would see that we have to take care of a lot of things not mentioned in definitions, like properly managing variable names, again some mechanized form of pattern recognition, along with more or less precise knowledge about the programming language we use and its idiosyncrasies.
The more we try to approach the realm of the obvious, the more abstract it becomes. And in our brains, learning to ride a bike or to evaluate boolean expressions is equally concrete, it’s physical.
They say “on the internet nobody knows you’re a dog“, and I like this a lot. It tells me that communication is enhanced by renouncing vanity. Unfortunately this seems to me the most important stumbling block in the path towards a better research communication system. Because researchers, statistically speaking, have been selected for at least 40 years by vanity criteria. That is why we see better communication among young researchers: not only because they are young, but because their vanity is still low, making them more open to receiving and giving new ideas.
No technical improvement, no OA schema other than the most idealistic ones, will enhance communication. You don’t believe me? Look then at what is available: does it work? Slowly but surely we shall pass to OA entirely, but so slowly it’s almost at historical scale. Recently it was officially discovered that gold OA is worse than green OA. Great discovery, but it was obvious from the beginning. I believe that sooner rather than later, but don’t worry, not too soon, we shall renounce the article format for communicating research.
Which brings me to the real subject of this post. I am not sure if this blog/open notebook is the right format for communication. I am looking for collaborations with the rare, but surely existing, creative people who, like me, are more curious than vain. There is a great amount of lurking around this blog. I can see there’s a lot of interest both in the graphic lambda calculus and in the chemical concrete machine. The lack of input I get from these lurkers worries me. This blog documents the evolution of some ideas from the initial vague start to some products, like the ones mentioned here. Don’t you understand that these are just byproducts of a very enjoyable exploration? And these byproducts could be much, much more fun if they were the result of a dialogue.
That is why I am looking for alternatives which might enhance this potential collaboration with other creative dogs around the world.
… over other computing formalisms, is contained in the following wise words:
Indeed, usually a FAN-OUT gate is something which has a variable as an input and two copies of it as an output. That is why FAN-OUT gates are not available in every model of computation; for example, they are forbidden in quantum computing, where arbitrary states cannot be cloned.
But if you don’t use variable (names) and there’s nothing circulating through the wires of your computer model, then you can use the FAN-OUT gate with impunity, on the condition of having something which replaces the FAN-OUT behaviour without its bad sides. Consider graph rewriting systems for your new computer.
This is done in the chemical concrete machine, with the help of DIST enzymes and associated moves (chemical reactions). (“DIST” comes from distributivity.)
In graphic lambda calculus, the parent of the chemical concrete machine, I proved that combinatory logic can be done by using the available local moves and one global move, called GLOBAL FAN-OUT. This global move is the one that most resembles the behaviour of a usual FAN-OUT gate: a graph connected to the input of a FAN-OUT gate is replaced by two copies of the graph.
That’s bad, I think, so in the chemical concrete machine I managed to prove that GLOBAL FAN-OUT can be replaced, as concerns graphs (or molecules, in the chemical concrete machine formalism) which represent combinators, by successions of local DIST moves (and some other local moves).
This is possible precisely because there are no variable names. Moreover, there’s something almost biological in the succession of moves: we see how combinators reproduce.
As an illustration, the following is taken from the post Chemical concrete machine, detailed (V):
Here are the four “molecules” which represent the combinators B, C, K, W. (Via the red-green vs black-white change of notation, they can be deduced from their expressions in lambda calculus by the algorithm described here.)
Let’s see how the “molecule” K behaves, when connected to a FAN-OUT gate (green node with one input and two outputs):
The “reproduction” of the molecule B is more impressive:
In the formalism of the chemical concrete machine, the first is a distributivity move (or “enzyme” which facilitates the move in one direction, preferentially), and the second is a FAN-IN move (facilitated in one direction).
See more about this in the Chemical concrete machine tutorial.
This makes me believe that, as long as we don’t reason in terms of states (or any other variables), it is possible to have FAN-OUT gates in quantum computation.
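To illustrate the point with something runnable, here is a toy sketch of my own (far simpler than the machine’s actual molecules and moves): duplicating a term-graph by a succession of purely local steps, one DIST-like step per application node and one single-node copy per leaf, never one global “copy the whole graph” move.

```python
# Terms are ('leaf', name) or ('app', left, right).  A FAN-OUT applied to
# a term is realized as a chain of local moves, logged one by one.

def local_fanout(term, log):
    if term[0] == 'leaf':
        log.append(('copy-node', term[1]))    # local: duplicate a single node
        return term, term
    log.append(('DIST', 'app'))               # local: fan over an application
    l1, l2 = local_fanout(term[1], log)       # the fans are pushed inward
    r1, r2 = local_fanout(term[2], log)
    return ('app', l1, r1), ('app', l2, r2)

K = ('app', ('leaf', 'K'), ('leaf', 'x'))     # stand-in for a combinator graph
steps = []
copy1, copy2 = local_fanout(K, steps)
print(copy1 == copy2 == K, len(steps))        # True 3  (1 DIST + 2 node copies)
```

Each recursive call touches only one node, which is the whole point: no step ever needs to see the graph globally.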
I want to explain what I intend to do further in the program of “computing with space”, seen in the restricted sense of making sense of space in graphic lambda calculus. The explanation itself, if well done, would amount to achieving the program, therefore, at this stage, it can’t be too exact.
I think that a good definition of space is relational, based on its computational content, and not on “points” or other geometric objects and axioms over those. Obviously, a good definition of space should not be “spatial” itself, otherwise it would be self-referential.
My belief, expressed mathematically, is that space (places) and types are simply decorations over graphs in graphic lambda calculus, following the rules from the post Types and places.
If $x$ is a place and $A$ is a type, then the decoration means “there is an $A$ at the place $x$”; for example it may mean there is a car at $x$. (This interpretation is what I use to make sense of things in my mind, beyond the mathematical formalism, and is therefore subject to change.) The name itself is just a name (of a variable or term); it does not have any other meaning than the one which can be grasped in relation with the other names for variables and terms. Recall that in graphic lambda calculus there are no names for variables and terms.
In the post Better than extended beta move I introduced two macros which I think are better than the application and abstraction gates. Here they are again, for an arbitrary “scale parameter” $\varepsilon \in \Gamma$, where $\Gamma$ is a commutative group (think, for example, about the free abelian group over an alphabet, if you don’t like to think about $\Gamma$ as the group of reals):
Graphic lambda calculus can be reformulated by using these “scaled” application and abstraction gates as fundamental gates. This has as effects:
- elimination of the move ext2, replaced by the definition of the application and abstraction gates as the “scaled” gates at scale $\varepsilon = 1$
- condensation of the graphic beta move and the move R2 into one scaled version of the graphic beta move, described in Better than extended beta move.
The subtle part is that, if we look at the decorations, then the types decoration is left unchanged by the moves, but the place decoration is globally affected by the moves (i.e. “logical” moves, like the graphic beta move, change the space).
The other subtle part consists in extending the
- Dictionary from emergent algebra to graphic lambda calculus (I)
- Dictionary from emergent algebra to graphic lambda calculus (II)
- Dictionary from emergent algebra to graphic lambda calculus (III)
to these scaled application and abstraction gates. You shall see that rather amazing things happen, when we look at decorations with places.
I am going to use lessons learned from the Chemical concrete machine. If we use the scaled application and abstraction gates (and the FAN-OUT gate, of course) instead of the four gates of the original graphic lambda calculus, this makes 3 gates instead of four. There is then enough room to add a FAN-IN gate, as in the chemical concrete machine (while renouncing the identification of it with one of the other gates, as indicated in the chemical concrete machine formalism). This way we shall have four gates again, along with some natural distributivity moves, namely the ones from the chemical concrete machine and the one called the “mystery move” in the series on the dictionary from emergent algebra to graphic lambda calculus. In this way, as long as we stay in the “scaled version of the combinatory logic sector” (which will contain BOTH the emergent algebras and the combinatory logic, interacting in beautiful ways), we will not need the GLOBAL FAN-OUT move.
After these mildly technical things are done, interpretations in words will follow.
Okay, I see you understand the basic principles. Now let’s move to more complex things. I can’t share the exact UD algorithm (you must understand me, I’ve spent years mastering it) but I can express in code what I have already said in the interviews. The thing I want to say is: you must use sorting and eliminate raycasting.
The sorting algorithm I introduce here is very, VERY simple. You can optimize it a lot using octrees and more elegant heuristics than that. But keep it simple.
You’ve mentioned this http://rghost.net/48541594 dataset; I had some problems downloading it, but I managed, and can say it is good for testing, you may use it with my program.
Here is the code I am talking about. If you’re smart enough, you’ll get benefit from it.
Question: why does it work?
UPDATE: It’s not clear if it really works; the quantitative improvement given by the random permutation trick is unknown. If you care, the BruceRDell thing is a prank.
I would like to understand the following:
- what is the role of the random permutation?
- why does the main loop work?
- is this already done? (send links to papers, if any)
So, let’s have a polite and informative discussion in the comments, if you agree. We all write in some variant of broken English, so I don’t see why anybody couldn’t participate.
Not interested in: is this the real Bruce Dell? Is this the real UD algorithm? I am only interested in understanding the guts of it.
I finish with the video from b@b’s comment:
Instead of the extended graphic beta move (proposed here and declared still in beta version in the graphic lambda calculus tutorial), the idea is to couple the Reidemeister 2 move R2 with the graphic beta move. Here is how it can be done.
Let us define, for any scale parameter $\varepsilon$, the following versions of the application gate and abstraction gate:
Remark that when $\varepsilon = 1$ we recover the usual application gate
and the usual abstraction gate
The graphic beta move and the move R2 can be coupled into the following nice move:
This will be used for making clear where the space is encoded in graphic lambda calculus, and how. Of course, it has to do with emergent algebras, seen in graphic lambda calculus. If you want to get ahead of the explanations and figure it out by yourself, then the following posts will help:
- Dictionary from emergent algebra to graphic lambda calculus (I)
- Dictionary from emergent algebra to graphic lambda calculus (II)
- Dictionary from emergent algebra to graphic lambda calculus (III)
(in case you see ads here: they have nothing to do with me or with this blog)
All the steps of the editorial process used by legacy publishers are obsolete. To see this, it is enough to ask “why?”.
- The author sends the article to the publisher (i.e. “submits” it). Why? Because in the old days the circulation and availability of research articles was handled almost exclusively through the intermediary of the publishers. The author had to “submit” (to) the publisher in order for the article to enter the processing pipe.
- The editor of the journal seeks reviewers based on ___________ [please add your suggestions], which amounts to hunches, friends advice, basically thin air. Why? Because, in the days when we could pretend we can’t search for every relevant bit of information, there was no other way to feed our curiosity but from the publishing pipe.
- There are 2 reviewers who make reports. (With the author, that makes 3 readers of the article, statistically more than 50% of the readers the article will have, once published.) Why? Because the pyramidal way of organization was, before the net era, the most adapted. The editor on top delegates the work to reviewers, who call back the editor to inform him first, and not the author, about their opinion. The author worked, let’s say, for a year, and the statistically insignificant number of 2 other people form an opinion on that work in … hours? days? maybe a week of real work? No wonder then that what exits through the publishing pipe is biased towards immediate applications, conformity of ideas and glorified versions of school homework.
- The editor, based solely on the opinion of 2 reviewers, decides what to do with the article. He informs the author, in a non-conversational way, about the decision. Why? Because, again, of the pyramidal way of organization. The editor on top, the author at the bottom. In the old days, this was justified by the fact that the editor had something to give to the author in exchange for his article: dissemination by the means of industrialized press.
- The article is published, i.e. a finite number of physical copies are printed and sent to libraries and individuals, in exchange for money. Why? Nothing more to discuss here, because this is the step most criticized by the OA movement.
- The reader chooses which of the published articles to read based on authority arguments. Why? Because there was no way to search, firsthand, for what the reader needs, i.e. research items of interest in a specific domain. There are two effects of this. (a) The rise of the importance of the journal over that of the article. (b) The transformation of research communication into vanity chasing. Both effects were (again, statistically) enforced by poor science policy and by the private interests of those favoured by the system, not willing to rock the boat which served them so well.
Given that the entire system is obsolete, what to do? It is, frankly, not our business, as researchers, to worry about the fate of legacy publishers, more than about, say, umbrella repairs specialists.
But what to do in these times of transition? It is in my power to laugh a bit, at least, and maybe to make others, with real decision power, think.
That is why I propose a Journal of Uncalled Advices, which would work like the spnetwork, only driven by publishers, as a journal.
- The editor searches the arXiv, or elsewhere, for articles he likes or considers important.
- He makes a public call for reviews of the selected articles and manages the place of the discussion.
- At some point a number of technical reports (the uncalled advices) appear, written collaboratively.
- The editor uses his nose again to separate opinion from technical content and produces (writes) two final (for the journal) articles about the research article. The opinion part could as well serve as a vulgarization of the research article; the technical part could serve the specialists and the author.
- The two articles are sold by the piece for 6 months, and then they are made public.
- The reader uses the journal articles as evidence and makes his own mind about the research article.
UPDATE: The following older posts are relevant
- Peer-review is Cinderella’s lost shoe
- What is an author buying from a gold OA publisher?
- Peer-review, what is it for?
- Journal of very short papers
My post ended here; in case there is something added after the end of the post, it’s an example of uncalled ads.
Have you noticed? There are now, available on this open notebook, two tutorials:
After some tinkering, I arrived at the following logo for graphic lambda calculus and derivative works (like the chemical concrete machine):
We see here a double helix, which is in fact formed by a lambda and another lambda upside down, which could also be a Y.
You would do me a favour to spread the word about these tutorials, or better, to use them. I put a link to the graphic lambda calculus tutorial on reddit, and a g+ post about the chemical concrete machine tutorial here. There’s also a tweet about it:
I am looking for a place where I can put a part of the contents of this blog (which is rather big), namely the various series of connected posts, which contain a lot of work and invested time. I am thinking about github, but I am not so sure. The thing with the blog format, besides the fact that it is still not loved by those who made their careers by ISI bean counting, is that it is too linear. This can be repaired by links, however.
The more I learn about biological computing, like, in a random order, about:
- Milner’s bigraphs, or this,
- Graphs, Rewriting and Pathway Reconstruction for Rule-Based Models,
- kappa language,
- Interaction combinators, where you see in figure 2, at page 9, one DIST move and some LOC PRUNING moves,
the more it seems that the chemical concrete machine fits naturally into this. Yes, it is partly a garage lab version of this thread of research, but on the other side it has certain advantages, mainly the following one. It is a “pure” graph rewriting system, freed from any 1D thinking conventions. It is true that in order to compute with it on a silicon computer one needs to rewrite it by adding lots of things on top (for example bigraphs may be a way to formalize a model of computation with graphs, in silico, where we need to manage reaction sites, labels of input and output arrows), not because they are natural, but because of the need to make a humanly comprehensible discourse about that (which brings all the effects of the cartesian disease, see the conclusion of this post).
Soon, after the start of the academic year, an article will be available on arXiv (and submitted for publication, although, starting with the ’90s, I am more and more skeptical about this way of communicating research). There are so many directions of development for the chemical concrete machine, also in parallel with graphic lambda calculus, and also with the initial path, computing with space (i.e. the computational content of emergent algebras).
I don’t feel anymore that what I write here is too abstract for the “applied” researchers in biological computing. No, it turns out some of you have already seen this kind of stuff. The only question is: “does the chemical concrete machine bring anything new?” I think it does, because, like nature, it is geometrical and not discourse based.
In conclusion, I look forward to collaborations, academic or not. As you might have noticed, I spend a lot of time with this open notebook/blog trying to stir an open, free sharing of ideas in fundamental research, which might also be very concretely relevant for the real world. Needless to say, I shall benefit from such collaborations, but the converse is also true, if you excuse my self-promotional impulse (again, I heard that the net is not subtle, which I doubt, but let’s suppose so). The idea is that if I can do this alone, then imagine what we could do together.
UPDATE: What do you think about this logo for the chemical concrete machine?
See Example: decorations of S,K,I combinators in simply typed graphic lambda calculus for the compatibility with simply typed lambda calculus.
Compared with the purity of graphic lambda calculus, types (at least of this kind) seem a step back. However, graphic lambda calculus has also the emergent algebra sector, see arXiv:1305.5786 [cs.LO] section 5, and also here the series
- Dictionary from emergent algebra to graphic lambda calculus (I)
- Dictionary from emergent algebra to graphic lambda calculus (II)
- Dictionary from emergent algebra to graphic lambda calculus (III)
The starting point of the research subject on “computing with space” is to try to understand the computational content of emergent algebras arXiv:0907.1520 [math.RA]. This has led to graphic lambda calculus, passing through decorated binary trees and decorated tangle diagrams, up to the non-decorated formalism of graphic lambda calculus. This formalism is not yet completely finished; indeed, the extended graphic beta move, arXiv:1302.0778 [math.GT], especially section 8, which mixes the emergent algebra part with the purely logic part, is still in beta stage.
Therefore, I’m coming back to the decorated version and remark that, as concerns the two gates which appear in the emergent algebra sector, they can be decorated with places:
Places form an emergent algebra. Types form a free magma (for well decorated graphs).
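As a runnable gloss on “types form a free magma” (the Python encoding is mine, not from the post): type terms admit a single formal pairing operation with no relations imposed, so two type terms are equal only if they are built identically.

```python
class T:
    """A free magma element: an atom (right is None) or a formal pair."""
    def __init__(self, left, right=None):
        self.left, self.right = left, right
    def __mul__(self, other):           # the free, non-associative product
        return T(self, other)
    def __eq__(self, other):            # equality is purely structural
        return (isinstance(other, T) and
                self.left == other.left and self.right == other.right)

a, b, c = T('a'), T('b'), T('c')
print((a * b) * c == a * (b * c))   # False: no associativity holds
print(a * b == a * b)               # True: same construction, same element
```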
… along with a short discussion about what the chemical concrete machine can compute. I start with this and then go to the example.
Until now I proved that the graph rewriting system called the “chemical concrete machine” can do combinatory logic. Therefore it can compute anything (that a Turing machine can). I shall be more specific, because to the eyes of somebody who is not used to functional programming this might seem outrageous.
It can compute anything which can be computed by using any programming language based on lambda calculus, like Haskell or Scala. And then some more (as regards only the things those languages can do without using any syntactic sugar, of course). The procedure is straightforward: translate the (essentially combinatory logic) terms into graphs and then let the enzymes (moves) do their magic. I shall give the example of the if-then-else structure a bit later.
There are, of course, interesting questions to ask, among them:
- do there exist real molecules and real enzymes which react with one another like in the chemical concrete machine formalism? Here I need your help, dear chemists.
- is this an example of a sequential or parallel computation?
- what evaluation procedure is used in the chemical concrete machine?
The second question is the easiest to answer: it is a parallel computation. The molecules (graphs) are mixed in a reactor with a choice of enzymes and then all possible reactions can occur in parallel, at all times. (Realistic models will have a probabilistic part attached, but there is nothing special here, because the chemical concrete machine, initialized with molecules and enzymes, is just a chemical reaction network, so any procedure for attaching the probabilistic part to a chemical reaction network can also be used in conjunction with the chemical concrete machine.)
But we can also prepare the initial molecules such that the computation is sequential. It suffices to use zippers. More later. But for the moment, it is worth mentioning that the chemical concrete machine (as well as graphic lambda calculus) is already proven to be more powerful than lambda calculus. Why? For at least two reasons:
- lambda calculus, or combinatory logic, are just sectors of the formalisms, i.e. they correspond to only a part of what the formalisms can do. There are other sectors, as well, for example the tangle diagram sector.
- they are naturally parallel, as long as we use only local moves, as is the case for the chemical concrete machine, for example. Indeed, I wrote that a graph is a “molecule”, but this is only a way of speaking, because molecules could be better identified with connected graphs. But in these formalisms the graphs are not supposed to be connected: any (finite) collection of graphs (i.e. molecules) is also a graph. The moves being local, no interference appears in the simultaneous application of several instances of the (local) moves in different places, for different molecules (connected subgraphs), or even for the same molecule, as long as the places where the moves are applied differ from one another. On the other hand, lambda calculus and combinatory logic are naturally sequential.
The third question, concerning the evaluation procedure, will also be explored in further posts. Care has to be taken here because there are no variables in these formalisms (which translates into less demand for different species of real molecules, needed only for naming variables). So it is about the order of moves, right? The short answer is that it depends: sometimes the computation done by the chemical machine can be seen as greedy evaluation, sometimes as lazy evaluation.
Let me make again the point that somehow the chemical concrete machine formalism should be seen as part of the beautiful idea of algorithmic chemistry. So, it’s not so unearthly.
Finally, it is well known that lambda calculus and Turing machines are the two pillars of computation. For historical reasons chemists seem to concentrate only on the emulation of Turing machines (please correct me if I’m wrong). The main idea of algorithmic chemistry, as far as I understand, is that a sufficiently complex chemical network has the capacity to do lambda calculus. But, if you are determined to use only Turing machines for chemical computing, then, supposing that the algorithmic chemistry idea is true, you have to translate the natural language of lambda calculus into the Turing machine frame. This is a tarpit. Very fast, it becomes very hard to follow. Instead, why not use lambda calculus as it is, for the really powerful applications of chemical computing, and in parallel use one of the excellent languages for simulating chemical computing in silico.
The if-then-else construct has already been explored in graphic lambda calculus, see Teaser: B-type neural networks in graphic lambda calculus (II). Here I shall do a much simpler example, just by making the right translations, as explained in the beginning of this post.
In lambda calculus, there are terms called TRUE, FALSE and IFTHENELSE, which are the Church encodings of the booleans true, false and if-then-else. The associated graphs in the chemical concrete machine are:
Take two other molecules A, B, with one exit each; in particular you may think that they correspond to terms in lambda calculus (but this is not mandatory). Then IFTHENELSE TRUE A B should become A. In the chemical concrete machine, with only beta+ enzymes, look what happens:
Along this chain of reactions, there is no other choice than the one from the figure. Why? Because essentially at every step there is only one reaction site available to the enzyme beta+ (of course, in the region of the reactor represented in the figure). The result is, unsurprisingly, compatible with the lambda calculus version, with the exception that A and B are not supposed to be (graphs corresponding to) lambda terms. They can be anything, as for example, from the family of “other molecules”.
In lambda calculus IFTHENELSE FALSE A B should become (by reductions) B. In the chemical concrete machine look what happens:
The previous remarks apply here as well.
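On the lambda calculus side, the two reduction chains can be checked directly with the standard Church encodings, here written as Python lambdas (the machine of course works on the corresponding graphs, not on this code):

```python
# Standard Church encodings of the booleans and of if-then-else.
TRUE  = lambda x: lambda y: x          # selects its first argument
FALSE = lambda x: lambda y: y          # selects its second argument
IFTHENELSE = lambda p: lambda a: lambda b: p(a)(b)

# A and B need not be lambda terms; any "other molecules" will do.
A, B = "molecule A", "molecule B"
print(IFTHENELSE(TRUE)(A)(B))    # molecule A
print(IFTHENELSE(FALSE)(A)(B))   # molecule B
```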
With a little bit of imagination, if we look closer at what TRUE and FALSE are doing, then we can adapt IFTHENELSE to what I’ve called a B-type NN synapse and obtain a molecule which releases, upon detection of a certain molecule, the medicine A, and upon detection of another, the medicine B.
Here is what I believe UD is doing. This can surely be optimized in many ways, but the basic idea is: put the z-buffer in the 3D space.
I. Preparing the database.
1. Imagine a 3D cubic lattice which is as big as your 3D scenery bounding box. Take coordinates aligned with the lattice. Put the camera at coordinates (0,0,0) and surround it with a cube C. The faces of the cube C will serve as the cubemap we want to construct. Each face is covered by pixels of a given resolution. We already have the following parameters to play with, given in the units of the chosen coordinate system: the step of the lattice, the dimension of the cube C, the resolution of a face of C.
2. Render, by any means you like, the lattice as seen from the (0,0,0) point of view, on the 6 "screens", the faces of C. We have 6 view frusta; the discussion will be the same for each of them. In order to render them you need to put small balls, or squares facing the relevant face of the cubemap, or whatever, at the lattice nodes, so we have another parameter here: the diameter of the ball or square. As a result of the rendering you now know the following things:
- for any lattice atom you know which pixel on which face it corresponds to. You may do better for lattice atoms which are inside the cube C, namely "ignore" them: attribute to them an IGNORE label. Otherwise, attribute to each lattice atom the 2D coordinates of the pixel on the cubemap, together with the information saying which face of the cubemap the pixel belongs to;
- you have information about the scale, which you attach to the lattice atoms, like this: if two neighbouring lattice atoms project on the same pixel, then attach IGNORE to both. If the ball/square/whatever of a lattice atom projects on more than one pixel, then attach to it a number SCALE approximately proportional to the square root (thanks ine) of the number of pixels it projects onto (or to the dimension of the bounding box of those pixels).
Of course, you don't want to take a huge lattice, with very small balls. That's all in the choice of parameters.
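One possible way to implement the projection of step 2 is the standard dominant-axis cubemap projection. The following Python sketch is only an illustration; the names (`cubemap_pixel`, `half_side`, `res`) and the concrete resolution are my own choices, not part of the original description:

```python
def cubemap_pixel(p, half_side=1.0, res=512):
    """Project a lattice atom at p = (x, y, z), with the camera at the
    origin, onto one of the 6 faces of the cube C. Returns (face, i, j),
    or None for atoms inside the cube (the IGNORE case)."""
    x, y, z = p
    ax = max(abs(x), abs(y), abs(z))
    if ax <= half_side:
        return None                      # inside C: label IGNORE
    # the dominant axis picks the face; the other two coords give the pixel
    if ax == abs(x):
        face, u, v = ('+x' if x > 0 else '-x'), y / ax, z / ax
    elif ax == abs(y):
        face, u, v = ('+y' if y > 0 else '-y'), x / ax, z / ax
    else:
        face, u, v = ('+z' if z > 0 else '-z'), x / ax, y / ax
    # u, v lie in [-1, 1]; map them to pixel indices on the face
    i = min(int((u + 1) / 2 * res), res - 1)
    j = min(int((v + 1) / 2 * res), res - 1)
    return face, i, j

assert cubemap_pixel((10.0, 0.0, 0.0)) == ('+x', 256, 256)
assert cubemap_pixel((0.2, 0.1, 0.0)) is None   # inside C
```

The SCALE label of step 2 could then be estimated by projecting the corners of the ball/square of an atom and measuring the bounding box of the resulting pixels.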
3. Now take your database of 3D points, i.e. the real one, which you eventually want to render, UD style. I shall ignore, for the sake of the argument, how the database is implemented: as an octree or otherwise, or even whether the database is made of 3D points or of more complex objects, like polygons. Put this database in the coordinates chosen first, such that, for example, if you are working with octrees, the cells of the lattice correspond to some level of the octree. Attach to each atom of the lattice the supplementary information: the points from the database which are within the ball surrounding the atom, or else the label VOID. Alternatively, think of any point from the database as being inside some lattice cell, and "project" it on each corner of that cell (i.e. attach to each lattice atom a list of points from the database which are in the neighbourhood of that lattice atom; the nil list corresponds to VOID).
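The alternative bucketing described in step 3 (attach to each lattice atom the list of nearby database points, with the nil list meaning VOID) can be sketched as follows; the function name and the corner-projection convention are my assumptions:

```python
import math
from collections import defaultdict

def bucket_points(points, step):
    """Attach to each lattice atom the list of database points in its
    neighbourhood (step 3). An atom absent from the dict is VOID."""
    lattice = defaultdict(list)
    for p in points:
        # "project" the point on the 8 corners of the lattice cell containing it
        lo = tuple(math.floor(c / step) for c in p)
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    lattice[(lo[0] + dx, lo[1] + dy, lo[2] + dz)].append(p)
    return lattice

buckets = bucket_points([(0.5, 0.5, 0.5)], step=1.0)
assert len(buckets) == 8                       # one point, eight surrounding corners
assert (0, 0, 0) in buckets and (1, 1, 1) in buckets
```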
4. Let's see what we have, supposing for example that we use octrees. We add the 3D lattice to the 3D database of points (by using lists of points as explained at 3.), and to any lattice atom we also attach the information explained at point 2.
How to compress this efficiently? There is a new parameter here, namely the level of the octree, and also how, for example, the color information is stored in the octree. Of course, this is a matter of recursion: at points 1-3 we may take finer and finer resolutions and lattice steps, starting from a very coarse lattice and resolution, and then try to figure out a correct recursive procedure. That's work to be done; it is not trivial, but it is somewhat straightforward once you figure it out.
II. The pipeline and rendering.
The problem is that we want to be able to get very very fast, from the database constructed at (I), only the points which are needed for realtime rendering, when the camera is at coordinates (x,y,z).
This problem splits into two different ones:
- at the start of the realtime UD rendering, we want to be able to cull something close to the minimum number of 3D points, when the camera is at (0,0,0). According to the information given by Euclideon, a good algorithm should be able to do this in about 1 s.
- then, we need a procedure to take what we need from the database when we change the point of view from (x,y,z) to (x+1,y,z) (or alike). This should be much faster, allowing for realtime rendering.
As a preparation, let’s remark that:
- if the camera is at (0,0,0), then we already know where each lattice atom projects; it is written in the database. So we just need to start from a pixel, at a given resolution (recursion again), and to choose from the database only the lattice atoms which are CLOSE, in decreasing order of SCALE, and from those the real 3D points which are neighbours (of course we have to use the octree structure). We get, for each pixel, a number of points of the order of log(dimension of the world).
- if the camera is at (x,y,z), we also know where each point from the 3D database projects, because we read it from the data attached to the lattice atom which, translated by (x,y,z), is the neighbour of the point. We also get the SCALE parameter from this.
1. We use the first remark to solve the first problem, namely what comes through the pipeline from the huge 3D database to the pre-rendering buffer B, when we start with the camera at (0,0,0). The buffer contains about (number of pixels) × log(dimension of the world) 3D points, along with the pixel coordinates where they project and with SCALE. This is fed directly to the rendering procedure, which you can choose freely, but which is almost trivial.
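The initial fill of the pre-rendering buffer B could look like the following sketch. The three inputs are my own stand-ins for the database structures prepared in part I (the cubemap index of step 2, the point lists of step 3, and the SCALE labels); none of the names come from UD itself:

```python
def fill_buffer(cubemap_index, db_neighbours, scale_of):
    """Initial fill of the pre-rendering buffer B, camera at (0,0,0).
    cubemap_index: maps (face, i, j) -> lattice atoms projecting there
                   (IGNORE-labelled atoms omitted);
    db_neighbours: maps a lattice atom -> its list of nearby database points;
    scale_of:      maps a lattice atom -> its SCALE value."""
    buffer_B = {}
    for pixel, atoms in cubemap_index.items():
        pts = []
        # visit the atoms in decreasing order of SCALE (roughly near-to-far)
        for atom in sorted(atoms, key=scale_of, reverse=True):
            pts.extend(db_neighbours.get(atom, []))   # missing entry = VOID
        buffer_B[pixel] = pts
    return buffer_B

idx = {('+x', 0, 0): ['a', 'b']}
nb  = {'a': [(1.0, 0.0, 0.0)], 'b': [(2.0, 0.0, 0.0)]}
B = fill_buffer(idx, nb, scale_of={'a': 2, 'b': 1}.__getitem__)
assert B[('+x', 0, 0)] == [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
```

The camera-move update of point 2 below would then touch only the pixel and scale fields of the points already in B, plus the translated lookups into the big database.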
2. What happens when we move the camera? We update the pre-rendering buffer B, only by updating the pixel and scale information for the 3D points in the buffer, getting the relevant data from the huge database only by a translation (addition). Here there are two things which might happen: a point exits the buffer, or a hole appears in the image.
Is this making any sense? Please let me know, because I have been asked repeatedly what I really believe.
Continuing from Simply typed graphic lambda calculus, let's look at decorations for the S, K, I combinators.
We want to decorate the arrows of such graphs, according to the rules given in the following figure:
Decorations are denoted by letters … or by letters …. (what’s in a name?). These decorations are called “types”. There is only one operation, called ““, which takes a pair of types and returns the type .
In algebraic terms, here's what is happening. We start with a graph in and we decorate all its arrows with different letters from an alphabet (of types). Then we associate to it the magma with:
- generators being the (decorations of the) arrows
- relations being the ones from the rules of decorations given in the previous figure.
If the magma is free, then we say that the graph is well typed. That means we can find a finite collection of types, with no relation between them, such that all arrows can be decorated by some element of the free magma generated by that collection of types.
Remark 1: Please, don’t let me reinvent the wheel. Surely somewhere in the big work on type theory, there is something analogous done already. Just send me citations and I shall add them with due acknowledgements.
Remark 2: This is exactly the same idea which is used in knot theory, where we associate to a knot diagram its knot quandle. But with a twist: while in knot theory we look for quandles which are not free (we cannot eliminate all relations between the generators), in this simply typed lambda calculus we concentrate only on those graphs which have free associated magmas. Please look at the lower right corner of the previous figure, where the decoration rule for the gate is given. (We are not going to use it in relation to the lambda calculus sector.) As you see, this is compatible with the trivial quandle with the operation .
Remark 3: I am not going to enter or take the extremely interesting path of cartesian closed categories, basically because of my strong bias against anything cartesian. The initial source of this bias comes from the experience with sub-riemannian geometry, where any application of cartesian ideas, like slicing a problem until it becomes 1-dimensional, then solving it by 1d calculus and analysis techniques, then reassembling the slices, always leads to "rigidity results", which have the form: if you try, then you arrive at a contradiction. Don't get me wrong, there are amazing and extremely deep such results, but they are always in the form of a proof by contradiction.
Remark 4: The point of view from this simply typed graphic lambda calculus is surely related to the Hindley-Milner type system.
Let’s see how this works for the I, K, S combinators, then for the combinator.
The procedure is the following:
- we number the nodes of the graph (arbitrarily)
- we put arbitrary, different labels on each arrow of the graph
- we write the relations which appear from each node, according to the rules of decoration
- finally, we examine the generated magma, to see if, by eliminating some generators using the relations, we can prove that it is a free magma.
In the next figure this is done for the combinator (the identity) and for the combinator.
We obtain the right types for identity and combinators (there are no contexts, that’s for later). It was easy, they are “well typed” according to the definition from here.
Now, let’s look at the combinator:
We have to work a bit, but not too much:
(For the convenience of the readers, I added in the figures the usual notations for combinators and types.)
We obtain again the right types for as well as for the variables involved.
Let’s try to decorate the combinator .
We obtain the following magma presentation:
which cannot be free, because of relations (4) and (6). So is not well typed, exactly like in the simply typed lambda calculus.
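For readers who want to experiment, the freeness test above is essentially the occurs check of unification-based type inference (consistent with Remark 4 on Hindley-Milner). Here is a toy Python sketch, with my own representation: a type is either a variable (a string) or a pair `(s, t)` standing for the single operation; a relation forcing a variable to contain itself is exactly what blocks freeness, as with relations (4) and (6):

```python
def walk(t, subst):
    """Follow substitution chains until t is a pair or an unbound variable."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Does variable v occur inside type t?  If so, freeness fails."""
    t = walk(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and (occurs(v, t[0], subst) or occurs(v, t[1], subst))

def unify(a, b, subst):
    """Return an extended substitution, or None if the relations force
    a variable to contain itself (the magma cannot be free)."""
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str):
        return None if occurs(a, b, subst) else {**subst, a: b}
    if isinstance(b, str):
        return unify(b, a, subst)
    s = unify(a[0], b[0], subst)
    return s if s is None else unify(a[1], b[1], s)

# K-style decoration a -> (b -> a) is consistent...
assert unify(('a', ('b', 'a')), ('a', ('b', 'a')), {}) == {}
# ...but self-application needs x = (x, c): the occurs check fails.
assert unify('x', ('x', 'c'), {}) is None
```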
The leading idea is:
types are decorations, in the same way as elements of an emergent algebra are decorations.
Motivation: Remember that emergent algebras appeared as an effort to understand the formalism of decorated binary trees which appeared first in Dilatation structures I. Fundamentals, section 4. Much effort has been spent on this, as witnessed in Computing with space, sections 3-6, until finally arriving at the formalism of graphic lambda calculus, as exposed in arXiv:1305.5786 [cs.LO]. The main "trick" of graphic lambda calculus is that it eliminates variables, and therefore it eliminates decorations of graphs!
So, the program is that now we go back from the other side, logic, where types are decorations, as you will see, in order to connect with emergent algebras, which deal with the problem of describing spaces in a purely relational way, without any appeal to points (well, in the graphic lambda calculus formalism).
Interesting things will surely appear.
In this version of simply typed lambda calculus we introduce types (just a bag of names , … ), with one operation, denoted by . We use types as decorations of arrows of graphs in , according to the following decoration rules:
Remark how the decoration of the gate corresponds to the trivial quandle.
I don’t claim that I can decorate any graph in by using these rules. But when I can do it, then under any move from graphic lambda calculus the decoration remains compatible with the decoration rules. Indeed, this is proved in the next figure:
The pruning and the FAN-OUT moves are trivially preserving decorations.
In another post we shall see why this is compatible with simply typed combinatory logic.
I think it's pretty clear already what the chemical concrete machine is, and there's a fair idea of the things it can do, in principle. It is time to prepare a more conservative presentation, aka an article or two, based on this list:
- Extreme functional programming done with biological computers
- Chemical concrete machine, detailed (I)
- Chemical concrete machine, detailed (II)
- Chemical concrete machine, detailed (III)
- Chemical concrete machine, detailed (IV)
- Chemical concrete machine, detailed (V)
- Chemical concrete machine, detailed (VI)
- Local FAN-IN eliminates global FAN-OUT (I)
- Local FAN-IN eliminates GLOBAL FAN-OUT (II)
There are lots of other features which were not yet discussed here, as well as surprising paths which demand future investigations.
I need as much input as possible from those who are interested in this subject. Because there are many ways to tell a story, depending on the story teller and on the audience. As you know, I am a mathematician who wants to stir an interdisciplinary collaboration on this subject which seems to fall in the algorithmic chemistry or biological computing fields.
So I need help from you, in two directions:
- to calm down my math habits, for the sake of presenting a reasonably comprehensible story to non-mathematicians
- to tell me which parts need more development, which are more interesting, so to say.
Let’s continue from the post Chemical concrete machine, detailed (V) , by adding some more facts and remarks.
We have seen that, in order not to have to apply a controlled sequence of CO-ASSOC molecules (i.e. moves), it is better to introduce a composite move, saying that a certain simple molecule is a propagator.
With this move the formalism of the chemical concrete machine is exactly as powerful as without it. The reason is that the move called “PROP+” can be seen as a sequence of CO-ASSOC moves and DIST+ moves.
Let's accept this move (i.e. as if there is an associated enzyme for it) and let's revisit the proof that the W molecule is a multiplier.
Besides the B, C, K, W molecules (which correspond to the B,C,K,W system), other molecules are also interesting: the following figure presents the molecules F, I, S' and S.
The molecule F may represent FALSE in the Church encoding of the booleans, along with K which encodes TRUE. The molecule I is the identity and the molecule S corresponds to the combinator S from the SKI combinator calculus, which is
a computational system that may be perceived as a reduced version of untyped lambda calculus. It can be thought of as a computer programming language, though it is not useful for writing software. Instead, it is important in the mathematical theory of algorithms because it is an extremely simple Turing complete language.
These correspondences are established by using the algorithm for transforming lambda calculus terms into graphs in graphic lambda calculus, then by using the dictionary between the gates of graphic lambda calculus and the essential molecules of the chemical concrete machine.
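For reference, the S, K, I combinators can be written down as closures and checked directly. A small Python sketch; the identification of F with K I is my reading of the FALSE encoding mentioned above, not a claim about the figure:

```python
# The SKI combinators as Python closures.
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x
I = lambda x: x

assert S(K)(K)(42) == 42                  # S K K behaves like the identity I
assert K("true")("false") == "true"       # K selects the first argument (TRUE)
assert K(I)("true")("false") == "false"   # K I selects the second (a FALSE)
```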
What about the S’ molecule? It is a version of the molecule S, i.e. it transforms into S by this reaction:
Also, there is a proof, analogous to the one for W, of the fact that S' is a multiplier, using the PROP+ move. This proof is better than the proof for S given at the end of the post Local FAN-IN eliminates global FAN-OUT (I).
The B,C,K,W system
is a variant of combinatory logic that takes as primitive the combinators B, C, K, and W. This system was discovered by Haskell Curry in his doctoral thesis Grundlagen der kombinatorischen Logik, whose results are set out in Curry (1930).
In this post, I shall explain what the correspondents of the B, C, K, W combinators are in the formalism of the chemical concrete machine, then I shall prove that they are multipliers. In a future post I shall prove that their correspondents in the chemical machine make the machine Turing complete.
Remark: In a sense, this was already established, by using the S, K, I combinators, in the post Local FAN-IN eliminates global FAN-OUT (I). You only have to pass from the black and white drawing conventions of graphic lambda calculus to the red and green drawings of the chemical concrete machine, and then use the result which says that combinatory logic can be implemented in graphic lambda calculus with global FAN-OUT. All in all, this shows that in the chemical concrete machine combinatory logic can be implemented without any global move.
However, I think it is more instructive to see first why combinators other than S, K, I are multipliers (which is a reformulation of the result mentioned in the remark).
Here are the four "molecules" which represent the combinators B, C, K, W. (Via the red-green vs black-white change of notation, they can be deduced from their expressions in lambda calculus by the algorithm described here.)
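Their lambda calculus expressions can be checked directly as closures; a minimal Python sketch of the B, C, K, W system (the test values are mine):

```python
# The B, C, K, W combinators as Python closures.
B = lambda x: lambda y: lambda z: x(y(z))   # composition
C = lambda x: lambda y: lambda z: x(z)(y)   # argument swap
K = lambda x: lambda y: x                   # constant
W = lambda x: lambda y: x(y)(y)             # duplication

inc = lambda n: n + 1
dbl = lambda n: 2 * n
assert B(inc)(dbl)(3) == 7                        # inc(dbl(3))
assert C(lambda a: lambda b: a - b)(1)(10) == 9   # 10 - 1
assert K("kept")("dropped") == "kept"
assert W(lambda a: lambda b: a * b)(4) == 16      # 4 * 4
```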
Now, let’s prove that they are multipliers! The proofs for B and C are very much alike, therefore I put here only the proof that B is a multiplier:
By the conventions of the chemical concrete machine, I mention here the enzymes involved in the reactions, instead of writing the moves as in graphic lambda calculus.
The proof that K is a multiplier is the following:
Notice how, in both cases, the reactions seem feasible, in the sense that there is no case analysis: they can be accomplished by a linear process. Pour some DIST enzymes, then some FAN-IN, for B; or DIST, LOC PRUNING and then FAN-IN, for K. More than that, remark that the order of the reactions does not matter, i.e. they can be accomplished by pouring all the needed enzymes at the same moment.
For the W combinator (molecule), things get a bit more complex, as was the case for the combinator S:
There is a reaction (or move) which needs explanations. I called it DISENTANGLE (CO-ASSOC) reaction. It is this:
It can clearly be done by a controlled succession of CO-ASSOC moves (reactions). From the point of view of feasibility in the real world (provided a real implementation of the chemical concrete machine appears), it seems hard to control the exact order of application of the CO-ASSOC moves which gives the DISENTANGLE move as an effect. So, probably, we shall need a "disentangle enzyme" dedicated to this.
As an alternative, remark that for proving that W is a multiplier we need an application of the DISENTANGLE composite move, described in the next figure:
For practical (or theoretical) purposes, it is enough to take this as a move. If you are lost and don't understand what the correspondent of this move in lambda calculus is, it is like this: for any term , if you apply a FAN-OUT gate to , then you obtain two . (Recall that this is practically invisible in lambda calculus, because there is no FAN-OUT; instead, all variables have names and the pattern matching works by some magic outside the formalism.)
In other words, what would eliminate the need for a controlled sequence of CO-ASSOC reactions when multiplying the molecule W is this: assume that the molecule which is connected to the "y" essential molecule (i.e. to the input of a FAN-OUT gate) is a "propagator". Propagators are defined in the next figure:
Propagators are different from multipliers, because they are molecules with one selected input and one selected output which “propagate along a FAN-OUT gate”, while multipliers are multiplied by a FAN-OUT gate. Propagators can serve in fact as labels, or names, which propagate along a tree of FAN-OUT gates.
… these are some notes to self, the future depends, as usual, on future inputs to my brain.
1. It seems possible to use the chemical concrete machine formalism for making some very concrete "lifeforms", i.e. families of graphs which, under the usual statistical interactions with other molecules in the "reactor", satisfy some definition of life. Far fetched? No: at least we see hints that combinators are multipliers (thus reproduction can be achieved with zippers and combinators). A strange way to see what could be a purpose of logic in biology.
2. There is a hidden symmetry in the moves of the chemical concrete machine which indicates there is no need for the termination gate (i.e. there is no garbage), instead this symmetry (inspired by the duality between multipliers and co-multipliers) puts in a new light the idea of recursion.
3. It seems to be easy to reformulate the cartesian method, as described here, into a sort of its opposite. Of course, such that it makes sense, not especially as a scientific method, but as a method of creation, maybe relevant for understanding how life proceeds.
4. I have large quantities of math facts and research on hold, not reported here, or only in a very indirect way.
There is a partly interesting, partly boring period coming, I feel it. On one side, I am working on putting some of the work reported in this open notebook into article form, which is fun, but on the other side I ask myself: why? Is there any need, other than organizing things a bit? Is it useful? Is this a satisfying method of communication?
But is the blog a better method? Other? Suggestions?
What does it mean to be “visually aware”? One thing, due to Franz Brentano (1838-1917), is that all awareness is awareness of something. One says that awareness is intentional. This does not mean that the something exists otherwise than in awareness. For instance, you are visually aware in your dreams, when you hallucinate a golden mountain, remember previous visual awareness, or have pre-visions. However, the case that you are visually aware of the scene in front of you is fairly generic.
The mainstream account of what happens in such a generic case is this: the scene in front of you really exists (as a physical object) even in the absence of awareness. Moreover, it causes your awareness. In this (currently dominant) view the awareness is a visual representation of the scene in front of you. To the degree that this representation happens to be isomorphic with the scene in front of you the awareness is veridical. The goal of visual awareness is to present you with veridical representations. Biological evolution optimizes veridicality, because veridicality implies fitness. Human visual awareness is generally close to veridical. Animals (perhaps with exception of the higher primates) do not approach this level, as shown by ethological studies.
JUST FOR THE RECORD these silly and incoherent notions are not something I ascribe to!
But it neatly sums up the mainstream view of the matter as I read it.
The mainstream account is incoherent, and may actually be regarded as unscientific. Notice that it implies an externalist and objectivist God’s Eye view (the scene really exists and physics tells how), that it evidently misinterprets evolution (for fitness does not imply veridicality at all), and that it is embarrassing in its anthropocentricity. All this should appear to you as in the worst of taste if you call yourself a scientist. [p. 2-3]
I hold similar views, last expressed in the post Ideology in the vision theater (but not with the same mastery as Koenderink, of course). Recall that "computing with space", which is the main theme of this blog/open notebook, is about rigorously understanding (and maybe using) the "computation" done by the visual brain, with the purpose of understanding what space IS. This is formulated in arXiv:1011.4485 as "Plato's hypothesis":
(A) reality emerges from a more primitive, non-geometrical, reality in the same way as (B) the brain constructs (understands, simulates, transforms, encodes or decodes) the image of reality, starting from intensive properties (like a bunch of spiking signals sent by receptors in the retina), without any use of extensive (i.e. spatial or geometric) properties.
A complete, locally compact riemannian manifold is a length metric space by the Hopf-Rinow theorem. The problem of intrinsic characterization of riemannian spaces asks for the recovery of the manifold structure and of the riemannian metric from the distance function coming from the length functional.
For 2-dim riemannian manifolds the problem was solved by A. Wald in 1935. In 1948 A.D. Alexandrov introduced his famous curvature (which uses comparison triangles) and proved that, under mild smoothness conditions on this curvature, one is able to recover the differential structure and the metric of the 2-dim riemannian manifold. In 1982 Alexandrov proposed as a conjecture that a characterization of a riemannian manifold (of any dimension) is possible in terms of metric (sectional) curvatures (of the type introduced by Alexandrov) and weak smoothness assumptions formulated in metric terms (for example Hölder smoothness).
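For reference, the comparison triangle behind Alexandrov's curvature can be sketched as follows (a standard formulation, assumed here rather than quoted from the cited papers): a geodesic triangle with vertices p, q, r in the metric space (X,d) is compared with a Euclidean triangle with the same side lengths,

```latex
|\bar{p}-\bar{q}| = d(p,q), \qquad
|\bar{q}-\bar{r}| = d(q,r), \qquad
|\bar{r}-\bar{p}| = d(r,p),
```

and curvature bounds are read off by comparing distances between points of the triangle with distances between the corresponding points of the comparison triangle; the metric sectional curvature is then extracted as a limit over triangles shrinking to a point.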
The problem was solved by Nikolaev in 1998, in the paper A metric characterization of Riemannian spaces, Siberian Adv. Math. 9, no. 4 (1999), 1-58. The solution of Nikolaev can be summarized like this: he starts with a locally compact length metric space (and some technical details), then
- he constructs a (family of) intrinsically defined tangent bundle(s) of the metric space, by using a generalization of the cosine formula for estimating a kind of distance between two curves emanating from different points. This leads him to a generalization of the tangent bundle of a riemannian manifold endowed with the canonical Sasaki metric.
- He defines a notion of sectional curvature at a point of the metric space, as a limit of a function of nondegenerate geodesic triangles, the limit being taken as these triangles converge (in a precise sense) to the point.
- The sectional curvature function thus constructed is required to satisfy a Hölder continuity condition (thus a regularity condition formulated in metric terms).
- He proves then that the metric space is isometric with (the metric space associated to) a riemannian manifold of precise (weak) regularity (the regularity is related to the regularity of the sectional curvature function).
Sub-riemannian spaces are length metric spaces as well. Any riemannian space is a sub-riemannian one. It is not clear at first sight why the characterization of riemannian spaces does not extend to sub-riemannian ones. In fact, there are two problematic steps for such a program for extending Nikolaev result to sub-riemannian spaces:
- the cosine formula, as well as the Sasaki metric on the tangent bundle, have no correspondents in sub-riemannian geometry (because there is, basically, no statement canonically corresponding to the Pythagorean theorem);
- the sectional curvature at a point cannot be introduced by means of comparison triangles, because sub-riemannian spaces do not behave well with respect to this triangle-comparison idea, as proved by Scott Pauls.
In 1996 M. Gromov formulates the problem of intrinsic characterization of sub-riemannian spaces. He takes the Carnot-Caratheodory (or CC) distance (this is the name of the distance constructed on a sub-riemannian manifold from the differential geometric data we have, which generalizes the construction of the riemannian distance from the riemannian metric) as the only intrinsic object of a sub-riemannian space. Indeed, in the linked article, section 0.2.B. he writes:
If we live inside a Carnot-Caratheodory metric space V we may know nothing whatsoever about the (external) infinitesimal structures (i.e. the smooth structure on , the subbundle and the metric on ) which were involved in the construction of the CC metric.
Develop a sufficiently rich and robust internal CC language which would enable us to capture the essential external characteristics of our CC spaces.
Give a minimal, axiomatic, description of sub-riemannian spaces.
As a preparation for the Turing computation properties of the chemical concrete machine, in this post I shall explain what multipliers and co-multipliers are.
Basically, multipliers and co-multipliers are molecules which self-multiply. More precisely, in the next figure we see the definition of those:
Here and are molecules from the formalism of the chemical concrete machine and and are labels. The blue arrow means any chemical reaction which is allowed.
Question: by close examination of the previous posts on graphic lambda calculus, can you identify any multiplier? or co-multiplier?
If not, then be patient, because in a future post I shall give plenty examples of those, especially connected with logic.
Further, we shall see that zippers, introduced in Chemical concrete machine, detailed (III), multiply in a very graphic way, kind of like what happens with the DNA of a cell when it divides. Let's see.
We want to know if a zipper can be a multiplier. In the following figure we see what happens in the presence of DIST enzymes:
The reaction continues:
Now, the zipper multiplied into two zippers, but they are still connected. We need more information about and . Remark that:
In conclusion: if are multipliers and are co-multipliers, then the zipper is a multiplier!
After yesterday’s explanation of Privacy and secrecy by Cory Doctorow, this post comes as a balance. The research medium is full of people who love to share (scientific) knowledge, who are full of ideas awaiting dissemination.
We are more likely to tell things (look what I did lately!) than to ask things.
So, I'll stick to that and, for the first time on this blog, having no idea what the outcome will be, I say: ask me anything! I'll try to answer to the best of my ability.