Tag Archives: artificial chemical connectome

Another discussion about math, artificial chemistry and computation

I have to record this ongoing discussion from G+, it’s too interesting. I shall do it from my subjective viewpoint; feel free to comment on this, either here or in the original place.

(I did almost the same thing, i.e. saved here some of my comments from an older discussion, in the post   Model of computation vs programming language in molecular computing. That recording was significant, for me at least, because I made those comments while thinking about the work on the GLC actors article, which was then in preparation.)

In what follows I shall only lightly edit the content of the discussion (for example by adding links).

It started from this post:

 “I will argue that it is a category mistake to regard mathematics as “physically real”.”  Very interesting post by Louis Kauffman: Mathematics and the Real.
Discussed about it here: Mathematics, things, objects and brains.
Then followed the comments.
Greg Egan‘s “Permutation City” argues that a mathematical process itself is identical in each and every instance that runs it. If we presume we can model consciousness mathematically, it then means: two simulated minds are identical in experience, development, everything, when run with the same parameters on any machine (anywhere in spacetime).

Also, it shouldn’t matter how a state came to be; the instantaneous experience of the simulated mind is independent of its history (of course the mind’s memory ought to be identical, too). He then levitates further and proposes that it’s not even relevant whether the simulation is run at all, because we may find all states of such a mind’s being represented, scattered in our universe… If I remember correctly, Egan later contemplated being embarrassed about this bold ontological proposal. You should be able to find him reasoning about the dust hypothesis in the accompanying material on his website.

Update: I just saw that wikipedia’s take on the book has the connection to Max Tegmark in the first paragraph:
> […] cited in a 2003 Scientific American article on multiverses by Max Tegmark.
http://en.wikipedia.org/wiki/Permutation_City

___________________________
+Refurio Anachro  thanks for the reference to Permutation city and to the dust hypothesis, will read and comment later. For the moment I have to state my working hypothesis: in order to understand the basic workings of the brain (math processing included), one should pass any concept it is using through the filter
  • local not global
  • distributed not sequential
  • no external controller
  • no use of evaluation.

From this hypothesis, I believe that notions like “state”, “information”, “signal”, “bit”, are concepts which don’t pass this filter, which is why they are part of an ideology which impedes the understanding of many wonderful things which have been discovered lately, somehow against this ideology. Again, Nature is a bitch, not a bit 🙂

That is why, instead of railing against this ideology and jumping to consciousness (which I think is something whose understanding will have to wait for a time very far in the future), I prefer to offer first an alternative (that’s GLC, chemlambda) which shows that it is indeed possible to do anything which can be done with these ways of thinking coming from the age of the invention of the telephone. And then more.

After that, I prefer to wonder not about consciousness and its simulation, but instead about vision and other much more fundamental processes related to awareness. These are usually taken for granted, and they have the bad habit of contradicting any “bit”-based explanation given to date.

___________________________

the past lives in a conceptual domain
would one argue then that the past
is not real
___________________________
+Peter Waaben  that’s easy to reply to, by way of analogy with The short history of the rhino thing.
Is the past real? Is the rhinoceros horn on the back real? Durer put a horn on the rhino’s back because of past (ancient) descriptions of rhinos as elephant killers. The modus operandi of that horn was that the rhino, as massive as the elephant but much shorter, would go under the elephant’s belly and rip it open with the dedicated horn.  For centuries, in the minds of people, rhinos really had a horn on the back. (This has real implications, like, for example, the idea that if you stay out in the cold then you will catch a cold.) Moreover, and even more real, there are now real rhinos with a horn on the back, like for example Dali’s rhino.
I think we can safely say that the past is a thing, and any of this thing’s reifications are very real.
___________________________
+Peter Waaben, that seems to be what the dust hypothesis suggests.
+Marius Buliga, I’m still digesting, could you rephrase “- no use of evaluation” for me? But yes, practical is good!
___________________________
[My comment added here: see the posts

]

___________________________

+Marius Buliga :

+Refurio Anachro  concretely, in the model of computation based on lambda calculus you have to add an evaluation strategy to make it work (for example, lazy, eager, etc.).  The other model of computation, the Turing machine, is just a machine, in the sense that you have to build a whole architecture around it in order to use it. For the TM you use states, for example, and things work by knowing the state (and what’s under the reading head).  Even in pure functional programming, besides the need for an evaluation strategy, people live with this cognitive dissonance: on one side they rightfully say that they avoid the use of the states of imperative programming, and on the other side they base their computation on the evaluation of functions! That’s funny, especially if you think about the dual feeling which hides behind “pure functional programming has no side effects” (how elegant, but how do we get real effects from this?).
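
[My comment added here: a minimal Python sketch, with invented names, of what an evaluation strategy decides. Under an eager strategy the argument is computed before the function is applied; under a lazy strategy it is wrapped in a thunk and computed only if actually used. This is only a toy illustration, not part of GLC.

def expensive_argument():
    print("argument evaluated")
    return 42

def const_zero(x):
    # a function which ignores its argument
    return 0

# eager: the argument is evaluated even though const_zero never uses it
eager_result = const_zero(expensive_argument())

# lazy: pass a thunk; it would be forced only when (and if) needed
lazy_result = const_zero(lambda: expensive_argument())

print(eager_result, lazy_result)  # both are 0, but only the eager call
                                  # printed "argument evaluated"
]
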
In distinction from that, in distributed GLC there is no evaluation needed for computation. There are several causes of this. First, there are no values in this computation. Second, everything is local and distributed. Third, you don’t have eta reduction (thus no functions!). Otherwise, it resembles pure functional programming, if you see the core-mask construction as the equivalent of the input-output monad (only that you don’t have to bend over backwards to keep both functions and no side effects in the model).
[My comment added here: see behaviour 5 of a GLC actor explained in this post.
]
Among the effects is that it goes outside the lambda calculus (the condition of being a lambda graph is global), which simplifies a lot of things, like for example the elimination of currying and uncurrying.  Another effect is that it is also very much like an automaton kind of computation, only that it does not rely on a predefined grid, nor on an extra, heavy handbook of how to use it as a computer.
On a more philosophical side, it shows that it is possible to do what the lambda calculus and the TM can do, but it can also do things without needing signals, bits and states as primitives. Coming back a bit to the comparison with pure functional programming, it solves the mentioned cognitive dissonance by taking into account the change of shape (pattern? like in Kauffman’s post) of the term during reduction (program execution), even if the evaluation of it is an invariant during the computation (the “no side effects” of functional programming). Moreover, it does this without working with functions.
___________________________
+Marius Buliga “there are no values in this computation” Not to disagree, but is there a distinction between GLC graphs that could be represented by a collection of possible values? For example, topological objects can differ in their chromatic, Betti, genus, etc. numbers. These are not values like those we see in the states, signals and bits of a TM, but they are a type of value nonetheless.
___________________________
+Stephen Paul King  yes, of course, you can stick values to them, but the fact is that you can do without; you don’t need them for the sake of computation. The comparison you make with the invariants of topological objects is good! +Louis Kauffman  made this analogy between the normal form of a lambda term and such kinds of “values”.
I look forward to his comments about this!
___________________________
+Refurio Anachro  thanks again for the Permutation city reference. Yes, it is clearly related to the budding Artificial Connectomes idea of GLC and chemlambda!
It is also related to interests in Unlimited Detail 🙂 , great!
[My comment added here: see the following quote from the Permutation city wiki page

The Autoverse is an artificial life simulator based on a cellular automaton complex enough to represent the substratum of an artificial chemistry. It is deterministic, internally consistent and vaguely resembles real chemistry. Tiny environments, simulated in the Autoverse and filled with populations of a simple, designed lifeform, Autobacterium lamberti, are maintained by a community of enthusiasts obsessed with getting A. lamberti to evolve, something the Autoverse chemistry seems to make extremely difficult.

Related explorations go on in virtual realities (VR) which make extensive use of patchwork heuristics to crudely simulate immersive and convincing physical environments, albeit at a maximum speed of seventeen times slower than “real” time, limited by the optical crystal computing technology used at the time of the story. Larger VR environments, covering a greater internal volume in greater detail, are cost-prohibitive even though VR worlds are computed selectively for inhabitants, reducing redundancy and extraneous objects and places to the minimum details required to provide a convincing experience to those inhabitants; for example, a mirror not being looked at would be reduced to a reflection value, with details being “filled in” as necessary if its owner were to turn their model-of-a-head towards it.

]

But I keep my claim that that’s enough to understand for 100 years. Consciousness is far away. Recall that at first electricity appeared as a kind of life fluid (from Volta to Frankenstein’s monster), but actually it has been used with tremendous success for other things.
I believe the same about artificial chemistry, computation, on one side, and consciousness on the other. (But ready to change this opinion if faced with enough evidence.)

___________________________


Input and output of a GLC actors computation

This is a suggestion for using a GLC actors computation (arXiv:1312.4333) which is easy and has a nice biological feeling.

The following architecture of a virtual being emerged after a discussion with Louis Kauffman. It amounts to thinking that the being  has sensors, a brain, and effectors:

  • sensors are modelled as an IN actor. This actor has a core (which is the outside medium which excites the sensor). The sensor is excited via an “interaction with cores” concerning the IN actor (and its core)
  • the brain is a network of actors which start to interact with the IN actor (after the interaction with cores). These interactions can be of any kind, but I think mainly as interactions by graph reduction.
  • the effectors are modelled as an OUT actor. This actor also has a core, which is the outside medium which is changed by the GLC computation. At a certain point in the computation the brain actors interact (by graph reductions) with the OUT actor and the result of the computation is again an interaction with cores, this time in the OUT actor. It is as if the OUT actor deposits the result of the computation in its core. (See the sketch right after this list.)
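
Before the pictures, here is a rough, hypothetical sketch in Python of the IN – Brain – OUT flow described above. It ignores the actual graphs and graph rewrites (every interaction is a placeholder) and all the names in it are invented for illustration; it is not the GLC actors formalism, only the shape of the pipeline.

# hypothetical sketch of the IN -> Brain -> OUT flow of a GLC actors computation;
# the graphs carried by the actors are left as placeholders

class Actor:
    def __init__(self, name, core=None):
        self.name = name
        self.core = core      # the outside medium (only IN and OUT have one)
        self.graph = None     # the actor's graph (placeholder)

    def interact_with_core(self):
        # IN: sensing, the core excites the actor's graph
        # OUT: writing, the actor's graph changes the core
        ...

    def interact_with(self, other):
        # placeholder for a graph reduction involving both actors
        ...

def run_being(environment):
    sensor   = Actor("IN", core=environment)
    brain    = [Actor("brain-" + str(i)) for i in range(3)]
    effector = Actor("OUT", core=environment)

    sensor.interact_with_core()          # sensing
    brain[0].interact_with(sensor)       # Brain actors pick up the IN actor's graph
    for a, b in zip(brain, brain[1:]):   # bulk of the computation, brain to brain
        a.interact_with(b)
    brain[-1].interact_with(effector)    # the result reaches the OUT actor
    effector.interact_with_core()        # the OUT actor changes its core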

With images, now. The architecture of the being is the following:

actor_read_1

The actor IN has a core (the outside medium) and a mask (which is the sensor):

actor_read_2

The sensing means interaction of the IN actor with its core (which is one of the behaviours of a GLC actor):

actor_read_4

The red-green graph has no meaning, it is just a graph in the chemlambda drawing convention.

Now, the architecture is designed such that there will be Brain actors – IN actor interactions (by graph reductions, another behaviour of GLC actors)

actor_read_5

… followed by interactions between the brain actors. This is where the bulk of the computation is done.

actor_read_6

This computation sets the stage for interactions between the Brain actors and the OUT actor.

actor_read_7

And finally the OUT actor interacts with its core, producing a change in it (i.e. a change in the outside world)

actor_read_8

__________________________

Comments:

  • it is a matter of good design
  • this is the simplest proposal for reading the result of a GLC computation. There are other possibilities to think about
  • it is not supposed that the IN, Brain or OUT actors survive after the computation. (It would be great, though!)
  • the Brain actors are doing a computation for which they were designed, not any computation. So, they look like the equivalent of a program
  • the IN actor (which could be many IN actors as well) does something like reading data (from its core)
  • and the OUT actor does something like writing data (in its core).
  • Finally, the whole architecture has to be designed such that what has been described works on its own, without any external control.

GLC actors, what are they for and why are they interesting?

We decided to go open with our project, documented here in the posts from the category distributed GLC. There is a dynamically changing and evolving team, formed by the authors of this article

GLC actors, artificial chemical connectomes, topological issues and knots

and Stephen P. King, Jim Whitescarver, Rao Bhamidipati, as well as others from ProvenSecure Solutions, which is behind it. This project has several goals; the first one is to explore the possibility of implementing this computing model in reality.

The implications of the proposal (i.e. “what if this is possible?”) are huge.  I would like to write instead, in an understated manner, about why we think that the means to achieve those implications are interesting. From personal experience with explanations around this subject, it looks like the really difficult part, at this stage of the project at least, is understanding what is different in this model.

The reason behind this difficulty is not that the understanding demands highly technical knowledge. Instead, as Louis Kauffman describes it, the project is based on ideas which look completely non-obvious at first but which, once understood, become straightforward.

The most important idea is that the model is not one based on signals circulating through wires, passing through gates. Nor on messages exchanged through channels (though there is a nuance here, coming from the second idea, concerning “locality”).

What then? you might ask. Is this a communication model where the entities involved in communication are not actually communicating? No, on the contrary, the entities (i.e. the GLC actors) are communicating all the time, but not like in a phone call; instead their communication is like a chemical reaction between molecules which are close in space.

(The GLC and chemlambda are good for this, because there is no signal circulating through the arrows of these graphs and the nodes are not gates which are processing these signals!)

Think about this. A chemical reaction network (CRN) is obviously a good description of a collection of chemical reactions. But it is nothing more than a description. The molecules involved in the CRN are not “aware” there is a CRN. They interact in a purely local way, they are not goal driven, nor is there an upper being which sets them the task of interacting.  The “communication” between two molecules is not based on signal passing, the way the mathematical description of a CRN is.  It is more like a graph rewrite procedure. Pure syntax, at this level (of molecules), but nevertheless self-sustaining, autonomous, purely local and of course distributed (in space).
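
As a toy illustration of this purely local, controller-free way of interacting (an invented example, not the GLC formalism): a “soup” of molecules can be simulated so that molecules only ever react pairwise with whatever partner they happen to meet, according to a fixed local rule table; the CRN is only the bookkeeping we write down afterwards.

import random

# toy soup: molecules react only with a randomly met partner, by purely local
# rules; there is no global controller and no goal (the rule table is invented)
local_rules = {("A", "B"): ("C", "D"),
               ("C", "C"): ("A", "A")}

def step(soup):
    i, j = random.sample(range(len(soup)), 2)   # two molecules that happen to meet
    pair = (soup[i], soup[j])
    if pair in local_rules:                     # the decision is local to the pair
        soup[i], soup[j] = local_rules[pair]

soup = ["A", "B", "C", "C", "A", "B"]
for _ in range(100):
    step(soup)
print(soup)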

The second idea is this “locality”, which means that the interaction between the actors should not be controlled from an upper level, that the actors should be autonomous and reactive entities, and that their reactions should be a consequence of the local environment only. In the case of the GLC actors, this is a consequence of GLC being based on local graph rewrites (i.e. involving only rewrites acting on a bounded number of nodes and arrows), with the exception of the GLOBAL FAN-OUT move, where chemlambda comes to help. Chemlambda is purely local.

The third idea is to use the Hewitt Actor Model as a replacement for space. On the net, we do need a way to define what it means for two GLC actors (i.e. the net correspondents of two molecules) to be in proximity. The GLC actors are designed in such a way as to have these proximity relations as links between actors (with no exterior control or decision over this, past the preparation stage) and to see communication between actors as graph rewrites. In practice, most of the graph rewrites occur between two actors. As an exception, the GLOBAL FAN-OUT (from GLC) becomes in chemlambda a procedure which is like binary cell fission in real life. Indeed, this is not explained in detail in the paper, only a small example is given, but compare the picture taken from this post

(about how the W combinator suffers a GLOBAL FAN-OUT with the purely local means of chemlambda) with this picture from the binary cell fission wiki page,

Binary_Fission

______________________

Chemical actors

UPDATE: Clearly needed a mix of the ActorScript of Carl Hewitt with GLC and chemlambda. Will follow in the months to come!

______________________

Thinking out loud about a variant of the actor model (chapter 3 here), which uses graphic lambda calculus or the chemical concrete machine. The goal is to arrive at a good definition of an Artificial Chemical Connectome.

Have remarks? Happy to read them!

A chemical actor is the following structure:

  • a graph A \in GRAPH  (or a molecule A \in MOLECULES)
  • with a unique ID name, ID
  • with a numbering (tagging) of a (possibly empty) part of its free arrows
  • with a behaviour, to be specified further.

actor_1

A communication is:

  • a graph  B \in GRAPH  (or a molecule)
  • with a source ID and a target ID
  • with a part of free arrows tagged with tags compatible (i.e. the same) with the ones from the graph from the source ID
  • with another part of free arrows tagged with tags compatible with the ones from the graph from the target ID

actor_3

The actor with the target ID receives a communication from the actor with the source ID and becomes:

actor_4

At this point the actor which has the target ID exhibits the following behaviour:

  • performs one or several graph rewrites (only the (+) unidirectional moves), possibly in a given order, which involve at least an arrow between A and B
  • following a given algorithm, splits into a new actor and a communication B’ which has as target arrows the ones from the lower part of the previous figure (but with another target ID)
  • or creates a new actor by using only (-) moves

Remark: the numbers n, m could be uniformly bounded by 2, or 4, or 6, according to the user’s wishes. Take a look at the Ackermann machine for inspiration.
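
For concreteness, here is a rough, hypothetical data-structure sketch in Python of the two notions above (chemical actor and communication). The field names are invented and the graphs/molecules are left abstract; the behaviour on reception is reduced to a placeholder.

from dataclasses import dataclass, field

@dataclass
class ChemicalActor:
    actor_id: str
    graph: object                                        # an element of GRAPH (or MOLECULES)
    tagged_arrows: dict = field(default_factory=dict)    # tag -> free arrow of the graph

@dataclass
class Communication:
    source_id: str
    target_id: str
    graph: object                                        # the graph B carried by the communication
    source_tags: dict = field(default_factory=dict)      # tags matching the source actor's arrows
    target_tags: dict = field(default_factory=dict)      # tags matching the target actor's arrows

def receive(actor, comm):
    # on reception the actor's graph and the communication's graph are glued along
    # the compatible tags; afterwards the actor may perform (+) moves, split off a
    # new communication, or create a new actor by (-) moves (all left abstract here)
    assert comm.target_id == actor.actor_id
    glued = ("glued", actor.graph, comm.graph)           # placeholder for the gluing
    return ChemicalActor(actor.actor_id, glued, dict(actor.tagged_arrows))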

Chemical concrete machine on figshare

Just published the Chemical concrete machine on figshare.  Should be cited as


Chemical concrete machine. Marius Buliga. figshare.
http://dx.doi.org/10.6084/m9.figshare.830457

The description is a bit more telling than the abstract of the article (see why by reading the suggested posts below):
This article introduces an artificial, algorithmic chemistry based on local graph rewrites applied to a class of trivalent graphs named molecules. The graph rewrites are seen as chemical reactions with enzymes. The enzymes are tagged with the name of the respective move.  This artificial chemistry is Turing complete, because it allows the implementation of untyped lambda calculus with beta reduction (but without eta reduction). This opens the possibility  to use this artificial chemistry for constructing artificial chemical connectomes.
Here, on this open notebook/blog, you may read more about this:
The article is available also as  arXiv:1309.6914 [cs.FL].
_________________________

The chemical connectome of the internet

This is a continuation of the post  WWW with Metabolism . That post, in a nutshell, proposes to endow the internet connectome with a chemical connectome, by using an artificial chemistry for this artificial network, namely the graphic form of the lambda calculus formalism of the chemical concrete machine (please go read the mentioned post for details).

I’m going to do a weird, but maybe instructive thing and pretend that it already happened. Why? Because I intend to argue that the internet with a chemical connectome might be a good laboratory for testing ideas about real brains.

The name “chemical connectome” is taken from the article The BRAIN Initiative: Toward a Chemical Connectome.  It means the global chemical reaction network associated with the “connectome”, which is the physical neurons-and-synapses network. Quoting from the article:

Focusing solely on neuronal electrical activity is shortsighted if the goal is to understand information processing in brains. Brain function integrally entails complex synaptic chemistries. […] New tools fuel fundamentally new conceptualizations. By and large, today’s approaches for sensing neurotransmitters are not able to approach chemical neurotransmission at the scales that will ultimately be important for uncovering emergent properties. Likewise, most methods are not highly multiplexed such that the interplay of multiple chemical transmitters can be unraveled. In addition to measuring electrical activity at hundreds or thousands of neurons simultaneously, we need to advocate for a large-scale multidisciplinary effort aimed at in vivo nanoscale chemical sensing.

Imagine then that we have already succeeded in implementing a (logical, artificial, human-made) chemical connectome over the internet, by using CS tools and exploiting the relative simplicity of the ingredients of the internet (connected Von Neumann machines which exchange data through known formats and programming languages). We could explore this chemical connectome and we could play with it (because we defined it), even if, like in the case of the brain connectome, we can’t possibly know the www network in all details. So, we would be in a situation which is extremely complex, as a brain is, but which nevertheless allows for very powerful experimentation, because it is still human made, according to human rules.

Neurons know nothing, however …

… they know surprisingly much, according to the choice of definition of “neural knowledge”. The concrete definition which I adopt is the following: the knowledge of a neuron at a given moment is the collection (multiset) of molecules it contains. The knowledge of a synapse is the collection of molecules present in the respective axon, dendrite and synaptic cleft.

I take the following hypotheses for a wet neural network:

  • the neural network is physically described as a graph with nodes being the neurons and arrows being the axon-dendrite synapses. The network is built from two ingredients: neurons and synapses. Each synapse involves three parts: an axon (associated to a neuron), a synaptic cleft (associated to a local environment) and a dendrite (associated to a neuron).
  • Each of the two ingredients of a neural network, i.e. neurons and synapses, as described previously, function by associated chemical reaction networks, involving the knowledge of the respective ingredients.
  • (the most simplifying hypothesis)  all molecules from the knowledge of a neuron, or of a synapse, are of two kinds: elements of MOLECULES or enzyme  names from the chemical concrete machine.
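
Under these hypotheses, the “knowledge” of a neuron or of a synapse is nothing more than a multiset, which can be sketched in a few lines of Python; the molecule and enzyme names below are invented placeholders, not actual chemical concrete machine molecules.

from collections import Counter

# knowledge of a neuron / synapse = multiset of molecules and enzyme names
neuron_knowledge  = Counter({"molecule_1": 12, "molecule_2": 3, "beta_enzyme": 5})
synapse_knowledge = Counter({"molecule_2": 7, "fan_in_enzyme": 2})

# a chemical reaction consumes some molecules and produces others,
# modifying the knowledge in a purely local way
def react(knowledge, consumed, produced):
    knowledge -= Counter(consumed)
    knowledge += Counter(produced)

react(neuron_knowledge, {"molecule_1": 1, "beta_enzyme": 1}, {"molecule_2": 1})
print(neuron_knowledge)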

The last hypothesis seems to introduce knowledge with a more semantic flavour by the back door. That is because, as explained in  arXiv:1309.6914 , some molecules (i.e. trivalent graphs from the chemical concrete machine formalism) represent lambda calculus terms. So, terms are programs; moreover, the chemical concrete machine is Turing universal, therefore we end up with a rather big chunk of semantic knowledge in a neuron’s lap. I intend to show you this is not the case; in fact a neuron, or a synapse, does not have (or need) this kind of knowledge.

__________________________

Before giving this explanation, I shall explain in just a bit more detail how the wet neural network, which satisfies those hypotheses, works.  A physical neuron’s behaviour is ruled by the chemistry of its metabolic pathways. By the third hypothesis these metabolic pathways can be seen as graph rewrites of the molecules (more about this later). As an effect of its metabolism, the neuron has an electrical activity which in turn alters the behaviour of the other ingredient, the synapse. In the synapse other chemical reaction networks act, which are amenable, again by the third hypothesis, to computations with the chemical concrete machine. As an effect of the action of these metabolic pathways, a neuron communicates with another neuron. In the process the knowledge of each neuron (i.e. its collection of molecules) is modified, and the same is true for a synapse.

As concerns chemical reactions between molecules, in the chemical concrete machine formalism there is only one type of reaction which is admissible, namely the reaction between a molecule and an enzyme. Recall that if (some of the) molecules are like lambda calculus terms, then (some of the) enzymes are like names of reduction steps, and the chemical reaction between a molecule and an enzyme is assimilated to the respective reduction step applied to the respective lambda calculus term.

But, in the post  SPLICE and SWITCH for tangle diagrams in the chemical concrete machine    I proved that in the chemical concrete machine formalism there is a local move, called SWITCH

chem_switch

which is the result of 3 chemical reactions with enzymes, as follows:

chem_switch_2

Therefore, the chemical concrete machine formalism with the SWITCH move added is equivalent to the original formalism. So, we can safely add the SWITCH move to the formalism and use it for defining chemical reactions between molecules (maybe also by adding an enzyme, or more, for the SWITCH move; let’s call them \sigma).  This mechanism gives chemical reactions between molecules of the form

A + B + \sigma \rightarrow C + D + GARB

where A and B are molecules such that, by taking an arrow from A and another arrow from B, we may apply the \sigma enzyme and produce the SWITCH move for this pair of arrows, which results in new molecules C and D (and possibly some GARBAGE, such as loops).
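
As a very naive illustration of this reaction pattern (an invented toy, not the actual chemical concrete machine formalism, where molecules are trivalent graphs): if a molecule is modelled as a bare set of directed arrows, the \sigma reaction picks one arrow from each molecule, switches their heads, and collects any resulting loops as garbage. Splitting the rewired arrows back into the separate molecules C and D is omitted.

# toy model: a "molecule" is a set of directed arrows (tail, head) between nodes
def switch_reaction(mol_a, mol_b, arrow_a, arrow_b):
    # A + B + sigma -> C + D + GARB, modelled very naively
    (a_tail, a_head) = arrow_a
    (b_tail, b_head) = arrow_b
    rewired = (set(mol_a) - {arrow_a}) | (set(mol_b) - {arrow_b})
    rewired |= {(a_tail, b_head), (b_tail, a_head)}          # the SWITCH of the two arrows
    garbage = {arc for arc in rewired if arc[0] == arc[1]}   # loops go to GARB
    return rewired - garbage, garbage

products, garb = switch_reaction({("x", "y")}, {("u", "v")}, ("x", "y"), ("u", "v"))
print(products, garb)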

In conclusion, for this part concerning possible chemical reactions between molecules, we have enough raw material for constructing any chemical reaction network we like. Let me pass to the semantic knowledge part.

__________________________

Semantic knowledge of molecules. This is related to evaluation and it is maybe the least understood part of the chemical concrete machine. As background, see the post  Example: decorations of S,K,I combinators in simply typed graphic lambda calculus , where the same phenomenon is explained (without any relation to chemical metaphors) for the parent of the chemical concrete machine, the graphic lambda calculus.

Let us consider the following rules of decorations with names and types:

chem_decor_1

If we consider decorations of combinator molecules, then we obtain the right type and identification of the corresponding combinator, like in the following example.

chem_decor_2

For combinator molecules, the “semantic knowledge”, i.e. the identification of the lambda calculus term from the associated molecule, is possible.

In general, though, this is not possible. Consider for example a 2-zipper molecule.

chem_decor_3

We obtain the decoration F as a nested expression of A, D, E, which is enough for performing two beta reductions without knowing what A, D, E mean (without the need to evaluate A, D, E). This is equivalent to the property of zippers of allowing only a certain sequence of graphic beta moves (in this case two such moves).

Here is the tricky part: if we look at the term F, then all that we can write after beta reductions is only formal, i.e. F reduces to (A[y:=D])[x:=E], with all the possible problems related to variable name clashes and the order of substitutions. We can write this reduction, but we don’t get anything from it; it still needs further info about the relations between the variables x, y and the terms A, D, E.

However, the graphic beta reductions can be done without any further complication, because they don’t involve any names, neither of variables, like x, y, nor of terms, like A, D, E, F.

Remark that the decoration is made such that:

  • the type decorations of arrows are left unchanged after any move
  • the term or variable decorations (called elsewhere “places”) change globally.

We indicate this global change as in the following figure, which is the result of the sequence of the two possible \beta^{+} moves.

chem_decor_4

Therefore, after the first graphic beta reduction, we write A' = A[y:=D] to indicate that A' is the new decoration, obtained globally (i.e. with respect to the whole graph of which the 2-zipper is a part), which replaces the older A when we replace y by D. After the second graphic beta reduction we use the same notation.

But such indications are even misleading if, for example, there is a path made of arrows outside the 2-zipper which connects the arrow decorated by D with the arrow decorated by y.  We should then, in order to have a better notation, replace D by D[y:=D], which gives rise to a notation for a potentially infinite process of modifying D. So, once we use graphs (molecules) which do not correspond to combinators (or to lambda calculus terms), we are in big trouble if we try to reduce the graphic beta move to term rewriting, or to reductions in lambda calculus.

In conclusion for this part: the decorations considered here, which add a semantic layer to the pure syntax of graph rewriting, cannot be used as a replacement for the graphic molecules, nor should reductions be equated with chemical reactions, except in the cases when we have access to the whole molecule and, moreover, the whole molecule is one which represents a combinator.

So, in this sense, the syntactic knowledge of the neuron, consisting of the list of its molecules, is not enough for deducing the semantic knowledge, which is global, coming from the simultaneous decoration of the whole chemical reaction network which is associated to the whole neural network.