# Actors as types in the beta move, tentative

Recall the posts about types as decorations of a GLC graph?  Like this one

Example: decorations of S,K,I combinators in simply typed GLC

In the chemlambda version, the decoration with types for the lambda and application graphical elements is this:

or with g-patterns:

L[x:b, y:a, z:a->b]

A[x:a->b, y:a, z:b]
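These two decoration rules can be sketched as checks on concrete type decorations. This is a minimal sketch of mine, not the post's notation: types are nested tuples, with `("->", a, b)` standing for a -> b.

```python
# A minimal sketch (assumed encoding) of the two decoration rules above,
# with types as nested tuples and ("->", a, b) standing for a -> b.

def arrow(a, b):
    """The type a -> b."""
    return ("->", a, b)

def check_L(x, y, z):
    """L[x:b, y:a, z:a->b] : the third port must carry the type y -> x."""
    return z == arrow(y, x)

def check_A(x, y, z):
    """A[x:a->b, y:a, z:b] : the first port must carry the type y -> z."""
    return x == arrow(y, z)

print(check_L("b", "a", arrow("a", "b")))   # True
print(check_A(arrow("a", "b"), "a", "b"))   # True
```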

Recall also that there is a magma associated to any graph (or g-pattern), which is easy to define. If the magma is free then we say that the g-pattern is well typed (not that we need the qualifier “well” any further).

Let’s mix this with actors. We attribute the port variables of a g-pattern to actors (id’s) and write that the port variable x belongs to the actor a like this

x:a

I don’t want to define an operation -> for actor id’s, as if they were types. Instead I shall use the Arrow graphical element and the COMB move (see the moves of chemlambda in terms of g-patterns here).

Here is a COMB move, a bit modified:

L[x:b, y:a, z:d]     –COMB–>  L[x:b, y:a, u:a] Arrow[u:b, z:d]

which says something like

:d should be  :a->:b

The same for the application

A[z:d, v:a, w:b]  –COMB–>  Arrow[z:d, s:a] A[s:b, v:a, w:b]

which says something like

:d should be :a->:b

which, you agree, is totally compatible with the decorations from the first figure of the post.

Notice the appearance of the port variables u:a, u:b and s:a, s:b, which play the role of a->b.
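The two modified COMB moves can be sketched as g-pattern rewrites. The representation, a pair (node name, list of (port, actor) pairs), is an assumption of mine for the sake of illustration:

```python
# Sketch of the two modified COMB moves as g-pattern rewrites.
# A g-pattern element is assumed to be (node name, [(port, actor), ...]).

def comb_L(elem, fresh):
    """L[x:b, y:a, z:d]  -COMB->  L[x:b, y:a, u:a] Arrow[u:b, z:d]"""
    name, ((x, ax), (y, ay), (z, az)) = elem
    assert name == "L"
    return [("L", [(x, ax), (y, ay), (fresh, ay)]),
            ("Arrow", [(fresh, ax), (z, az)])]

def comb_A(elem, fresh):
    """A[z:d, v:a, w:b]  -COMB->  Arrow[z:d, s:a] A[s:b, v:a, w:b]"""
    name, ((z, az), (v, av), (w, aw)) = elem
    assert name == "A"
    return [("Arrow", [(z, az), (fresh, av)]),
            ("A", [(fresh, aw), (v, av), (w, aw)])]

# applied to L[x:a, y:d, z:c]:
print(comb_L(("L", [("x", "a"), ("y", "d"), ("z", "c")]), "u"))
# [('L', [('x', 'a'), ('y', 'd'), ('u', 'd')]), ('Arrow', [('u', 'a'), ('z', 'c')])]
```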

We allow the usual COMB moves only if the repeated variables have the same actors.
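In the same assumed representation, the usual COMB move on two Arrow elements, with this side condition, could look like:

```python
# Sketch of the usual COMB move on two Arrow elements, with the side
# condition that the repeated port variable carries the same actor.

def comb_arrows(a1, a2):
    """Arrow[u:p, z:q] Arrow[z:q, s:r]  -COMB->  Arrow[u:p, s:r]"""
    (u, au), (z1, az1) = a1[1]
    (z2, az2), (s, as_) = a2[1]
    if (z1, az1) != (z2, az2):
        return None        # move not allowed: ports or actors differ
    return ("Arrow", [(u, au), (s, as_)])

print(comb_arrows(("Arrow", [("u", "a"), ("z", "c")]),
                  ("Arrow", [("z", "c"), ("s", "b")])))
# ('Arrow', [('u', 'a'), ('s', 'b')])
```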

What about the beta move?

The LEFT g-pattern of the beta move is, say with actors:

L[x:a, y:d, z:c]  A[z:c, v:b, w:e]

Apply the two new COMB moves:

L[x:a, y:d, z:c]  A[z:c, v:b, w:e]

–2 COMB–>

L[x:a, y:d, u:d]

Arrow[u:a, z:c]  Arrow[z:c, s:b]

A[s:e, v:b, w:e]

A usual COMB move applies here:

L[x:a, y:d, u:d]

Arrow[u:a, z:c]  Arrow[z:c, s:b]

A[s:e, v:b, w:e]

<–COMB–>

L[x:a, y:d, u:d]

Arrow[u:a, s:b]

A[s:e, v:b, w:e]

and now the new beta move would be:

L[x:a, y:d, u:d]

Arrow[u:a, s:b]

A[s:e, v:b, w:e]

–BETA–>

Arrow[x:a, w:e]

Arrow[v:b, y:d]
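The new beta move can be sketched as one rewrite on the three wired elements, again in the assumed (node name, [(port, actor), ...]) representation:

```python
# Sketch of the new BETA move:
# L[x:a, y:d, u:d] Arrow[u:a, s:b] A[s:e, v:b, w:e]
#   -BETA->  Arrow[x:a, w:e] Arrow[v:b, y:d]

def new_beta(lam, arr, app):
    (x, ax), (y, ay), (u, au) = lam[1]
    (u2, _), (s, _) = arr[1]
    (s2, _), (v, av), (w, aw) = app[1]
    assert lam[0] == "L" and arr[0] == "Arrow" and app[0] == "A"
    assert u == u2 and s == s2          # the three elements must be wired
    return [("Arrow", [(x, ax), (w, aw)]),
            ("Arrow", [(v, av), (y, ay)])]

print(new_beta(("L", [("x", "a"), ("y", "d"), ("u", "d")]),
               ("Arrow", [("u", "a"), ("s", "b")]),
               ("A", [("s", "e"), ("v", "b"), ("w", "e")])))
# [('Arrow', [('x', 'a'), ('w', 'e')]), ('Arrow', [('v', 'b'), ('y', 'd')])]
```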

This form of the beta move resembles the combination of CLICK and ZIP from zipper logic.

Moreover the Arrow elements could be interpreted as message passing.

________________________________________________________

# Neurons know nothing, however …

… they know surprisingly much, according to the choice of definition of “neural knowledge”. The concrete definition which I adopt is the following: the knowledge of a neuron at a given moment is the collection (multiset) of molecules it contains. The knowledge of a synapse is the collection of molecules present in the respective axon, dendrite and synaptic cleft.

I take the following hypotheses for a wet neural network:

• the neural network is physically described as a graph with nodes being the neurons and arrows being the axon-dendrite synapses. The network is built from two ingredients: neurons and synapses. Each synapse involves three parts: an axon (associated to a neuron), a synaptic cleft (associated to a local environment) and a dendrite (associated to a neuron).
• Each of the two ingredients of a neural network, i.e. neurons and synapses, as described previously, functions by way of associated chemical reaction networks, involving the knowledge of the respective ingredient.
• (the most simplifying hypothesis)  all molecules from the knowledge of a neuron, or of a synapse, are of two kinds: elements of $MOLECULES$ or enzyme  names from the chemical concrete machine.

The last hypothesis seems to introduce knowledge with a more semantic flavour by the backdoor. That is because, as explained in arXiv:1309.6914, some molecules (i.e. trivalent graphs from the chemical concrete machine formalism) represent lambda calculus terms. So, terms are programs; moreover, the chemical concrete machine is Turing universal, therefore we end up with a rather big chunk of semantic knowledge in a neuron’s lap. I intend to show you this is not the case; in fact a neuron, or a synapse, does not have (or need) this kind of knowledge.

__________________________

Before giving this explanation, I shall explain in just a bit more detail how a wet neural network which satisfies those hypotheses works. A physical neuron’s behaviour is ruled by the chemistry of its metabolic pathways. By the third hypothesis these metabolic pathways can be seen as graph rewrites of the molecules (more about this later). As an effect of its metabolism, the neuron has an electrical activity which in turn alters the behaviour of the other ingredient, the synapse. In the synapse act other chemical reaction networks, which are amenable, again by the third hypothesis, to computations with the chemical concrete machine. As an effect of the action of these metabolic pathways, a neuron communicates with another neuron. In the process, the knowledge of each neuron (i.e. its collection of molecules) is modified, and the same is true of a synapse.

As concerns chemical reactions between molecules, in the chemical concrete machine formalism there is only one admissible type of reaction, namely the reaction between a molecule and an enzyme. Recall that if (some of the) molecules are like lambda calculus terms, then (some of the) enzymes are like names of reduction steps, and the chemical reaction between a molecule and an enzyme is assimilated to the respective reduction step applied to the respective lambda calculus term.
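This molecule + enzyme reaction over a multiset of molecules can be sketched concretely. The rewrite table and the molecule and enzyme names below are illustrative assumptions of mine, not part of the formalism:

```python
from collections import Counter

# Sketch of the only admissible kind of reaction: molecule + enzyme ->
# rewritten molecule, acting on a multiset (Counter) of molecules.
# The rewrite table and names are hypothetical, for illustration only.

REWRITES = {("(I x)", "beta"): "x"}   # hypothetical: beta reduces (I x) to x

def react(soup, mol, enzyme):
    """Consume one mol and one enzyme, add the rewritten molecule."""
    if soup[mol] == 0 or soup[enzyme] == 0 or (mol, enzyme) not in REWRITES:
        return soup                    # no reaction possible
    out = soup.copy()
    out[mol] -= 1
    out[enzyme] -= 1
    out[REWRITES[(mol, enzyme)]] += 1
    return +out                        # unary + drops zero counts

neuron = Counter({"(I x)": 1, "beta": 1})
print(react(neuron, "(I x)", "beta"))   # Counter({'x': 1})
```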

But, in the post  SPLICE and SWITCH for tangle diagrams in the chemical concrete machine    I proved that in the chemical concrete machine formalism there is a local move, called SWITCH

which is the result of 3 chemical reactions with enzymes, as follows:

Therefore, the chemical concrete machine formalism with the SWITCH move added is equivalent to the original formalism. So, we can safely add the SWITCH move to the formalism and use it for defining chemical reactions between molecules (maybe also by adding an enzyme, or more, for the SWITCH move; let’s call them $\sigma$). This mechanism gives chemical reactions between molecules of the form

$A + B + \sigma \rightarrow C + D + GARB$

where $A$ and $B$ are molecules such that, by taking an arrow from $A$ and another arrow from $B$, we may apply the $\sigma$ enzyme and produce the SWITCH move for this pair of arrows, which results in new molecules $C$ and $D$ (and possibly some garbage $GARB$, such as loops).

In conclusion, for this part concerning possible chemical reactions between molecules, we have enough raw material for constructing any chemical reaction network we like. Let me pass to the semantic knowledge part.

__________________________

Semantic knowledge of molecules. This is related to evaluation and it is maybe the least understood part of the chemical concrete machine. As a background, see the post Example: decorations of S,K,I combinators in simply typed graphic lambda calculus, where the same phenomenon is explained (without any relation to chemical metaphors) for the parent of the chemical concrete machine, graphic lambda calculus.

Let us consider the following rules of decorations with names and types:

If we consider decorations of combinator molecules, then we obtain the right type and identification of the corresponding combinator, like in the following example.

For combinator molecules, the “semantic knowledge”, i.e. the identification of the lambda calculus term from the associated molecule, is possible.

In general, though, this is not possible. Consider for example a 2-zipper molecule.

We obtain the decoration $F$ as a nested expression of $A, D, E$, which is enough for performing two beta reductions without knowing what $A, D, E$ mean (without the need to evaluate $A, D, E$). This is equivalent to the property of zippers to allow only a certain sequence of graphic beta moves (in this case two such moves).

Here is the tricky part: if we look at the term $F$ then all that we can write after beta reductions is only formal, i.e. $F$  reduces to $(A[y:=D])[x:=E]$, with all the possible problems related to variable names clashes and order of substitutions. We can write this reduction but we don’t get anything from it, it still needs further info about relations between the variables $x, y$ and the terms $A, D, E$.
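The variable-clash problem mentioned here can be made concrete with a naive, capture-permitting substitution. The term encoding below is my illustrative assumption:

```python
# Sketch of the variable-clash problem with a naive substitution.
# Terms: a variable is a string, ("lam", v, body) is an abstraction,
# ("app", f, a) is an application. (Encoding assumed for illustration.)

def subst_naive(term, var, repl):
    if isinstance(term, str):
        return repl if term == var else term
    if term[0] == "lam":
        _, v, body = term
        if v == var:
            return term                       # var is shadowed here
        return ("lam", v, subst_naive(body, var, repl))
    _, f, a = term
    return ("app", subst_naive(f, var, repl), subst_naive(a, var, repl))

# (lam y. x)[x := y] : the free y of the replacement gets captured
print(subst_naive(("lam", "y", "x"), "x", "y"))   # ('lam', 'y', 'y')
```

The graphic beta move avoids this entirely, since it manipulates wirings and never mentions names.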

However, the graphic beta reductions can be done without any further complication, because they don’t involve any names, nor of variables, like $x, y$, neither of terms, like $A, D, E, F$.

Remark that the decoration is made such that:

• the type decorations of arrows are left unchanged after any move
• the term or variable decorations (elsewhere called “places”) change globally.

We indicate this global change as in the following figure, which is the result of the sequence of the two possible $\beta^{+}$ moves.

Therefore, after the first graphic beta reduction, we write $A'= A[y:=D]$ to indicate that $A'$ is the new decoration, obtained globally (i.e. with respect to the whole graph of which the 2-zipper is a part), which replaces the older $A$ when we replace $y$ by $D$. After the second graphic beta reduction we use the same notation.

But such indications are even misleading if, for example, there is a path made by arrows outside the 2-zipper which connects the arrow decorated by $D$ with the arrow decorated by $y$. We should, in order to have a better notation, replace $D$ by $D[y:=D]$, which gives rise to a notation for a potentially infinite process of modifying $D$. So, once we use graphs (molecules) which do not correspond to combinators (or to lambda calculus terms), we are in big trouble if we try to reduce the graphic beta move to term rewriting, or to reductions in lambda calculus.

In conclusion for this part: the decorations considered here, which add a semantic layer to the pure syntax of graph rewriting, cannot be used as a replacement for the graphic molecules, nor should reductions be equated with chemical reactions, except when we have access to the whole molecule and, moreover, the whole molecule is one which represents a combinator.

So, in this sense, the syntactic knowledge of the neuron, consisting of the list of its molecules, is not enough for deducing the semantic knowledge, which is global, coming from the simultaneous decoration of the whole chemical reaction network associated to the whole neural network.

# Types and places

Simply typed graphic lambda calculus is an added layer of decorations with types over graphic lambda calculus, described by the following rules:

See Example: decorations of S,K,I combinators in simply typed graphic lambda calculus for the compatibility with simply typed lambda calculus.

Compared with the purity of graphic lambda calculus, types (at least of this kind) seem a step back. However, graphic lambda calculus also has the emergent algebra sector, see arXiv:1305.5786 [cs.LO] section 5, and also the series here

The starting point of the research subject on “computing with space” is to try to understand the computational content of emergent algebras arXiv:0907.1520 [math.RA]. This has led to graphic lambda calculus, passing through decorated binary trees and decorated tangle diagrams, until reaching the non-decorated formalism of graphic lambda calculus. This formalism is not yet completely finished; indeed, the extended graphic beta move, arXiv:1302.0778 [math.GT], especially section 8, which mixes the emergent algebra part with the purely logic part, is still in beta stage.

Therefore, I’m coming back to the decorated version and remark that the two gates which appear in the emergent algebra sector can be decorated with places:

Places form an emergent algebra. Types form a free magma (for well decorated graphs).

# Example: decorations of S,K,I combinators in simply typed graphic lambda calculus

Continuing from Simply typed graphic lambda calculus, let’s look at decorations for the S,K,I combinators.

We work in the lambda calculus sector of the graphic lambda calculus, which contains graphs in $GRAPH$ which are associated to terms in lambda calculus by the algorithm explained here.

We want to decorate the arrows of such graphs, according to the rules given in the following figure:

Decorations are denoted by letters $\sigma, \tau$, … or by letters $a, b, c$, … (what’s in a name?). These decorations are called “types”. There is only one operation, called “$\rightarrow$”, which takes a pair of types $(\sigma, \tau)$ and returns the type $\sigma \rightarrow \tau$.

In algebraic terms, here’s what is happening. We start with a graph in $GRAPH$ and decorate all its arrows with different letters from an alphabet (of types). Then we associate to it the magma with:

• generators being the (decorations of the) arrows
• relations being the ones from the rules of decorations given in the previous figure.

If the magma is free, then we say that the graph is well typed. This means we can find a finite collection of types, with no relations between them, such that all arrows can be decorated by elements of the free magma generated by that collection of types.

_____________________

Remark 1:  Please, don’t let me reinvent the wheel.  Surely somewhere in the big work on type theory, there is something analogous done already. Just send me citations and I shall add them with due acknowledgements.

_____________________

Remark 2: This is exactly the same idea used in knot theory, where we associate to a knot diagram its knot quandle. But with a twist: while in knot theory we look for quandles which are not free (we cannot eliminate all relations between the generators), in this simply typed lambda calculus we concentrate only on those graphs which have free associated magmas. Please look at the lower right corner of the previous figure, where the rule of decoration for the $\varepsilon$ gate is given. (We are not going to use it in relation to the lambda calculus sector.) As you see, this is compatible with the trivial quandle with the operation $xy = y$.

_____________________

Remark 3: I am not going to take the extremely interesting path of cartesian closed categories, basically because of my strong bias against anything cartesian. The initial source of this bias comes from the experience with sub-riemannian geometry, where any application of cartesian ideas, like slicing a problem until it becomes 1-dimensional, then solving it by 1d calculus and analysis techniques, then reassembling the slices, always leads to “rigidity results”, which have the form: if you try, then you arrive at a contradiction. Don’t get me wrong, there are amazing and extremely deep such results, but they are always in the form of a proof by contradiction.

Remark 4: The point of view from this simply typed graphic lambda calculus is surely related to the Hindley-Milner type system.

_____________________

Let’s see how this works for the I, K, S combinators, then for the $\Omega$ combinator.

The procedure is the following:

• we number the nodes of the graph (arbitrarily)
• we put arbitrary, different labels on each arrow of the graph
• we write the relations which appear from each node, according to the rules of decoration
• finally, we examine the generated magma to see if, by eliminating some generators using the relations, we can prove that it is a free magma.
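The steps above can be sketched in code, under an assumption of mine: that eliminating generators via the relations amounts to first-order unification of type terms, with an occurs check deciding freeness. Type terms are generator letters or `("->", s, t)`:

```python
# Minimal sketch of the procedure: impose the node relations on the
# generators and unify; the magma is free iff unification succeeds.
# (Assumption: freeness checking as unification with an occurs check.)

def walk(t, sub):
    while isinstance(t, str) and t in sub:
        t = sub[t]
    return t

def occurs_in(v, t, sub):
    t = walk(t, sub)
    if isinstance(t, str):
        return t == v
    return occurs_in(v, t[1], sub) or occurs_in(v, t[2], sub)

def unify(s, t, sub):
    s, t = walk(s, sub), walk(t, sub)
    if s == t:
        return sub
    if isinstance(s, str):
        return None if occurs_in(s, t, sub) else {**sub, s: t}
    if isinstance(t, str):
        return unify(t, s, sub)
    sub = unify(s[1], t[1], sub)
    return None if sub is None else unify(s[2], t[2], sub)

def free_magma(relations):
    """The magma with these relations is free iff all relations unify."""
    sub = {}
    for lhs, rhs in relations:
        sub = unify(lhs, rhs, sub)
        if sub is None:
            return None
    return sub

# I = lam x.x : its lambda node gives the single relation c = a -> a
print(free_magma([("c", ("->", "a", "a"))]))   # {'c': ('->', 'a', 'a')}
```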

In the next figure this is done for the $I$ combinator (the identity) and for the $K$ combinator.

We obtain the right types for the identity and $K$ combinators (there are no contexts; that’s for later). It was easy: they are “well typed” according to the definition from here.

Now, let’s look at the $S$ combinator:

We have to work a bit, but not too much:

(For the convenience of the readers, I added in the figures the usual notations for combinators and types.)

We obtain again the right types for $S$ as well as for the variables involved.

_____________________

Let’s try to decorate the combinator $\Omega = (\lambda x . xx) (\lambda x. xx)$.

We obtain the following magma presentation:

which cannot be free, because of relations (4) and (6). So $\Omega$ is not well typed, exactly like in the simply typed lambda calculus.
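The obstruction has the shape of a self-application constraint: a relation of the form $a = a \rightarrow b$, which no free magma can satisfy, since the generator $a$ would have to be a strict subterm of itself. A minimal occurs-check sketch (encoding assumed, as before):

```python
# Sketch: why self-application blocks freeness. A relation of the form
# a = a -> b fails the occurs check, so the magma cannot be free.

def occurs(v, t):
    """Does generator v occur in type term t?  t is a string or ('->', s, u)."""
    if isinstance(t, str):
        return t == v
    return occurs(v, t[1]) or occurs(v, t[2])

# the relation  a = a -> b :
print(occurs("a", ("->", "a", "b")))   # True, so Omega is not well typed
```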

# Simply typed graphic lambda calculus

Let me prepare the path towards discussing about types and spaces. In this post is a first proposal for a simply typed graphic lambda calculus, which is compatible with simply typed lambda calculus.

The leading idea is:

types are decorations, in the same way as elements of an emergent algebra are decorations.

Motivation: Remember that emergent algebras appeared as an effort to understand the formalism of decorated binary trees, which appeared first in Dilatation structures I. Fundamentals, section 4. Various efforts have been spent on this, as witnessed in Computing with space, sections 3-6, until finally arriving at the formalism of graphic lambda calculus, as exposed in arXiv:1305.5786 [cs.LO]. The main “trick” of graphic lambda calculus is that it eliminates variables, therefore it eliminates decorations of graphs!

So, the program is that now we go back from the other side, logic, where types are decorations, as you will see, in order to connect with emergent algebras, which deal with the problem of describing spaces in a purely relational way, without any appeal to points (well, in the graphic lambda calculus formalism).

Interesting things will appear, surely.

__________________________

In this version of simply typed lambda calculus we introduce types (just a bag of names  $\sigma, \tau$, … ), with one operation, denoted by $\rightarrow$.  We use types as decorations of arrows of graphs in $GRAPH$, according to the following decoration rules:

Remark how the decoration of the $\varepsilon$ gate corresponds to the trivial quandle.

I don’t claim that I can decorate any graph in $GRAPH$ by using these rules.  But when I can do it, then under any move from graphic lambda calculus the decoration remains compatible with the decoration rules. Indeed, this is proved in the next figure:

The pruning and the FAN-OUT moves are trivially preserving decorations.
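For the graphic beta move in particular, the preservation claim can be sketched as a concrete check: before the move the decorations are L[x:B, y:A, z:A->B] and A[z:A->B, v:A, w:B]; after it, x is wired to w and v to y, and a wiring is compatible exactly when it connects equal types. (A tiny sketch of mine, not a proof.)

```python
# Sketch: the graphic beta move preserves decorations. The wirings created
# by the move (x -- w and v -- y) connect arrows of equal types.

def arrow(a, b):
    return ("->", a, b)

A, B = "A", "B"
before = {"x": B, "y": A, "z": arrow(A, B), "v": A, "w": B}
# after beta, x is wired to w and v is wired to y:
print(before["x"] == before["w"] and before["v"] == before["y"])   # True
```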

__________________________

In another post we shall see why this is compatible with simply typed combinatory logic.