Tag Archives: combinatory logic

RNA based combinators

Thank you for pointing me to arXiv:2008.08814, An RNA-Based Theory of Natural Universal Computation.

The author proposes to use RNA-encoded combinators, in such a way that the combinatory logic rewrites are implemented by pervasive RNA editing rules. The idea is the same as in Molecular computers: for each rewrite there is an enzyme which detects the rewrite pattern and then performs the rewrite using a small repertoire of cleavage and ligation operations.

The author concentrates on the BCKW system, which works well until the rewrite for the W combinator. At this point the problem of duplication appears, which leads the author to propose the following replacement for duplication:

  • use RNA pseudoknots for term representation, in the sense that parenthesis matching corresponds to double-stranded portions of the pseudoknot
  • instead of duplication, tag a term with a label and use the same double-strand matching idea to reference the term from another term, or from the same term (pseudoknot)
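To make the second idea concrete, here is a minimal toy sketch (my own encoding, not the paper's RNA one; the tuple representation and the names `ref`, `rewrite_W`, `expand` are invented for illustration): terms are nested tuples, and the W rewrite (W f) x → (f x) x emits two references to a labelled copy of x instead of duplicating x.

```python
# Toy sketch: replace duplication by labelled references.
# Terms: strings are atoms; ('app', f, x) is application; ('ref', n) points
# to the term stored in env under label n.
fresh = iter(range(10**6))   # supply of fresh labels
env = {}                     # label -> shared term

def ref(term):
    """Tag a term with a fresh label and return a reference to it."""
    label = next(fresh)
    env[label] = term
    return ('ref', label)

def rewrite_W(term):
    """(W f) x -> (f x) x, with both occurrences of x as one shared reference."""
    _, (_, w, f), x = term          # matches ((W f) x)
    assert w == 'W'
    shared = ref(x)                 # label the argument once ...
    return ('app', ('app', f, shared), shared)   # ... and reference it twice

def expand(term):
    """Resolve references, recovering the fully duplicated term."""
    if isinstance(term, tuple):
        if term[0] == 'ref':
            return expand(env[term[1]])
        return tuple(expand(part) for part in term)
    return term
```

In this analogy the pseudoknot proposal plays the role of `env`: the double-stranded matching is what lets a reference find its labelled term.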

This is very interesting to pursue, perhaps in combination with chemSKI with tokens, which proposes another mechanism for duplication, or with Combinatory Chemistry: Towards a Simple Model of Emergent Evolution, arXiv:2003.07916, where duplication is delegated to the environment.

This was also partially explored in Chemlambda strings, Zipper logic, as well as here in Zipper Logic and RNA pseudoknots. The idea is different though; look at how the I and K combinators appear:

See also what is apparently the first project proposing the use of combinators in chemistry, UPIM, mentioned locally here.

Take a better look at the knotted S combinator (zipper logic VI)

Continuing from Knot diagrams of the SKI (zipper logic V), here are some more drawings of the S combinator, which was described in the last post by means of crossings:

 

zipper_loop_5

 

Seen like this, it looks very close to the drawings from section 5 of  arXiv:1103.6007.

I am not teasing you, but many things are now clear exactly because of all these detours. There is a lot to write and explain now, pretty much straightforward, a day-after-day effort to produce something which describes well the end of the journey. When in fact the most mysterious, creative part is the journey.

_____________________________________

Knot diagrams of the SKI (zipper logic V)

Continuing from  Zipper logic and knot diagrams, here are the  S,K,I combinators when expressed in this convention of the zipper logic:

zipper_loop_4

Besides crossings (which satisfy at least the Reidemeister 2 move), there are also fanout nodes. There are associated DIST moves which self-reproduce the half-zippers as expressed with crossings.

Where do the DIST moves come from? Well, recall that there are at least two different ways to express crossings as macros in GLC or chemlambda: one with application and abstraction nodes, the other with fanout and dilation nodes.

This is in fact the point: I am interested to see if the emergent algebra sector of GLC, or the corresponding one in chemlambda, is universal, and when I am using crossings I am thinking secretly about dilations.

The DIST moves (which will be displayed in a future post) come from the DIST moves from the emergent algebra sector of chemlambda (read this post and links therein).

There is, though, one strange construct: the left-to-right arrow which has attached a stack of right-to-left arrows, and the associated CLICK move which connects these stacks of arrows.

Actually, these stacks look like half-zippers themselves and the CLICK move looks like (un)zipping a zipper.

So, are we back to square one?

No, because even if we replace those stacks by some other half-zippers and the CLICK move by unzipping, we still have the property that those constructs and moves, which are external to knot diagrams, are very localized.

Anyway, I can cheat by saying that I can do the CLICK move, if the crossings are expressed in the emergent algebra sector of chemlambda (therefore dilation nodes, fanout and fanin nodes), with the help of ELIM LOOPS and SWITCH.

But I am interested in simple, mindless ways to do this.

___________________________________

 

Zipper logic (III). Universality

Continues from Zipper logic (II). Details . In this post we shall see that zipper logic is Turing universal.

The plan is the following. I shall define combinators in the zipper logic framework, as being made of two ingredients:

  • instead of a tree of application nodes we have a combination of + half zippers (i.e. there is a clear way to transform a binary tree of application nodes into a collection of + half zippers)
  • each combinator is then seen as being made of a combination of + half zippers (replacing the tree of application nodes) and the combinators S, K, I, themselves seen as particular combinations of + and – half zippers.

1. Converting trees of application nodes into combinations of + half zippers is straightforward. Imagine such a tree with the root upwards and the leaves downwards. Each application node then has an exit arrow and two input arrows, one at the left, the other at the right.

Cut in two each right input arrow which is not a leaf or the root arrow. We obtain a collection of towers of application nodes. Replace each tower from this collection with the corresponding + half zipper.

Then glue back along the arrows which have been cut in two.

An example is given in the following figure.

zipper_not_8
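The cutting procedure at point 1 can also be sketched on trees (a schematic reading of it, with invented names; each tower of application nodes becomes one + half zipper, and a 'cut' mark records an arrow to be glued back):

```python
# Terms: strings are leaves; ('app', left, right) is an application node.
def half_zippers(tree):
    """Cut each non-leaf right input arrow; return the list of + half zippers.
    Each half zipper is the list of right inputs along a tower of application
    nodes, ending with its leftmost leaf; ('cut', i) marks a glued arrow
    leading to half zipper number i."""
    zippers = []
    def spine_of(t):
        idx = len(zippers)
        zippers.append(None)          # reserve this zipper's slot
        spine = []
        while isinstance(t, tuple):   # walk down the tower of application nodes
            _, left, right = t
            if isinstance(right, tuple):
                spine.append(('cut', spine_of(right)))   # cut the right arrow
            else:
                spine.append(right)
            t = left
        spine.append(t)               # the leftmost leaf of the tower
        zippers[idx] = spine
        return idx
    spine_of(tree)
    return zippers
```

For the term (a b)((c d) e), for instance, this produces two half zippers, glued back along one cut arrow.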

2. The S, K, I combinators in zipper logic are defined in the next figure.

zipper_not_6

3. Combinators in zipper logic are by definition trees of application nodes translated into + half zippers, as explained at point 1, with leaves the S, K, I combinators defined at point 2.

4. Computation in zipper logic means the application of the moves (graph rewrites) defined in the previous post  Zipper logic (II). Details .

In particular the computation with a combinator in zipper logic means the reduction according to the graph rewrites of zipper logic of the respective combinator, as defined at point 3.

5. Zipper logic is Turing universal. In order to prove this we have to check the usual reductions of the S, K, I combinators.

Let A be a  combinator, then the zipper combinator which corresponds to IA reduces to the zipper combinator of A:

zipper_not_7

Let A, B be two combinators. Then the zipper combinator corresponding to the combinator (KA)B reduces as in the following figure.

zipper_not_10

The combinator (SK)K reduces to the combinator I, as in the next figure.

zipper_not_11

Now, let A, B, C be three combinators. We see then how the combinator ((SA)B)C reduces to (AC)(BC), when expressed as zipper combinators.

zipper_not_12

We see here a move called “FAN-OUT”. This move is a composite of DIST and FAN-IN, as in chemlambda. It remains to prove that any zipper combinator is indeed a multiplicator (i.e. that the FAN-OUT move makes sense when applied to a zipper combinator). The proof is the same as the one needed in a step of the proof that chemlambda is Turing universal. This is left to the reader, or for another time, if necessary.
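The reductions checked in the figures above can also be verified as plain term rewrites (a toy normal-order SKI reducer; the pair encoding is mine, not zipper logic itself):

```python
# Terms: 'S', 'K', 'I' or variable strings; a pair (f, x) is an application.
def step(t):
    """Try one reduction step; return (new_term, changed)."""
    if not isinstance(t, tuple):
        return t, False
    f, x = t
    if f == 'I':                                   # I x -> x
        return x, True
    if isinstance(f, tuple) and f[0] == 'K':       # (K a) b -> a
        return f[1], True
    if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == 'S':
        a, b, c = f[0][1], f[1], x                 # ((S a) b) c -> (a c) (b c)
        return ((a, c), (b, c)), True
    f2, changed = step(f)                          # otherwise reduce inside
    if changed:
        return (f2, x), True
    x2, changed = step(x)
    return ((f, x2), True) if changed else (t, False)

def reduce_ski(t):
    """Reduce to normal form (assumes one exists)."""
    changed = True
    while changed:
        t, changed = step(t)
    return t
```

In particular `reduce_ski(((('S','K'),'K'),'x'))` returns `'x'`, which is the term-level counterpart of the (S K) K to I picture above.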

________________________________

Question: why is CLICK needed? After all, I used it all the time together with ZIP.

We will see this next time, when I shall prove that tangle diagrams are Turing universal, via zipper logic.

________________________________

Zipper logic

As seen in GLC or chemlambda, combinatory logic is a matter of zipping, unzipping or self-multiplication of (half) zippers.

So, why not take this seriously, by asking if there is a way to express combinatory logic as a logic of zippers?

Besides the fun, there are advantages in using this for computing with space, but let me first explain the logic of zippers in full detail.

The starting point is the fact that the S, K, I combinators and the reductions they are involved into can be expressed by the intermediary of zippers.

I am going back to the post Combinators and zippers.

An n-zipper in GLC (in fact I am going to use chemlambda, in order to eliminate the GLOBAL FAN-OUT move) is a graph with 2n nodes, looking like this:

zipper_n_1

Zippers are interesting because they behave like actual zippers. Indeed, we can unzip  a zipper by a graphic beta move:

zipper_n_2

Therefore, we can define a k-ZIP move as a concatenation of k beta moves, which can be applied to an n-zipper provided that k \leq n.

As with a real zipper, we see that the n-zipper can be split into two half zippers, denoted “-nZ” and “+nZ”.

zipper_n_3

Prepared like this, let’s think about combinators. In the SKI system a general combinator is expressed as a tree made of application nodes, which has as leaves the combinators S, K, and I.

Now, we can see a tree made of application nodes as a combination of + half zippers.

Also, the leaves, i.e. the S, K, I combinators, are themselves made of half zippers and termination nodes or fanout nodes.

This is easy to see for the I and K combinators (images taken from the post Combinators and zippers):

zipper_3

_________________

zipper_4  The combinator S is made of:

  • one -3Z  half-zipper
  • one -2Z half-zipper
  • one -1Z half zipper
  • one fanout node

linked in a particular way:

zipper_5

Conclusion: any combinator expressed through S, K, I is made of:

  • a tree of + half zippers
  • with leaves made by -1Z, -2Z, -3Z half zippers,
  • connected perhaps to fanout nodes and termination nodes.

The reduction of a combinator is then expressed by:

  •  application of k-ZIP moves
  • LOC PRUNING moves (in relation to termination nodes and also in relation to CO-ASSOC moves, see further)
  • and DIST moves.

Let me explain. The application of k-ZIP moves is clear. The k-ZIP moves shorten the + half zippers which form the application tree.

We sit now in chemlambda, in order to avoid GLOBAL FAN-OUT. Then, what about the moves related to a half zipper which is connected to the input arrow of a fanout node?

These are DIST moves. Indeed, remark that all half zippers are DISTRIBUTORS, because they are made of concatenations of lambda or application nodes.

Moreover, the fanout node is a distributor itself! Indeed, we may see the CO-ASSOC move as a DIST move, via the application of some FAN-IN and LOC PRUNING moves:

zipper_n_4

All in all, this defines Zipper Logic, which is then Turing universal. In the next post I shall give all the details, but it’s straightforward.

_______________________________________

I-don’t-always advantage of the chemical concrete machine

… over other computing formalisms, is contained in the following wise words:

WHEN_FAN_OUT

Indeed, usually a FAN-OUT gate is something which takes a variable as input and outputs two copies of it. That is why FAN-OUT gates are not available in every model of computation, for example in quantum computing.

But if you don’t use variable (names) and there’s nothing circulating through the wires of your computer model, then you can use the FAN-OUT gate with impunity, on the condition that you have something which replaces the FAN-OUT behaviour without its bad sides. Consider graph rewriting systems for your new computer.

This is done in the chemical concrete machine, with the help of DIST enzymes and associated moves (chemical reactions). (“DIST” comes from distributivity.)

In graphic lambda calculus, the parent of the chemical concrete machine, I proved that combinatory logic can be done by using the available local moves and one global move, called GLOBAL FAN-OUT. This global move is the one which most resembles the behaviour of a usual FAN-OUT gate: a graph A connected to the input of a FAN-OUT gate is replaced by two copies of the graph.

That’s bad, I think, so in the chemical concrete machine I managed to prove that GLOBAL FAN-OUT can be replaced, as concerns graphs (or molecules, in the chemical concrete machine formalism) which represent combinators, by successions of local DIST moves (and some other local moves).

It is possible exactly because there are no variable names. Moreover, there’s something almost biological in the succession of moves: we see how combinators reproduce.
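In tree terms, that replacement can be sketched as follows (my schematic, not the chemlambda graph moves themselves): instead of copying a combinator graph in one global stroke, a fan-out gate is pushed through it one node at a time, each pass of the loop being one local DIST-like step.

```python
from collections import deque

# A term is a leaf (string) or an application pair (f, x).
def fan_out(term):
    """Duplicate `term` by a succession of local steps.
    Each queue entry is a fan gate sitting on one node; processing it copies
    just that node and pushes fan gates onto its children."""
    out1, out2 = [None], [None]            # one-element lists act as wires
    queue = deque([(term, out1, out2)])
    steps = 0
    while queue:
        node, s1, s2 = queue.popleft()
        steps += 1
        if isinstance(node, tuple):        # DIST-like: copy the application node
            f, x = node
            f1, f2, x1, x2 = [None], [None], [None], [None]
            s1[0], s2[0] = (f1, x1), (f2, x2)
            queue.append((f, f1, f2))      # the fan gates move to the inputs
            queue.append((x, x1, x2))
        else:                              # a leaf is simply copied
            s1[0] = s2[0] = node

    def freeze(slot):                      # read the finished copies back
        v = slot[0]
        return (freeze(v[0]), freeze(v[1])) if isinstance(v, tuple) else v

    return freeze(out1), freeze(out2), steps
```

Each step touches only one node, which is the point: the duplication is a succession of local events, not one global copy.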

As an illustration, the following is taken from the post  Chemical concrete machine, detailed (V) :

Here are the four “molecules” which represent the combinators B, C, K, W.  (Via the red-green vs black-white change of notation, they can be deduced from their expressions in lambda calculus by the algorithm described here . )

bckw_2

Let’s see how the “molecule” K behaves, when connected to a FAN-OUT gate (green node with one input and two outputs):

bckw_6

The “reproduction” of the molecule B is more impressive:

bckw_3

In the formalism of the chemical concrete machine, \delta^{+} is a distributivity move (or “enzyme” which facilitates the move in one direction, preferentially), and \phi^{+} is a FAN-IN move (facilitated in one direction).

___________________________

See more about this in the Chemical concrete machine tutorial.

___________________________

This makes me believe that, as long as we don’t reason in terms of states (or any other variables), it is possible to have FAN-OUT gates in quantum computation.

Example: if-then-else in the chemical concrete machine

… along with a short discussion about what the chemical concrete machine can compute. I start with this and then go to the example.

Until now I proved that the graph rewriting system called the “chemical concrete machine” can do combinatory logic. Therefore it can compute anything (that a Turing machine can). I shall be more specific, because to the eyes of somebody who is not used to functional programming, this might seem outrageous.

It can compute anything which can be computed using any programming language based on lambda calculus, like Haskell or Scala. And then some more (as regards only the things those languages can do without using any syntactic sugar, of course). The procedure is straightforward: translate the (essentially combinatory logic) terms into graphs and then let the enzymes (moves) do their magic. I shall give the example of the if-then-else structure a bit later.
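The front half of that pipeline, going from a lambda term to a combinatory term, can be sketched with classical bracket abstraction (a standard textbook algorithm, not the graph translation itself; the encodings and names are mine):

```python
# Lambda terms: ('var', x), ('lam', x, body), ('app', f, a); the strings
# 'S', 'K', 'I' are combinators. to_ski eliminates all lambda abstractions.
def free(x, t):
    """Is the variable x free in t?"""
    if isinstance(t, str):
        return False
    if t[0] == 'var':
        return t[1] == x
    if t[0] == 'app':
        return free(x, t[1]) or free(x, t[2])
    return t[1] != x and free(x, t[2])             # ('lam', y, body)

def bracket(x, t):
    """[x]t : abstract the variable x out of a lambda-free term t."""
    if t == ('var', x):
        return 'I'                                 # [x]x = I
    if not free(x, t):
        return ('app', 'K', t)                     # [x]t = K t if x not free
    _, f, a = t                                    # t must be an application
    return ('app', ('app', 'S', bracket(x, f)), bracket(x, a))

def to_ski(t):
    """Translate a lambda term into an S, K, I combinator term."""
    if isinstance(t, str) or t[0] == 'var':
        return t
    if t[0] == 'app':
        return ('app', to_ski(t[1]), to_ski(t[2]))
    return bracket(t[1], to_ski(t[2]))             # ('lam', x, body)
```

For example λx.x translates to I, and λx.λy.x translates to S (K K) I, which behaves like K.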

There are, of course, interesting questions to ask, among them:

  • do there exist real molecules and real enzymes which react with one another as in the chemical concrete machine formalism? Here I need your help, dear chemists.
  • is this an example of a sequential or parallel computation?
  • what evaluation procedure is used in the chemical concrete machine?

The second question is the easiest to answer: it is a parallel computation. The molecules (graphs) are mixed in a reactor with a choice of enzymes, and then all possible reactions can occur in parallel, at all times. (Realistic models will have a probabilistic part attached, but there is nothing special here, because the chemical concrete machine, initialized with molecules and enzymes, is just a chemical reaction network, so any procedure for attaching the probabilistic part to a chemical reaction network can also be used in conjunction with the chemical concrete machine.)

But we can also prepare the initial molecules so that the computation is sequential. It suffices to use zippers. More on this later. For the moment, it is worth mentioning that the chemical concrete machine (as well as graphic lambda calculus) is already proven to be more powerful than lambda calculus. Why? For at least two reasons:

  • lambda calculus, or combinatory logic, are just sectors of the formalisms, i.e. they correspond to only a part of what the formalisms can do. There are other sectors as well, for example the tangle diagram sector.
  • they are naturally parallel, as long as we use only local moves, as is the case for the chemical concrete machine. Indeed, I wrote that a graph is a “molecule”, but this is only a way of speaking, because molecules would be better identified with connected graphs. But in these formalisms the graphs are not supposed to be connected: any (finite) collection of graphs (i.e. molecules) is also a graph. The moves being local, there is no interference in the simultaneous application of several instances of the (local) moves in different places, for different molecules (connected subgraphs), or even for the same molecule, as long as the places where the moves are applied differ from one another. On the other hand, lambda calculus and combinatory logic are naturally sequential.

The third question, concerning the evaluation procedure, will also be explored in further posts. Care has to be taken here because there are no variables in these formalisms (which translates into less demand for different species of real molecules, needed only for naming variables). So it is about the order of moves, right? The short answer is that it depends: sometimes the computation done by the chemical machine can be seen as greedy evaluation, sometimes as lazy evaluation.

Let me make again the point that the chemical concrete machine formalism should somehow be seen as part of the beautiful idea of algorithmic chemistry. So it’s not so unearthly.

Finally, it is well known that lambda calculus and Turing machines are the two pillars of computation. For historical reasons chemists seem to concentrate only on the emulation of Turing machines (please correct me if I’m wrong). The main idea of algorithmic chemistry, as far as I understand, is that a sufficiently complex chemical network has the capacity to do lambda calculus. But, if you are determined to use only Turing machines for chemical computing, then, supposing that the algorithmic chemistry idea is true, you have to translate the natural language of lambda calculus into the Turing machine frame. This is a tar pit: very quickly, it becomes very hard to follow. Instead, why not use lambda calculus as it is for the really powerful applications of chemical computing, and in parallel use one of the excellent languages for simulating chemical computing in silico?

_________________________

The if-then-else construct has already been explored in graphic lambda calculus; see Teaser: B-type neural networks in graphic lambda calculus (II). Here I shall do a much simpler example, just by making the right translations, as explained at the beginning of this post.

In lambda calculus, there are terms called TRUE, FALSE and IFTHENELSE, which are the Church encodings of the booleans true, false and of if-then-else. The associated graphs in the chemical concrete machine are:

bckw_111

Take two other molecules A, B, with one exit each; in particular you may think of them as corresponding to terms in lambda calculus (but this is not mandatory). Then IFTHENELSE TRUE A B should become A. In the chemical concrete machine, with only beta+ enzymes, look what happens:

bckw_112

Along this chain of reactions, there is no choice other than the one from the figure. Why? Because essentially at every step there is only one reaction site available to the enzyme beta+ (in the region of the reactor represented in the figure, of course). The result is, unsurprisingly, compatible with the lambda calculus version, with the exception that A and B are not required to be (graphs corresponding to) lambda terms. They can be anything, for example from the family of “other molecules”.

In lambda calculus, IFTHENELSE FALSE A B should become (by reductions) B. In the chemical concrete machine, look what happens:

bck_113

The previous remarks apply here as well.
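For comparison, the lambda calculus side of these two reductions can be checked directly with the Church encodings (ordinary Python lambdas standing in for lambda terms):

```python
# Church booleans and the if-then-else combinator.
TRUE  = lambda x: lambda y: x          # λx.λy.x  (selects the first branch)
FALSE = lambda x: lambda y: y          # λx.λy.y  (selects the second branch)
IFTHENELSE = lambda b: lambda x: lambda y: b(x)(y)

# IFTHENELSE TRUE A B reduces to A; IFTHENELSE FALSE A B reduces to B.
```

As in the reactor pictures, A and B need not be lambda terms at all; here they can be plain strings.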

With a little bit of imagination, if we look closer at what TRUE and FALSE are doing, then we can adapt IFTHENELSE to what I’ve called a B-type NN synapse and obtain a molecule which releases the medicine A upon detection of a certain molecule, and the medicine B upon detection of another.

_____________________

Return to the chemical concrete machine tutorial.

Chemical concrete machine, detailed (VI)

Let’s continue from  the post  Chemical concrete machine, detailed (V) , by adding some more facts and remarks.

_________________

We have seen that, in order not to have to apply a controlled sequence of CO-ASSOC moves, it is better to introduce a composite move, saying that a certain simple molecule is a propagator.

bckw_9

With this move the formalism of the chemical concrete machine is exactly as powerful as without it. The reason is that the move called “PROP+” can be seen as a sequence of CO-ASSOC moves and DIST+ moves.

Let’s accept this move (i.e. as if there were an associated enzyme for it) and let’s revisit the proof that the W molecule is a multiplier.

bckw_10

_________________

Besides the B, C, K, W molecules (which correspond to the B, C, K, W system), other molecules are also interesting: the following figure presents the molecules F, I, S’ and S.

bckw_12

The molecule F may represent FALSE in the Church encoding of the booleans, along with K, which encodes TRUE. The molecule I is the identity and the molecule S corresponds to the combinator S from the SKI combinator calculus, which is

a computational system that may be perceived as a reduced version of untyped lambda calculus. It can be thought of as a computer programming language, though it is not useful for writing software. Instead, it is important in the mathematical theory of algorithms because it is an extremely simple Turing complete language.

These correspondences are established by using the algorithm for transforming lambda calculus terms into graphs in graphic lambda calculus, then by using the dictionary between the gates of graphic lambda calculus and the essential molecules of the chemical concrete machine.

What about the S’ molecule?  It is a version of the molecule S, i.e. it transforms into S by this reaction:

bckw_13

Also, there is a proof, analogous to the one for W, of the fact that S’ is a multiplier, using the PROP+ move. This proof is better than the proof for S which was given at the end of the post Local FAN-IN eliminates global FAN-OUT (I).

_____________________

Return to the chemical concrete machine tutorial.

Chemical concrete machine, detailed (V)

The B,C,K,W system

   is a variant of combinatory logic that takes as primitive the combinators B, C, K, and W. This system was discovered by Haskell Curry in his doctoral thesis Grundlagen der kombinatorischen Logik, whose results are set out in Curry (1930).

In this post I shall explain what the correspondents of the B, C, K, W combinators are in the formalism of the chemical concrete machine, then I shall prove that they are multipliers. In a future post I shall prove that their correspondents in the chemical machine make the machine Turing complete.

Remark: in a sense, this was already established, by using the S, K, I combinators, in the post Local FAN-IN eliminates global FAN-OUT (I). You only have to pass from the black-and-white drawing conventions of graphic lambda calculus to the red-and-green drawings of the chemical concrete machine, and then use the result which says that combinatory logic can be implemented in graphic lambda calculus with global FAN-OUT. All in all, this gives that in the chemical concrete machine combinatory logic can be implemented without any global move.

However, I think it is more instructive to see first why combinators other than S, K, I are multipliers (which is a reformulation of the result mentioned in the remark).

___________________

Here are the four “molecules” which represent the combinators B, C, K, W.  (Via the red-green vs black-white change of notation, they can be deduced from their expressions in lambda calculus by the algorithm described here . )

bckw_2

Now, let’s prove that they are multipliers! The proofs for B and C are very much alike, therefore I give here only the proof that B is a multiplier:

bckw_3

By the conventions of the chemical concrete machine, I mention here the enzymes involved in the reactions, instead of writing the moves as in graphic lambda calculus.

The proof that K is a multiplier is the following:

bckw_6

Notice how, in both cases, the reactions seem feasible, in the sense that there are no case distinctions; they can be accomplished by a linear process: pour some DIST enzymes, then some FAN-IN, for B; or DIST, LOC PRUNING and then FAN-IN, for K. More than that, remark that the order of the reactions does not matter, i.e. they can be accomplished by pouring all the needed enzymes at the same moment.

For the W combinator (molecule), things get a bit more complex, as was the case for the combinator S:

bckw_4

There is one reaction (or move) which needs explanation. I called it the DISENTANGLE (CO-ASSOC) reaction. It is this:

bckw_5

It can clearly be done by a controlled succession of CO-ASSOC moves (reactions). From the point of view of feasibility in the real world (provided a real implementation of the chemical concrete machine appears), it seems hard to control the exact order of application of the CO-ASSOC moves which produces the DISENTANGLE move as an effect. So, probably, we shall need a “disentangle enzyme” dedicated to this.

As an alternative, remark that in order to prove that W is a multiplier we need an application of the DISENTANGLE composite move, described in the next figure:

bckw_7

For practical (or theoretical) purposes, it is enough to take this as a move. If you are lost and don’t understand what the correspondent of this move in lambda calculus is, it is this: for any term T, if you apply a FAN-OUT gate to TT then you obtain two copies of TT. (Recall that this is practically invisible in lambda calculus, because there is no FAN-OUT; instead, all variables have names and the pattern matching works by some magic outside that formalism.)

In other words, what would get rid of the need for a controlled sequence of CO-ASSOC reactions for multiplying the molecule W is this: assume that the molecule which is connected to the “y” essential molecule (i.e. to the input of a FAN-OUT gate) is a “propagator”. Propagators are defined in the next figure:

bckw_8

Propagators are different from multipliers: they are molecules with one selected input and one selected output which “propagate along a FAN-OUT gate”, while multipliers are multiplied by a FAN-OUT gate. Propagators can in fact serve as labels, or names, which propagate along a tree of FAN-OUT gates.

_________________

Return to the chemical concrete machine tutorial.

Local FAN-IN eliminates global FAN-OUT (I)

In order to be able to build a chemical concrete machine (see the posts A chemical concrete machine for lambda calculus and Why build a chemical concrete machine, and how?) we have to prove that universal computation can be attained with only local moves in graphic lambda calculus. However, the lambda calculus sector of graphic lambda calculus, which gives universality to graphic lambda calculus, uses the global FAN-OUT move (see theorem 3.1 (d) of arXiv:1305.5786 [cs.LO]). Similarly, we see in proposition 3.2 (d), which describes the way combinatory logic appears in graphic lambda calculus, that again global FAN-OUT is used.

I want to describe a way to eliminate the global FAN-OUT move from combinatory logic (as it appears in graphic lambda calculus via the algorithm described here).

________________

There are reasons to dislike global moves in relation to B-type neural networks (see the last post    Pair of synapses, one controlling the other (B-type NN part III) ). Similar concerns can be found in the series of posts which has as the most recent one Dictionary from emergent algebra to graphic lambda calculus (III) .

In this first post I shall introduce a local FAN-IN move and two distributivity moves, and I shall prove that they eliminate the need for using global FAN-OUT in combinatory logic. In the next post I shall prove that we can eliminate two other moves (so that the total number of moves of graphic lambda calculus stays the same as before), and moreover that we can recover from the distributivity and local FAN-IN moves the missing oriented Reidemeister moves from the \lambda-TANGLE sector.

________________

Definition. The local FAN-IN move is described in the next figure and it can be applied for any \varepsilon \not = 1.

frob_1

Comments:

  • as you see, a dilation gate appears in the move; what can this have to do with combinatory logic? As I explained previously, the properties of the gates come from the moves they are involved in, and not from their names. I could have introduced a new gate with two inputs and one output, called this new gate a “fan-in gate” and used it in the FAN-IN move. However, wait until the next post to see that there are other advantages, besides the economy of available gates, in using a dilation gate as a fan-in.
  • the FAN-IN move resembles the packing-arrows trick which is used extensively in the neural networks posts. This suggests using as a fan-in gate

synapse_1

the green triangle gate, and as fan-out gate the red triangle gate. This would eliminate the \Upsilon gate from the formalism, but it is not clear to me how this replacement would interfere with the rest of the moves.

  • the FAN-IN move resembles the dual of the graphic beta move, but it is not the same (recall that until now I have not accepted the dual of the graphic beta move into the list of moves of graphic lambda calculus, although there are strong reasons to do so):

dist_2

dist_1

which is needed in the emergent algebra sector in order to make the dictionary work (and is related as well to the goal of not using global FAN-OUT in that sector). This latter move is in fact a distributivity move (see further), but we are free to choose different moves in different sectors of graphic lambda calculus.

  • I know it is surprising that until now there has been nothing about evaluation strategies in graphic lambda calculus; the reason is that, because there are no variables, there is nothing to evaluate. However, the situation is not so simple: at some point, for example in the chemical concrete machine or for neural networks, some criterion for choosing the order of moves will be needed. But it is important to notice that replacing global FAN-OUT (which could be seen as a remnant of having variables and evaluating them) by local FAN-IN has nothing to do with evaluation strategies.

________________

Definition: The distributivity moves (related to the application and lambda abstraction gates) are the following:

dist_3

dist_4

Comments:

  • the first distributivity move is straightforward: an application gate is just doubled and two fan-out gates establish the right connections. We see here why the “mystery move” can be seen as a kind of distributivity move,
  • the second distributivity move is where we need a fan-in gate (and where we use a dilation gate instead): because of the orientation of the arrows, after we double the lambda abstraction gate, we need to collect two arrows into one!

________________

Combinatory logic terms appear in graphic lambda calculus as trees made of application gates, with leaves among the combinators S, K, I (seen as graphs in GRAPH). I want to show the following. [UPDATE: made some corrections.]

Theorem. We can replace the global FAN-OUT move with a sequence of local FAN-IN, DIST, CO-COMM and local pruning moves, every time the global FAN-OUT move is applied to a term made of SKI combinators.

Proof. First, remark that a sequence of DIST moves for the application gate allows us to reduce the problem of replacing global FAN-OUT moves for any combinator to the problem of replacing them for S, K, and I. This is because the DIST move for the application gate allows us to do the FAN-OUT of trees of application gates:

dist_5

Now we have to prove that we can perform global FAN-OUT for the I, K, S combinators. For the combinator I the proof is the following:

frob_4

For the combinator K we shall also use a local pruning move:

frob_5

Finally, the proof for the combinator S is the following:

frob_3

Now we are going to use 3 DIST moves, followed by the switch of arrows explained in Local FAN-IN eliminates GLOBAL FAN-OUT (II), which is applied in the dashed green ellipse from the next figure:

frob_10

And we are done.

UPDATE: On close inspection, it turns out that we don’t need the switch (macro) moves. Instead, if we go back to the point we were at before the last figure, we may first use CO-ASSOC and then perform the three FAN-IN moves.

From combinators and zippers to interaction graphs (part I)

In the post Combinators and zippers I gave a description of the SKI combinators and their fundamental relations in terms of zippers. See the tutorial page on graphic lambda calculus for background.

As a conclusion of the post on combinators and zippers, let me mention that the I and K combinators appear as “half” of the 1-zipper and 2-zipper respectively, with some wires and termination gates added. The combinator S appears as half of the 3-zipper with a “diamond” added, which in fact serves to generate more zippers when composed with other combinators.

Looks like gibberish? Here is for example the relation SKK \rightarrow I, as deduced in graphic lambda calculus, by using zippers:

zipper_6

Looks like I did it without using GLOBAL FAN-OUT! The point is that we may transform combinatory logic into a calculus with zippers, half-zippers (see how the left half-zipper from the expression of K, made of two \lambda gates, couples with the right half-zipper, made of two \curlywedge gates from the “diamond”, to form a 2-zipper) and some wires and termination gates.
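For readers who want to check the relation SKK \rightarrow I outside the graphic formalism: with the standard functional encoding of the combinators (my choice of notation, unrelated to zippers), SKK applied to any argument returns that argument, which is exactly the behaviour of I.

```python
# Standard functional encoding of the SKI combinators (curried):
I = lambda x: x
K = lambda x: lambda y: x
S = lambda x: lambda y: lambda z: x(z)(y(z))

# S K K a  ->  K a (K a)  ->  a, i.e. SKK acts as the identity:
result = S(K)(K)('a')
```

This check uses ordinary function application, so it says nothing about which graphic moves (beta, FAN-OUT, …) realize the reduction; it only confirms the extensional behaviour.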

Here comes the more interesting part: could we do it with zippers written in terms of emergent algebras and by using the dual of the graphic beta move instead?

I shall first explain what a zipper in emergent algebra is. Look at the next figure to see one:

zipper_7

This is a 3-zipper written with the help of the emergent algebra crossing macro. It unzips by using three \beta^{*}(\varepsilon) moves.

The problem with this zipper is that it is not at all clear what half of it might be. Related to this is the fact that we can “unzip” each part of the zipper independently; there is no order of zipping-unzipping, because we already have all the patterns needed to apply the dual of the graphic beta move. This is a different situation, compared to that of the usual zipper. In the usual case, the zipper is in fact made of two halves, one containing the \lambda gates, the other containing the \curlywedge gates; there is only one place where we can apply the graphic beta move, and after this move a new pattern appears, proper for the application of the next graphic beta move, and so on, until the zipper becomes unzipped.
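The forced order of unzipping in the usual zipper can be illustrated with plain lambda terms (a sketch, with my own hypothetical encoding): take the n-zipper to be the term (\lambda x_1. \dots \lambda x_n. M)\, A_1 \dots A_n. At every stage exactly one beta-redex is available, so the unzipping order is determined.

```python
def zipper(n):
    """Build (lam x1. ... lam xn. M) A1 ... An as nested tuples."""
    body = 'M'
    for i in range(n, 0, -1):
        body = ('lam', f'x{i}', body)
    t = body
    for i in range(1, n + 1):
        t = ('app', t, f'A{i}')
    return t

def redexes(t):
    """Count beta-redex positions (lam x. B) A occurring in t."""
    if not isinstance(t, tuple):
        return 0
    if t[0] == 'app':
        f, a = t[1], t[2]
        here = 1 if isinstance(f, tuple) and f[0] == 'lam' else 0
        return here + redexes(f) + redexes(a)
    return redexes(t[2])                  # lam node

def subst(t, x, a):
    """Capture-naive substitution t[x := a] (enough for this example)."""
    if t == x:
        return a
    if not isinstance(t, tuple):
        return t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, a))
    return ('app', subst(t[1], x, a), subst(t[2], x, a))

def beta_step(t):
    """Fire one beta-redex if present, else return None."""
    if not isinstance(t, tuple):
        return None
    if t[0] == 'app':
        f, a = t[1], t[2]
        if isinstance(f, tuple) and f[0] == 'lam':
            return subst(f[2], f[1], a)   # (lam x. B) A  ->  B[x := A]
        r = beta_step(f)
        if r is not None:
            return ('app', r, a)
        r = beta_step(a)
        return None if r is None else ('app', f, r)
    r = beta_step(t[2])
    return None if r is None else ('lam', t[1], r)
```

Running `beta_step` repeatedly on `zipper(3)` takes exactly three steps to reach `'M'`, and at every intermediate stage `redexes` reports a single available redex, which is the ordered, one-tooth-at-a-time behaviour described above. By contrast, the emergent algebra zipper offers all its \beta^{*}(\varepsilon) patterns at once.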

In the extreme case of the 1-zipper, which is nothing but the pattern for the application of the graphic beta move, we can identify the \lambda gate with the left half of the zipper and the \curlywedge gate with the right half. On the contrary, we can’t do anything comparable for the 1-zipper from emergent algebras.

A solution which could get us out of this problem (and therefore one which would allow us to do combinatory logic in the realm of emergent algebras) is to introduce “interaction graphs”, which form a new sector of graphic lambda calculus. These interaction graphs look (superficially) a bit like Feynman trivalent diagrams, with “interaction nodes” decorated either with \lambda or with \curlywedge, and wires corresponding to “particles” \Upsilon or \varepsilon. This is for next time; I have to play a bit more with them before showing them.

Combinators and stickers

From the wiki page of Combinatory logic:

Combinatory logic is a notation to eliminate the need for variables in mathematical logic. It was introduced by Moses Schönfinkel and Haskell Curry and has more recently been used in computer science as a theoretical model of computation and also as a basis for the design of functional programming languages. It is based on combinators.

For the treatment of combinators in graphic lambda calculus see section 3.1 of Local and global moves on locally planar trivalent graphs, lambda calculus and lambda-Scale, arxiv:1207.0332.
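Before the graphic treatment, it may help to have the classical rewrite rules at hand. A minimal sketch of combinatory logic as term rewriting, with my own ad-hoc encoding (nested pairs for applications) and the standard rules I x \rightarrow x, K x y \rightarrow x, S x y z \rightarrow x z (y z):

```python
def app(*ts):
    """Left-associated application: app('S', 'K', 'K') = (('S', 'K'), 'K')."""
    t = ts[0]
    for u in ts[1:]:
        t = (t, u)
    return t

def step(t):
    """Perform one outermost rewrite step, or return None if t is normal."""
    if isinstance(t, tuple):
        f, a = t
        if f == 'I':                                   # I x -> x
            return a
        if isinstance(f, tuple) and f[0] == 'K':       # K x y -> x
            return f[1]
        if (isinstance(f, tuple) and isinstance(f[0], tuple)
                and f[0][0] == 'S'):                   # S x y z -> x z (y z)
            x, y, z = f[0][1], f[1], a
            return ((x, z), (y, z))
        for i, sub in enumerate(t):                    # otherwise rewrite inside
            s = step(sub)
            if s is not None:
                return (s, a) if i == 0 else (f, s)
    return None

def normalize(t, limit=1000):
    """Rewrite until no rule applies (with a step bound, since SKI
    reduction need not terminate in general)."""
    while limit:
        s = step(t)
        if s is None:
            return t
        t, limit = s, limit - 1
    return t
```

For example, `normalize(app('S', 'K', 'K', 'x'))` returns `'x'`, the term-rewriting counterpart of the relation SKK \rightarrow I that reappears below in zipper form. This sketch ignores everything graphic; it only fixes the algebra that the graphs are meant to encode.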

In this post I want to discuss the conversion of the lambda calculus sector of the graphic lambda calculus formalism into the knot diagram sector plus stickers.

There are two stickers; they are graphs in $GRAPH$ with the following form:

I call them “stickers” because they behave like devices which concentrate the attention on the region encircled by the closed loop. Think about a sticker with nothing written on it: it is a thing which you can stick in some place and write something on. Likewise, take any graph in $GRAPH$ with a marked OUT leaf (for example any combinator in graphic lambda calculus form, which always has a unique OUT leaf) and connect it with one of the stickers (i.e. “write it on the sticker”). The operation analogous to sticking the written sticker on something is to cross the sticker connected with a combinator with another combinator, where “crossing” refers to the knot diagram sector, with its graphic lambda crossings.

Look how the stickers behave:

They may be converted one into another (by way of graphic beta moves). Moreover, they transform the application gate into a crossing!

Let us see the behaviour of some combinators connected with stickers. Take the combinator I = \lambda x.x. In the next figure we see it in the upper left corner, in the graphic lambda formalism. (The figure has two layers. One may read only what is written in black, then consider what happens when the red drawings are added.)

In black, we see that the graph of I connected with a sticker transforms, by a graphic beta move, into a loop. If we add the red part, we see that the graph transforms into a loop with a crossing with a graph A. If we perform a second graphic beta move (in red), then we get A. This corresponds to the combinatory logic relation IA \rightarrow A.

Likewise, let’s consider the combinator K = \lambda xy.x, with the associated graph described in the upper left corner of the next figure.

With the added red part, it corresponds to the relation KA \rightarrow \lambda y. A, with y a fresh variable for A.

I skip the combinator S for a future discussion, but from this post it looks almost clear that by using stickers we may convert the combinatory logic sector of graphic lambda calculus into the knot diagram sector enhanced with stickers, and maybe with zippers (though we shall see that we may eliminate one or the other, in some sense).