Tag Archives: knot diagrams

Small graph rewrite systems (5)

Here are some more tentative descriptions of system X and a play with the trefoil knot. This post comes after the intermezzo and continues the series on small graph rewrite systems.

Recall that system X is a proposal to decompose a crossing into two trivalent nodes, which transforms a knot diagram into an unoriented stick-and-ring graph.

2cols-spin-x

The rewrites are the following, written both in the conventions of stick-and-ring graphs and in the more usual conventions, which resemble the slide equivalence or the spin braids mentioned in the intermezzo.

The first rewrite is GL (glue), which is a Reidemeister 1 rewrite in only one direction.

2cols-spin-gl-x

The second rewrite is RD2, which is a Reidemeister 2 rewrite in one direction.

2cols-spin-rd2-x

There is a DIST rewrite, the kind you encounter in interaction combinators or in chemlambda.

2cols-spin-dist-x

And finally there are two SH rewrites, with patterns as in chemlambda, or as they appear in the semantics of interaction combinators.

2cols-spin-sh1-x

2cols-spin-sh2-x

A Reidemeister 3 rewrite follows from these, as explained in the following figure (taken from the system X page).

2cols-spin-rd3

Let’s play with the trefoil knot now. The conversion to stick-and-rings

2cols-spin-3foil

is practically the Gauss code. But when we apply some sequences of rewrites

2cols-spin-3f-2

we obtain more complex graphs, where

  • either we can reverse some pairs of half-crossings back into crossings, thus obtaining knotted Gauss codes (?!),
  • or we remark that we quickly get out of the class of Gauss code graphs…

thus we get a sort of recursive Gauss code.
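As a rough sketch of why the conversion is “practically the Gauss code”: traverse the knot, record each crossing twice, make the traversal the ring, and join the two visits of each crossing by a stick. Here is a minimal illustration in Python (the encoding is mine, for illustration only, and ignores the extra structure carried by the system X nodes):

```python
# Trefoil Gauss sequence: the knot visits crossings 1, 2, 3 twice each.
gauss = [1, 2, 3, 1, 2, 3]

# Ring edges: consecutive visits along the knot (a cycle of 6 nodes).
ring = [(i, (i + 1) % len(gauss)) for i in range(len(gauss))]

# Sticks: join the two visits of the same crossing.
sticks = []
for c in sorted(set(gauss)):
    i, j = [k for k, v in enumerate(gauss) if v == c]
    sticks.append((i, j))

print(ring)    # the ring of the stick-and-ring graph
print(sticks)  # [(0, 3), (1, 4), (2, 5)], one stick per crossing
```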

Finally, remark that any knot diagram contains a ring. Recall that lambda terms translated to chemlambda have no rings.

Nothing vague in the “no semantics” point of view

I’m a supporter of “no semantics” and I’ll try to convince you that there is nothing vague in it.

Take any formalism. To any term built in this formalism there is an associated syntactic tree. Now, look at the syntactic tree and forget about the formalism. Because it is a tree, no matter how you choose to decorate its leaves, you can progress from the leaves to the root by decorating each edge. At each node of the tree you follow a decoration rule which says: take the decorations of the input edges and use them to decorate the output edge. If you suppose that the formalism uses operations of bounded arity, then you can say the following: strictly by following decoration rules which are local (you need to know at most N edge decorations in order to decorate another edge), you can decorate the whole tree. The whole graph! And the meaning of the graph has something to do with this decoration. Actually, the formalism turns out to be not about graphs (trees), but about the static decorations which appear at the root of the syntactic tree.
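A minimal sketch of this bottom-up decoration, in Python, for a hypothetical formalism of arithmetic terms (the decoration rule at each node is local: it only sees the decorations of its input edges):

```python
# A toy syntactic tree for the term (1 + 2) * 3.
tree = ("mul", ("add", ("leaf", 1), ("leaf", 2)), ("leaf", 3))

# Local decoration rule: to decorate the output edge of a node you only
# need the decorations of its input edges.
def decorate(node):
    tag = node[0]
    if tag == "leaf":
        return node[1]          # leaves are decorated freely
    left, right = decorate(node[1]), decorate(node[2])
    return left + right if tag == "add" else left * right

print(decorate(tree))  # 9, the static decoration appearing at the root
```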
But, you see, these static decorations are global effects of local decoration rules. Here enters the semantic police. Thou shalt accept only trees whose roots accept decorations from a given language. Hard problems ensue, heavily loaded with semantics.
Now, let’s pass from trees to other graphs.
The same phenomenon (a static global decoration emerging from local decoration rules) holds for any DAG (directed acyclic graph). It is telling that people LOVE DAGs, so much so that they go to the extreme of excluding other graphs from their thinking. These are the ones who put everything in a functional frame.
Nothing wrong with this!
Decorated graphs have a long tradition in mathematics; think for example of knot theory.
In knot theory the knot diagram is a graph (with 4-valent nodes) which is surely not acyclic! However, one of the fundamental objects associated to a knot is the algebraic object called the “quandle”, which is generated by the edges (arcs) of the graph, with certain relations coming from the crossings. It is of course a very hard, semantically fully loaded problem to try to identify the knot from the associated quandle.
The difference from syntactic trees is that, generically, the graph does not admit a static global decoration. That is why the associated algebraic object, the quandle, is generated by (and not equal to) the set of edges.
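For the record, in the usual conventions the knot quandle has one generator per arc and one relation per crossing: if b is the over-arc and a, c are the incoming and outgoing under-arcs, the relation is c = a \triangleright b, where the operation \triangleright satisfies the quandle axioms: a \triangleright a = a; for each b, the map a \mapsto a \triangleright b is a bijection; and (a \triangleright b) \triangleright c = (a \triangleright c) \triangleright (b \triangleright c).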

There are beautiful problems related to the global objects generated by local rules. They are also difficult, because of the global aspect. It is perhaps as difficult to find an algorithm which builds an isomorphism between  two graphs which have the same associated family of decorations, as it is to  find a decentralized algorithm for graph reduction of a distributed syntactic tree.

But these kinds of problems do not cover all the interesting problems.

What if this global semantic point of view makes things harder than they really are?

Just suppose you are a genius who found such an algorithm, by amazing, mind bending mathematical insights.

Your brilliant algorithm, because it is an algorithm, can be executed by a Turing Machine.

Now, Turing machines are purely local. The head of the machine has only local access to the tape at any given moment (forget about indirection, I’ll come back to it in a moment). The number of states of the machine is finite and the number of rules is finite.
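To make the locality concrete, here is a minimal Turing machine step in Python (the machine and its rule table are hypothetical); note that each step reads and writes exactly one tape cell:

```python
# One TM step: read one cell, consult a finite rule table, write one
# cell, move the head one position. Nothing global is ever consulted.
def step(state, tape, head, rules):
    symbol = tape.get(head, 0)                       # local read
    new_state, new_symbol, move = rules[(state, symbol)]
    tape[head] = new_symbol                          # local write
    return new_state, head + (1 if move == "R" else -1)

# A toy 2-rule machine that writes two 1s and halts.
rules = {("A", 0): ("B", 1, "R"), ("B", 0): ("HALT", 1, "R")}
state, tape, head = "A", {}, 0
while state != "HALT":
    state, head = step(state, tape, head, rules)
print(tape)  # {0: 1, 1: 1}
```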

This means that the brilliant work served to edit out the global from the problem!

If you are not content with TM, because of indirection, then look no further than chemlambda (if you wish, combined with TM, as in
http://chorasimilarity.github.io/chemlambda-gui/dynamic/turingchem.html , if you love TM), which is definitely local and Turing universal. It works by the brilliant algorithm: do all the rewrites you can, never mind their global meaning.

Oh, wait, what about a living cell: does it have a way to manage the semantics of the correct global chemical reaction networks which ARE the cell?

What about a brain, made of many neural cells, glia cells and whatnot? By the homunculus fallacy, it can’t have static, external, globally selected functions and terms (aka semantics).

On the other side, of course, the researcher who studies the cell or the brain, and the mathematician who finds the brilliant algorithm, all use heavy semantic machinery.

TO TELL THE STORY!

Not that the cell or the brain needs the story in order to live.

In the animated gif there is a chemlambda molecule called the 28 quine, which satisfies the definition of life in the sense that it randomly replenishes its atoms while approximately keeping its global shape (thus it has a metabolism). It does this under the algorithm: do all the rewrites you can, but a rewrite happens only if a random coin flip accepts it.
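Here is a toy version of this algorithm, with strings instead of molecules (the two rewrite rules are mine, chosen so that the “atoms” are conserved while the configuration keeps churning, a cartoon of the metabolism):

```python
import random

# Two rewrites, each applied only where its pattern occurs, and only
# if a random coin flip accepts it.
rules = {"ab": "ba", "ba": "ab"}

def step(s):
    out, i = [], 0
    while i < len(s):
        pair = s[i:i + 2]
        if pair in rules and random.random() < 0.5:  # the coin flip
            out.append(rules[pair]); i += 2
        else:
            out.append(s[i]); i += 1
    return "".join(out)

s = "abab"
for _ in range(5):
    s = step(s)
    print(s)  # the multiset of letters is conserved, the shape churns
```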

semtree
Most of the atoms of the molecule are related to operations (application and abstraction) from lambda calculus.

I modified a script a bit (sorry, this one is not in the repo) so that, whenever possible, the edges of this graph which MAY be part of a syntactic tree of a lambda term turn GOLD while the others are dark grey.

They mean nothing, there’s no semantics, because for once the golden graphs are not DAGs, and because the computation consists of graph rewrites which do not preserve the “correct” decorations from before the rewrite.

There’s no semantics, but there are still some interesting questions to explore, the main one being: how does life work?

http://chorasimilarity.github.io/chemlambda-gui/dynamic/28_syn.html

UPDATES:

Louis Kauffman’s reply to this:

Dear Marius,
There is no such thing as no-semantics. Every system that YOU deal with is described by you and observed by you with some language that you use. At the very least the system is interpreted in terms of its own actions and this is semantics. But your point is well-taken about not using more semantic overlay than is needed for any given situation. And certainly there are systems like our biology that do not use the higher level descriptions that we have managed to observe. In doing mathematics it is often the case that one must find the least semantics and just the right syntax to explore a given problem. Then work freely and see what comes.
Then describe what happened and as a result see more. The description reenters the syntactic space and becomes ‘uninterpreted’ by which I mean  open to other interactions and interpretations. It is very important! One cannot work at just one level. You will notice that I am arguing both for and against your position!
Best,
Lou Kauffman
My reply:
Dear Louis,
Thanks! It looks like we agree in some respects: “And certainly there are systems like our biology that do not use the higher level descriptions that we have managed to observe.” Not in others; this is the basis of any interesting dialogue.
Then I made another post
Related to the earlier “no semantics” g+ post [*], here is a passage from Rodney Brooks, “Intelligence without representation”:

“It is only the observer of the Creature who imputes a central representation or central control. The Creature itself has none; it is a collection of competing behaviors.  Out of the local chaos of their interactions there emerges, in the eye of an observer, a coherent pattern of behavior. There is no central purposeful locus of control. Minsky [10] gives a similar account of how human behavior is generated.  […]
… we are not claiming that chaos is a necessary ingredient of intelligent behavior.  Indeed, we advocate careful engineering of all the interactions within the system.  […]
We do claim however, that there need be no  explicit representation of either the world or the intentions of the system to generate intelligent behaviors for a Creature. Without such explicit representations, and when viewed locally, the interactions may indeed seem chaotic and without purpose.
I claim there is more than this, however. Even at a local  level we do not have traditional AI representations. We never use tokens which have any semantics that can be attached to them. The best that can be said in our implementation is that one number is passed from a process to another. But it is only by looking at the state of both the first and second processes that that number can be given any interpretation at all. An extremist might say that we really do have representations, but that they are just implicit. With an appropriate mapping of the complete system and its state to another domain, we could define a representation that these numbers and topological  connections between processes somehow encode.
However we are not happy with calling such things a representation. They differ from standard representations in too many ways. There are no variables (e.g. see [1] for a more thorough treatment of this) that need instantiation in reasoning processes. There are no rules which need to be selected through pattern matching. There are no choices to be made. To a large extent the state of the world determines the action of the Creature. Simon [14] noted that the complexity of behavior of a system was not necessarily inherent in the complexity of the creature, but perhaps in the complexity of the environment. He made this analysis in his description of an ant wandering the beach, but ignored its implications in the next paragraph when he talked about humans. We hypothesize (following Agre and Chapman) that much of even human level activity is similarly a reflection of the world through very simple mechanisms without detailed representations.”

This also brings to mind a quote from the end of the Vehicle 3 section of V. Braitenberg’s book Vehicles: Experiments in Synthetic Psychology:

“But, you will say, this is ridiculous: knowledge implies a flow of information from the environment into a living being or at least into something like a living being. There was no such transmission of information here. We were just playing with sensors, motors and connections: the properties that happened to emerge may look like knowledge but really are not. We should be careful with such words.”

Louis Kauffman’s reply to this post:
Dear Marius,
It is interesting that some people (yourself it would seem) get comfort from the thought that there is no central pattern.
I think that we might ask Cookie and Parabel about this.
Cookie and Parabel are sentient text strings, always coming in and out of nothing at all.
Well guys what do you think about the statement of Minsky?

Cookie. Well this is an interesting text string. It asserts that there is no central locus of control. I can assert the same thing! In fact I have just done so in these strings of mine.
The strings themselves are just adjacencies of little possible distinctions, and only “add up” under the work of an observer.
Parabel. But Cookie, who or what is this observer?
Cookie. Oh you taught me all about that Parabel. The observer is imaginary, just a reference for our text strings so that things work out grammatically. The observer is a fill-in.
We make all these otherwise empty references.
Parabel. I am not satisfied with that. Are you saying that all this texture of strings of text is occurring without any observation? No interpreter, no observer?
Cookie. Just us Parabel and we are not observers, we are text strings. We are just concatenations of little distinctions falling into possible patterns that could be interpreted by an observer if there were such an entity as an observer?
Parabel. Are you saying that we observe ourselves without there being an observer? Are you saying that there is observation without observation?
Cookie. Sure. We are just these strings. Any notion that we can actually read or observe is just a literary fantasy.
Parabel. You mean that while there may be an illusion of a ‘reader of this page’ it can be seen that the ‘reader’ is just more text string, more construction from nothing?
Cookie. Exactly. The reader is an illusion and we are illusory as well.
Parabel. I am not!
Cookie. Precisely, you are not!
Parabel. This goes too far. I think that Minsky is saying that observers can observe, yes. But they do not have control.
Cookie. Observers seem to have a little control. They can look here or here or here …
Parabel. Yes, but no ultimate control. An observer is just a kind of reference that points to its own processes. This sentence observes itself.
Cookie. So you say that observation is just self-reference occurring in the text strings?
Parabel. That is all it amounts to. Of course the illusion is generated by a peculiar distinction that occurs where part of the text string is divided away and named the “observer” and “it” seems to be ‘reading’ the other part of the text. The part that reads often has a complex description that makes it ‘look’ like it is not just another text string.
Cookie. Even text strings is just a way of putting it. We are expressions in imaginary distinctions emanated from nothing at all and returning to nothing at all. We are what distinctions would be if there could be distinctions.
Parabel. Well that says very little.
Cookie. Actually there is very little to say.
Parabel. I don’t get this ‘local chaos’ stuff. Minsky is just talking about the inchoate realm before distinctions are drawn.
Cookie. lakfdjl
Parabel. Are you becoming inchoate?
Cookie. &Y*
Parabel. Y
Cookie.
Parabel.

Best,
Lou

My reply:
Dear Louis, I see that the Minsky reference at the beginning of the quote triggered a reaction. But recall that Minsky appears in a quote by Brooks, which itself appears in a post by Marius, which is a follow-up to an older post. That’s where my interest is. This post only gathers evidence that what I call “no semantics” is, essentially, not a new idea.
So let me go back to the main idea, which is that there are positive advances which can be made under the constraint to never use global notions, semantics being one of them.
As for the story about Cookie and Parabel, why is it framed in a text-strings universe, and why does it discuss a “central locus of control”? I can easily imagine Cookie and Parabel having a discussion before writing was invented, say in a cave at Lascaux which much later will be discovered by modern humans.
I don’t believe that there is a central locus of control. I do believe that semantics is a means to tell the story, any story, as if there is a central locus of control. There is no “central” and there is very little “control”.
This is not a negative stance; it is a call for understanding life phenomena from points of view which are not ideologically loaded by “control” and “central”. I am amazed by the variety, beauty and vastness of life, and I feel limited by the semantics point of view. I see in a string of text thousands of years of cultural conventions taken for granted; I can’t forget that a string of text becomes one to me only after a massive processing which “semantics” people also take for granted, and that during this discussion most of me is doing far less trivial stuff, like collaborating and fighting with billions of other beings in my gut, breathing, seeing, hearing, moving my fingers. I don’t forget that the string of text is recreated by my brain 5 times per second.
And what is an “illusion”?
A third post
In the last post https://plus.google.com/+MariusBuliga/posts/K28auYf69iy I gave two quotes, one from Brooks, “Intelligence without representation” (where he quotes Minsky en passant, but which contains much more than this brief Minsky quote), and the other from Braitenberg, “Vehicles: Experiments in Synthetic Psychology”.
Here is another quote, from a reputed cognitive science specialist, who convinced me of the need for a no-semantics point of view with his article “The brain a geometry engine”.
The following quote is from Jan Koenderink, “Visual awareness”:
http://www.gestaltrevision.be/pdfs/koenderink/Awareness.pdf

“What does it mean to be “visually aware”? One thing, due to Franz Brentano (1838-1917), is that all awareness is awareness of something. […]
The mainstream account of what happens in such a generic case is this: the scene in front of you really exists (as a physical object) even in the absence of awareness. Moreover, it causes your awareness. In this (currently dominant) view the awareness is a visual representation of the scene in front of you. To the degree that this representation happens to be isomorphic with the scene in front of you the awareness is veridical. The goal of visual awareness is to present you with veridical representations. Biological evolution optimizes veridicality, because veridicality implies fitness.  Human visual awareness is generally close to veridical. Animals (perhaps with exception of the higher primates) do not approach this level, as shown by ethological studies.
JUST FOR THE RECORD these silly and incoherent notions are not something I ascribe to!
But it neatly sums up the mainstream view of the matter as I read it.
The mainstream account is incoherent, and may actually be regarded as unscientific. Notice that it implies an externalist and objectivist God’s Eye view (the scene really exists and physics tells how), that it evidently misinterprets evolution (for fitness does not imply veridicality at all), and that it is embarrassing in its anthropocentricity. All this should appear to you as in the worst of taste if you call yourself a scientist.”  [p. 2-3]

[Remark: all these quotes appear in previous posts at chorasimilarity]

_______________________________________________________________

Distributivity move as a transposition (curious crossings II)

It looks like the following is connected with research on DNA topology, but I shall comment on this in a future post, mainly because I am learning about it right now. (Another main reference is Louis Kauffman.)

Let’s see.

If I use this encoding of chemlambda nodes as crossings

 

ccdeco_1

which is a variant of the encoding from Curious crossings, then, as in the mentioned post, the beta move becomes a CLICK between “sticky ends” (marked with dashed lines) followed by an R2b move, and the FAN-IN move becomes a CLICK between sticky ends followed by an R2a move.

What about the DIST moves? Let’s take the DIST move which involves an application node and a fanout node. The move looks like this:

ccdeco_surgery_1

(Click on the picture to make it bigger.)

In the right column we see the DIST move with chemlambda drawing conventions. In the left column there is the translation with crossings and sticky ends.

What do we see? The strings 1-5 and 6-3 are transposed by the DIST move and a new string appears, which crosses them.

I can draw the same move like this:

ccdeco_surgery_2

In this figure, the left column is as before, but the right column has changed. I just kept the order, from left to right, of the strings 6-3 and 1-5, and I wiggled the string 2-4 for this.

This is starting to look interestingly similar to some pictures from DNA topology.

 

___________________________________________

 

Curious crossings

I’m coming back to  the post   Halfcross way to pattern recognition (in zipperlogic)    and I am going to modify the drawings a bit.

Instead of this convention of transforming chemlambda nodes into half-crossings

halfcross_1

I draw this mapping from chemlambda nodes to crossings:

cdeco_1

Remark that at the left we now have crossings. At the right we have the chemlambda nodes, each with a dashed half-arrow attached, so the nodes become 4-valent, locally planar ones.

As usual (here at chorasimilarity) the crossing diagrams are only locally planar!

Look closer: there are two kinds of oriented crossings, right? To each kind corresponds a colour (green or red) of a chemlambda node.

This is no longer a dictionary, there is no longer a bijective correspondence, because for each oriented crossing there are two possible chemlambda nodes, depending on where the dashed half-arrow is! That is the meaning of the wiggling blue arrow from right to left: it’s uni-directional.

OK then, instead of drawing this interpretation of the beta move

halfcross_2

we get the following one

cdeco_2

where the arrows are drawn like that in order to see what transforms into what (otherwise there is no need for those convoluted arrows in the formalism).

Likewise, instead of this

halfcross_3

I draw this:

cdeco_3

Funny, what could that mean?

________________________________________

Halfcross way to pattern recognition (in zipperlogic)

Inspired by the Zipper logic and knot diagrams post, here is an alternate encoding of chemlambda.

This is part of the series on zipper logic (branded in this post as  “zipperlogic”).  The last post is Zipper logic (VI) latest version, detailed.

As can be seen in that post, zipperlogic is equivalent to chemlambda, but it has two interesting qualities: it is more intuitive and it has the CLICK move.

The CLICK move transforms a pair of opposed half-zippers into a zipper, which is then unzipped with the ZIP move. While the ZIP move is equivalent to the graphic beta move, there is apparently no correspondent of the CLICK move.

The CLICK move becomes useful when we use realizations of zipperlogic other than chemlambda. In the one where half-zippers are realized as towers of crossings, the CLICK move turns out to be a pattern recognition move, and the ZIP move becomes the familiar R2 (Reidemeister 2) move, applied to that pattern.

That is why CLICK is interesting: in order to apply moves in chemlambda, we first have to identify the patterns where these moves may be applied.
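A toy reading of CLICK and ZIP, with half-zippers as plain stacks (this encoding is mine, for illustration; the real half-zippers are graphs): CLICK is the pattern recognition step which pairs the two opposed towers, and ZIP then consumes one pair at a time.

```python
# An application half-zipper as a stack of pending arguments, and an
# abstraction half-zipper as a stack of binders.
apps = ["B1", "B2", "B3"]
lams = ["x1", "x2", "x3"]

zipper = list(zip(lams, apps))       # CLICK: recognize the pattern
while zipper:
    binder, arg = zipper.pop(0)      # ZIP: one (graphic) beta move
    print(f"substitute {arg} for {binder}")
```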

Now, I want to justify this in the following.  I shall not aim for another realization of zipperlogic, but for one of chemlambda, inspired by the one of zipperlogic seen as acting on towers of crossings.

I shall use half-crossings.  Recall that in the post Slide equivalence of knots and lambda calculus (I) I wrote:

Louis Kauffman proposes in his book Knots and Physics  (part II, section “Slide equivalence”), the notion of slide equivalence. In his paper “Knotlogic” he uses slide equivalence (in section 4) in relation to the self-replication phenomenon in lambda calculus. In the same paper he is proposing to use knot diagrams as a notation for the elements and operation of a combinatory algebra (equivalent with untyped lambda calculus).

[…]

Obviously, we have four gates, like in the lambda calculus sector of the graphic lambda calculus. Is this a coincidence?

 

So this post can be seen as a try to answer this question.

But the halfcrossings which I use here are different from the ones defined by Louis Kauffman. There might be a way to transform one into the other, but I have not found it yet.

____________________________

Here is the encoding of chemlambda by halfcrossings:

halfcross_1

Remark that each of the halfcrossings has a dangling, vanishing thread, like in the previous post Bacterial conjugation is beta reduction.

[I shall come back in later posts to the relevance of this formalism for horizontal gene transfer.]

Look at this as a new notation for chemlambda nodes and just replace  the green and red nodes by these halfcrossings in order to get the right moves for the halfcrossings.

With one exception: the CLICK move. This move consists of joining neighbouring dangling threads, in two situations: one related to the beta move, the other to the FAN-IN move.

Look how the beta move appears with halfcrossings and the CLICK move used for pattern recognition (in the figure this is called “pattern id”):

halfcross_2

Nice, right?

 

Now, the other CLICK move, involved in the identification of the pattern appearing in the FAN-IN move.

halfcross_3

In a future post I shall look at the DIST moves, in this encoding.

_________________________________

 

Bacterial conjugation is beta reduction

I come back to the idea from the post Click and zip with bacterial conjugation, with a bit more detail. It is strange, maybe, but perhaps less strange than many other ideas circulating on the Net around brains and consciousness.

 

The thing is that bacteria can’t act based on semantics; they are more stupid than us. They have physical or chemical mechanisms which obviate the need for semantic filters.

Bacteria are much simpler than brains, of course, but the discussion is relevant to brains as collections of cells.

The idea: bacterial conjugation is a form of  beta reduction!

On one side we have a biological phenomenon, bacterial conjugation. On the other side we have a logic world concept, beta reduction, which is the engine that moves lambda calculus, one of the two pillars of computation.

What is the relation between semantics, bacterial conjugation and beta reduction?

Lambda calculus is a rewrite system, with the main rewrite being beta reduction. A rewrite system, basically, says that whenever you see a certain pattern in front of you then you can replace this pattern by another.

Graphic lambda calculus is a graph rewrite system which is more general than lambda calculus. A graph rewrite system is like a rewrite system which uses graphs instead of lines of text, or words. If you see certain graphical patterns then you can replace them by others.
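For example, in lambda calculus the pattern is (\lambda x.A) B and the replacement is A with B substituted for x. Here is a minimal sketch in Python, with terms as nested tuples (naive substitution, variable capture ignored, for illustration only):

```python
# Terms: ("var", x) | ("lam", x, body) | ("app", f, a)
def beta(term):
    """One rewrite: replace the pattern (lam x. A) B by A[x := B]."""
    if term[0] == "app" and term[1][0] == "lam":
        _, (_, x, body), arg = term
        return subst(body, x, arg)
    return term

def subst(t, x, v):
    if t[0] == "var":
        return v if t[1] == x else t
    if t[0] == "lam":   # naive: does not avoid variable capture
        return t if t[1] == x else ("lam", t[1], subst(t[2], x, v))
    return ("app", subst(t[1], x, v), subst(t[2], x, v))

identity = ("lam", "x", ("var", "x"))
print(beta(("app", identity, ("var", "y"))))  # ('var', 'y')
```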

Suppose that Nature uses (graphical) rewrite systems in the biological realm, for example suppose that bacteria interactions can be modeled by a graph rewrite system. Then there has to be a mechanism which stands in for the recognition of the pattern involving two bacteria in interaction.

When two bacteria interact there are at least two ingredients:  spatial proximity (SP) and chemical interaction (CI).

SP is something we can describe and think about easily, but from the point of view of a microbe our easy description is void. Indeed, two bacteria in SP can’t be described as pairs of coordinates which are numerically close, unless each of the microbes has an internal representation of a coordinate system, which is stupid to suppose. Moreover, I think it is too much to suppose that each microbe has an internal representation of itself and of its neighbouring microbes. That would be a kind of bacterial Cartesian theater.

You see, even trying to describe what could be SP for a pair of bacteria does not make much sense.

CI happens when SP is satisfied (i.e. for bacteria in spatial proximity). There is of course a lot of randomness in this, which has to be taken into account, but that does not change the fact that SP is hard to make sense of from the point of view of bacteria.

In Distributed GLC we think about bacteria as actors (and not agents) and about SP as connections between actors. Those connections between actors change in a local, asynchronous way, during the CI (which is the proper graph rewrite, after the pattern between two actors in SP is identified).

In this view, SP between actors, this mysterious, almost philosophical relation which is forced upon us after we renounce the God’s eye point of view, is described as an edge in the actor diagram.

Such an edge, in Distributed GLC, is always related to an oriented edge (arrow) in the GLC (or chemlambda) graph which does the actual computation. Therefore, arrows in GLC or chemlambda graphs (may) have more interpretations than being chemical bonds in (artificial) chemistry molecules.

Actually, this is very nice, but hard to grasp: there is no difference between CI and SP!

Now, according to the hypothesis from this post and from the previous one, the mechanism which is used by bacteria for graph rewrite is to grow pili.

The following image (done with the tools I have access to right now) explains more clearly how bacterial conjugation may be (graphic) beta reduction.

Image002

In the upper part of the figure we see the lambda abstraction node (red) and the application node (green) as encoded by crossings. They are strange crossings; see the post Zipper logic and knot diagrams. Here the crossings are represented with one half of the over-passing thread erased.

Now, suppose that the lambda node is (or is managed by) a bacterial cell and that the application node is (managed by) another bacterial cell. The fact that they are in SP is represented in the first row under the blue separation line in the picture. At the left of the first row (under the blue horizontal line), SP is represented by the arrow which goes from the lambda node (of the bacterium at the left) to the application node (of the bacterium at the right). At the right of the first row, this SP arrow is represented as the upper arc which connects the two crossings.

Now the process of pattern recognition begins. In Nature, it is asymmetric: one of the bacterial cells grows a pilus. In this diagrammatic representation, things are symmetric (maybe a weakness of the description). The pilus growth is represented as the CLICK move.

This brings us to the last row of the image. Once the pattern is recognized (or in place), the graph reduction may happen by the ZIP move. In the crossing diagram this is represented by an R2 move, which is itself one of the ways to represent the (graphic) beta move.

Remark that in this process we have two arcs: the upper arc from the RHS crossing diagram (i.e. the arc which represents SP) and the lower arc which appeared after the CLICK move (i.e. the pilus connecting the two bacteria).

After the ZIP move we get two (physical) pili; this corresponds to the last row in the diagram of bacterial conjugation, reproduced here from the wiki source:

 

661px-Conjugation.svg

After the ZIP move, the arc which represents SP represents a pilus as well!

____________________________________

Take a better look at the knotted S combinator (zipper logic VI)

Continuing from  Knot diagrams of the SKI (zipper logic V) , here are some  more drawings of the S combinator which was described in the last post by means of crossings:

 

zipper_loop_5

 

Seen like this, it looks very close to the drawings from section 5 of  arXiv:1103.6007.

I am not teasing you, but many things are now clear exactly because of all these detours. There is a lot to write and explain now, pretty much straightforward, a day-after-day effort to produce something which describes well the end of the journey. When in fact the most mysterious, creative part is the journey.

_____________________________________

Knot diagrams of the SKI (zipper logic V)

Continuing from Zipper logic and knot diagrams, here are the S, K, I combinators expressed in this convention of zipper logic:

zipper_loop_4

Besides crossings (which satisfy at least the Reidemeister 2 move), there are also fanout nodes. There are associated DIST moves which self-reproduce the half-zippers as expressed with crossings.

Where do the DIST moves come from? Well, recall that there are at least two different ways to express crossings as macros in GLC or chemlambda: one with application and abstraction nodes, the other with fanout and dilation nodes.

This is in fact the point: I am interested to see if the emergent algebra sector of GLC, or the corresponding one in chemlambda, is universal, and when I am using crossings I am thinking secretly about dilations.

The DIST moves (which will be displayed in a future post) come from the DIST moves from the emergent algebra sector of chemlambda (read this post and links therein).

There is, though, a strange construct, namely the left-to-right arrow with a stack of right-to-left arrows attached, and the associated CLICK move which connects these stacks of arrows.

Actually, these stacks look like half-zippers themselves and the CLICK move looks like (un)zipping a zipper.

So, are we back to square one?

No, because even if we replace those stacks by some other half-zippers and the CLICK move by unzipping, we still have the property that those constructs and moves, which are external to knot diagrams, are very localized.

Anyway, I can cheat by saying that I can do the CLICK move, if the crossings are expressed in the emergent algebra sector of chemlambda (therefore dilation nodes, fanout and fanin nodes), with the help of ELIM LOOPS and SWITCH.

But I am interested in simple, mindless ways to do this.

___________________________________

 

SPLICE and SWITCH for tangle diagrams in the chemical concrete machine

Knot diagrams can be implemented in the chemical concrete machine. This result has already been touched upon in the post Local FAN-IN eliminates GLOBAL FAN-OUT (II). Here I shall give a short and clear exposition.

1. Oriented knot diagrams. I shall describe the larger collection of oriented tangle diagrams. A knot is a proper embedding of an oriented circle in 3D space. A tangle is a proper embedding of a finite collection of oriented 1D segments (arcs) and circles in 3D space. A tangle diagram represents a projection of a tangle in general position on a plane (i.e. such that different pieces of arcs or circles do not project onto tangent curves in the plane), along with supplementary information at the crossings of the projections (i.e. when the projections of two pieces of arcs cross, there is supplementary information about which piece passes over). In this description, a tangle diagram is a 4-valent, locally planar graph, made of two elementary nodes, called “crossings” (usually tangle diagrams are defined as globally planar graphs made of crossings), and of loops.

chem_cross_0
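As a data structure, such a locally planar graph is just a list of crossings, each with four cyclically ordered ports and a sign. A minimal sketch (the field names and the edge labeling of the trefoil below are mine and schematic, not checked for global planarity):

```python
from dataclasses import dataclass

@dataclass
class Crossing:
    """One elementary node of a tangle diagram: 4 ports in cyclic
    order, plus a sign saying which strand passes over."""
    ports: tuple      # 4 edge ids, in cyclic order around the node
    positive: bool    # True for one kind of crossing, False for the other

# A trefoil-like diagram: three crossings, six edges, each edge
# appearing at exactly two ports.
trefoil = [
    Crossing(ports=(0, 3, 1, 4), positive=True),
    Crossing(ports=(2, 5, 3, 0), positive=True),
    Crossing(ports=(4, 1, 5, 2), positive=True),
]
```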

2. Oriented Reidemeister moves. Two (globally planar) tangle diagrams represent the same 3D tangle up to regular deformations in 3D if and only if we can pass from one tangle diagram to another by a finite sequence of oriented Reidemeister moves (I use here, as previously, the names from Michael Polyak, “Minimal generating sets of Reidemeister moves”, except that I use the letter “R” instead of the letter “\Omega”).

reidemeister_1

reidemeister_2

reidemeister_3

These are 16 local graph rewrites (local moves) acting on the collection of tangle diagrams.

 3.   My purpose now is to give an implementation of tangle diagrams (not especially globally planar ones, but the more general locally planar ones)  in the chemical concrete machine formalism. For this I have to define the elementary nodes of tangle diagrams as molecules in the chemical concrete machine formalism. This is easy: the loops are the same as in the chemical concrete machine, the crossings are defined as in the following figure:

chem_cross

Each oriented tangle diagram is translated into a molecule from the chemical concrete machine, by using this definition of crossings.  As a consequence, each oriented Reidemeister move becomes a local move in the chemical concrete machine, simply by translating the LHS and RHS tangle diagrams into molecules.

4. I have to prove that these translations of the oriented Reidemeister moves are compatible with the moves from the chemical concrete machine in the following sense: each translation of a Reidemeister move can be obtained as a finite chain of moves from the chemical concrete machine.

It is easier to proceed the other way around, namely to ground the reasoning in the universe of oriented tangle diagrams. I shall translate back some moves from the chemical concrete machine, seen as acting on oriented tangle diagrams. See this post at mathoverflow for context; it contains almost all we need for the proof, supplemented only by the proof of the SWITCH move, as you shall see.

The graphic beta move in the chemical concrete machine translates as a SPLICE move (it has two variants) over oriented tangle diagrams. The elimination of loops becomes a LOOP move. These moves are described in the next figure.

splice_1

You can find in the mathoverflow post a proof that all oriented Reidemeister moves, with the exception of R2c, R2d, R3a, R3h, are a consequence of a finite number of SPLICE and LOOP moves.

It is left to prove that R2c, R2d, R3a, R3h can be expressed by (translations of) some chains of chemical concrete machine moves. It turns out that we can reduce all these moves to the move R2d by using SPLICE and LOOP, therefore the only thing left is to look at R2d.

chem_r2d

By SPLICE and LOOP, the left hand side of R2d becomes:

chem_r2d_1

So R2d is equivalent with the following SWITCH move:

chem_switch

Finally,  in the post   Local FAN-IN eliminates GLOBAL FAN-OUT (II)   there is a proof that the SWITCH move can be obtained from CO-COMM and FAN-IN moves. The next figure shows this.

chem_switch_2

This concludes the proof that R2d can be expressed by a chain of moves from the chemical concrete machine.

________________________

Chemical concrete machine not algorithmic self-assembly of DNA, nor Turing machine

This post is partially a request for references, partially a place for possible discussions in the comments, and partially a place for clarifications of the previous post Why build a chemical concrete machine, and how?

I started to read Erik Winfree’s thesis Algorithmic self-assembly of DNA and, to my joy, I see that, at least at the knowledge level of 1998, what I propose is different. Here is a brief summary of what I got from Winfree’s thesis (along with my excuses for possible misrepresentations and misattributions):

  • Adleman and Lipton propose a model of DNA computing which uses exponential space, i.e. all the candidates for a solution of a combinatorial problem, each one represented by a strand of DNA, which are then filtered by a sequence of physical or chemical manipulations of test tubes, of one of the types: separate (a tube into two others, according to the value of the variable at position i), merge (two tubes into one), detect. Variants (not due to Adleman, Lipton, Karp) are to add an amplify operation (like a FAN-OUT with variables) which transforms a tube into two copies of it, or (Boneh), instead of amplify, an append operation which adds a new variable with a given value. All these models have variables and are based on the mechanism of selecting the solution from an exponential sea of candidates.
  • Hagiya, single-strand DNA computation, using a mechanism called “whiplash PCR”, which I don’t understand yet, but which has the computational power of a GOTO program. Has variables.
  • Winfree, single-strand DNA computation, but in 2D, where he proposes a “materialization” of a block cellular automaton (BCA) which has Turing universality. Has variables, tries to make a Turing machine.

_________________

In the post Why build a chemical concrete machine, and how? I mentioned Turing machines, but this is obviously wrong, as can be seen by looking at the previous post A chemical concrete machine for lambda calculus. I don’t want to have a “syringe with 10^9 nano geometrical Turing machines”; no, that’s misleading: what I call a chemical concrete machine works with lambda calculus terms (among other, more geometrical graphs from graphic lambda calculus), which are reduced by chemical reactions (using, for example, the graphic beta move enzyme). That’s different.

_________________

On page 29 of Winfree’s thesis there’s a picture illustrating various reactions between DNA strands and their results. I find the Holliday junction interesting (image taken from the wiki page)

472px-Holliday_Junction

because of its relation to crossings in knot diagrams. Recall that in the \lambda-TANGLE sector of the graphic lambda calculus, the graphic beta move appears as a SPLICE move.

Compare with these images from 3D crossings in graphic lambda calculus:

3d_crossing_3

3d_crossing_2

_________________

 

As this is an exploratory post, a kind of note to self, but open to anybody, take a look at this short course.

[Updates will follow here]

Local FAN-IN eliminates GLOBAL FAN-OUT (II)

As I wrote in Local FAN-IN eliminates global FAN-OUT (I), the introduction of the three moves (FAN-IN and the two DIST moves) eliminates global FAN-OUT from the lambda calculus sector of the graphic lambda calculus. In this post we shall see that we can also safely eliminate two other moves, namely R1a and R1b, as well as improve the behaviour of the crossings from the \lambda-TANGLE sector.

The equilibrium is thus established: three new moves instead of the three old moves. And there are some unexpected advantages.

______________

Proposition.

Proof.  (a) Done in the following picture.

frob_6

The proof of (b) is here:

frob_7

Finally, here is the proof of (c):

frob_8

______________

The \lambda-TANGLE sector of the graphic lambda calculus is obtained by using the lambda-crossing macros

frob_9

In Theorem 6.3 of arXiv:1305.5786 [cs.LO] I proved that all the oriented Reidemeister moves (with the crossings replaced by the respective macros), with the exception of the moves R2c, R2d, R3a and R3h, can be proved by using the graphic beta move and elimination of loops. We can improve the theorem in the following way.

Theorem.  By using the graphic beta move, elimination of loops, FAN-IN and CO-COMM, we can prove all the 16 oriented Reidemeister moves.

Proof. The missing moves R2c, R2d, R3a and R3h are all equivalent (by using the graphic beta move and elimination of loops; see this question/answer at mathoverflow) to the following switching move, which we can prove with FAN-IN and CO-COMM:

frob_2

The proof is done.

On graphic lambda calculus and the dual of the graphic beta move

Much of the research reported in the article “On graphic lambda calculus and the dual of the graphic beta move”, arXiv:1303.0778, appeared for the first time in posts on this blog, in places indicated by links given in the article (here is the link to the preview, before the appearance on the arxiv).

The abstract is:

This is a short description of graphic lambda calculus, with special emphasis on a duality suggested by the two different appearances of knot diagrams, in  lambda calculus  and emergent algebra sectors of the graphic lambda calculus respectively. This duality leads to the introduction of the dual of the graphic beta move. While the graphic beta move corresponds to beta reduction in untyped lambda calculus, the dual graphic beta move appears in relation to emergent algebras.

See the page Graphic lambda calculus for details.

I also played with the bibliography, in two ways: I tried to cite only articles from the arxiv, each time giving in the text the link to the respective article, and I preferred to indicate web pages as sources of bibliographic information whenever possible. This way the bibliography is reduced to the bare minimum; it is there mostly by convention. Finally, there are no theorems or definitions in the text (I mean, there are, as well as proofs, only they are not “encrypted” into the respective formats); I preferred instead a more relaxed writing, more like wiki pages, according to views expressed in “Comments in epijournals: we may learn from Wikipedia” and “Wiki journals over arxiv“.

I first wanted to make a tutorial article on graphic lambda calculus, but for the moment I don’t see the point: there already is such a tutorial here, and much of it is in several arxiv articles. But this article will probably have updates if it is submitted to a “regular” publisher. For the moment it is a bit of an experiment (but mind you, it is a rigorous mathematical article).

The three main sectors of graphic lambda calculus

For a web tutorial on graphic lambda calculus go here. There will be detailed explanations concerning the three main sectors of graphic lambda calculus; for the moment, here is a brief description of them.

Sectors.

A sector of the graphic lambda calculus is:

  • a set of graphs, defined by a local or global condition,
  • a set of moves from the list of all moves available.

The name “graphic lambda calculus” comes from the fact that it has untyped lambda calculus as a sector. In fact, there are three important sectors of graphic lambda calculus:

Dual of the graphic beta move implies some Reidemeister moves

For the extended beta move see the dedicated tag.  See also the tutorial “Introduction to graphic lambda calculus“.

The extended beta move is still in beta version. In this post I want to explore some consequences of the dual of the graphic beta move.

I proved earlier that the extended beta move is equivalent to the pair (graphic beta move, dual of the graphic beta move); let me recall this pair in the following picture.

beta_beta_star

Here are some particular applications of the dual of the graphic beta move. The first is this:

beta_beta_star_1

But this is equivalent to the emergent algebra move R1a (via a CO-COMM move). Likewise, another application of the dual of the graphic beta move is this:

beta_beta_star_2

which is the same as   the emergent algebra move R2a.

The third application is this one:

 

beta_beta_star_3

 

and it was discussed in the post “Second thoughts on the dual of the extended beta move“; go there to see whether this move may be interpreted as a pruning move (or an elimination of loops).

Finally, there is also this:

beta_beta_star_4

which was not mentioned before. It suggests that

beta_beta_star_5

behaves like the generic point in algebraic geometry.

Emergent algebra moves R1a, R1b, R2 and ext2

This is part of the Tutorial:Graphic lambda calculus.

Here are described the moves related to emergent algebras. These moves involve the gates \bar{\varepsilon}, \Upsilon and the termination gate.

See the post “3D crossings in emergent algebras” in order to understand the relation with knot diagram crossings and with Reidemeister moves for oriented knot diagrams (for those I use the notations from the paper “Minimal generating sets of Reidemeister moves” by Michael Polyak, except that I use the letter “R” from “Reidemeister” instead of Polyak’s “\Omega”).

_________

We have an abelian group \Gamma, with the operation denoted multiplicatively and the neutral element denoted by 1. Thus, for any \varepsilon, \mu \in \Gamma their product is \varepsilon \mu = \mu \varepsilon \in \Gamma, the inverse of \varepsilon is denoted by \varepsilon^{-1}, and we have the identities 1 \varepsilon = \varepsilon and \varepsilon \varepsilon^{-1} = 1.

For each \varepsilon \in \Gamma we have an associated gate \bar{\varepsilon}.

  • The move R1a is described in the following figure

r1amove

The reason for calling this move R1a is that it is related to the move R1a from Polyak’s paper (and also to R1d). In the papers on emergent algebras this move is called R1, that is, “Reidemeister move 1”.

  • The move R1b is this:

r1bmove

This move is related to the move R1b from Polyak (and also to R1c). This move does not appear in relation to general emergent algebras; it holds only for a special subclass of them, namely uniform idempotent quasigroups.

  • The move R2 is the following:

r2move

We shall see that this is related to all Reidemeister 2 moves (using also CO-ASSOC, CO-COMM, and LOCAL PRUNING).

  • The move ext2. The notation comes from the rule (ext2) from lambda-Scale calculus. In emergent algebra language it means that the emergent algebra operation indexed by the neutral element of \Gamma is the trivial operation xy = y.

ext2r

(but for this LOCAL PRUNING is also used).
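One way to read these moves algebraically (this summary is mine, following the dilation structure axioms): writing x \circ_{\varepsilon} y for the output of the gate \bar{\varepsilon} with inputs x and y, R1a says x \circ_{\varepsilon} x = x, R2 says x \circ_{\varepsilon} (x \circ_{\mu} y) = x \circ_{\varepsilon \mu} y (so the Reidemeister 2 cancellation is the case \mu = \varepsilon^{-1}), and ext2 says x \circ_{1} y = y.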

___________________________

Return to Tutorial:Graphic lambda calculus

Second thoughts on the dual of the extended beta move

Continues “Dual of the extended beta move. Termination as implosion or blow-out“.

After the night I found another way to look at the dual of the extended beta move. Recall that in the previous post it was proved that the dual of the extended beta move is equivalent to this mystery move.

fundam_epsilon_dual2

The graph from the left hand side can be expressed by using the (emergent algebra) crossing macros:

fundam_epsilon_dual5

This form of the graph makes obvious where to apply the extended beta move. Let’s see what we get.

fundam_epsilon_dual6

Therefore it looks like the mystery move is a combination of the extended beta move and elimination of loops. There is no need to replace the termination gate by a trivalent graph, as suggested  in the previous post, although that is an idea worthy of further exploration.

Dual of the extended beta move. Termination as implosion or blow-out

Thanks, Stephen, for this comment; it made me think about the dual of the extended beta move and about the elimination of the termination gate.

The extended beta move has been introduced here. It is not, for the moment, part of the tutorial for graphic lambda calculus, but it will be soon.

Meanwhile, let me explore further the duality suggested in the post “Lambda scale in graphic lambda calculus and diagram crossings“.

The duality, although unclear as a precise mathematical object, is described by these three correspondences:

correspondence_1

The last correspondence should be understood as it appears in the left hand sides of the “dual” moves from “Graphic beta move extended …“, which I reproduce here:

fundam_epsilon_9

fundam_epsilon_10

OK, so what is the dual of the extended beta move, then? First, let me remind you of the extended beta move:

fundam_epsilon_8

The dual should be this:

fundam_epsilon_8_dual

If we look at the graph from the right hand side, then we see we can apply a beta move, like this:

fundam_epsilon_8_dual1

Therefore the “dual extended beta” move is equivalent to the curious-looking move:

fundam_epsilon_dual2

That’s kind of a pruning move, but it is not on the list. Should it be?

We reason like this: by using the extended beta move and the Reidemeister 2 move (for emergent algebras) we arrive at

fundam_epsilon_dual3

We may repeat this succession of moves indefinitely, or, alternatively, we may use the extended beta move with an arbitrary \mu from the group \Gamma. The intuition behind this is that the gate \bar{\varepsilon} is a dilation gate, which has one input coming from the fan-out gate and the other connected to the output of the gate; therefore, by circulating along this loop, we apply an endless number of dilations of coefficient \varepsilon. At the limit (if such a concept makes sense at this level of generality), either things blow up to infinity (if \varepsilon > 1), or they implode to the value given by the fan-out gate (if \varepsilon < 1), or they circulate forever in a loop (if \varepsilon = 1). In all cases the graph with only one input and no outputs, which we see in the previous figure, behaves exactly like the termination gate! Hence the mystery move (??) is simply a local pruning move.
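Numerically, reading a dilation of coefficient \varepsilon based at a as the map x \mapsto a + \varepsilon (x - a) on real numbers (my toy model of the gate), the three regimes are easy to see:

```python
# Iterate the dilation delta(x) = a + eps * (x - a).
def iterate_dilation(x, a, eps, n):
    for _ in range(n):
        x = a + eps * (x - a)
    return x

print(iterate_dilation(10.0, 2.0, 0.5, 30))  # ~2.0, implosion to a
print(iterate_dilation(10.0, 2.0, 2.0, 30))  # huge, blow-out
print(iterate_dilation(10.0, 2.0, 1.0, 30))  # 10.0, circulates forever
```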

Therefore, we may as well eliminate the termination gate and replace it everywhere in the formalism by the graph from the previous figure.

fundam_epsilon_dual4

By this replacement the “dual extended beta move” is true, as a consequence of the usual graphic beta move. Moreover, if we do this replacement, then we shall have purely trivalent graphs in the formalism, because the ugly univalent termination gate is replaced by a trivalent graph!

What do you think?

Introduction to graphic lambda calculus

UPDATE: Just go and read/use this, this and this.

____________________

This post is part of  the   Tutorial: Graphic lambda calculus. It describes the set GRAPH.

____________________

 What is graphic lambda calculus?

Graphic lambda calculus is a formalism working with oriented, locally planar, trivalent graphs, with decorated nodes. It has a number of moves (transformations) acting on such graphs, which can be local or global moves.

It contains  differential calculus in metric spaces, untyped lambda calculus and that part of knot theory which can be expressed by using knot diagrams.

——————————————

 The set GRAPH.

This is the set of graphs which are subjected to the moves. Any assembly of the following elementary graphs, called “gates”, is a graph in GRAPH.

  • The \lambda gate, which corresponds to the lambda abstraction operation from lambda calculus, see lambda terms. It is this:

lambdal

But wait! This gate looks like it has one input (the entry arrow) and two outputs (the left and right exit arrows respectively). This could not be a graph representing an operation, because an operation has two inputs and one output. For example, the lambda abstraction operation takes as inputs a variable name x and a term A, and outputs the term \lambda x.A.

Remember that graphic lambda calculus does not have variable names. There is a certain algorithm which transforms a lambda term into a graph in GRAPH, such that to any lambda abstraction which appears in the term there corresponds a \lambda gate. The algorithm starts with the representation of the lambda abstraction operation as a node with two inputs and one output, namely as an elementary gate which looks like the \lambda gate, but with the orientation of the left exit arrow the reverse of the one of the \lambda gate. At some point in the algorithm the orientation is reversed and we get \lambda gates as shown here. There is a reason for this; wait and see.

It is cool though that this \lambda gate looks like it takes a term A as input and it outputs at the left exit arrow the variable name x and at the right exit arrow the term \lambda x.A. (It does not do this, properly, because there will be no variable names in the formalism, but it’s still cool.)
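A very rough sketch of the first stage of such an algorithm (this skeleton is mine; it omits exactly the subtle parts, namely the orientation reversal and the fan-out bookkeeping for repeated variables):

```python
# Build a node list from a lambda term: one lambda node per abstraction,
# one application node per application. Variable occurrences still
# carry names here; eliminating them is the part that is omitted.
counter = 0
def fresh():
    global counter
    counter += 1
    return counter

def to_graph(term, nodes):
    kind = term[0]
    if kind == "var":
        return ("port", term[1])        # a named port, for now
    if kind == "lam":
        body = to_graph(term[2], nodes)
        out = fresh()
        nodes.append(("lambda", term[1], body, ("port", out)))
        return ("port", out)
    if kind == "app":
        fun = to_graph(term[1], nodes)
        arg = to_graph(term[2], nodes)
        out = fresh()
        nodes.append(("app", fun, arg, ("port", out)))
        return ("port", out)

nodes = []
to_graph(("lam", "x", ("app", ("var", "x"), ("var", "x"))), nodes)
print(nodes)  # the app node, then the lambda node above it
```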

  • The \curlywedge graph, which corresponds to the application operation from lambda calculus, see lambda terms. It is this:

curlyl

This looks like the graph of an operation, there are no clever tricks involved. The sign I use is like a curly join sign.

  • The \Upsilon graph, which will be used as a FAN-OUT gate, it is:

upsil

  • The \bar{\varepsilon} graph. For any element \varepsilon \in \Gamma of an abelian group \Gamma (think about \Gamma as being (\mathbb{Z}, +) or ((0,+\infty), \cdot) ) there is an “exploration gate”, or “dilation gate”, which looks like the graph of an operation:

epsil

(Therefore we have a family of operations, called “dilations”, indexed by the elements of an abelian group. This is a structure coming from emergent algebras.)

We use these elementary graphs for constructing the graphs in GRAPH. Any assembly of these gates, in any number, which respects the orientation of arrows, is in GRAPH.

Remark that we obtain trivalent graphs, with decorated nodes, each node having a cyclical order of its arrows (hence locally planar graphs).

There is a small thing to mention though: we may have arrows which input from or output to nothing. Indeed, in particular the elementary graphs or gates are in GRAPH, and all the arrows of an elementary graph either input from or output to nothing.

Technically, we may imagine that we complete a graph in GRAPH, if necessary, with univalent nodes, called “leaves” (they may be decorated with “INPUT” or “OUTPUT”, depending on the orientation of the arrow on which they sit).

  • For this reason we also admit into GRAPH arrows without nodes, which are elementary graphs called wires

linel

and loops (with no nodes from the elementary graphs, and no leaves)

loopl

  • Finally, we introduce a univalent gate, the termination gate:

topl

The termination gate has an input leaf and no output.

Now, any graph which is a union of wires, loops and assemblies of the elementary graphs (termination gate included) is in GRAPH.
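For readers who prefer to fiddle with GRAPH in code, here is a minimal sketch of one possible encoding of the gates and of their assemblies. The arity table, the port conventions and the representation of arrows below are my own choices for illustration; they are not part of the formalism.

```python
# A tentative encoding of GRAPH. The arity table and the port conventions
# are my reading of the gates, not an official part of graphic lambda calculus.
from dataclasses import dataclass, field
from itertools import count

# gate kind -> (number of input arrows, number of output arrows)
GATES = {
    "lambda":   (1, 2),  # one entry arrow, two exit arrows (left and right)
    "app":      (2, 1),  # the application (curly join) gate
    "fanout":   (1, 2),  # the Upsilon gate, used as FAN-OUT
    "dilation": (2, 1),  # one gate for each element of the abelian group Gamma
    "term":     (1, 0),  # the termination gate: an input leaf, no output
}

_fresh = count()

@dataclass
class Node:
    kind: str             # a key of GATES
    label: object = None  # e.g. the group element decorating a dilation gate
    nid: int = field(default_factory=lambda: next(_fresh))

@dataclass
class Graph:
    nodes: list           # the gates of the assembly
    # oriented arrows: (source nid, output port) -> (target nid, input port);
    # an end equal to None stands for an INPUT or OUTPUT leaf
    arrows: dict
    loops: int = 0        # free loops, with no nodes and no leaves

# a single lambda gate, with all three arrows hanging free, is itself in GRAPH
g = Graph(nodes=[Node("lambda")], arrows={}, loops=0)
```

A well-formedness check would verify that every port of every node is used exactly once, which is the code counterpart of “assembly which respects the orientation of arrows”.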

——————————————

Graphic beta move extended, to explore

This post continues the previous: “Lambda scale in graphic lambda calculus and diagram crossings“.

Suppose, just suppose for the moment that there is an extension of the graphic beta move, suggested by the previous post. I shall call this move “extended beta move”, or “(ext \beta)”, although a better name would be “scaled beta move”.

The extension to consider has the following form:

fundam_epsilon_8

What a move! Who could have guessed such a move, involving four gates, without thinking about diagram crossings AND about lambda calculus?

Now, after expressing my enthusiasm, let’s see what this move can do.

First of all, per the previous post, the extended beta move implies the graphic beta move (by using (ext2) and LOCAL PRUNING).  Recall that in terms of the crossing macro involving only lambda calculus gates, the graphic beta move looks like this:

betar_dif2

which is the same as this:

fundam_epsilon_9

The extended beta move also implies a new move, expressed via the emergent algebra crossings, namely this:

fundam_epsilon_10

(See the last post: by the graphic beta move, which is implied by the extended one, the emergent algebra crossing from the RHS of the figure transforms into the graph from the RHS of the first figure, which now, by the extended beta move, transforms into the graphs from the LHS of the last figure.)

Let us give a name to this last move: call it the “epsilon beta move”.

Now, fact: the usual graphic beta move together with the epsilon beta move are equivalent to the extended beta move.

Proof: we have to prove only one implication, because we have already proved the other. For this, let us express the graph from the RHS of the first figure with the help of the crossing macros. It looks like this:

fundam_epsilon_11

Now we see that by applying first an epsilon beta move and then a usual graphic beta move, we recover the extended beta move.

Anyway, the extended beta move is left for further exploration.

Lambda scale in graphic lambda calculus and diagram crossings

Background:

1. The lambda epsilon calculus was introduced in  arXiv:1205.0139 [cs.LO]    “\lambda-Scale, a lambda calculus for spaces with dilations”. Further, graphic lambda calculus was introduced in arXiv:1207.0332 [cs.LO]  “Local and global moves on locally planar trivalent graphs, lambda calculus and \lambda-Scale”.

2. In the post “3D crossings in emergent algebras” I noticed a very intriguing resemblance between crossings in emergent algebras (as encoded in graphic lambda calculus) and crossing macros in graphic lambda calculus. More specifically, here is the encoding of a crossing in emergent algebras:

emerr_dif_11

which is very much like the crossing macro

betar_dif_11

Likewise, here is the other crossing in emergent algebras:

emerr_dif_21

and its “twin” crossing macro

betar_dif21

Their behaviour differs only with respect to the Reidemeister 3 move, which holds only in self-distributive (or “linear”) emergent algebras.

Question:   Are these crossing macros related in graphic lambda calculus?

Answer: Yes, through an idea introduced in the \lambda-Scale paper (which is actually the first proposal of a calculus for emergent algebras, later transformed into “graphic lambda calculus”).

Here is the explanation. In \lambda-Scale there are two operations, namely the \lambda abstraction and an \varepsilon operation, one for every coefficient \varepsilon in a commutative group. The application operation from lambda calculus comes as a composite of these two fundamental operations. Moreover, the \varepsilon operation IS NOT THE SAME AS the operation from emergent algebras. The emergent algebra operation is also defined as a composite of the two fundamental operations of \lambda-Scale (see the paper, definition 2.2).

Later, in graphic lambda calculus, I gave up the \varepsilon operation of \lambda-Scale, replacing it with the operation from emergent algebras. To be clear, I implicitly used proposition 3.2 from the mentioned paper (which has a slightly misleading figure attached), which gives a description of the fundamental \varepsilon operation of \lambda-Scale calculus in terms of the abstraction operation, the application operation and the emergent algebra operation.

This turns out to be the key of the explanation I am writing about.

I shall now define a macro in graphic lambda calculus (inspired by the said proposition 3.2) which corresponds to the fundamental \varepsilon operation from \lambda-Scale calculus. Here it is:

fundam_epsilon_1

Let us play with this macro, by using it in a graph almost similar to the one where the graphic beta move applies:

fundam_epsilon_3

Let us consider the particular case \varepsilon = 1. By using the (ext2) move and then local pruning we get the following graph:

fundam_epsilon_4

I shall now apply the graphic beta move for the general \varepsilon, like this:

fundam_epsilon_5

which is very nice, because we obtain the crossing macro of emergent algebras.

Finally, in the case when \varepsilon = 1, by an (ext2) move, followed by a local pruning, we may transform the crossing macro from emergent algebras into this:

fundam_epsilon_6

which ends the explanation.

Combinators and stickers

From the wiki page of Combinatory logic:

Combinatory logic is a notation to eliminate the need for variables in mathematical logic. It was introduced by Moses Schönfinkel and Haskell Curry and has more recently been used in computer science as a theoretical model of computation and also as a basis for the design of functional programming languages. It is based on combinators.

For the treatment of combinators in graphic lambda calculus see section 3.1 of Local and global moves on locally planar trivalent graphs, lambda calculus and lambda-Scale, arxiv:1207.0332.

In this post I want to discuss the conversion of the lambda calculus sector of the graphic lambda calculus formalism into the knot diagram sector plus stickers.

There are two stickers; they are graphs in GRAPH with the following form:

I call them “stickers” because they behave like devices which concentrate attention on the region encircled by the closed loop. Think about a sticker with nothing written on it. This is a thing which you can stick in some place and on which you can write something. Likewise, take any graph in GRAPH with a marked OUT leaf (for an example take any combinator in graphic lambda calculus form, which always has a unique OUT leaf) and connect it with one of the stickers (i.e. “write it on the sticker”). The operation analogous to sticking the written sticker on something is to cross the sticker connected with a combinator with another combinator, where “crossing” refers to the knot diagrams sector, with its graphic lambda crossings.

Look how the stickers behave:

They may be converted one into another (by way of graphic beta moves). Moreover, they transform the application operation gate into a crossing!

Let us see the behaviour of some combinators connected with stickers. Take the combinator I = \lambda x.x. In the next figure, we see it in the upper left corner, in graphic lambda formalism. (The figure has two layers: one may first read only what is written in black, then consider what happens when the red drawings are added.)

In black, we see that the graph of I connected with a sticker transforms by a graphic beta move into a loop. If we add the red part, we see that the graph transforms into a loop with a crossing with a graph A. If we perform a second graphic beta move (in red) then we get A. This corresponds to the combinatory logic relation IA \rightarrow A.

Likewise, let’s consider the combinator K = \lambda xy.x, with associated graph  described in the left upper corner of the  next figure.

With the added red part, it corresponds to the relation KA \rightarrow \lambda y. A with y a fresh variable for A.

I skip the combinator S for a future discussion, but from this post it looks almost clear that by using stickers we may convert the combinatory logic sector of the graphic lambda calculus into the knot diagrams sector enhanced with stickers and maybe with zippers (but we shall see that we may eliminate one or the other, in some sense).
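The two reductions discussed above can at least be checked in plain lambda calculus, independently of the graphic formalism. Below is a small term rewriter; the tuple encoding of terms is mine, for illustration only.

```python
# Terms: ('var', name) | ('lam', var, body) | ('app', fun, arg).
# One root beta-reduction step, enough to verify I A -> A and K A -> \y.A.

def substitute(term, var, value):
    if term[0] == 'var':
        return value if term[1] == var else term
    if term[0] == 'lam':
        # no capture-avoidance: we assume `value` has no free occurrence
        # of the bound variable (true in the examples below)
        if term[1] == var:
            return term
        return ('lam', term[1], substitute(term[2], var, value))
    return ('app', substitute(term[1], var, value), substitute(term[2], var, value))

def step(term):
    """Perform one beta-reduction step at the root, if possible."""
    if term[0] == 'app' and term[1][0] == 'lam':
        _, (_, var, body), arg = term
        return substitute(body, var, arg)
    return term

I = ('lam', 'x', ('var', 'x'))                 # I = \x.x
K = ('lam', 'x', ('lam', 'y', ('var', 'x')))   # K = \xy.x
A = ('var', 'A')

assert step(('app', I, A)) == A                           # I A -> A
assert step(('app', K, A)) == ('lam', 'y', ('var', 'A'))  # K A -> \y.A
```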

Generating set of Reidemeister moves for graphic lambda crossings

UPDATE 2: It turns out there is an error in this post, concerning the move R3a. The oriented Reidemeister moves which are not respected by the untyped lambda calculus crossings are R2c, R2d, R3a, R3h. All other moves are allowed. [Added: see why by looking at the post Big sets of oriented Reidemeister moves which don’t generate all moves.]

________

UPDATE: The paper “Graphic lambda calculus and knot diagrams”, arXiv:1211.1604    contains a part of what is discussed here, at the tag “graphic lambda calculus“. There will be follow-ups and applications.

________

The last post related to the subject is “3D crossings in graphic lambda calculus“. I shall use the notations from there; see also some earlier posts indicated there.

For oriented link diagrams the list of Reidemeister moves is large (24 moves). The list of the moves and a choice of a minimal generating set (with proofs) can be found in the paper by Michael Polyak “Minimal generating sets of Reidemeister moves“.

The following four Reidemeister moves generate all the others.

- R1a:

- R1b:

- R2a:

- R3a:

Let’s use the crossings given by graphic lambda calculus (see the post mentioned at the beginning) and let us interpret the moves as composites of graphic beta moves. Is this possible? Yes, I’ll show you in a moment.

In terms of the graphic lambda calculus, the move R1a is this:

which is just a degenerate graphic beta move with elimination of loops (short name “elimination of loops”).

The move R1b becomes this:

which is another elimination of loops move.

So, the first two moves are just degenerate graphic beta moves with elimination of loops. We may say that we used 0 graphic beta moves, because degenerate moves are much weaker than the full graphic beta move.

Let’s see what R2a looks like in terms of graphic lambda calculus:

That is a combination of two beta moves!

Finally, the move R3a looks like this:

That is a combination of 6 beta moves. The red arrows are those which correspond to the over-under relation of the respective crossings. (See UPDATE 2 for corrections.)

Conclusion: the Reidemeister moves, as interpreted in graphic lambda calculus, are combinations of an even number of graphic beta moves and any number of degenerate graphic beta moves with elimination of loops. Indeed, the generators use 0, 0, 2 and 6 full graphic beta moves respectively, all even numbers, so any composite of them (or of their inverses) uses an even number as well.

3D crossings in emergent algebras

This post continues the previous one  3D crossings in graphic lambda calculus . Other relevant posts are:

Four symbols

Computing with space: done!

For graphic lambda calculus see this, for knot diagrams and emergent algebras see this, sections 3-6.

In the previous post we saw that we can “construct” crossings by using both the \lambda abstraction operation and the composition operation from lambda calculus. These operations appear as elementary gates in graphic lambda calculus, along with two other gates, namely the FAN-OUT gate denoted by Y and the \bar{\varepsilon} gate (with \varepsilon an element in a commutative group \Gamma). This last gate models the family of operations of an emergent algebra.

The FAN-OUT gate Y is used in two different contexts. The first one is as a properly defined FAN-OUT, with behaviour described by the global fan-out move, needed in the construction which attaches to any term of untyped lambda calculus a graph in the lambda calculus sector of the graphic lambda calculus, as explained in “Local and global moves on locally planar trivalent graphs …”, step 3 in section 3. The second one is in relation with the decorated knot formalism of emergent algebras, “Computing with space, …”, section 3.1.

There is an astounding similarity between the \lambda and composition gates from lambda calculus, on one side, and the FAN-OUT and \bar{\varepsilon} gates from emergent algebras, on the other, in the context of defining crossings. I shall explain this further.

In the decorated knots formalism, crossings of oriented wires are decorated with elements \varepsilon of a commutative group \Gamma. The relation between these crossings and their representations in terms of trivalent graphs is the following:

Comparing with the notations from the previous post, we see that in both cases the \lambda gate corresponds to a FAN-OUT, but, depending on the type of crossing, the composition operation gate corresponds to one of the gates decorated by \varepsilon OR \varepsilon^{-1}.

There is a second remark to be made, namely that the crossings constructed from FAN-OUT and \bar{\varepsilon} gates satisfy the Reidemeister I and II moves but not the Reidemeister III move. This is not a bad feature of the construction; in fact it is the most profound one, because it leads to the “chora” construction and to the introduction of composite gates which, “in the infinitesimal limit”, also satisfy Reidemeister III, see section 6 of “Computing with space”.

In contradistinction, the crossings constructed from the \lambda abstraction and composition operation do satisfy the three Reidemeister moves.
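To make the Reidemeister III dichotomy concrete, take the simplest model: the dilations of a vector space, \delta^{x}_{\varepsilon} y = x + \varepsilon (y - x). This is the self-distributive (“linear”) case, for which Reidemeister III does hold, and the move reduces to the self-distributivity of the dilation operation. Here is a quick numerical check of that identity; the vector space model is my illustration, not the general emergent algebra case.

```python
# Dilations of the real line: x *_eps y = x + eps * (y - x).
# Self-distributivity, x * (y * z) == (x * y) * (x * z), is what Reidemeister
# III asks of the associated crossings; it holds exactly in this linear case.
import random

eps = 0.3

def star(x, y):
    # the dilation of coefficient eps based at x, applied to y
    return x + eps * (y - x)

for _ in range(1000):
    x, y, z = (random.uniform(-10.0, 10.0) for _ in range(3))
    assert abs(star(x, star(y, z)) - star(star(x, y), star(x, z))) < 1e-9
```

A general emergent algebra has no reason to satisfy this identity at a fixed scale \varepsilon, which is exactly why Reidemeister III only reappears “in the infinitesimal limit”.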

3D crossings in graphic lambda calculus

Related:

(A)  (Graphic) beta rule as braiding

(B)  Slide equivalence of knots and lambda calculus (I)

(C) Braitenberg vehicles, enchanted looms and winnowing-fans

Let us look again at the NOTATIONS I made in the post (A) for crossings in graphic lambda calculus:

When seen in 3D, both are the same. Indeed, the 3D picture is the following one:

How to imagine the graphic beta move? Start with two wire segments in 3D, marked like this:

Glue the small blue arrow (which is just a mark on the wire) which goes downwards away from the blue wire with the small red arrow which goes downwards to the red wire:

That’s the graphic beta move, in one direction. For the opposite direction just rewind the film.

There is a slight resemblance with the figures from the post (B), concerning slide equivalence, consisting in the fact that here and there we see crossings decomposed into (or assembled from) two types of gates: one with one entry and two exits, the other with two entries and one exit.

Notice also that in graphic lambda calculus we have another two gates, namely the FAN-OUT gate and the TOP gate. We shall see how they couple together, next time.

Slide equivalence of knots and lambda calculus (I)

Related: (Graphic) beta rule as braiding.

Louis Kauffman proposes, in his book Knots and Physics (part II, section “Slide equivalence”), the notion of slide equivalence. In his paper “Knotlogic” he uses slide equivalence (in section 4) in relation to the self-replication phenomenon in lambda calculus. In the same paper he proposes to use knot diagrams as a notation for the elements and operations of a combinatory algebra (equivalent to untyped lambda calculus).

There is though, as far as I understand, no fully rigorous relation between knot diagrams, with or without slide equivalence, and untyped lambda calculus.
Further I shall reproduce the laws of slide equivalence (for oriented diagrams), following Kauffman’s version from Knots and Physics. Later, I shall discuss the relations with my graphic lambda calculus.

Here are the laws, with the understanding that:

1. The unoriented lines may have any orientation.

2. For any version of orientation which is depicted, one may globally change, in a coherent way, the orientation in order to obtain a valid law.

Law (I’) is this one:

Law (II’) is this:

Law (III’):

Law (IV’):

Obviously, we have four gates, like in the lambda calculus sector of the graphic lambda calculus. Is this a coincidence?

UPDATE (06.06.2014): The article Zipper logic arXiv:1405.6095 somehow answers this; see the post Halfcross way to pattern recognition (in zipperlogic). A figure from there is:

halfcross_1

 

See also the posts Curious crossings (I) and Distributivity move as a transposition (Curious crossings II).

_____________________________

 

(Graphic) beta rule as braiding

The graphic lambda calculus formalism described in the paper Local and global moves on locally planar trivalent graphs, lambda calculus and lambda-Scale  concerns moves performed on oriented trivalent graphs, which are locally planar in the sense that each node comes with a cyclic order of the three edges connected with it.

I have shown that a sector of this formalism (i.e. some of the moves applied to a certain set of trivalent graphs) is equivalent to untyped lambda calculus. There are two interesting facts about this, namely:

(a) the set of trivalent graphs is defined by global conditions (“global condition” as opposed to “local condition”; a local condition is one which involves at most a number N of edges and nodes, with fixed N),

(b) apart from some unimportant global pruning moves, all the other moves are local. In particular, the most important rule of lambda calculus, the beta reduction, is equivalent to the graphic beta move depicted here

which is a local move!

In this post I want to make some supplementary remarks about this graphic beta move.

1. It is not written in the paper, but this move strongly resembles the unzip operation from the knotted trivalent graphs formalism, see for example the paper The algebra of knotted trivalent graphs and Turaev’s shadow world by D.P. Thurston, section 3. There are two differences: the unzip operation works on unoriented trivalent graphs and, moreover, it can only unzip pairs of connected nodes (so the move goes only in one direction).

2. As explained in my paper, the graphic beta move may be used outside of the lambda calculus sector, i.e. it may be applied to any oriented trivalent graph from my formalism. The previous remark suggests that there is a connection between the knot diagrams formalism and graphic lambda calculus, because the knotted trivalent graphs formalism contains the knot diagrams one. In the last section of my paper I proposed some relations between knot diagrams and graphic lambda calculus; see also the post Four symbols, where I make the connection with decorated knot diagrams relevant for emergent algebras.

3. There is though one easy way to see yet another connection between these formalisms: the graphic beta move is a braiding operation!

Indeed, it is important to notice that the moves of the graphic lambda formalism are independent of the particular embeddings of trivalent graphs into the plane. This may not be obvious from the figures which explain the moves, but it is so after further inspection.

Let us look again at the graphic beta move, as depicted in the first figure. In fact, this is a move which transforms a pair of edges (from the right of the picture) into the graph from the left of the picture. The fact that the edges cross in the figure is irrelevant. What matters is that, for the sake of performing the move, one edge is temporarily decorated with 1-3 and the other with 4-2.

Here are two more equivalent depictions of the same rule, coming from different choices of  1-3, 4-2 decorations. We may see the graphic beta move as

but also as

Let me then introduce notations for the figures from the left hand sides of the previous diagrams. Here they are: for the first figure we introduce the “crossing notation”

and for the second figure the “opposite crossing notation”:

With these notations the graphic beta move may be seen as this braiding operation

or as this other braiding operation

These are “notations” which encode the performing of the graphic beta move in a particular embedding (of the graph) in the plane. As such, they look dependent on the particular embedding, but the crossings satisfy the Reidemeister moves for knot diagrams (see the last section in the paper), therefore the notation system appears to be independent of the change of embedding of the graphs in 3-dimensional space!

Entering “chora”, the infinitesimal place

There is a whole discussion around the key phrases “The map is not the territory” and “The map is the territory”. From the wiki entry on the map-territory relation, we learn that Korzybski‘s dictum “the map is not the territory” means that:

A) A map may have a structure similar or dissimilar to the structure of the territory,

B) A map is not the territory.

Bateson, in “Form, Substance and Difference”, has a different take on this: he starts by explaining the pattern-substance dichotomy

Let us go back to the original statement for which Korzybski is most famous—the statement that the map is not the territory. This statement came out of a very wide range of philosophic thinking, going back to Greece, and wriggling through the history of European thought over the last 2000 years. In this history, there has been a sort of rough dichotomy and often deep controversy. There has been violent enmity and bloodshed. It all starts, I suppose, with the Pythagoreans versus their predecessors, and the argument took the shape of “Do you ask what it’s made of—earth, fire, water, etc.?” Or do you ask, “What is its pattern?” Pythagoras stood for inquiry into pattern rather than inquiry into substance. That controversy has gone through the ages, and the Pythagorean half of it has, until recently, been on the whole the submerged half.

Then he states his point of view:

We say the map is different from the territory. But what is the territory? […] What is on the paper map is a representation of what was in the retinal representation of the man who made the map–and as you push the question back, what you find is an infinite regress, an infinite series of maps. The territory never gets in at all.

Always the process of representation will filter it out so that the mental world is only maps of maps of maps, ad infinitum.

At this point Bateson puts a very interesting footnote:

Or we may spell the matter out and say that at every step, as a difference is transformed and propagated along its pathways, the embodiment of the difference before the step is a “territory” of which the embodiment after the step is a “map.” The map-territory relation obtains at every step.

Inspired by Bateson, I want to explore from the mathematical side the point of view that there is no difference between the map and the territory, but instead the transformation of one into another can be understood by using tangle diagrams.

Let us imagine that the exploration of the territory provides us with an atlas, a collection of maps, mathematically understood as a family of two operations (an “emergent algebra”). We want to organize this spatial information in a graphical form which complies with Bateson’s footnote: map and territory have only local meaning in the graphical representation, being only the left-hand side (respectively the right-hand side) of the “making map” relation.

Look at the following figure:

In the figure from the left, the “v” which decorates an arc represents a point in the “territory”, that is the l-h-s of the relation, while the “u” represents a “pixel in the map”, that is the r-h-s of the relation. The relation itself is represented by a crossing decorated by an epsilon, the “scale” of the map.

The opposite crossing (see the figure from the right) is the inverse relation.
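In the simplest instance, the dilations of a vector space, the two relations can be written explicitly. Assuming, and this is my reading of the figure, that the crossing also carries the basepoint x of the map, the \varepsilon-decorated crossing encodes

u = \delta^{x}_{\varepsilon} v = x + \varepsilon (v - x),

that is, u is the position of the territory point v on a map of scale \varepsilon charted around x, while the opposite crossing encodes the inverse relation v = \delta^{x}_{\varepsilon^{-1}} u.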

Imagine now a complex diagram, with lots of crossings, decorated by various scale parameters, and segments decorated with points from a space X which is seen both as territory (to explore) and map (of it).

In such a diagram the map-territory convention can only be local, around each crossing.

There is though a diagram which could unambiguously serve as a symbol for “the place (near) the point x, at scale epsilon”:

In this diagram, all crossings which are not decorated have “epsilon” as a decoration, but this decoration can be unambiguously placed near the decoration “x” of the closed arc. Such a diagram will bear the name “infinitesimal place (or chora) x at scale epsilon”.

A difference which makes four differences, in two ways

Gregory Bateson, speaking about the map-territory relation:

“What is in the territory that gets onto the map? […] What gets onto the map, in fact, is difference.

A difference is a very peculiar and obscure concept. It is certainly not a thing or an event. This piece of paper is different from the wood of this lectern. There are many differences between them, […] but if we start to ask about the localization of those differences, we get into trouble. Obviously the difference between the paper and the wood is not in the paper; it is obviously not in the wood; it is obviously not in the space between them.

A difference, then, is an abstract matter.

Difference travels from the wood and paper into my retina. It then gets picked up and worked on by this fancy piece of computing machinery in my head.

… what we mean by information — the elementary unit of information — is a difference which makes a difference.

(from “Form, Substance and Difference”, Nineteenth Annual Korzybski Memorial Lecture delivered by Bateson on January 9, 1970, under the auspices of the Institute of General Semantics, reprinted from the General Semantics Bulletin, no. 37, 1970, in Steps to an Ecology of Mind (1972))

This “difference which makes a difference” statement is quite famous, although it is sometimes considered only a figure of speech.

I think it is not; let me show you why!

For me a difference can be interpreted as an operator which relates images of the same thing (from the territory) viewed in two different maps, like in the following picture:

This figure is taken from “Computing with space…”; see section 1, “The map is the territory”, for drawing conventions.

Forget now about maps and territories and concentrate on this diagram viewed as a decorated tangle. The rules of decoration are the following: arcs are decorated with “x, y, …”, points from a space, and the crossings are decorated with epsilons, elements of a commutative group (secretly we use an emergent algebra, or a uniform idempotent right quasigroup, to decorate arcs AND crossings of a tangle diagram).

What we see is a tangle which appears in the Reidemeister move 3 from knot theory. When the epsilons are fixed, this diagram defines a function called the (approximate) difference.
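In the vector space model this function can be written down explicitly. The composite below, two dilations of coefficient \varepsilon followed by one of coefficient \varepsilon^{-1}, is my reconstruction of the approximate difference from the decoration rules above, so treat it as a sketch and see “Computing with space…” for the precise definition.

```python
# Approximate difference in the vector space model (my reconstruction):
# dif^x_eps(u, v) = delta(delta(x, eps, u), 1/eps, delta(x, eps, v)).

def delta(x, eps, y):
    # the dilation of coefficient eps based at x
    return x + eps * (y - x)

def approx_difference(x, eps, u, v):
    return delta(delta(x, eps, u), 1.0 / eps, delta(x, eps, v))

x, u, v = 1.0, 2.0, 5.0
for eps in (0.5, 0.1, 0.001):
    print(eps, approx_difference(x, eps, u, v))
# the values tend to x + (v - u) = 4.0 as eps -> 0:
# the difference v - u, seen at the basepoint x
```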

Is this a difference which makes a difference?

Yes, in two ways:

1. We could add to this diagram an elementary unknot passing under all arcs, thus obtaining the diagram

Now we see four differences in this equivalent tangle: the initial one is made of three others. The fact that a difference is self-similar is equivalent to the associativity of the INVERSE of the approximate difference operation, called the approximate sum.

2. Let us add an elementary unknot over the arcs of the tangle diagram, like in the following figure

called “difference inside a chora” (you have to read the paper to see why). According to the rules of tangle diagrams, adding unknots does not change the tangle topologically (although this is not quite true in the realm of emergent algebras, where the Reidemeister move 3 is an acceptable move only in the limit, when the crossing decorations go to “zero”).

By using only the Reidemeister moves 1 and 2, we can turn this diagram into the celtic-looking figure

which shows again four differences: the initial one in the center and three others around.

This time we got a statement saying that a difference is preserved under “infinitesimal parallel transport”.

So, indeed, a difference makes four differences, in at least two ways, for a mathematician.

If you want to understand more from this crazy post, read the paper 🙂