Tag Archives: zipper logic

RNA based combinators

Thank you for notifying me about arXiv:2008.08814, An RNA-Based Theory of Natural Universal Computation.

The author proposes to use, in part, RNA-encoded combinators, in such a way that the combinatory logic rewrites are implemented by pervasive RNA editing rules. The idea is like in Molecular computers: for any rewrite there is an enzyme which detects the rewrite pattern and then performs the rewrite using a small repertoire of cleavage and ligation operations.

The author concentrates on the BCKW system, which works well until the rewrite for the W combinator. At this point the problem of duplication appears, which leads the author to propose the following replacement for duplication (a small sketch of the plain BCKW rewrites, and of where W duplicates, follows the list):

  • use RNA pseudoknots for term representation, in the sense that matching parentheses correspond to double-stranded portions of the pseudoknot
  • instead of duplication, tag a term with a label and use the same double strand matching idea to reference the term in another or the same term (pseudoknot)
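To see concretely where duplication enters, here is a minimal sketch, in Python, of the BCKW rewrites on terms written as nested applications. The encoding and function names are mine, not the paper's; only the W rule copies one of its arguments.

```python
# terms: strings are atomic combinators/variables, ("app", f, x) is an application
def app(*ts):
    t = ts[0]
    for u in ts[1:]:
        t = ("app", t, u)
    return t

def spine(t):
    # unfold the left spine of a term: head symbol plus the list of its arguments
    args = []
    while isinstance(t, tuple):
        args.append(t[2])
        t = t[1]
    return t, list(reversed(args))

def step(t):
    # one head rewrite of the BCKW system; returns t unchanged if no rule applies
    head, a = spine(t)
    if head == "B" and len(a) >= 3:      # B x y z -> x (y z)
        return app(a[0], app(a[1], a[2]), *a[3:])
    if head == "C" and len(a) >= 3:      # C x y z -> x z y
        return app(a[0], a[2], a[1], *a[3:])
    if head == "K" and len(a) >= 2:      # K x y -> x
        return app(a[0], *a[2:])
    if head == "W" and len(a) >= 2:      # W x y -> x y y : the argument y is DUPLICATED
        return app(a[0], a[1], a[1], *a[2:])
    return t

print(step(app("W", "f", "y")))          # ('app', ('app', 'f', 'y'), 'y')
```

In the W case the subterm is literally copied; on an RNA encoding this is exactly the copy the paper wants to avoid, replacing it by a tagged reference to the same double strand.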

This is very interesting to pursue, perhaps in combination with chemSKI with tokens, which proposes another mechanism for duplication, or with Combinatory Chemistry: Towards a Simple Model of Emergent Evolution, arXiv:2003.07916, where duplication is delegated to the environment.

This was partially explored also in Chemlambda strings and Zipper logic, as well as here in Zipper Logic and RNA pseudoknots. The idea is different though; look how the I and K combinators look:

See also the first, apparently, project which proposes the use of combinators in chemistry, UPIM, mentioned locally here.

How do the zip, slip, and smash rewrites work in ZSS, and what are their applications in computation?

Again I used phind.com to summarize a github article: Zipper logic revisited https://doi.org/10.6084/m9.figshare.14769666.v1

Then phind suggested the question from the title and answered it.

Seems pretty clear. Here is the archived conversation.

In the following we have the summary and the answer to the question.

Answer | Phind V7 Model

The text is a transcript of a talk by Marius Buliga titled “ZSS: Zipper logic revisited” given in September 2020. The talk discusses the problem of computing with tangles and Reidemeister moves, and proposes an improvement to zipper logic, a method of universal computation using tangles and zippers.

The problem arises from the use of a graph rewrite system that includes tangle diagrams as graphs and Reidemeister moves as graph rewrites. The question is whether we can perform universal computation with this system.

Buliga explains that the usual way to perform computation with tangles is different from the problem at hand. In the usual method, a circuit is represented as a tangle, where crossings and maybe caps and cups are operators that satisfy the algebraic equivalent of the R moves. Each circuit represents a computation, and two circuits are considered equivalent if they can be transformed into each other using R moves.

Buliga introduces the concept of emergent algebras, a combination of algebraic and topological information. Emergent algebras can perform the Reidemeister 1 and 2 moves, but not the R3. However, by passing to the limit with the parameter that decorates the operations (or crossings), emergent objects and properties can be produced.

The text then discusses the concept of universal computation, which can be defined in three ways: lambda calculus, Turing machines, and Lafont’s Interaction Combinators. Buliga reformulates the problem of whether we can compute with R moves and presents a theorem stating that if there is a parser from lambda calculus to knot diagrams such that any lambda calculus rewrite is parsed to a pair of knot diagrams equivalent under the Reidemeister moves, then all lambda terms are sent to diagrams of the unknot.

The text concludes with the introduction of Zip-Slip-Smash (ZSS), a graph rewrite system that extends the Reidemeister rewrites with supplementary ones (zip, slip, smash) which allow the reconnection of edges, and tangle graphs with some 3 valent and 1 valent nodes. ZSS is an improved version of Zipper logic.

How do the zip, slip, and smash rewrites work in ZSS, and what are their applications in computation?

Answer | Phind V7 Model

The zip, slip, and smash rewrites in ZSS are graph rewrite rules that allow the reconnection of edges and manipulation of tangle graphs with 3 valent and 1 valent nodes.

The zip rewrites are used to create zippers, which are elements that have two non-identical parts. There are two types of zip rewrites: the first one uses only white half-zippers, and the second one uses only black half-zippers. These rewrites can be used to make diagrams and rewrites that resemble zippers https://github.com/mbuliga/zss/blob/master/zss.md .

The slip rewrites are introduced to resolve graphical ambiguity in the drawing of a half-zipper. They state that ends (of any color) which pass over a crossing are the same as half-zippers, and also that ends which pass under the crossing slip under and the crossing is destroyed https://github.com/mbuliga/zss/blob/master/zss.md .

The smash rewrites create half-zippers by imagining that a crossing is smashed with a hammer, breaking down into two half-zippers. All rewrites in ZSS are bi-directional, meaning that you can also turn a pair of half-zippers into a crossing. The smash rewrites turn a FIN fanin into a FI 3 valent node, in the presence of a L node. Conversely, a pair of half-zippers (black and white) connected in a certain way turns into a crossing https://github.com/mbuliga/zss/blob/master/zss.md .

In terms of their applications in computation, ZSS is universal because it implements directed Interaction Combinators (dirIC). The zip, slip, and smash rewrites, along with the Reidemeister rewrites for crossings, allow for the reconstruction of Lafont’s Interaction combinators by grouping A, L, and FI, FOE nodes. This makes ZSS a powerful tool for computation https://github.com/mbuliga/zss/blob/master/zss.md .

Thank you Hundredrabbits!

Today I discovered Hundredrabbits by reading this HN post. And I fell upon this particular article by Hundredrabbits

https://100r.co/site/weathering_software_winter.html

Hey, brothers!

Just replace in the article “artist” by “mathematician” and you get a perhaps better version of chorasimilarity :))

The same problems as mine, for different, more beautiful-to-live reasons.

They are on a boat, I try to do open science.

They are on a quest for a computing system which is self-sufficient; that is the same problem I have in open science, where the goal is to just do it: give the potential reader everything you have, in a form which is simple enough to be used without dependencies.

Heck, even the love for garbage computers, how much I long for understanding in this materialistic world, thank you Hundredrabbits!

And then I saw this illustration (screenshot of phone screenshot from chorasimilarity telegram)

Oooh, this is an image from Zipper logic V, a post from 2014!

So they considered the possibility hahaha: “I went down that rabbit hole, and I barely emerged back alive.”

And also the repeated arguments that this or that does not port easily to silicon… Like chemlambda, where you have to destroy all the time the extra information the silicon needs to describe graphs, and therefore most of the computation is actually spent jumping around randomly…

Fractions? I know what you refer to, but fractions? What about rational fractions, have you seen Pure See?

I understand the choice of (but not yet fall in love with) Forth, eventually.

What a great post, nice to learn about you, from my virtual boat ;)))

Molecular computers reading list 1

This is the first in a stream of posts about making a new kind of molecular computer. It is time to learn; you are welcome to join me.

[Update: go to the repository molecular.]

Here is the formulation of the problem and what is the concrete goal.

Problem: Create a graph rewrite system of the type of Interaction combinators with real chemistry.

Build a molecular computer in the sense of “one molecule which transforms, by random chemical reactions mediated by a collection of enzymes, into a predictable other molecule, such that the output molecule can be conceived as the result of a computation encoded in the initial molecule.”

Goal: Compute ackermann(2,2) or ackermann(2,3) with this system.

Or why not ackermann(3,2)? Or ackermann(4,4), to see what an Ackermann goo looks like, macroscopically.
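For reference, here is the Ackermann–Péter function as a plain recursive sketch, just to pin down which values the goal refers to:

```python
import sys
sys.setrecursionlimit(100000)   # larger arguments quickly exhaust the stack

def ackermann(m, n):
    # Ackermann-Peter function: total, computable, but not primitive recursive
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 2), ackermann(2, 3), ackermann(3, 2))   # 7 9 29
# ackermann(4, 4) is astronomically large: that is the "Ackermann goo" regime
```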

Once you can build one, you can build all.

Details: Graph rewrite systems such as Interaction Combinators, chemlambda, chemSKI, Zipper Logic, are natural candidates for building molecular computers.

The idea is that a graph will be a molecule and a graph rewrite rule will be a chemical reaction

(LEFT PATTERN) + (TOKEN_1) + (ENZYME) = (RIGHT PATTERN) + (TOKEN_2) + (ENZYME)

which transforms the graph into a new graph (chemical reaction mediated by an enzyme or catalyst, perhaps in the presence of input and output tokens).
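As a toy picture of that reaction scheme (all names below are placeholders, not real chemistry), think of the state as a multiset of species, with a rule that fires only when the left pattern, its input token and the enzyme are all present:

```python
# toy multiset chemistry: a rule fires only when left pattern, input token and enzyme coexist
def react(soup, rule):
    left, token_in, enzyme, right, token_out = rule
    if left in soup and token_in in soup and enzyme in soup:
        soup.remove(left)
        soup.remove(token_in)
        soup.extend([right, token_out])   # the enzyme is conserved, as a catalyst
        return True
    return False

soup = ["LEFT_PATTERN", "TOKEN_1", "ENZYME"]
rule = ("LEFT_PATTERN", "TOKEN_1", "ENZYME", "RIGHT_PATTERN", "TOKEN_2")
react(soup, rule)
print(soup)   # ['ENZYME', 'RIGHT_PATTERN', 'TOKEN_2']
```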

The goal is to compute a function which is not trivial, like the Ackermann function, which is not primitively recursive. This was suggested in the article Molecular computers [arXiv] [figshare] , where Ackermann(2,2) is computed as an example.

The real use case is to explore the limits of the Lafont universality (also here).

Question: are there already molecular computing techniques which can be used for turning a graph-rewrite system as described into reality?

This is the goal of the reading list, which starts now:

Evgeny Katz (editor), DNA- and RNA-Based Computing Systems, Wiley (2020)

Sundus Erbas-Cakmak, David A. Leigh, Charlie T. McTernan, and Alina L. Nussbaumer, Artificial Molecular Machines, Chem. Rev. 2015, 115, 10081−10206

David Yu Zhang and Georg Seelig, Dynamic DNA nanotechnology using strand-displacement reactions, Nature Chemistry, vol. 3, February 2011, doi: 10.1038/nchem.957

ZSS: zipper logic revisited, with explanations

I took the time to explain in detail the ZSS (zip-slip-smash) graph rewrite formalism,

  • why it is universal,
  • why it is somehow dual to directed Interaction Combinators,
  • why we need to enhance the Reidemeister moves with new rewrites to obtain a system where the R moves compute.

It uses the pictures of the slides and it has links inside.

Available here, in the chorasimilarity telegram channel (but you don’t need telegram to see it). In case you do use telegram, here is the chorasimilarity channel of long reads.

Also available on github.

In article form it is on figshare.

And finally here as pdf.

Transcript of “Zipper logic revisited” talk

UPDATE: This post is superseded by a newer one. The transcript is available here, in the chorasimilarity telegram channel (but you don’t need telegram to see it). In case you do use telegram, here is the chorasimilarity channel of long reads.

Also available on github.

In article form it is on figshare.

And finally here as pdf.

___

This is the transcript of the talk from September 2020, which I gave at the Quantum Topology Seminar organized by Louis Kauffman.

There is a video of the talk and a github repository with photos of slides.

My problem is whether we can compute with tangles and the R moves. I am going to tell you where this problem comes from, from my point of view, why it is different from the usual way of using tangles in computation, and then I'm going to propose to you an improvement of the thing called zipper logic, namely a way to do universal computation using tangles and zippers.

The problem is to COMPUTE with Reidemeister moves, that is, to use a graph rewrite system whose graphs are the tangle diagrams and whose graph rewrites are the R moves.

Can we do universal computation with this?

Where does this problem come from?

This is not the usual way to do computation with tangles. The usual way is that we have a circuit which we represent as a tangle, a knot diagram, where the crossings and maybe caps and cups are operators which satisfy the algebraic equivalent of the R moves. The circuit represents a computation. When we have two circuits, then we can say that they represent equivalent computations when we can turn one into another by using R moves.

In a quantum computation we have preparation (cups) and detectors (caps) (Louis).

R moves transform a computation into another. Example with teleportation.

R moves don’t compute, they are used to prove that 2 computations are equivalent. This is not what I’m after.

The source of interest: emergent algebras. An e.a. is a combination of algebraic and topological information…

[See https://mbuliga.github.io/colin/colin.pdf as a better source for a short description of emergent algebras]

We can represent graphically the axioms. This is the configuration which gives the approximate sum. It says that as epsilon goes to 0 you obtain in the limit some gate which has 3 inputs and 3 outputs.

We say that the sum is an emergent object, it emerges from the limit. We can also define in the same way differentiability.

We can define not only emergent objects, but also emergent properties.

Here you see an example: we use the R2 rewrites and passage to the limit to prove that the addition is commutative.

The moral is: you pass to the limit here and there, then this becomes by a passage to the limit the associativity of the operation.

Another example: if you define a new crossing (relative crossing) then you can pass to the limit and you can prove that, based on e.a. axioms. Moreover you can prove, by using only R1 and R2 and passage to the limit, that the R3 emerges from R1, R2 and a passage to the limit, for the relative crossings.

With e.a. we can do differential calculus. We use only R1, R2 and a passage to the limit. It is a differential calculus which is not commutative.

There are interesting classes of e.a.:

  • linear e.a. correspond to calculus on conical groups (Carnot groups, explanations)
  • commutative e.a. which satisfy SHUFFLE (calculus in topological vector spaces) In this class you can do any computation (Pure See https://mbuliga.github.io/quinegraphs/puresee.html )

What I want to know is: can you do universal computation in the linear case? This corresponds to the initial problem.

What does universal computation mean? There are many ways to define what computation means; among them, here are 3. They are equivalent in a precise sense.

Lambda calculus is a term rewrite system (follows definition of lambda calculus).

Turing machine is an automaton (follows definition of TM).

Lafont's Interaction combinators is a graph rewrite system, where you use graphs with two types of 3-valent nodes and one type of 1-valent node. Explanation of rewrites. These are port graphs.
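[Added note: as a rough data-structure sketch, with my own encoding rather than Lafont's notation, an IC graph can be stored as nodes of kinds gamma, delta (3-valent: one principal port plus two auxiliary ones) and epsilon (1-valent), with rewrites allowed only where two principal ports meet.]

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Node:
    kind: str           # "gamma", "delta" or "epsilon"
    ports: List[str]    # names of the edges attached; ports[0] is the principal port

def active_pairs(nodes: List[Node]) -> List[Tuple[int, int]]:
    # an interaction rule may fire only where two nodes share the edge on their principal ports
    pairs = []
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if nodes[i].ports[0] == nodes[j].ports[0]:
                pairs.append((i, j))
    return pairs

g = [Node("gamma", ["x", "a", "b"]), Node("delta", ["x", "c", "d"]), Node("epsilon", ["a"])]
print(active_pairs(g))   # [(0, 1)] : only the gamma/delta pair is active
```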

Lafont proves that the grs is universal because he can implement a TM, so it has to be universal. There is a lot of work to implement LC in a grs, but the reality is that this is extremely difficult: there are solutions, but they are not what we want, in the sense that you can transform a lambda term into a graph, reduce it with the grs of Lafont, say, and then decorate the edges of the graph so that you can retrieve the result of the computation. But these decorations are non-local. (Explanation of local)

We have 3 definitions of what computation means, by 3 different models, which are equivalent only if you add supplementary hypotheses. For me IC is the most important one, but we don't yet know how to compute with IC.

Let me reformulate the problem of whether we can compute with R moves in this way.

Notation discussion. We can associate a knot quandle to a knot diagram, simply by naming the arcs; then we get a presentation of a quandle. The presentation of a quandle is invariant with respect to the permutation of relations or the renaming of the arcs. There is a problem: for example, when an arc passes over in two crossings, we have a hidden fanout. The solution is to use a different notation, with FIN (fanin) and FO (fanout) nodes. This turns the presentation into a mol file.
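[Added note: to give an idea of what such a mol-style notation could look like; the node names and port conventions below are invented for illustration, not the mol format of any specific diagram. One line per node, followed by the edge names of its ports, with an FO node making the hidden fanout of an over-arc explicit.]

```python
# hypothetical mol-style listing: (node type, list of port edge names)
mol = [
    ("FO", ["c",  "o1", "o2"]),   # explicit fanout: arc c is the over-strand in two crossings
    ("X",  ["a1", "o1", "a2"]),   # crossing: under-arc a1 goes under the over piece o1, exits as a2
    ("X",  ["a2", "o2", "a3"]),   # second crossing on the same over-arc, via the other fanout branch
]
for node_type, ports in mol:
    print(node_type, *ports)
```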

Can we compute with that?

Theorem: If there is a computable parser from lc to knot diagrams, such that there is a term sent to a diagram of the unknot, then all lambda terms are sent to the unknot.

We can compute with knot diagrams, but in a stupid way: if we use diagrams as a notational device. Example with the knot TM (flattened unknot, you may add the variations with the head near the tape, or the SKI calculus, to argue that it can be done in a local way. Develop this.)

Conclusion: you need a little something more, obtained by reconnecting the arcs.

Let’s pass to ZSS

For this I introduce zippers. The idea is that a zipper is a thing which has two non-identical parts, so it’s made of two half-zippers, which are not identical.

We can use zippers like this: we have tangle 1, a zipper, then tangle 2. Explanation of the zip rewrite.
This move transforms zippers into nothing.

The move smash creates zippers. Explanation of smash.

Then we have a slip move.

Here is the new proposal for ZL. The initial proposal used tangles with zippers, but there were also fanin and fanout nodes.

The new proposal is that we have 4 kinds of 1-valent nodes, the teeth of the zippers, then we have 4 kinds of half-zippers, and then we have two kinds of crossings.

Explanation of the new proposal.

Theorem: ZSS system turns out to be able to implement dirIC, so it is universal.

Explanation of dirIC: https://mbuliga.github.io/quinegraphs/ic-vs-chem.html#icvschem

Episodes I and II

The talk Zipper logic revisited

is the third episode, jokingly, like the first Star Wars movie. Episodes 1 and 2 would be as described further.

Title: Curvature, commutativity and numbers in emergent algebras


Abstract: In the frame of emergent algebras we define measures of curvature and infinitesimal commutativity, and we describe intrinsically one-parameter groups as numbers, analogously with the Church numerals for naturals. We explain how these measures relate to the usual ones (for example the relation of curvature and Schild’s ladder, with the differentiability of infinitesimal left translations, or the relation of the measure of commutativity with the differentiability of infinitesimal right translations; we explain how we can replace parts of the solution of the Hilbert 5th problem, which relies heavily on the use of one-parameter groups, with a more intrinsic, from the point of view of emergent algebras, use of numbers). We explain why several generalizations (like for example cartesian closed categories, or symmetric monoidal categories) do not apply to categories associated to emergent algebras.

Uses:
[1] https://arxiv.org/pdf/1206.3093.pdf
[2] https://arxiv.org/pdf/1807.02058.pdf

Title: Emergent algebras as denotational semantics for chemlambda and interaction combinators


Abstract: Sketched in the Pure See language draft [3], there is a denotational semantics for chemlambda (and by extension to lambda calculus) and for Interaction Combinators (via directed IC, in the form dirIC), which uses emergent algebras which are commutative, in the sense that they satisfy the SHUFFLE. We explain this semantics and more precisely how the graph rewrites (or term rewrites) indeed emerge from SHUFFLE and passages to the limit, in the style of emergent algebras.

Uses:
[3] Pure See, https://mbuliga.github.io/quinegraphs/puresee.html

ZIP SLIP SMASH

ZSS is the name of the new formalism, which blends zipper logic with dirIC and the Reidemeister rewrites. There is a working github repository which will be continuously updated, so that is the place to look for more details in the future.

The name comes from the 3 kinds of rewrites:

zip

slip

smash

The search for a name for this formalism started with this post. I think ZSS is concise and amusing as zip-slip-smash!

Zipper logic revisited: Tangles with zippers

Today, in a short while, I give a talk in the Quantum Topology Seminar, organized by Louis Kauffman; see this announcement.

Zipper logic revisited

Thursday, September 24, 2020, 16:00 Chicago time

The recording of the talk and some supplementary material at this link.

The zoom link will be transmitted about 5 min before the talk, therefore if you are interested please mail me before.

I'll add more materials to this post as soon as they are available. For the moment I only have things like this

or like this

but, as previously, there will be some programs to play with later.

Zipper logic with tangle diagrams

I took up again the subject of Zipper Logic [arXiv] [figshare] because it opens a problem I became aware of only recently.

UPDATE: Solved by ZSS.

I am interested in graph rewrite systems (GRS), run with the most simple algorithms, which use graphs from the family of tangle diagrams. To my knowledge there is no such grs which is proved to be Turing universal. Pay attention: a grs which acts on tangle diagrams and which is universal. There are many attempts to use tangle diagrams as a sort of notation for computing, or decorations upon such diagrams, or whole physical theories which have some tangle diagram formalism associated with them.

My goal is simpler: I look for a graph rewrite system over tangle diagrams (and perhaps some enhancements), whose rewrites contain the Reidemeister rewrites and some new rewrites, and which is universal for computation, in the sense that any lambda calculus (or combinatory logic) term can be parsed into a graph, which can then be reduced by using the grs, so that the resulting graph can be decorated in such a way as to give the result of the computation as a lambda term (or a combinatory logic term). Therefore I want the computation to happen via graph rewrites. I don't want to encode computations (i.e. chains of reductions) into a tangle diagram, and I don't want to use the Reidemeister moves to prove that two such chains of reduction are equivalent. No. I want to use graph rewrites for reductions of terms parsed into tangle diagrams and back (in the sense explained, that of decoration of tangle diagrams).

This observation of the distinction between the use of graph rewrites, when they compute and when they don’t compute, was formulated for the first time as NTC vs TC in Topology does Not Compute vs Topology Computes.

Chemlambda, dirIC or Lafont Interaction Combinators, or chemSKI, are grs which are universal. All of them happen to be of a special blend, namely they all admit as a semantics emergent algebras which are commutative. (I know that, except for some specialists, this is a void phrase; take it as if I say that there is a hidden commutativity in all these formalisms.)

Tangle diagrams with the Reidemeister rewrites admit as a semantics emergent algebras which are only linear, not commutative. This is a larger class of emergent algebras, which may be interesting for quantum stuff. Logic?

So it would be interesting to see if universal computations in the sense explained can be achieved in this linear setting, without using the hidden commutativity.

I believe not, for some reasons, and I believe that in a twisted way it is possible, for other reasons, both too long to be shared in a post.

So I am curious and I took again the Zipper Logic formalism, which I try to reformulate exclusively in tangle diagrams (with zippers) and without trivalent fanouts.

Up to now it appears that I am able to either have, in this linear setting, something akin to the beta rewrite or something akin to duplication, but not both all the time.

I will share the result soon, when it stabilizes 🙂

How does a zipper logic zipper zip? Bonus: what is Ackermann goo?

Zipper logic is an alternative to chemlambda, where two patterns of nodes, called half-zippers, appear.

It may be easier to mimic, from a molecular point of view.

Anyway, if you want to have a clear image of how it works, there are two ways to play with zippers.

1. Go to this page and execute a zip move. Is that a zipper or not?

2. Go to the lambda2chemlambda page and type this lambda term

(\h.\g.\f.\e.\d.\c.\b.\a.z) A B C D E F G H

Then reduce it. [There is a difference, because a, b, … h do not occur in the rest of the term, so the parser adds a termination node to each of them; when you reduce it, the zipper will zip and then disappear.]

You can see here the half-zippers

\h.\g.\f.\e.\d.\c.\b.\a.

A B C D E F G H

which are the inspiration of the zippers from zipper logic.
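A minimal trace of the zipping, in plain Python (just bookkeeping, no real graph reduction): each beta/ZIP step pairs the outermost remaining lambda with the innermost remaining application, and since the bound variables are unused, each argument ends up on a termination node.

```python
lams = list("hgfedcba")      # the abstraction half-zipper  \h.\g. ... \a.
args = list("ABCDEFGH")      # the application half-zipper  ... A B C D E F G H
body = "z"
step = 0
while lams and args:
    step += 1
    v, arg = lams.pop(0), args.pop(0)
    print(f"step {step}: beta binds {v} to {arg}; {v} is unused, {arg} goes to a termination node")
print("result:", body)       # after 8 zips only z is left
```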

In chemlambda you can make FI-zippers and FOE-zippers as well; I used this for permutations.

BONUS: I made a comment at HN which received the funny reply “Thanks for the nightmares! 🙂“, so let me recall, by way of this comment, what is an Ackermann goo:

A scenario more interesting than boundless self-replication is Ackermann goo [0], [1]. Grey goo starts with a molecular machine able to replicate itself. You get exponentially more copies, hence goo. Imagine that we could build molecules like programs which execute themselves via chemical interactions with the environment. Then, for example, a Y combinator machine would appear as a linearly growing string [2]. No danger here. Take Ackermann(4,4) now. This is vastly more complex than a goo made of lots of small dumb copies.

[0] https://chemlambda.github.io/collection.html#58

[1] https://chemlambda.github.io/collection.html#59

[2] https://chemlambda.github.io/collection.html#259

 

Fold rewrite, dynamic DNA material and visual DSD

As it happened with the chemlambda programs, I decided it is quicker to take a look myself at possible physical realizations of chemlambda than to wait for other people, uninterested or, really, very interested.

Let me recall a banner I used two years ago

KeepCalmStudio.com-Shortest-Explanation-Of-Chemlambda-[Knitting-Crown]-Keep-Calm-And-Use-Rna-For-Interaction-Nets

It turns out that I know exactly how to do this. I contacted Andrew Phillips, in charge of Microsoft's Visual DSD, with the message:

Dear Andrew,

I am interested in using Visual DSD to implement several graph-rewriting formalisms with strand graphs: Lafont Interaction Combinators, knots, spin braids and links rewrite systems, my chemlambda and emergent algebra formalisms.

AFAIK this has not been tried. Is this true? I suggest this in my project chemlambda but I don’t have the chemical expertise.

About me: geometer working with graph rewrite systems, homepage: http://imar.ro/~mbuliga/index.html or
https://mbuliga.github.io/

Some links (thank you for a short reply acknowledging reception of this message):

Chemlambda:
– github project: https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/README.md
– page with more links: http://imar.ro/~mbuliga/chemlambda-v2.html
– arXiv version of my Molecular computers article https://arxiv.org/abs/1811.04960

Emergent algebras:
– em-convex https://arxiv.org/abs/1807.02058

 

I still wait for an answer, even if Microsoft's Washington Azure and Google Europe immediately loaded the pages I suggested in the mail.

Previously, I was notified by somebody [if you want to be acknowledged then send me a message and I'll update this] about Hamada and Luo, Dynamic DNA material with emergent locomotion behavior powered by artificial metabolism, and I sent them the following message

Dear Professors Hamada and Luo,

I was notified about your excellent article Dynamic DNA material with emergent locomotion behavior powered by artificial metabolism, by colleagues familiar with my artificial chemistry chemlambda.

This message is to make you aware of it. I am a mathematician working with artificial chemistries and I look for ways to implement them in real chemistry. The shortest description of chemlambda is: an artificial chemistry where the chemical reactions are like a Turing complete family of graph rewrites.

If such a way is possible then molecular computers would be not far away.

Here is a list of references about chemlambda:

– GitHub repository with the scripts https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/README.md
– page which collects most of the resources http://imar.ro/~mbuliga/chemlambda-v2.html

Thank you for letting me know if this has any relation to your interests. For my part I would be very thrilled if so.

Best regards,
Marius Buliga

Again, it seems that these biology/chemistry people have problems with replying to mathematicians, but all this makes me happier, because soon I'll probably release instructions about how everybody could make molecular computers along the lines of Molecular computers.

I'll let you know if there are future "inspiration" works. Unrelated to chemlambda, there are several academic works which shamelessly borrow from my open work without acknowledgement; I'll let you know about these and I'll react in more formal ways. I hope though this will not be the case with chemlambda; however, this has happened before, at least twice. (I say nothing about enzymes/catalysts, category theory and cryptocurrencies… for the moment.)

Finally, here is a realization of the lambda calculus beta rewrite via a FOLD rewrite

arrowlink-2

which shares a relation with the ZIP rewrite from Zipper Logic. It seems I was close to reality; now, though, I got it exactly 🙂.

Let’s talk soon!

Topology does not compute vs topology computes (I)

The article Zipper logic   arXiv:1405.6095  contains in its last section a discussion about the role of topology in computing formalisms which use topological objects, like tangle diagrams, or other graphs, and graph rewrites.

The purpose of this post is to start a discussion about the differences between two uses of graph rewrites in such formalisms.

I shall use further the NTC vs TC denomination suggested by Louis Kauffman instead of the longer one:

  • NTC = “topology does not compute” , meaning that the graph rewrites are NOT part of the computation notion
  • TC = “topology computes”, meaning that the graph rewrites are the fundamental blocks of the computation notion.

There are many articles which fall in the NTC camp, among them the ZX calculus of Coecke and Duncan from Interacting Quantum Observables: Categorical Algebra and Diagrammatics, arXiv:0906.4725 [quant-ph], which is a part of the beautiful Categorical Quantum Mechanics. I shall use this article as an anchor which allows a comparison with the TC camp. In future posts I shall add more articles to the list, because this is a rather big research field now.

Chemlambda is in the TC camp, like zipper logic.

So let us compare them, concretely.

At first sight they look related, they seem to have the same fundamental genes, say like an elephant (NTC) has almost all genes in common with a mouse (TC). They are both graph rewrite systems, they both use almost the same family of graphs (details later), they both speak about computation:

  • in the NTC article case it is about quantum computing, and there is also a practical computing arm, the Quantomatic project
  • in the TC article case it is about distributed, decentralized computing, and there is the distributed GLC project.

The goals are different, the means are different, the NTC camp is much more developed than the TC camp, the question is:

Is there anything  fundamentally different in the TC camp?

 

The answer is YES. There are several fundamentally different things in the TC camp, maybe the biggest among them being the different role played by graph rewrites, summarized as NTC vs TC.

Let’s see in more detail, by comparing ZX with chemlambda.

 

In order to see what is really different, it is useful to make the two formalisms as alike as possible. I shall start by discussing whether there is any difference between the graphs which appear in ZX and the ones which appear in chemlambda.

The graphs from ZX don't have oriented arrows, but instead there is a global orientation convention, say from top to bottom when represented on the page.

In ZX there is also a global horizontal “simultaneity” of graphical elements, which is part of the recipe of associating to a ZX graph an object related to a Hilbert space.

 

In chemlambda the graphs have oriented arrows. There is no global arrow orientation, nor any horizontal simultaneity.

Otherwise, both ZX and chemlambda use loops, free arrows and “half-arrows”.

So, there is an obvious procedure to associate to a graph in chemlambda a graph from ZX, by adding to the graph in chemlambda a global arrow orientation and a horizontal simultaneity.

This means, for example, that the graph of the K combinator from chemlambda is represented in ZX like this:

two_k

The price is that in ZX there are several new graphical elements which need to be introduced, like the CAP, the CUP and the CROSS. Here is the list of all graphical elements needed to represent chemlambda graphs in ZX.

alphabet

What about the graph rewrites? It is not clear to me whether the chemlambda graph rewrites can be done as finite sequences of ZX graph rewrites, but this is not very important for this discussion. There seems to be anyway a kind of fundamental alphabet of graph rewrites which keeps appearing everywhere. I shall not worry about the detailed graph rewrites; however, here is how the FAN-IN and the beta moves from chemlambda appear in ZX translation.

beta_fan_in

It is worth mentioning that the global arrow orientation and the horizontal simultaneity used in ZX introduce new moves for (the translation of) chemlambda graphs, moves which are related to these constraints and which are not needed in chemlambda.

Now we are close to see the differences.

There is a recipe for associating to each graph G from ZX an object O(G) on a Hilbert space. This is done by using the global arrow orientation convention and the horizontal simultaneity, along with a dictionary which associates to any graphical element from ZX something from a Hilbert space.

 

Here is the important thing about the NTC ideology:  if two graphs G and G' have the property that we can go from G to G' by a finite sequence of ZX graph rewrites, then O(G) = O(G').

Each graph G from ZX represents a computation. The graph rewrites from ZX are used to prove that TWO computations are equivalent.

More than this: a graph from ZX represents a computation in the sense that the global arrow orientation is a global time orientation, the horizontal simultaneity expresses space simultaneity, the CAP is a preparation, the CUP is a measurement. The nodes of the graph are GATES and the arrows of the graph are WIRES which transmit some signals.

That is why, in the NTC camp the graph rewrites DO NOT COMPUTE. The graph rewrites are used to prove an equivalence of two computations, like for example in the case of graphs which represent quantum teleportation protocol.
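A tiny numeric illustration of that recipe (my own toy dictionary, not the full ZX one): assign matrices to wires and to the cup/cap elements, compose vertically by matrix product and horizontally by tensor product; then a purely topological rewrite, such as the “yanking” of a bent wire, leaves O(G) unchanged.

```python
import numpy as np

wire = np.eye(2)                           # a plain wire
bell = np.array([[1.], [0.], [0.], [1.]])  # the bent wire that creates a pair (|00> + |11>)
bell_effect = bell.T                       # its adjoint, the bent wire that closes a pair

# a wire bent into a zig-zag: create a pair on the right, close a pair on the left
zigzag = np.kron(bell_effect, wire) @ np.kron(wire, bell)

# the topological rewrite "straighten the zig-zag" does not change the Hilbert-space object
assert np.allclose(zigzag, wire)
print(zigzag)
```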

Now, in chemlambda, a graph is NOT a computation. Each graph is like a chemical molecule, with nodes as atoms and arrows as bonds. There is only one computation in chemlambda, which is done by graph rewrites.

So, even if we can translate chemlambda graphs into ZX graphs, there is this difference.

Suppose G is a ZX graph which is obtained by translating a chemlambda graph, as previously explained. Let G' be another such graph. Suppose that we can arrive from G to G' by a sequence of translations of chemlambda moves (graph rewrites), maybe also by using some of the supplementary “topological” moves which we need in ZX because of the global orientation and simultaneity constraints.

What do we have in ZX?

The graphs G and G' represent two computations which are equivalent.

What do we have in chemlambda?

The graphs G and G' represent the beginning and end of a computation done by the graph rewrites which in ZX serve to prove equivalence of computations.

This very big difference between NTC and TC has a big effect in all the rest of the formalisms.

______________________________________________

Zipper logic appeared in math.CO (combinatorics)

Today, a week after the submission, the article Zipper Logic arXiv:1405.6095 appeared in math.CO, math.GT, and not in math.LO, math.GT.

Actually, combinatorics, especially lately, is everywhere, being probably one of the most dynamic research fields in mathematics.

There is a small-world connection of zipper logic with combinatorics, because zipper logic is a variant of chemlambda, which is a variant of GLC, which has an emergent algebra sector (arXiv:1305.5786, section 5), which is a sort of approximate algebraic structure, alike, but not the same as, the approximate groups of arXiv:1110.5008.

____________________________________

 

Zipper logic and RNA pseudoknots

UPDATE: compare with the recent Chemlambda strings (working version).

Zipper logic is a variant of chemlambda, enhanced a bit with a pattern identification move called CLICK.

Now, chemlambda is an artificial chemistry. Does it have a natural, real counterpart?

It should. Finding one is part of the other arm of the research thread presented here, which aims to unite (the computational part of) the real world with the virtual world, moved by the same artificial life/real life basic bricks. The other arm is distributed GLC.

A lot of help is needed for this, from specialists! Thank you for pointing to the relevant references, which I shall add as I receive them.

What follows is a speculation, showing that zipper graphs can be turned into something which very much resembles RNA pseudoknots.

(The fact that one can do universal computation with RNA pseudoknots is not at all surprising, with the possible exception that one can do functional style programming with these.)

It is a matter of notation changes. Instead of using oriented crossings for denoting zippers, like in the thread called “curious crossings” — the last post is Distributivity move as a transposition (curious crossings II) — let's use another, RNA-inspired notation.

rna_2

So:

  • arrows (1st row) are single stranded RNA pieces. They have a natural 5′-3′ orientation
  • termination node (2nd row) is a single-stranded RNA with a “termination word” on it, figured by a magenta rectangle
  • half-zippers are double strands of RNA (3rd and 4th rows). The red rectangle is (a pair of antiparallel conjugate) word(s) and the yellow rectangle is a tag word.

Remark that any (n) half-zipper can be transformed into a sequence of (1) half-zippers, by repeated applications of the TOWER moves; therefore (n) half-zippers appear in this notation as strings of repeated words.
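A small consistency check of the notation, with made-up sequences (the real word/tag content is of course unspecified here): the two strands of a (1) half-zipper are a word plus a tag on one side, and their antiparallel reverse complements on the other, and they hybridize exactly because of that.

```python
def revcomp(s: str) -> str:
    # reverse complement of an RNA word, read antiparallel
    return s[::-1].translate(str.maketrans("AUCG", "UAGC"))

word, tag = "GGCAAUC", "AAGU"             # hypothetical "red" word and "yellow" tag
top    = word + tag                        # one strand of a (1) half-zipper
bottom = revcomp(tag) + revcomp(word)      # the antiparallel conjugate strand
assert bottom == revcomp(top)              # the double-stranded portion matches exactly
print(top, bottom)
```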

Another interesting thing is that on the 3rd row of the figure we have a notation for the lambda abstraction node, and on the 4th row we have one for the application node from chemlambda. This invites us to transform the other two nodes of chemlambda — fanin and fanout — into double-stranded RNA. It can be done like this.

rna_3

On the first two rows of this picture we see the encoding of the (1) half-zippers, i.e. the encoding of the lambda abstraction and application, as previously.

On the 3rd row is the encoding of the fanout node and on the 4th row the one of the fanin.

In this way, we arrive in the world of RNA. The moves transform as well into manipulations of RNA strings, under the action of (invisible and yet to be discovered) enzymes.

But why RNA pseudoknots? It is obvious; as an exercise, look how the zipper I and K combinators appear in this notation.

rna_4

We see hairpins.

___________________________________

Conclusion: help needed from chemistry experts. Low hanging fruit here.

___________________________________

Distributivity move as a transposition (curious crossings II)

It looks like there is a connection of the following with the research on DNA topology, but I shall comment on this in a future post, mainly because I am learning about it right now. (But another main reference is Louis Kauffman.)

Let’s see.

If I am using this encoding of chemlambda nodes with crossings

 

ccdeco_1

which is a variant of the encoding from Curious crossings, then, as in the mentioned post, the beta move becomes a CLICK between “sticky ends” which are marked with dashed lines, followed by an R2b move, and also the FAN-IN move becomes a CLICK between sticky ends, followed by an R2a move.

What about the DIST moves? Let’s take the DIST move which involves an application node and a fanout node. The move looks like this:

ccdeco_surgery_1

(Click on the picture to make it bigger.)

In the right column we see the DIST move with chemlambda drawing conventions. In the left column there is the translation with crossings and sticky ends.

What do we see? The strings 1-5 and 6-3 are transposed by the DIST move and a new string appears, which crosses them.

I can draw the same move like this:

ccdeco_surgery_2

In this figure, the left column is as before, but the right column has changed. I just kept the order, from left to right, of the strings 6-3 and 1-5, and I wiggled the string 2-4 for this.

This is starting to look interestingly like some other pictures from DNA topology.

 

___________________________________________

 

Curious crossings

I'm coming back to the post Halfcross way to pattern recognition (in zipperlogic) and I am going to modify the drawings a bit.

Instead of this convention of transforming chemlambda nodes into half-crossings

halfcross_1

I draw this mapping from chemlambda nodes to crossings:

cdeco_1

Remark that at the left we have now crossings. At the right we have the chemlambda nodes, each with a dashed half-arrow attached, so now the nodes become 4-valent locally planar ones.

As usual (here at chorasimilarity) the crossing diagrams are only locally planar!

Look closer: there are two kinds of oriented crossings, right? To each kind corresponds a colour (green or red) of a chemlambda node.

This is no longer a dictionary, there is no longer a bijective correspondence, because for each oriented crossing there are two possible chemlambda nodes, depending on where the dashed half-arrow is! That is the meaning of the wiggling blue arrow from right to left: it's uni-directional.

OK then, instead of drawing this interpretation of the beta move

halfcross_2

we get the following one

cdeco_2

where the arrows are drawn like that in order to see what transforms into what (otherwise there is no need for those convoluted arrows in the formalism).

Likewise, instead of this

halfcross_3

I draw this:

cdeco_3

Funny, what could that mean?

________________________________________

Birth and death of zipper combinators (II)

Continues from   Birth and death of zipper combinators (I).

In the following, another mechanism of birth/death of zipper combinators is presented, one which does not use a garbage-collecting arrow and termination node.

For any zipper combinator A there is a number n and a succession of n CLICK, ZIP and LOC PRUNING moves such that

zipd_5

 

So this time we transform a zipper combinator connected to a termination node into a bunch of loops.

Proof. The first part is identical with the one from the previous post, namely we remark that we can reduce the initial zipper combinator with the exit connected to a termination node into a finite collection of simpler zippers.

zipd_10

This is done by a succession of LOC PRUNING moves for (+) half-zippers.

We shall then use the following succession of moves, in order to “unlock” (-) half-zippers.

zipd_6

If we use this for the I zipper combinator then we can transform it into one loop.

zipd_7

The same trick is used for transforming the zipper combinator K into two loops.

zipd_8

The zipper combinator S is transformed into three loops, starting by the same trick.

zipd_9

The proof is done.

_____________________________

It follows that we can replace garbage collection by ELIM LOOPS, a move which appeared in earlier formulations of chemlambda.

Seen from the birth point of view, if we have enough loops then we can construct any zipper combinator (with the exit arrow connected to a termination node).

_____________________________

Birth and death of zipper combinators (I)

Because zipper logic is a graph rewriting system which does not use variable names, just like GLC and chemlambda, it needs ways to multiply, distribute or kill things.

See  Chemlambda, universality and self-multiplication  for multiplication, distribution or propagation phenomena.

In this post we shall see how zipper combinators die or are born, depending on the sense of reading the moves. This can happen in several ways, one of them explained in this post.

This can also be seen as the statement: we can do garbage collection in chemlambda, GLC or zipper logic.

____________________________________

 

Zipper combinators.   The set of zipper combinators is the smallest set of zipper graphs with the properties:

  •  it contains the S, K, I zipper graphs defined in  the next figure

zipper_not_6

  • for any natural number n > 0 and for any n+1 zipper combinators, the zipper graph obtained by connecting the out arrows of the zipper combinators to the in arrows of the (+n) half-zipper is a zipper combinator (a toy encoding of this construction is sketched after the list).

appli

  • any zipper graph which is obtained by applying a zipper logic move to a zipper combinator is a zipper combinator.
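Here is that toy encoding of the inductive definition (the names and nesting are mine, not a standard notation for zipper graphs): a zipper combinator is either one of the primitives S, K, I, or a (+n) half-zipper with n+1 zipper combinators plugged into its in arrows.

```python
def plus_halfzipper(*combinators):
    # plug n+1 zipper combinators into the in arrows of a (+n) half-zipper, n > 0
    n = len(combinators) - 1
    assert n > 0, "a (+n) half-zipper needs at least two combinators"
    return (f"+{n}", list(combinators))

S, K, I = "S", "K", "I"
skk = plus_halfzipper(S, K, K)       # corresponds to the application S K K
nested = plus_halfzipper(K, skk)     # combinators can of course be nested
print(skk)      # ('+2', ['S', 'K', 'K'])
print(nested)   # ('+1', ['K', ('+2', ['S', 'K', 'K'])])
```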

I want to prove that for any zipper combinator A, there is a number n and a succession of n LOCAL PRUNING, CLICK and ZIP moves such that:

zipd_0

Proof. Because of the LOC PRUNING move satisfied by (+) half-zippers, it follows that by a finite number of these moves we can achieve the effect described in the next figure, for any zipper combinator A (of course, the number of these moves depends on A):

zipd_2

It is left to prove that the free arrow ending with a termination node can “eat”, one by one, all the I, K, S zipper combinators.

For the I combinator, it happens like this:

zipd_1

The case of the combinator K is described next:

zipd_3

The combinator S is absorbed by the following succession of moves:

zipd_4

 

The proof is done.

_____________________________________

We can read it all backwards, by saying that an arrow connected to a termination node can give birth to any zipper combinator, with the out arrow connected to a termination node.

_____________________________________


Halfcross way to pattern recognition (in zipperlogic)

Inspired by the Zipper logic and knot diagrams post, here is an alternate encoding of chemlambda.

This is part of the series on zipper logic (branded in this post as  “zipperlogic”).  The last post is Zipper logic (VI) latest version, detailed.

As can be seen in that post, zipperlogic is equivalent with chemlambda, but it has two interesting qualities: it is more intuitive and it has the CLICK move.

The CLICK move transforms a pair of opposed half-zippers into a zipper, which is then unzipped with the ZIP move. While the ZIP move is equivalent with the graphic beta move, there is apparently no correspondent of the CLICK move.

The CLICK move becomes useful when we use realizations of zipperlogic other than chemlambda. In the one where half-zippers are realized as towers of crossings, the CLICK move turns out to be a pattern recognition move, and the ZIP move becomes the familiar R2 (Reidemeister 2) move, applied to that pattern.

That is why CLICK is interesting: because in order to apply moves in chemlambda, we have first to identify the patterns where these moves may be used.
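Here is a toy illustration of that two-step, pattern-identification-then-rewrite idea, with a completely invented port layout (a real zipper graph has more structure): CLICK scans a node list for a (-) and a (+) half-zipper joined on their connecting edge, and ZIP then deletes the pair and reconnects the remaining arrows.

```python
# each entry: (node kind, [left edge, connecting edge, right edge]) -- invented convention
graph = [("MINUS", ["a", "m", "b"]), ("PLUS", ["c", "m", "d"]), ("OTHER", ["a", "q", "c"])]

def click(graph):
    # pattern recognition: find a (-)/(+) pair of half-zippers sharing their connecting edge
    for i, (k1, p1) in enumerate(graph):
        for j, (k2, p2) in enumerate(graph):
            if k1 == "MINUS" and k2 == "PLUS" and p1[1] == p2[1]:
                return i, j
    return None

def zip_move(graph, pair):
    # ZIP: remove the clicked zipper and reconnect its free arrows pairwise
    i, j = pair
    (_, (a, _, b)), (_, (c, _, d)) = graph[i], graph[j]
    rest = [graph[k] for k in range(len(graph)) if k not in (i, j)]
    return rest + [("ARROW", [a, c]), ("ARROW", [b, d])]

print(zip_move(graph, click(graph)))
```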

Now, I want to justify this in the following.  I shall not aim for another realization of zipperlogic, but for one of chemlambda, inspired by the one of zipperlogic seen as acting on towers of crossings.

I shall use half-crossings.  Recall that in the post Slide equivalence of knots and lambda calculus (I) I wrote:

Louis Kauffman proposes in his book Knots and Physics  (part II, section “Slide equivalence”), the notion of slide equivalence. In his paper “Knotlogic” he uses slide equivalence (in section 4) in relation to the self-replication phenomenon in lambda calculus. In the same paper he is proposing to use knot diagrams as a notation for the elements and operation of a combinatory algebra (equivalent with untyped lambda calculus).

[…]

Obviously, we have four gates, like in the lambda calculus sector of the graphic lambda calculus. Is this a coincidence?

 

So this post can be seen as a try to answer this question.

But the halfcrossings which I use here are different from the ones defined by Louis Kauffman. There might be a way to transform one into the other, but I have not found it yet.

____________________________

Here is the encoding of chemlambda by halfcrossings:

halfcross_1

Remark that each of the halfcrossings has a dangling, vanishing thread, like in the previous post Bacterial conjugation is beta reduction.

[I shall come back in later posts to the relevance of this formalism for horizontal gene transfer.]

Look at this as a new notation for chemlambda nodes and just replace  the green and red nodes by these halfcrossings in order to get the right moves for the halfcrossings.

With one exception: the CLICK move. This move consists in joining neighbouring dangling threads, in two situations: one related to the beta move, the other related to the FAN-IN move.

Look how the beta move appears with halfcrossings and the CLICK move used for pattern recognition (in the figure this is called “pattern id”):

halfcross_2

Nice, right?

 

Now, the other CLICK move, involved in the identification of the pattern appearing in the FAN-IN move.

halfcross_3

In a future post I shall look at the DIST moves, in this encoding.

_________________________________