Tag Archives: decorated tangles

Second thoughts on the dual of the extended beta move

Continues “Dual of the extended beta move. Termination as implosion or blow-out”.

Overnight I found another way to look at the dual of the extended beta move. Recall that in the previous post it was proved that the dual of the extended beta move is equivalent to this mystery move:

[Figure: fundam_epsilon_dual2]

The graph from the left hand side can be expressed by using the (emergent algebra) crossing macros:

[Figure: fundam_epsilon_dual5]

This form of the graph makes it obvious where to apply the extended beta move. Let’s see what we get.

[Figure: fundam_epsilon_dual6]

Therefore it looks like the mystery move is a combination of the extended beta move and elimination of loops. There is no need to replace the termination gate by a trivalent graph, as suggested in the previous post, although that is an idea worthy of further exploration.


Dual of the extended beta move. Termination as implosion or blow-out

Thanks, Stephen, for this comment; it made me think about the dual of the extended beta move and about eliminating the termination gate.

The extended beta move has been introduced here. It is not, for the moment, part of the tutorial for graphic lambda calculus, but it will be soon.

Meanwhile, let me explore further the duality suggested in the post “Lambda scale in graphic lambda calculus and diagram crossings”.

The duality, although unclear as a precise mathematical object, is described by these three correspondences:

[Figure: correspondence_1]

The last correspondence should be understood as it appears in the left-hand sides of the “dual” moves from the “Graphic beta move extended …” post, which I reproduce here:

[Figure: fundam_epsilon_9]

[Figure: fundam_epsilon_10]

OK, so what is the dual of the extended beta move, then? First, let me remind you of the extended beta move:

[Figure: fundam_epsilon_8]

The dual should be this:

[Figure: fundam_epsilon_8_dual]

If we look at the graph from the right-hand side, then we see that we can apply a beta move, like this:

[Figure: fundam_epsilon_8_dual1]

Therefore the “dual extended beta” move is equivalent to this curious-looking move:

[Figure: fundam_epsilon_dual2]

That’s kind of a pruning move, but it is not on the list. Should it be?

We reason like this: by using the extended beta move and the Reidemeister 2 move (for emergent algebras) we arrive at

[Figure: fundam_epsilon_dual3]

We may repeat this succession of moves indefinitely, or, alternatively, we may use the extended beta move with an arbitrary \mu from the group \Gamma. The intuition behind this is that the gate \bar{\varepsilon} is a dilation gate which has one input coming from the fan-out gate and the other connected to its own output; therefore, by circulating along this loop, we apply an endless sequence of dilations of coefficient \varepsilon. At the limit (if such a concept makes sense at this level of generality), either things blow up to infinity (if \varepsilon > 1), or they implode to the value given by the fan-out gate (if \varepsilon < 1), or they circulate forever in a loop (if \varepsilon = 1). In all cases the graph with only one input and no outputs, which we see in the previous figure, behaves exactly like the termination gate! Hence the mystery move (??) is simply a local pruning move.
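To make this intuition concrete, here is a minimal numeric sketch (my own illustration, outside the graph formalism), in the model case where dilations act on the real line by \delta^{x}_{\varepsilon} y = x + \varepsilon (y - x); iterating the dilation based at the fan-out value shows the three behaviours described above.

```python
# Model case: dilations on the real line, delta(x, eps, y) = x + eps*(y - x).
# Iterating a dilation of coefficient eps based at x (the value coming from the
# fan-out gate) imitates the signal circulating along the loop.

def delta(x, eps, y):
    """Dilation of coefficient eps based at x, applied to y."""
    return x + eps * (y - x)

def circulate(x, eps, y0, steps=50):
    """Apply the dilation 'steps' times, as if the value went around the loop."""
    y = y0
    for _ in range(steps):
        y = delta(x, eps, y)
    return y

x, y0 = 1.0, 5.0
print(circulate(x, 0.5, y0))   # eps < 1: implodes to the fan-out value x = 1.0
print(circulate(x, 2.0, y0))   # eps > 1: blows up towards infinity
print(circulate(x, 1.0, y0))   # eps = 1: y0 circulates forever unchanged
```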

Therefore, we may as well eliminate the termination gate and replace it throughout the formalism by the graph from the previous figure.

[Figure: fundam_epsilon_dual4]

With this replacement the “dual extended beta move” holds as a consequence of the usual graphic beta move. Moreover, after the replacement we shall have purely trivalent graphs in the formalism, because the ugly univalent termination gate is replaced by a trivalent graph!

What do you think?

Graphic beta move extended, to explore

This post continues the previous one: “Lambda scale in graphic lambda calculus and diagram crossings”.

Suppose, just suppose for the moment, that there is an extension of the graphic beta move, suggested by the previous post. I shall call this move the “extended beta move”, or “(ext \beta)”, although a better name would be the “scaled beta move”.

The extension to consider has the following form:

[Figure: fundam_epsilon_8]

What a move! Who could have guessed such a move, involving four gates, without thinking about diagram crossings AND about lambda calculus?

Now, after expressing my enthusiasm, let’s see what this move can do.

First of all, per the previous post, the extended beta move implies the graphic beta move (by using (ext2) and LOCAL PRUNING).  Recall that in terms of the crossing macro involving only lambda calculus gates, the graphic beta move looks like this:

[Figure: betar_dif2]

which is the same as this:

[Figure: fundam_epsilon_9]

The extended beta move also implies a new move, expressed via the emergent algebra crossings, namely this:

[Figure: fundam_epsilon_10]

(See the last post: by the graphic beta move, which is implied by the extended one, the emergent algebra crossing from the RHS of the figure transforms into the graph from the RHS of the first figure, which now, by the extended beta move, transforms into the graph from the LHS of the last figure.)

Let us give this last move a name: call it the “epsilon beta move”.

Now, a fact: the usual graphic beta move together with the epsilon beta move is equivalent to the extended beta move.

Proof: we have to prove only one implication, because we have already proved the other. For this, let us express the graph from the RHS of the first figure with the help of the crossing macros. It looks like this:

[Figure: fundam_epsilon_11]

Now we see that by applying first an epsilon beta move and then a usual graphic beta move, we recover the extended beta move.

Anyway, the extended beta move is left for further exploration.

Generating set of Reidemeister moves for graphic lambda crossings

UPDATE 2: It turns out there is an error in this post, concerning the move R3a. The oriented Reidemeister moves which are not respected by the untyped lambda calculus crossings are R2c, R2d, R3a, R3h. All other moves are allowed. [Added: see why by looking at the post Big sets of oriented Reidemeister moves which don’t generate all moves.]

________

UPDATE: The paper “Graphic lambda calculus and knot diagrams”, arXiv:1211.1604, contains a part of what is discussed here, at the tag “graphic lambda calculus”. There will be follow-ups and applications.

________

The last post related to the subject is “3D crossings in graphic lambda calculus”. I shall use the notations from there; see also some earlier posts indicated there.

For oriented link diagrams the list of Reidemeister moves is large (24 moves). The list of the moves and a choice of a minimal generating set (with proofs) can be found in the paper by Michael Polyak, “Minimal generating sets of Reidemeister moves”.

The following four Reidemeister moves generate all the others.

– R1a:

– R1b:

– R2a:

– R3a:

Let’s use the crossings given by graphic lambda calculus (see the post mentioned at the beginning) and interpret the moves as composites of graphic beta moves. Is this possible? Yes, I’ll show you in a moment.

In terms of the graphic lambda calculus, the move R1a is this:

which is just a degenerate graphic beta move with elimination of loops (short name “elimination of loops”).

The move R1b becomes this:

which is another elimination of loops move.

So, the first two moves are just degenerate graphic beta moves with elimination of loops. We may say that we used 0 graphic beta moves, because degenerate moves are much weaker than the full graphic beta move.

Let’s see what R2a looks like in terms of graphic lambda calculus:

That is a combination of two beta moves!

Finally, the move R3a looks like this:

That is a combination of 6 beta moves. The red arrows are those which correspond to the over-under relation of the respective crossings. (See UPDATE 2 for corrections.)

Conclusion: the Reidemeister moves, when interpreted in graphic lambda calculus, are combinations of an even number of graphic beta moves and any number of degenerate graphic beta moves with elimination of loops.

3D crossings in emergent algebras

This post continues the previous one, “3D crossings in graphic lambda calculus”. Other relevant posts are:

Four symbols

Computing with space: done!

For graphic lambda calculus see this; for knot diagrams and emergent algebras see this, sections 3-6.

In the previous post we saw that we can “construct” crossings by using both the \lambda abstraction operation and the composition operation from lambda calculus. These operations appear as elementary gates in graphic lambda calculus, along with two other gates, namely the FAN-OUT gate, denoted by Y, and the \bar{\varepsilon} gate (with \varepsilon an element of a commutative group \Gamma). This last gate models the family of operations of an emergent algebra.
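For bookkeeping only, here is a toy encoding (my own illustration, not part of the formalism) of the four elementary gates and the arities as I read them from the construction: each crossing macro couples a gate with one input and two outputs with a gate with two inputs and one output.

```python
# Toy bookkeeping of the elementary gates of graphic lambda calculus and their
# arities, as I read them (an illustration only, not the official formalism).

from dataclasses import dataclass

@dataclass(frozen=True)
class Gate:
    name: str
    inputs: int
    outputs: int

LAMBDA  = Gate("lambda abstraction", inputs=1, outputs=2)
APP     = Gate("composition (application)", inputs=2, outputs=1)
FANOUT  = Gate("FAN-OUT (Y)", inputs=1, outputs=2)
EPSILON = Gate("dilation (epsilon-bar)", inputs=2, outputs=1)  # decorated by eps in Gamma

# A crossing macro pairs a 1-input/2-output gate with a 2-input/1-output gate:
for top, bottom in [(LAMBDA, APP), (FANOUT, EPSILON)]:
    assert (top.inputs, top.outputs) == (1, 2)
    assert (bottom.inputs, bottom.outputs) == (2, 1)
print("all four gates are trivalent, paired as 1-in/2-out with 2-in/1-out")
```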

The FAN-OUT gate Y is used in two different contexts. The first is as a properly defined FAN-OUT, with behaviour described by the global fan-out move, needed in the construction which attaches to any term of untyped lambda calculus a graph in the lambda calculus sector of graphic lambda calculus, as explained in “Local and global moves on locally planar trivalent graphs …”, step 3 in section 3. The second is in relation with the decorated knot formalism of emergent algebras, “Computing with space, …”, section 3.1.

There is an astounding similarity between the \lambda and composition gates from lambda calculus, on one side, and the FAN-OUT and \bar{\varepsilon} gates from emergent algebras, on the other, in the context of defining crossings. I shall explain this further.

In the decorated knot formalism, crossings of oriented wires are decorated with elements \varepsilon of a commutative group \Gamma. The relation between these crossings and their representations in terms of trivalent graphs is as follows:

Comparing with the notations from the previous post, we see that in both cases the \lambda gate corresponds to a FAN-OUT, but, depending on the type of crossing, the composition operation gate corresponds to one of the gates decorated by \varepsilon OR \varepsilon^{-1}.

There is a second remark to be made, namely that the crossings constructed from the FAN-OUT and \bar{\varepsilon} gates satisfy the Reidemeister I and II moves but not the Reidemeister III move. This is not a bad feature of the construction; in fact it is its most profound feature, because it leads to the “chora” construction and to the introduction of composite gates which, “in the infinitesimal limit”, also satisfy Reidemeister III; see section 6 of “Computing with space”.

In contradistinction, the crossings constructed from the \lambda abstraction and composition operations do satisfy all three Reidemeister moves.

3D crossings in graphic lambda calculus

Related:

(A)  (Graphic) beta rule as braiding

(B)  Slide equivalence of knots and lambda calculus (I)

(C) Braitenberg vehicles, enchanted looms and winnowing-fans

Let us look again at the NOTATIONS I made in the post (A) for crossings in graphic lambda calculus:

When seen in 3D, both are the same. Indeed, the 3D picture is the following one:

How to imagine the graphic beta move? Start with two wire segments in 3D, marked like this:

Glue the small blue arrow (which is just a mark on the wire) that goes downwards, away from the blue wire, to the small red arrow that goes downwards, towards the red wire:

That’s the graphic beta move, in one direction. For the opposite direction just rewind the film.

There is a slight resemblance to the figures from post (B), concerning slide equivalence: there, as here, crossings are decomposed into (or assembled from) two types of gates, namely one with one entry and two exits, and the other with two entries and one exit.

Notice also that in graphic lambda calculus we have two other gates, namely the FAN-OUT gate and the TOP gate. We shall see how they couple together next time.

Menelaus theorem by way of Reidemeister move 3

(Half of) Menelaus’ theorem is equivalent to a theorem by Emil Artin, from his excellent book Geometric Algebra, Interscience Publishers (1957), saying that the inverse semigroup generated by dilations (in an affine space) consists of dilations and translations. More specifically, if \varepsilon, \mu > 0 are such that \varepsilon \mu < 1, then the composition of two dilations, one of coefficient \varepsilon and the other of coefficient \mu, is a dilation of coefficient \varepsilon \mu.
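As a quick sanity check (my own illustration, in the model case of the plane with \delta^{x}_{\varepsilon} y = x + \varepsilon (y - x), not the general setting of normed groups with dilations), the composition \delta^{x}_{\varepsilon} \circ \delta^{y}_{\mu} can be verified numerically to be the dilation \delta^{w}_{\varepsilon \mu} with base point w = ((1-\varepsilon) x + \varepsilon (1-\mu) y) / (1 - \varepsilon \mu):

```python
# Sanity check in the plane: the composition of two dilations of coefficients
# eps and mu (with eps*mu < 1) is a dilation of coefficient eps*mu.
# Here delta(x, eps, y) = x + eps*(y - x), and the base point of the composite
# works out to w = ((1-eps)*x + eps*(1-mu)*y) / (1 - eps*mu).

import numpy as np

def delta(x, eps, y):
    return x + eps * (y - x)

x = np.array([0.0, 0.0])
y = np.array([3.0, 1.0])
eps, mu = 0.5, 0.4

w = ((1 - eps) * x + eps * (1 - mu) * y) / (1 - eps * mu)

for z in [np.array([2.0, -1.0]), np.array([-5.0, 7.0])]:
    lhs = delta(x, eps, delta(y, mu, z))   # compose the two dilations
    rhs = delta(w, eps * mu, z)            # single dilation of coefficient eps*mu
    assert np.allclose(lhs, rhs)
print("composition of dilations = dilation of coefficient eps*mu, based at", w)
```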

Artin also contributed to braid theory, so it may be a nice idea to give a proof of Artin’s interpretation of Menelaus’ theorem by using Reidemeister moves.

This post is related to previous ones, especially these three:

Noncommutative Baker-Campbell-Hausdorff formula

A difference which makes four differences, in two ways

Rigidity of algebraic structure: principle of common cause

which I shall use as references for what a normed group with dilations, conical group and associated decorations of tangle diagrams are.

Let’s start! I use a representation of dilations as decorations of an oriented tangle diagram.

For any \varepsilon > 0, dilations of coefficients \varepsilon and \varepsilon^{-1} provide two operations which give the space (say X) the structure of an idempotent right quasigroup, which is equivalent to saying that decorations of the tangle diagrams by these rules are stable under the Reidemeister moves of type I and II.
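In the model case of a real vector space, where \delta^{x}_{\varepsilon} y = x + \varepsilon (y - x), the idempotent right quasigroup axioms behind Reidemeister I and II reduce to \delta^{x}_{\varepsilon} x = x and \delta^{x}_{\varepsilon} \delta^{x}_{\varepsilon^{-1}} y = \delta^{x}_{\varepsilon^{-1}} \delta^{x}_{\varepsilon} y = y. Here is a minimal numeric check (my own illustration, not a proof in the general setting):

```python
# Model check of the idempotent right quasigroup axioms behind the stability of
# decorations under Reidemeister I and II, for delta(x, eps, y) = x + eps*(y - x):
#   R1:  delta(x, eps, x) == x
#   R2:  delta(x, eps, delta(x, 1/eps, y)) == y   (and with eps, 1/eps swapped)

import numpy as np

def delta(x, eps, y):
    return x + eps * (y - x)

rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.normal(size=3), rng.normal(size=3)
    eps = rng.uniform(0.1, 10.0)
    assert np.allclose(delta(x, eps, x), x)                    # Reidemeister I
    assert np.allclose(delta(x, eps, delta(x, 1/eps, y)), y)   # Reidemeister II
    assert np.allclose(delta(x, 1/eps, delta(x, eps, y)), y)   # Reidemeister II
print("R1 and R2 decoration rules hold in the vector space model")
```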

A particular example of a space with dilations is a normed group with dilations, where the dilations are left-invariant.

If the decorations that we make are also stable with respect to the Reidemeister move 3, then it can be proved that the space with dilations which I use has to be a conical group! What is a conical group? It is a non-commutative vector space; in particular it could be a real vector space, the Heisenberg group, a Carnot group, and so on. Read the previous posts about this.
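In my reading, the extra identity behind stability under the Reidemeister move 3 is the self-distributivity \delta^{x}_{\varepsilon} \delta^{y}_{\mu} z = \delta^{\delta^{x}_{\varepsilon} y}_{\mu} \delta^{x}_{\varepsilon} z, which does hold in a real vector space, the simplest example of a conical group. A hedged numeric check (an illustration only; for the general statement see the referenced posts):

```python
# Hedged check (my reading): stability of decorations under Reidemeister 3
# amounts to the self-distributivity of dilations,
#   delta(x, eps, delta(y, mu, z)) == delta(delta(x, eps, y), mu, delta(x, eps, z)),
# which holds in a real vector space, the simplest conical group.

import numpy as np

def delta(x, eps, y):
    return x + eps * (y - x)

rng = np.random.default_rng(1)
for _ in range(100):
    x, y, z = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
    eps, mu = rng.uniform(0.1, 10.0, size=2)
    lhs = delta(x, eps, delta(y, mu, z))
    rhs = delta(delta(x, eps, y), mu, delta(x, eps, z))
    assert np.allclose(lhs, rhs)
print("self-distributivity (the R3-type identity) holds in the vector space model")
```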

Graphically, the Reidemeister move 3 is this sliding movement:

of CC' under the crossing AA'-BB' (remark also how the decorations \varepsilon and \mu of the crossings switch places).

Further on I shall suppose that we use a conical group for decorations, with the distance function denoted by d. Think of a real vector space with the distance given by a Euclidean norm, but don’t forget that in fact we don’t need to be so restrictive.

Now take two strings and twist them around each other infinitely many times, then pass a third string under the first two and decorate everything as in the following figure.

We can slide the red string (the one which passes under) twice by using the Reidemeister move 3. The decorations do not change. If you want to see the expression of z' as a function of x, y, then we easily write that

z' = \delta^{x}_{\varepsilon} \delta^{y}_{\mu} z = \delta^{x_{1}}_{\varepsilon} \delta^{y_{1}}_{\mu} z

where x_{1}, y_{1} are obtained from x, y according to the decoration rules.

We may repeat the double slide movement n times and we get that

z' = \delta^{x_{n}}_{\varepsilon} \delta^{y_{n}}_{\mu} z

If we prove that the sequences x_{n}, y_{n} both converge to some point w, then by passing to the limit in the previous equality we would get that

z' = \delta^{w}_{\varepsilon} \delta^{w}_{\mu} z = \delta^{w}_{\varepsilon \mu} z

which is the conclusion of Artin’s result! Otherwise said, if we slide the red string under the whole twisted pair of strings, then the outcome is the same as passing under only one string, decorated with w, with the crossing decorated with \varepsilon \mu.

The only thing left is to prove that the sequences indeed converge. But this is easy: we prove that the recurrence relating x_{n+1}, y_{n+1} to x_{n}, y_{n} is a contraction. See the figure:

Well, that is all.

UPDATE: I was in fact motivated to draw the figures and explain all this after seeing this very nice post by Tao, where an elementary proof of a famous result is given by using “elementary” graphical means.