Tag Archives: approximate structures

Zipper logic appeared in math.CO (combinatorics)

Today, a week after the submission, the article Zipper Logic arXiv:1405.6095 appeared in math.CO, math.GT, and not in math.LO.

Actually, combinatorics, especially lately, is everywhere; it is probably one of the most dynamic research fields in mathematics.

There is a small-world connection between zipper logic and combinatorics: zipper logic is a variant of chemlambda, which is a variant of GLC, which has an emergent algebra sector (arXiv:1305.5786, section 5), a sort of approximate algebraic structure, similar to, but not the same as, the approximate groups of arXiv:1110.5008.

____________________________________

 

Dictionary from emergent algebra to graphic lambda calculus (III)

Continuing from Dictionary from emergent algebra to graphic lambda calculus (II), let’s introduce the following macro, called the “relative \Upsilon gate”:

linear_1

If this macro looks involved, we may express it with the help of the emergent algebra crossing macros, like this:

linear_2

Notice that none of these graphs is in the emergent algebra sector, a priori! However, looking at the graph on the left, we recognize an old friend: a chora. See arXiv:1103.6007, section 5, for a definition of the chora construction, but only in relation to emergent algebra gates. Previously the chora diagram appeared as a convenient notation for the relative dilation gate, but here we see it appearing in a different context. It is already present in relation to lambda calculus in the lambda-scale article arXiv:1205.0139, section 4. It will be important later, when I shall give an exposition of the second paragraph after Question 1 from the post Graphic lambda calculus used for quantum programming (Towards qubits III).

With the help of the relative \Upsilon macro, we can take a new look at the mystery move from the end of the dictionary (II) post. Indeed, look at the following succession of moves:

linear_4

The use of the mystery move is to transform the relative \Upsilon gate into a usual \Upsilon gate! So, if we accept the mystery move for the emergent algebra sector, then the effect is that the relative \Upsilon gate is in the emergent algebra sector of the graphic lambda calculus and, moreover, it can be transformed into a \Upsilon gate.

With this information, let’s go back to the graphs A and C from the dictionary (II) post.

linear_5

We know that A \leftrightarrow C in the emergent algebra sector is the right translation of the approximate associativity from emergent algebras (in the formalism using binary trees, for example). In the last post we arrived at the conclusion that we can prove A \leftrightarrow C by using the mystery move as a legal move in the emergent algebra sector of the graphic lambda calculus (in combination with the other valid moves for this sector, but not using GLOBAL FAN-OUT).

Instead of the graph C, we may  introduce the graph C' and compare it with the graph C:

linear_6

The only difference between C and C' is the appearance of the relative \Upsilon gate in C' instead of the regular \Upsilon gate in C.

 

We can prove that C' \leftrightarrow A. In particular this proves that C' is in the emergent algebra sector, even if it contains a relative \Upsilon gate. (Recall that if we don’t accept the mystery move in the emergent algebra sector, then it is not clear whether C' belongs to that sector.) The proof is not given here, but it is straightforward, using the moves CO-ASSOC, CO-COMM, LOC PRUNING, R2 and ext 2 (therefore without the mystery move).

We can pass from C' to C by using the fact that the mystery move allows us to transform a relative \Upsilon gate into a regular one. So this is how the mystery move enters the equivalence A \leftrightarrow C: through A \leftrightarrow C', which can be done without the mystery move, and C' \leftrightarrow C by the mystery move, in the form of transforming a relative gate into a regular one.

Dictionary from emergent algebra to graphic lambda calculus (II)

This post continues the Dictionary from emergent algebra to graphic lambda calculus (I). Let us see if we can prove the approximate associativity property (for the approximate sum) in graphic lambda calculus. I am going to use the dictionary and see if I can make the translation.

In the dilation structures/tree formalism, the approximate associativity, with its proof, is this:

sum_1

Let’s translate the trees, by using the dictionary.

sum_b

At first sight, the graphs A and C are clearly in the emergent algebra sector of the graphic lambda calculus. For the graph B, which corresponds to the tree from the middle of the first figure, it’s not clear if it belongs to the same sector.

The next figure shows that it does, because we can pass from A to B by a succession of moves acceptable in the sector.

sum_c

Problems appear when we try to relate B and C. The reason for these problems, as we shall see, is that we gave up variables by employing the “fan-out” gate \Upsilon. But this gate is a fan-out only in a very weak sense; in fact it is more like a co-commutative co-monoid operation.

Let’s see: we prepare B in order to make it look more like C.

sum_d

There is a part of the final graph which is encircled by a green dashed line; pay attention to it.

Now we prepare C a bit:

sum_e

The graph on the right also has a part encircled in green. Look closely at this graph, in parallel with the last graph from the preparation of B. Remark that these two graphs are the same outside of the respective encircled regions. If we could transform one encircled region into the other, then we would have a proof that B \leftrightarrow C.

In conclusion, the mystery move

sum_f

would solve the problem of proving the approximate associativity in the emergent algebra sector of the graphic lambda calculus. Remark that if \Upsilon really were a fan-out gate, i.e. if we could apply GLOBAL FAN-OUT moves to it, then the mystery move would hold.

But I don’t want to use global moves in the emergent algebra sector, because in the tree formalism all moves are local. Moreover, even with global fan-out, in order to apply it, the graphs grafted at the labels “x” and “u” would have to share no connection; therefore, strictly speaking, we would prove an approximate associativity weaker than the approximate associativity from emergent algebras/tree formalism. (The same phenomenon, but for another identity of emergent algebras, is explained in this post.) The reason lies in the apparently innocuous but hard-to-manage fact that we gave up variables in graphic lambda calculus.

Next time, about the meaning of the mystery move.

Dictionary from emergent algebra to graphic lambda calculus (I)

Because I am going to explore the emergent algebra sector in future posts, I think it is good to know where we stand with using graphic lambda calculus for describing proofs in emergent algebras as computations. In the big map of research paths, this corresponds to the black path linking “Emergent algebra sector” with “Emergent algebras”.

A dictionary seems a good way to start this discussion.

Let’s see: there are three formalisms involved:

  • in the first paper on spaces with dilations, Dilatation structures I. Fundamentals arXiv:math/0608536, section 4, a formalism using binary decorated trees is introduced, in order to ease the manipulation of dilation structures,
  • emergent algebras are an abstraction of dilation structures, in the sense that they don’t need a metric space to live on. The first paper on the subject is Emergent algebras arXiv:0907.1520 (see also Braided spaces with dilations and sub-riemannian symmetric spaces arXiv:1005.5031 for explanations of the connection between dilation structures and emergent algebras, as well as for braided symmetric spaces, sub-riemannian symmetric spaces, conical groups, items you can see on the big map mentioned before). Emergent algebras are a mixture of an algebraic theory with an important part of epsilon-delta analysis. One of the goals of graphic lambda calculus is to replace this epsilon-delta part by a computational part.
  • graphic lambda calculus, extensively described here, has an emergent algebra sector (see arXiv:1305.5786; also check out the series Emergent algebras as combinatory logic part I, part II, part III, part IV). This is not an algebraic theory, but a formalism which contains lambda calculus.

The first figure describes a dictionary of objects which appear in these three formalisms. In the first column you find objects as they appear in dilation structures – emergent algebra formalism. In the second column you find the corresponding object in the binary trees formalism. In the third column there are the respective objects as they appear in the emergent algebra sector of the graphic lambda calculus.

sum_basic

Some comments: the first two rows  are about an object called

  • “dilation (of coefficient \varepsilon, with \varepsilon \in \Gamma, a commutative group)” in dilation structures, which is an operation in emergent algebras, indexed by \varepsilon (the second row is about dilations of coefficient \varepsilon^{-1}),
  • it is an elementary binary tree with the node decorated by white (for \varepsilon) or black (for \varepsilon^{-1})
  • it is one of the elementary gates in graphic lambda calculus.

The third row is about the “approximate sum” in dilation structures, which is a composite operation in emergent algebras, which is a certain graph in graphic lambda calculus.

The fourth row is about the “approximate difference” and the fifth about the “approximate inverse”.

For the geometric meaning of these objects see the series on  The origin of emergent algebras part I, part II, part III,   or go directly and read arXiv:1304.3694 .
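To get a feel for these objects, here is a minimal numerical sketch (my addition, not part of the dictionary) in the simplest dilation structure, that of a real vector space, where \delta^{x}_{\varepsilon} y = x + \varepsilon (y - x). The function names are hypothetical; the point is that the approximate sum and the approximate difference converge, as \varepsilon \rightarrow 0, to the operations u + v - x and -u + v + x of the commutative group with neutral element x.

```python
import numpy as np

def dil(x, eps, y):
    """delta^x_eps y = x + eps*(y - x): the dilation of coefficient eps based at x."""
    return x + eps * (y - x)

def approx_sum(x, eps, u, v):
    """Approximate sum Sigma^x_eps(u,v) = delta^x_{1/eps} delta^{delta^x_eps u}_eps v."""
    return dil(x, 1.0 / eps, dil(dil(x, eps, u), eps, v))

def approx_dif(x, eps, u, v):
    """Approximate difference Delta^x_eps(u,v) = delta^{delta^x_eps u}_{1/eps} delta^x_eps v."""
    return dil(dil(x, eps, u), 1.0 / eps, dil(x, eps, v))

x, u, v = np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([3.0, 1.0])
for eps in (0.1, 0.01, 0.001):
    print(eps, approx_sum(x, eps, u, v), approx_dif(x, eps, u, v))
# approx_sum tends to u + v - x = [2, 3]; approx_dif tends to -u + v + x = [4, -1]
```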

What is different between these columns?

  • In the first column we have an algebra structure based on identities between composites of operations defined on a set.
  • In the second column we have trees with leaves decorated by labels from an alphabet (of formal variables) or by terms constructed recursively from those.
  • In the third column we have graphs with no variable names. (Recall that in graphic lambda calculus there are no variable names. Everything written in red can be safely erased; it is put there only for the convenience of the reader.)

Let’s see now the dictionary of identities/moves.

moves_basic

The most important comment is that identities in emergent algebras become moves in the other two formalisms. A succession of moves is in fact a proof for an identity.

The names of the identities or moves have been commented on in many places; there you see names like “Reidemeister move”, which show relations to knot diagrams, etc. See this post for the names of the moves and relations to knot diagrams, as well as section 6 from arXiv:1305.5786.

Let’s read the first column: it says that from an algebraic viewpoint an emergent algebra is a one parameter family (indexed by \varepsilon \in \Gamma) of idempotent right quasigroups. From the geometric point of view of dilation structures, it is a formalisation of properties expected from an object called “dilation”, namely that it preserves the base-point ( “x” in the figure), that a composition of dilations of coefficients \varepsilon, \mu, with the same base-point,  is again a dilation, of coefficient \varepsilon \mu, etc.

In the second column we see two moves, R1 and R2, which can be applied anywhere in a decorated binary tree, as indicated.

In the third column we see that these moves are among the moves from graphic lambda calculus, namely that R1 is in fact related to the oriented Reidemeister move R1a, so it has the same name.

The fact that the idempotent right quasigroup indexed by the neutral element of \Gamma, denoted by 1, is trivial, has no correspondent for binary trees, but it appears as the move (ext 2) in graphic lambda calculus. Through the intermediary of this move, the univalent termination gate appears.

These are the common moves. To these add, for the emergent algebra sector, the R1b move, the local fan-out moves and some pruning moves. There is also the global fan-out move, which is needed, but we are going to replace it by a local move with the funny name of “linearity of fan-out”; more on that later.

The local fan-out moves and the pruning moves are needed for the emergent algebra sector, but not for the binary trees or for emergent algebras. They are the price we have to pay for eliminating variable names. See the algorithm for producing graphs from lambda calculus terms for the way they solve the same problem for untyped lambda calculus. (However, the emergent algebra sector is to be compared not with the lambda calculus sector, but with the combinatory logic sector; more about this in a further post.)

We don’t need all the pruning moves, but only one which, together with the local fan-out moves, forms a family which could be aptly called:

co_mono_y

(notice I consider a reversible local pruning)

Grouping moves like this makes a nice symmetry with the fact that \Gamma is a commutative group, as remarked here.

As concerns the R1b move, which is the one from the next figure, I shall use it only if really needed (for the moment I don’t). It is needed for the sector of knot diagrams made from emergent algebra gates, but it is not yet clear to me if we need it for the emergent algebra sector.

r1bmove

However, there is a correspondent of this move for emergent algebras. Indeed, recall that a right quasigroup is a quasigroup if the equation x \circ a = b has a unique solution. If our emergent algebra is in fact a (family of) quasigroup(s), as happens for conical groups or for symmetric spaces in the sense of Loos (explained in arXiv:1005.5031), then in particular it follows that the equation x \circ_{\varepsilon} a = a has only the solution x = a (for \varepsilon \not = 1). This last statement has the R1b move as a correspondent in the realm of the emergent algebra sector.

Until now we have only local moves in the emergent algebra sector. We shall see that we need a global move (the global fan-out) in order to prove that the dictionary works, i.e. for proving the fundamental identities of emergent algebras within the graphic lambda calculus formalism. The goal will be to replace the global fan-out move by a new local move (i.e. one which is not a consequence of the existing moves of graphic lambda calculus). This new move will turn out to be a familiar sight, because it is related to the way we see linearity in emergent algebras.

Geometric Ruzsa inequality on groupoids and deformations

This is a continuation of  Geometric Ruzsa triangle inequalities and metric spaces with dilations .  Proposition 1 from that post may be applied to groupoids. Let’s see what we get.

Definition 1. A groupoid is a set G, whose elements are called arrows, together with a partially defined composition operation

(g,h) \in G^{(2)} \subset G \times G \mapsto gh \in G

and a unary “inverse” operation:

g \in G \mapsto g^{-1} \in G

which satisfy the following:

  •  (associativity of arrow composition) if (a,b) \in G^{(2)} and (b,c) \in G^{(2)}  then (a, bc) \in G^{(2)} and  (ab, c) \in G^{(2)} and moreover  we have a(bc) = (ab)c,
  •  (inverses and objects)   (a,a^{-1}) \in G^{(2)} and (a^{-1}, a) \in G^{(2)}  ; for any a \in G we define the origin of the arrow a to be  \alpha(a) = a^{-1} a and  the target of a to be  \omega(a) = a a^{-1};  origins and targets of arrows form the set of objects of the groupoid Ob(G),
  • (inverses again) if (a,b) \in G^{(2)} then a b b^{-1} = a and a^{-1} a b = b.

____________________

The definition is a bit unnecessarily restrictive, in the sense that I take groupoids to have sets of arrows and sets of objects. Of course there exist larger groupoids, but for the purpose of this post we don’t need them.

The most familiar examples of groupoids are:

  • the trivial groupoid associated to a non-empty set X is G = X \times X, with composition (x,y) (y,z) = (x,z) and inverse (x,y)^{-1} = (y,x). It is straightforward to notice that \alpha(x,y) = (y,y) and \omega(x,y) = (x,x), which is a way to say that the set of objects can be identified with X and the origin of the arrow (x,y) is y and the target of (x,y) is x.
  • any group G is a groupoid, with the arrow operation being the group multiplication and the inverse being the group inverse. Let e be the neutral element of the group G. Then for any arrow g \in G we have \alpha(g) = \omega(g) = e, therefore this groupoid has only one object, e. The converse is true, namely groupoids with only one object are groups.
  • take a group G which acts on the left on the set X, with the action (g,x) \in G \times X \mapsto gx \in X such that g(hx) = (gh)x and ex = x. Then G \times X is a groupoid with operation (h, gx) (g,x) = (hg, x) and inverse (g,x)^{-1} = (g^{-1}, gx). We have \alpha(g,x) = (e,x), which can be identified with x \in X, and \omega(g,x) = (e,gx), which can be identified with gx \in X. This groupoid has therefore X as its set of objects.
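As a sanity check of Definition 1, here is a small sketch (mine, not from the post) which implements the trivial groupoid over a three-element set and verifies the axioms by brute force:

```python
from itertools import product

X = ['p', 'q', 'r']
G = [(x, y) for x in X for y in X]        # arrows of the trivial groupoid over X

def comp(g, h):
    """Partial composition (x,y)(y,z) = (x,z), defined only when the middle letters match."""
    return (g[0], h[1]) if g[1] == h[0] else None

def inv(g):
    return (g[1], g[0])

alpha = lambda g: comp(inv(g), g)         # origin alpha(g) = g^{-1} g
omega = lambda g: comp(g, inv(g))         # target omega(g) = g g^{-1}

# objects: alpha(x,y) = (y,y) and omega(x,y) = (x,x), so Ob(G) is identified with X
assert all(alpha(g) == (g[1], g[1]) and omega(g) == (g[0], g[0]) for g in G)

# associativity, where defined
for a, b, c in product(G, repeat=3):
    if comp(a, b) is not None and comp(b, c) is not None:
        assert comp(comp(a, b), c) == comp(a, comp(b, c))

# "inverses again": a b b^{-1} = a and a^{-1} a b = b whenever ab is defined
for a, b in product(G, repeat=2):
    if comp(a, b) is not None:
        assert comp(comp(a, b), inv(b)) == a and comp(inv(a), comp(a, b)) == b
```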

For the relations between groupoids and dilation structures see arXiv:1107.2823  . The case of the trivial groupoid, which will be relevant soon, has been discussed in the post  The origin of emergent algebras (part III).

____________________

The following operation is well defined for any pair of arrows g, h \in G with \alpha(g) = \alpha(h):

\Delta(g,h) = h g^{-1}

Let A, B, C \subset G be three subsets of a groupoid G with the property that there exists an object e \in Ob(G) such that for any arrow g \in A \cup B \cup C we have \alpha(g) = e. We can define the sets \Delta(C,A), \Delta(B,C) and \Delta(B,A).

Let us define now the hard functions f: \Delta(C,A) \rightarrow C and g: \Delta(C,A) \rightarrow A with the property: for any z \in \Delta(C,A) we have

(1)     \Delta(f(z), g(z)) = z

(The name “hard functions” comes from the fact that \Delta should be seen as an easy operation, while the decomposition (1) of an arrow into a “product” of another two arrows should be seen as hard.)

The following is a corollary of Proposition 1 from the post  Geometric Ruzsa triangle inequalities and metric spaces with dilations:

Corollary 1.  The function i: \Delta(C,A) \times B \rightarrow \Delta(B,C) \times \Delta(B,A)  defined by

i(z,b) = (f(z) b^{-1} , g(z) b^{-1})

is injective. In particular, if the sets A, B, C are finite then

\mid \Delta(C,A) \mid \mid B \mid \leq \mid \Delta(B,C) \mid \mid \Delta(B,A) \mid .

____________________

Proof.   With the hypothesis that all arrows from the three sets have the same origin, we notice that \Delta satisfies the conditions 1, 2 from Proposition 1, that is

  1. \Delta( \Delta(b,c), \Delta(b,a)) = \Delta(c,a)
  2. the function b \mapsto \Delta(b,a) is injective.

As a consequence, the proof of Proposition 1 may be applied verbatim. For the convenience of the readers, I rewrite the proof as a recipe about how to recover (z, b) from i(z,b). The following figure is useful.

bellaiche_5

We have f(z) b^{-1} and g(z) b^{-1} and we want to recover z and b. We use (1) and property 1 of \Delta in order to recover z. With z comes f(z). From f(z) and f(z) b^{-1} we recover b, via the property 2 of the operation \Delta. That’s it.
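The recipe fits in a few lines of code. Below is a sketch (my illustration, with hypothetical names) in the one-object groupoid given by the group \mathbb{Z}/12\mathbb{Z}, where \Delta(g,h) = h g^{-1} reads h - g; the hard functions are built by recording one decomposition per element of \Delta(C,A), and the recovery of (z,b) follows the recipe above.

```python
from itertools import product

n = 12                                     # the group Z/12Z, seen as a groupoid with one object
A, B, C = {0, 1, 3}, {2, 5, 7}, {1, 4, 6}
delta = lambda g, h: (h - g) % n           # Delta(g,h) = h g^{-1}, written additively

# hard functions f, g: record one decomposition z = Delta(f(z), g(z)), f(z) in C, g(z) in A
hard = {delta(c, a): (c, a) for c, a in product(C, A)}

def i(z, b):
    f, g = hard[z]
    return ((f - b) % n, (g - b) % n)      # i(z,b) = (f(z) b^{-1}, g(z) b^{-1})

def recover(c, d):
    z = delta(c, d)                        # property 1 recovers z, and with it f(z)
    f, _ = hard[z]
    return z, (f - c) % n                  # property 2 recovers b from f(z) and f(z) b^{-1}

assert all(recover(*i(z, b)) == (z, b) for z, b in product(hard, B))
```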

____________________

There are now some interesting things to mention.

Fact 1.  The proof of Proposition 2 from the Geometric Ruzsa post is related to this. Indeed, in order to properly understand what is happening, please read again The origin of emergent algebras (part III). There you’ll see that a metric space with dilations can be seen as a family of deformations of the trivial groupoid. In the following I took one of the figures from the “origin III” post and modified it a bit.

bellaiche_4

Under the deformation of arrows given by \delta_{\varepsilon}(y,x) = (\delta^{x}_{\varepsilon} y , x), the operation \Delta((z,e), (y,e)) becomes the red arrow

(\Delta^{e}_{\varepsilon}(z,y), \delta^{e}_{\varepsilon} z)

The operation acting on points (not arrows of the trivial groupoid) which appears in Proposition 2 is \Delta^{e}_{\varepsilon}(z,y), but Proposition 2 does not follow straightforwardly from Corollary 1 of this post. That is because in Proposition 2 we use only targets of arrows, so the information at our disposal is less than the one from Corollary 1. This is supplemented by the separation hypothesis of Proposition 2. It works like this. If we deform the operation \Delta on the trivial groupoid by using dilations, then we mess up the first image of this post, because the deformation keeps the origins of arrows but it does not keep the targets. So we could apply the Corollary 1 proof directly to the deformed groupoid, but the information available to us consists only in the targets of the relevant arrows and not their origins. That is why we use the separation hypothesis in order to “move” all unknown arrows to others which have the same target, but origin now in e. The proof then proceeds as previously.

In this way, we obtain a statement about algebraic operations (like addition, see Fact 2) from the trivial groupoid operation.

Fact 2.  It is not mentioned in the “geometric Ruzsa” post, but the geometric Ruzsa inequality contains the classical inequality, as well as its extension to Carnot groups. Indeed, it is enough to apply it to particular dilation structures, like the one of a real vector space, or the one of a Carnot group.

Fact 3.  Let’s see what Corollary 1 says in the particular case of a trivial groupoid. In this case the operation \Delta is trivial

\Delta((a,e), (c,e)) = (c,a)

and the “hard functions” are trivial as well

f(a,c) = (c,e) and g(a,c) =(a,e)

The conclusion of Corollary 1 is trivial as well, because \mid \Delta(C,A) \mid = \mid C \mid \mid A \mid (and so on …), therefore the conclusion is

\mid C \mid \mid A \mid \mid B \mid \leq \mid B \mid^{2} \mid A \mid \mid C \mid

However, by the magic of deformations provided by dilation structures, from this uninteresting “trivial groupoid Ruzsa inequality” we get the more interesting original one!

Carry propagation-free number systems and a kind of approximate groups (I)

Avizienis introduced a signed-digit number representation which allows fast, carry propagation-free addition (A. Avizienis, Signed-digit number representation for fast parallel arithmetic, IRE Trans. Comput., vol. EC-10, pp. 389-400, 1961, see also B. Parhami, Carry-Free Addition of Recoded Binary Signed-Digit Numbers, IEEE Trans. Computers, Vol. 37, No. 11, pp. 1470-1476, November 1988.)

Here, just for fun, I shall use a kind of approximate groups to define a carry-free addition-like operation.

1. The hypothesis is: you have three finite sets A, B, K, all sitting in a group G. I shall suppose that the group G is commutative, although it seems that’s not really needed further if one takes a bit of care. In order to reflect this, I shall use  + for the group operation on G, but I shall write M N  for the set of all elements of the form  x + y with x \in M and y \in N.

We know the following about the three (non-empty, of course) sets A, B, K:

  1. B \subset A,
  2. 0 \in B, 0 \in K, A = -A, B = -B, K = -K,
  3. \mid B \mid = \mid K \mid   (where \mid M \mid is the cardinal of the finite set M),
  4. A A B \subset A K  (that’s what qualifies A as a kind of an approximate group).

Let us now choose a bijective function \phi: K \rightarrow B and two functions \psi: AAB \rightarrow A and \chi: AAB \rightarrow K such that

(1)   for any x, y \in A and any b \in B we have x+y+b = \psi(x+y+b) + \chi(x+y+b).

______________

2. Let X be the family of functions defined on \mathbb{Z} with values in A, with compact support, i.e. if x: \mathbb{Z} \rightarrow A belongs to X, then only a finite number of k \in \mathbb{Z} have the property that x_{k} = x(k) \not = 0.  The element 0 \in X is defined by x_{k} = 0 for any k \in \mathbb{Z}.  If x \in X with x \not = 0 then ind(x) \in \mathbb{Z} is the smallest index k \in \mathbb{Z} with x_{k} \not = 0.

______________

3.  The structure introduced at 1. allows the definition of an operation * on X. The definition of the operation is inspired by a carry-free addition algorithm.

Definition 1. If both x, y \in X are equal to 0 then x*y = 0. Otherwise, let j \in \mathbb{Z} be the smallest index k such that one of x_{k} or y_{k} is different from 0. The element z = x*y is defined by the following algorithm:

  1. z_{k} = 0 for any k < j,
  2. let t_{j} = 0, k = j. Repeat:

p_{k} = x_{k} + y_{k} + t_{k}

z_{k} = \psi(p_{k})

t_{k+1} = \phi(\chi(p_{k}))

k \rightarrow k+1

______________

4.  We may choose the functions \phi, \psi, \chi such that the operation * starts to become interesting. Before doing this, let’s remark that:

  • the operation * is commutative as a consequence of the fact that + is commutative,
  • the operation * is conical, i.e. it admits the shift \delta: X \rightarrow X, \delta(x)(k+1) = x_{k}, as an automorphism (a property encountered before for dilation structures on ultrametric spaces, see arXiv:0709.2224).

Proposition 1.  If the functions \phi, \chi, \psi satisfy the following:

  1. \phi(0) = \chi(0) = \psi(0) = 0
  2. \chi(x) = 0 for any x \in A
  3. \psi(-x) = -\psi(x) and \chi(-x) = -\chi(x) for any x \in A A B, and \phi(-k) = -\phi(k) for any k \in K,

then (X, *) is a conical quasigroup.
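For the radix-4 instance sketched above (restated here so the snippet is self-contained, and again only my hypothetical example), the three conditions can be checked mechanically:

```python
phi = lambda k: k // 4                                     # bijection K -> B, K = {-4, 0, 4}
chi = lambda p: 4 if p >= 4 else (-4 if p <= -4 else 0)    # chi: A+A+B -> K
psi = lambda p: p - chi(p)                                 # psi: A+A+B -> A

A, K, AAB = range(-3, 4), (-4, 0, 4), range(-7, 8)
assert phi(0) == chi(0) == psi(0) == 0                     # condition 1
assert all(chi(a) == 0 for a in A)                         # condition 2
assert all(psi(-p) == -psi(p) and chi(-p) == -chi(p) for p in AAB)
assert all(phi(-k) == -phi(k) for k in K)                  # condition 3
```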

Geometric Ruzsa triangle inequalities and metric spaces with dilations

In arXiv:1212.5056 [math.CO], “On growth in an abstract plane” by Nick Gill, H. A. Helfgott and Misha Rudnev, lemma 4.1 gives a proof of the Ruzsa triangle inequality which intrigued me. Later on, at the end of the article, the authors give a geometric Ruzsa inequality in a Desarguesian projective plane, based on ideas similar to the ones used in the proof of the Ruzsa triangle inequality.

All this made me write the following.

____________

Let X be a non-empty set and \Delta: X \times X \rightarrow X be an operation on X which has the following two properties:

  1. for any a, b, c \in X we have \Delta(\Delta(a,b), \Delta(a,c)) = \Delta(b,c),
  2. for any a \in X   the function z \mapsto \Delta(z,a) is injective.

We may use weaker hypotheses for \Delta, namely:

  1. (weaker) there is a function F: X \times X \rightarrow X such that F(\Delta(a,b), \Delta(a,c)) = \Delta(b,c) for any a, b, c \in X,
  2. (weaker) there is a function G: X \times X \rightarrow X such that a \mapsto G(\Delta(a,b), b) is an injective function for any b \in X.

Prop. 1. Let X be a non-empty set endowed with an operation \Delta which satisfies 1. and 2. (or the weaker versions of those). Then for any non-empty sets A, B, C \subset X there is an injection

i: \Delta(C,A) \times B \rightarrow \Delta(B,C) \times \Delta(B,A),

where we denote by \Delta(A,B) = \left\{ \Delta(a,b) \mid a \in A, b \in B \right\}

In particular, if A, B, C are finite sets, we have the Ruzsa triangle inequality

\mid \Delta(C,A) \mid \mid B \mid \leq \mid \Delta(B,C) \mid \mid \Delta(B,A) \mid,

where \mid A \mid denotes the cardinality of the finite set A.

I shall give the proof for hypotheses 1., 2., because the proof is the same for the weaker hypotheses. Also, this is basically the same proof as the one of the mentioned  lemma 4.1.  The proof of the Ruzsa inequality corresponds to the choice \Delta(a,b) = -a + b, where (X,+) is a group (no need to be abelian). The proof  of the geometric Ruzsa inequality corresponds to the choice \Delta(a,b) = [b,a], with the notations from the article, with the observation that this function \Delta satisfies weaker 1. and 2.

Proof.  We can choose functions f: \Delta(C,A) \rightarrow C and g: \Delta(C,A) \rightarrow A such that for any x \in \Delta(C,A) we have x = \Delta(f(x),g(x)). With the help of these functions let

i(x,b) = (\Delta(b,f(x)), \Delta(b, g(x))).

We want to prove that i is injective. Let (c,d) = i(x,b) = i(x',b'). Then, by 1. we have x = x' = \Delta(c,d). This gives a unique e = f(x) = f(x'). Now we know that \Delta(b, e) = \Delta(b,f(x)) = c = \Delta(b', f(x')) = \Delta(b', e). By 2. we get that b = b'. qed.
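For the classical choice \Delta(a,b) = -a + b in (\mathbb{Z}, +), the injectivity of i and the resulting triangle inequality can be checked by brute force; the sketch below is my illustration, not part of the argument.

```python
from itertools import product
from random import sample, seed

seed(0)
A, B, C = (set(sample(range(100), 8)) for _ in range(3))
delta = lambda a, b: -a + b                  # the classical choice, X = (Z, +)

D = lambda M, N: {delta(m, n) for m, n in product(M, N)}

# hard functions f, g: one decomposition x = Delta(f(x), g(x)) per x in Delta(C,A)
hard = {delta(c, a): (c, a) for c, a in product(C, A)}

image = {(delta(b, hard[x][0]), delta(b, hard[x][1])) for x, b in product(hard, B)}
assert len(image) == len(hard) * len(B)      # the function i is injective
assert len(D(C, A)) * len(B) <= len(D(B, C)) * len(D(B, A))   # Ruzsa triangle inequality
```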

____________

In a metric space with dilations (X, d, \delta) we have the approximate difference function \Delta^{e}_{\varepsilon} (a,b), based at e \in X and applied to a pair of close points a, b \in X. This function has the property that (e,a,b) \mapsto \Delta^{e}_{\varepsilon}(a,b) converges uniformly to \Delta^{e}(a,b) as \varepsilon goes to 0. Moreover, there is a local group operation with e as neutral element such that \Delta^{e}(a,b) = -a+b, therefore the function \Delta^{e} satisfies 1. and 2.

As concerns the function \Delta^{e}_{\varepsilon}, it satisfies the following approximate version of 1.:

  1. (approximate) for any e, a, b, c \in X which are sufficiently close and for any \varepsilon \in (0,1) we have, with the notation a(\varepsilon) = \delta^{e}_{\varepsilon} a,  the relation

\Delta^{a(\varepsilon)}_{\varepsilon}(\Delta^{e}_{\varepsilon}(a,b), \Delta^{e}_{\varepsilon}(a,c)) = \Delta^{e}_{\varepsilon}(b,c).
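In the simplest dilation structure, a real vector space with \delta^{x}_{\varepsilon} y = x + \varepsilon (y - x), relation 1. (approximate) can be tested numerically; in this commutative model it even holds exactly, as the following sketch (my code, hypothetical names) suggests.

```python
import numpy as np

def dil(x, eps, y):
    """delta^x_eps y = x + eps*(y - x), the dilations of a real vector space."""
    return x + eps * (y - x)

def adif(e, eps, a, b):
    """Approximate difference Delta^e_eps(a,b) = delta^{delta^e_eps a}_{1/eps} delta^e_eps b."""
    return dil(dil(e, eps, a), 1.0 / eps, dil(e, eps, b))

rng = np.random.default_rng(0)
e, a, b, c = rng.normal(size=(4, 3))
for eps in (0.5, 0.1, 0.01):
    lhs = adif(dil(e, eps, a), eps, adif(e, eps, a, b), adif(e, eps, a, c))
    rhs = adif(e, eps, b, c)
    assert np.allclose(lhs, rhs)             # relation 1. (approximate), exact in this model
print(adif(e, 1e-8, a, b), -a + b + e)       # Delta^e_eps(a,b) -> -a + b in the group based at e
```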

We say that a set A \subset X is \varepsilon separated if for any x, y \in A, the inequality d(x,y) < \varepsilon implies x = y. Further I am going to write about sets which are close to a fixed, but otherwise arbitrary, point e \in X.

Prop. 2.  In a metric space with dilations, let p > 0 and let A, B, C be finite sets of points included in a compact neighbourhood of e, which are close to e \in X, such that for any \varepsilon \in (0,p) the sets B and \Delta^{e}_{\varepsilon}(C,A) are \mu separated. Then for any \varepsilon \leq C(\mu) there is an injective function

i: \Delta^{e}_{\varepsilon}(C,A) \times B \rightarrow \Delta^{e}_{\varepsilon}(B,C) \times \Delta^{e}_{\varepsilon}(B,A).

Proof. As previously, we choose the functions f and g. Notice that these functions depend on \varepsilon but this will not matter further.  I shall use the O(\varepsilon) notation liberally, for example y = x + O(\varepsilon) means d(x,y) \leq O(\varepsilon).  Let’s define the function i by the same formula as previously:

i(x,b) = (\Delta^{e}_{\varepsilon}(b,f(x)), \Delta^{e}_{\varepsilon}(b, g(x))).

Let (x,b) and (x',b') be pairs such that i(x,b) = i(x',b') = (c_{\varepsilon}, d_{\varepsilon}). From 1. (approximate) and from the uniform convergence mentioned previously we get that

x = x' + O(\varepsilon) = \Delta^{e}(c_{\varepsilon}, d_{\varepsilon}) + O(\varepsilon).

There is a function C(\mu) such that \varepsilon \leq C(\mu) implies that the last O(\varepsilon) from the previous relation is smaller than \mu. For such an \varepsilon, by the separation of \Delta^{e}_{\varepsilon}(C,A), we get x = x'.

Let z = f(x). From the hypothesis we have \Delta^{e}_{\varepsilon}(b, z) = \Delta^{e}_{\varepsilon}(b',z). This implies, via the structure of the function \Delta^{e} and via the uniform convergence, that b' = b + O(\varepsilon) (by compactness, this last O(\varepsilon) does not depend on A, B, C). By the same reasoning as previously, we may choose C(\mu) such that d(b,b') < \mu if \varepsilon \leq C(\mu). This implies b = b'. qed.