Local FAN-IN eliminates GLOBAL FAN-OUT (II)

As I wrote in Local FAN-IN eliminates global FAN-OUT (I), the introduction of the three moves (FAN-IN and the two DIST moves) eliminates global FAN-OUT from the lambda calculus sector of the graphic lambda calculus. In this post we shall see that we can safely eliminate two other moves, namely R1a and R1b, as well as improve the behaviour of the crossings from the \lambda-TANGLE sector.

The equilibrium is thus established: three new moves instead of the three old moves. And there are some unexpected advantages.



Proof.  (a) Done in the following picture.


The proof of (b) is here:


Finally, here is the proof of (c):



The \lambda-TANGLE sector of the graphic lambda calculus is obtained by using the \lambda-crossing macros


In Theorem 6.3 of arXiv:1305.5786 [cs.LO] I proved that all the oriented Reidemeister moves (with the crossings replaced by the respective macros), with the exception of the moves R2c, R2d, R3a and R3h, can be proved by using the graphic beta move and elimination of loops. We can improve the theorem in the following way.

Theorem.  By using the graphic beta move, elimination of loops, FAN-IN and CO-COMM, we can prove all the 16 oriented Reidemeister moves.

Proof. The missing moves R2c, R2d, R3a and R3h are all equivalent (by using the graphic beta move and elimination of loops, see this question/answer at mathoverflow) to the following switching move, which we can prove with FAN-IN and CO-COMM:


The proof is done.

Local FAN-IN eliminates global FAN-OUT (I)

In order to build a chemical concrete machine (see the posts A chemical concrete machine for lambda calculus and Why build a chemical concrete machine, and how?) we have to prove that universal computation can be attained with only local moves in graphic lambda calculus. However, the lambda calculus sector of the graphic lambda calculus, which gives universality to graphic lambda calculus, uses the global FAN-OUT move (see Theorem 3.1 (d) of arXiv:1305.5786 [cs.LO]). Similarly, we see in Proposition 3.2 (d), which describes the way combinatory logic appears in graphic lambda calculus, that again global FAN-OUT is used.

I want to describe a way to eliminate the global FAN-OUT move from combinatory logic (as it appears in graphic lambda calculus via the algorithm described here).


There are reasons to dislike global moves in relation to B-type neural networks (see the last post Pair of synapses, one controlling the other (B-type NN part III)). Similar concerns can be found in the series of posts whose most recent installment is Dictionary from emergent algebra to graphic lambda calculus (III).

In this first post I shall introduce a local FAN-IN move and two distributivity moves, and I shall prove that they eliminate the need for using global FAN-OUT in combinatory logic. In the next post I shall prove that we can eliminate two other moves (so that the total number of moves of graphic lambda calculus stays the same as before) and, moreover, that we can recover from the distributivity and local FAN-IN moves the missing oriented Reidemeister moves from the \lambda-TANGLE sector.


Definition. The local FAN-IN move is described in the next figure and can be applied for any \varepsilon \neq 1.



  • as you see, a dilation gate appears in the move; what can this have to do with combinatory logic? As I explained previously, the properties of the gates come from the moves they are involved in, and not from their names. I could have introduced a new gate with two inputs and one output, called this new gate a "fan-in gate" and used it in the FAN-IN move. However, wait until the next post to see that there are other advantages, besides the economy of available gates, in using a dilation gate as a fan-in.
  • the FAN-IN move resembles the packing arrows trick which is used extensively in the neural networks posts. This suggests using as a fan-in gate


the green triangle gate, and as a fan-out gate the red triangle gate. This would eliminate the \Upsilon gate from the formalism, but it is not clear to me how this replacement would interfere with the rest of the moves.

  • the FAN-IN move resembles the dual of the graphic beta move, but it is not the same (recall that until now I have not accepted the dual of the graphic beta move into the list of moves of graphic lambda calculus, although there are strong reasons to do so):



which is needed in the emergent algebra sector in order to make the dictionary work (and is related as well to the goal of not using global FAN-OUT in that sector). This latter move is in fact a distributivity move (see further), but we are free to choose different moves in different sectors of the graphic lambda calculus.

  • I know it is surprising that until now there was nothing about evaluation strategies in graphic lambda calculus; the reason is that, because there are no variables, there is nothing to evaluate. However, the situation is not so simple: at some point, for example in the chemical concrete machine or for neural networks, some criterion for choosing the order of moves will be needed. But it is an important point to notice that replacing global FAN-OUT (which could be seen as a remnant of having variables and evaluating them) by local FAN-IN has nothing to do with evaluation strategies.


Definition: The distributivity moves (related to the application and lambda abstraction gates) are the following:




  • the first distributivity move is straightforward: an application gate is just doubled, and two fan-out gates establish the right connections. We see here why the "mystery move" can be seen as a kind of distributivity move,
  • the second distributivity move is where we need a fan-in gate (and where we use a dilation gate instead): because of the orientation of the arrows, after we double the lambda abstraction gates, we need to collect two arrows into one!


Combinatory logic terms appear in graphic lambda calculus as trees made of application gates, with the combinators S, K, I (seen as graphs in GRAPH) as leaves. I want to show the following. [UPDATE: made some corrections.]

Theorem. We can replace the global FAN-OUT move with a sequence of local FAN-IN, DIST, CO-COMM and local pruning moves, every time the global FAN-OUT move is applied to a term made of SKI combinators.

Proof. First, remark that a sequence of DIST moves for the application gate allows us to reduce the problem of replacing global FAN-OUT for any combinator to the problem of replacing it for S, K, and I. This is because the DIST move for the application gate allows the FAN-OUT of trees of application gates:


Now we have to prove that we can perform global FAN-OUT for the I, K, S combinators. For the combinator I the proof is the following:


For the combinator K we shall also use a local pruning move:


Finally, the proof for the combinator S is the following:


Now we are going to use 3 DIST moves, followed by the switch of arrows explained in Local FAN-IN eliminates GLOBAL FAN-OUT (II), which is applied in the dashed green ellipse from the next figure:


And we are done.

UPDATE: At close inspection, it turns out that we don't need to do switch (macro) moves. Instead, if we go back to the point we were at before the last figure, we may first use CO-ASSOC and then perform the three FAN-IN moves.
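The reduction used in the proof above can be sketched in ordinary code. This is only my illustration (the name `dist_fanout` and the tuple encoding of terms are mine, not part of the formalism): each local DIST step copies one application gate and pushes the duplication down to its two subtrees, so the leaf combinators S, K, I are the only cases left, and those are handled by the local moves proved case by case above.

```python
def dist_fanout(term):
    """Duplicate a combinator term using only local steps.

    A term is either a leaf combinator 'S', 'K', 'I' (for which the post
    proves a dedicated local duplication via FAN-IN, DIST, CO-COMM and
    pruning moves) or a pair ('app', left, right) standing for an
    application gate.
    """
    if term in ('S', 'K', 'I'):
        # base case: duplicated by the local moves shown in the figures
        return term, term
    _, left, right = term
    # one DIST move duplicates this single application gate, after which
    # the duplication proceeds locally and independently in each subtree
    l1, l2 = dist_fanout(left)
    r1, r2 = dist_fanout(right)
    return ('app', l1, r1), ('app', l2, r2)

# SKK, which behaves like the identity combinator I
skk = ('app', ('app', 'S', 'K'), 'K')
a, b = dist_fanout(skk)
print(a == b == skk)  # two equal copies, built one gate at a time
```

The point mirrored here is that the fan-out of an arbitrary combinator tree is achieved one gate at a time, with no global move ever inspecting the whole tree.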


I remembered that I tried once before to keep a blog, in 2005. It still exists, but the links are broken now. What a fool I was for not sticking with it! I reproduce further the one post, with the broken links retrieved from the Wayback Machine.

New analysis

The purpose of this blog is to report on the evolution of some very hot new ideas in mathematics and physics. The new emerging trend seems to be oriented toward a new analysis, in the same way as new geometries, other than the euclidean one, emerged two centuries ago.

This is a rather "technical" blog, intended for mathematicians and physicists. If you are looking for new ideas, I hope this will be a nice place to find some of them. If you are the "publish rubbish or perish" kind of person, this place is not of much use for you. But if you are the "think before you publish" guy, and if you have a rather large horizon of mathematical interests, then you are welcome here.

The languages which I shall use are (mostly broken) English and sometimes Romanian. I reserve the right to express here also some of my personal opinions, remotely related to the main subject of the blog.

Here it starts.

For the moment the most relevant page to look for is:


but use the mighty google to search for papers on the web, with the keywords: subriemannian , “sub-Riemannian” , “Carnot-Caratheodory”, “curvature metric space”

Check also the wonderful http://www.arxiv.org. My papers from there are available at


The weekend round table on UD

The comments from Is 3D Cube a predecessor of UD? were very useful as concerns the idea of making open source UD-like programs. There are already two projects, created by Bauke and Jim respectively; see the page UD projects here for the relevant links.

We cannot possibly know if either of these two proposals is like Bruce Dell's UD, but it does not matter much, because Bauke's and Jim's programs may well turn out to be as good as Dell's, or, why not, even better. (However, I still want to know how exactly the database of Saj's 3D Cube is done, because of the claims he made many years ago, which are almost identical to the ones made by Dell; see the link from the beginning of this post.)

Enough chit-chat; the reason for this post is that I suppose new discussions will follow, for example about Bauke's still pending detailed explanations of his program. Also, any day now Jim might amaze us.

So, please start commenting here instead, if you feel the need to ask or answer questions about the two projects. Or, hey, you are welcome to announce yours!


UPDATE: I add here Bauke's explanations from this comment:

My algorithm consists of three steps:
1. Preparing the quadtree. (This process is very similar to frustum culling, I’m basically culling all the leaf nodes of the quadtree that are not in the frustum.) This step can be omitted at the cost of higher CPU usage.
2. Computing the cubemap. (Or actually only the visible part, if you did step 1.) It uses the quadtree to do occlusion culling. The quadtree is basically an improved z-buffer, though since the octree is descended in front-to-back order, it is sufficient to only store 'rendered' or 'not rendered'. It furthermore allows constant-time depth checks for larger regions.

3. Rendering the cubemap. This is just the common cubemap rendering method. I do nothing new here.

My description only explains step 2, as this is the core of my algorithm. Step 1 is an optimization and step 3 is so common that I expect the reader to already know how this step works.

Step 2 does not use the display frustum at all. It does do the perspective, but does so by computing the nearest octree leaf (for the sake of simplicity I'm ignoring the LOD/mipmap/anti-aliasing here) in the intersection of a cone and the octree model. This is shown in the following 2D images:


[image source]


[image source]

I don't know what you mean by 'scaling of each pixel', but I believe that scaling of pixels only happens in step 3. In steps 1 and 2 I completely ignore that this happens.

I hope this answers your questions. If not, please tell which of the 3 steps you do not understand.
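For what it's worth, here is a drastically simplified sketch of how I read step 2 (a 1D screen instead of a cubemap, a binary interval tree playing the role of the quadtree; all names are mine, not Bauke's actual code): the model leaves arrive already in front-to-back order, and a hierarchical boolean mask records which parts of the screen are rendered, so whole regions can be rejected with a single check instead of a per-pixel depth test.

```python
class Mask:
    """Hierarchical 'rendered' mask over pixels [lo, hi): the 1D analogue
    of the quadtree used as an improved z-buffer."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.full = False   # True once the whole interval is rendered
        self.kids = None    # two half-interval children, created lazily

    def covered(self, lo, hi):
        """Is the whole interval [lo, hi) already rendered?"""
        if self.full:
            return True
        if self.kids is None:
            return False
        return all(k.covered(max(lo, k.lo), min(hi, k.hi))
                   for k in self.kids if k.lo < hi and lo < k.hi)

    def mark(self, lo, hi):
        """Record that [lo, hi) has been rendered."""
        lo, hi = max(lo, self.lo), min(hi, self.hi)
        if lo >= hi or self.full:
            return
        if (lo, hi) == (self.lo, self.hi) or self.hi - self.lo == 1:
            self.full, self.kids = True, None
            return
        if self.kids is None:
            mid = (self.lo + self.hi) // 2
            self.kids = [Mask(self.lo, mid), Mask(mid, self.hi)]
        for k in self.kids:
            k.mark(lo, hi)
        if all(k.full for k in self.kids):
            self.full, self.kids = True, None   # collapse: fully rendered

def render(nodes, mask, out):
    """nodes: model leaves sorted front to back, each projecting to a
    pixel interval [lo, hi) with a color."""
    for lo, hi, color in nodes:
        if mask.covered(lo, hi):
            continue   # occlusion culling: whole region skipped at once
        for p in range(lo, hi):
            if not mask.covered(p, p + 1):
                out[p] = color
        mask.mark(lo, hi)

mask = Mask(0, 8)
out = [None] * 8
# three model leaves, already sorted front to back
render([(0, 4, 'near'), (2, 6, 'mid'), (0, 8, 'far')], mask, out)
print(out)   # nearer leaves win; 'far' only fills what is left
```

Because traversal is front-to-back, one bit per region really is enough: nothing behind an already-rendered region can ever contribute, which is exactly why Bauke's quadtree needs only 'rendered'/'not rendered' rather than depth values.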




UPDATE: You may like techniques used here  [source]

Why build a chemical concrete machine, and how?

Asked about this, I spent a bit of time thinking and arrived at this very rough answer:

What I have in mind can be split in two.

There is a first part concerning graphic lambda calculus, seen as being about real molecules interacting in space. For this part I would like to construct graphs which are like a Turing machine (or other computing devices); then, the important step is to eliminate everything which is due to our human 1d writing conventions and thinking (see details in the rant from the first part of this post) and to simplify such "machines", in the same way as, for example, I saw is possible

A serious tool for doing this would be, for example, a program which allows one to visualize and perform moves (automatically and from the keyboard) on graphs in GRAPH.

The second part, which goes in parallel, would be to try to find in the real world (here DNA origami looks like a possible tool) ways to construct such machines chemically and physically; they are naturally adapted to the real world because they are geometrical (an object or a property is geometrical if it can be described in a way that is invariant under an arbitrary change of parametrization, in our case an arbitrary renaming). For this part I want to understand first how DNA origami works, to the level of being able to absorb information and understand some of the hurdles. This leads to applications, which are still vague in my head, but I was impressed by this video

as well as by research of Jean-Pierre Banatre and Daniel Le Metayer.

In conclusion, I can't imagine what a syringe with 10^9 nano geometrical graphs representing lambda terms [see UPDATE] can't do.


UPDATE: Corrections to this post are made in Chemical concrete machine not algorithmic self-assembly of DNA, nor Turing machine, where it is stressed that the "chemical concrete machine", even if it has the Turing universality property, is not intended to be a Turing machine (nor an automaton), as was wrongly suggested in this post.

Academic Spring and OA movement just a symptom, not cause of change

… a reaction to profound changes which  question the role of universities and scholars. It’s a symptom of an adaptation attempt.

The OA movement, which advances so slowly because of the resistance of scholars (voluntarily lulled by the propaganda machine of the alliance between the legacy publishing industry and the rulers of universities), is just an opening for asking more unsettling questions:

  • is the research article as we know it a viable vehicle of communication?
  • what is the difference between peer-reviewing articles and writing them?
  • should review be confined to scholars, or should informed netizens (for example those detecting plagiarism) have their place in the review system?
  • is an article a definite piece of research from the moment of publishing it (in whatever form, legacy or open), or is it forever an evolving project, due to contributions from a community of interested people, and if the latter is the case, then who is its author?
  • is it fair to publish an article inspired (in the moral sense, not the legal one) by information freely shared on the net, without acknowledging it, because it is not in the form of an article?
  • is the article the goal of the research, as the test is the goal of studying?

What is our place, as researchers? Are we like the scholars of the medieval universities, becoming increasingly irrelevant, less and less creative, with our modern version of rhetoric and theological studies, now called problem solving and grant-proposal writing?

If you look at the timing of the end of the medieval universities and the flourishing of the early modern ones, there are some patterns. We see that (wiki source on early modern universities):

At the end of the Middle Ages, about 400 years after the first university was founded, there were twenty-nine universities spread throughout Europe. In the 15th century, twenty-eight new ones were created, with another eighteen added between 1500 and 1625.[33] This pace continued until by the end of the 18th century there were approximately 143 universities in Europe and Eastern Europe, with the highest concentrations in the German Empire (34), Italian countries (26), France (25), and Spain (23) – this was close to a 500% increase over the number of universities toward the end of the Middle Ages.

Compare with the global spread of the printing press. Compare with the influence of the printing press on the Italian Renaissance (read about Demetrios Chalkokondyles).

The scientific revolution is

Traditionally held to have begun in 1543, when were first printed the books De humani corporis fabrica (On the Workings of the Human Body) by Andreas Vesalius, which gave a new confidence to the role of dissection, observation, and mechanistic view of anatomy,[59] and also De Revolutionibus, by Nicolaus Copernicus. [wiki quote]

Meanwhile, medieval universities faced more and more problems, like [source]

Internal strife within the universities themselves, such as student brawling and absentee professors, acted to destabilize these institutions as well. Universities were also reluctant to give up older curricula, and the continued reliance on the works of Aristotle defied contemporary advancements in science and the arts.[36] This era was also affected by the rise of the nation-state. As universities increasingly came under state control, or formed under the auspices of the state, the faculty governance model (begun by the University of Paris) became more and more prominent. Although the older student-controlled universities still existed, they slowly started to move toward this structural organization. Control of universities still tended to be independent, although university leadership was increasingly appointed by the state.[37]

To finish with a quote from the same wiki source:

The epistemological tensions between scientists and universities were also heightened by the economic realities of research during this time, as individual scientists, associations and universities were vying for limited resources. There was also competition from the formation of new colleges funded by private benefactors and designed to provide free education to the public, or established by local governments to provide a knowledge hungry populace with an alternative to traditional universities.[53] Even when universities supported new scientific endeavors, and the university provided foundational training and authority for the research and conclusions, they could not compete with the resources available through private benefactors.[54]

So, just a symptom.


UPDATE: Robin Osborne's article is a perfect illustration of the confusion which reigns in academia. The opinions of the author, like the following one [boldfaced by me]

When I propose to a research council or similar body that I will investigate a set of research questions in relation to a particular set of data, the research council decides whether those are good questions to apply to that dataset, and in the period during which I am funded by that research council, I investigate those questions, so that at the end of the research I can produce my answers.

show more than enough that today's university is the medieval university reloaded. How can anybody decide a priori which questions will turn out, a posteriori, to be good? Where is the independence of the researcher? How is it possible to think that a research council may have anything more than a mediocre glimpse into the eventual value of a line of research, based on bureaucratic past evidence? And for a reason: research is supposed to be an exploration, the creation of a new territory; it's not done yet at the moment of the grant application. (Well, that's something everybody knows, but nevertheless we pretend it does not matter; isn't that sick?) Instead, conformity reigns. Mike Taylor spends a post on this article, exposing its weaknesses as concerns OA.


UPDATE 2: Christopher Lee takes a view somewhat opposite to the one from this post, here:

In cultured cities, they formed clubs for the same purpose; at club meetings, particularly juicy letters might be read out in their entirety. Everything was informal (bureaucracy-to-science ratio around zero), individual (each person spoke only for themselves, and made up their own mind), and direct (when Pierre wrote to Johan, or Nikolai to Karl, no one yelled "Stop! It has not yet been blessed by a Journal!").

To use my nomenclature, it was a selected-papers network. And it worked brilliantly for hundreds of years, despite wars, plagues and severe network latency (ping times of 10^9 msec).

Even work we consider “modern” was conducted this way, almost to the twentieth century: for example, Darwin’s work on evolution by natural selection was “published” in 1858, by his friends arranging a reading of it at a meeting of the Linnean Society. From this point of view, it’s the current journal system that’s a historical anomaly, and a very recent one at that.


I am very curious about what Christopher Lee will tell us about solutions for escaping walled gardens, and I wholeheartedly support the Selected Papers Net.

But in defense of my opinion that the main problem resides in the fact that actual academia is the medieval university reloaded, this quote (taken out of context?) is an example of survivorship bias. I think that the historical anomaly is not the dissemination of knowledge by using the most efficient technology, but sticking to old ways when revolutionary new possibilities appear. (In the past it was the journal, and at that time scholars cried "Stop! it is published before being blessed by our authority!", exactly like the scholars of today who cry against OA. Of course, we know almost nothing today about those medieval scholars who formed the majority at that time, proving again that history has a way of punishing stupid choices.)

Pair of synapses, one controlling the other (B-type NN part III)

In Teaser (I) and Teaser (II) I discussed the possibility of constructing neural networks with controlled connections, resembling the B-type neural networks of Turing, in the following sense:

  • they are formed by "neurons", which in Turing's description are boolean logic gates, and here are graphs from graphic lambda calculus which correspond to lambda calculus terms. But there's no real restriction to that; graphic lambda calculus being more general and powerful than untyped lambda calculus, one may consider "neurons" which are outside the lambda calculus sector. The fact is that a "neuron" here should be interpreted as any graph in GRAPH with at least one input but only one output. Later, when we speak about the order of performing moves in such neural networks, it will be seen that each "neuron" is a bag containing graphs which are modified (or reduced, as people say) independently of one another. Neurons are packing conventions from a formal viewpoint, but this viewpoint may not be the best one; a better one would be to see neurons, as well as synapses (described further), as real constructs, as in the related project of the chemical concrete machine. In particular, neurons may do other things than computing in the lambda calculus sense (classical computation); they may do some more geometrical or physical or robot-like activities, not usually considered computations.
  • the connections between neurons are controlled, in a B-type NN, by TRUE-FALSE control wires or inputs. Turing explains that the controls themselves can be realized as constructions with boolean neurons (i.e. in his language B-type NNs are particular cases of his A-type NNs). In Teaser (II) it is explained how the connections between graphic lambda calculus neurons can be constructed by using a 2-zipper and a switch, such that a connection is controlled (by the same TRUE-FALSE mechanism, but this time TRUE and FALSE are literally the graphs of the corresponding terms in lambda calculus) by another connection.
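As a plain lambda calculus illustration (Python closures standing in for the graphs; this is an analogy of mine, not the graph formalism itself), the TRUE-FALSE control of a connection amounts to a Church boolean acting as a switch between two wires:

```python
# Church booleans: TRUE selects its first argument, FALSE its second.
TRUE = lambda x: lambda y: x
FALSE = lambda x: lambda y: y

def switch(control, wire_a, wire_b):
    """Route a connection: the controlling term decides which wire the
    output is actually connected to (hypothetical names, for analogy)."""
    return control(wire_a)(wire_b)

print(switch(TRUE, 'neuron 1', 'neuron 2'))   # prints: neuron 1
print(switch(FALSE, 'neuron 1', 'neuron 2'))  # prints: neuron 2
```

In the graph version the control is not a signal travelling along a wire; the boolean graph is applied to the switch by a graphic beta move, which is what the figures below realize.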

There is an important difference, besides the power of the calculi used (boolean vs. graphic lambda), namely that there is no signal transmitted through the wires of the network. That is because graphic lambda calculus does not need variable names. I shall come back to this very important point (which is a big advantage for implementing such networks in reality) in a future post, but let me mention two simple facts:

  • an analogy: think about hardware and software; a difference between them is that software may be associated with the pattern of signals which circulates in the hardware. The hardware is the machine inhabited by the software; they are not in the same plane of physical reality, right? Well, in this implementation there is literally no difference between the two, exactly because there are no signals (software) transmitted through the hardware.
  • this use of names, software, variables and signals is a pain for those trying to make sense of how to implement computing constructs in reality. But, if you think about it, the need to name this stuff "x" and that other stuff "y", to make alpha conversions in order to avoid name clashes, or even to describe computations as a linear string of symbols: none of this bears any relation to physical reality; they are only cultural constraints of humans (and their artificial constructs). It all has to do with the fact that humans communicate science through writing, and of course it has to do with the enumeration techniques of the cartesian method, which "is designed as a technique for understanding performed by one mind in isolation, severely handicapped by the bad capacity of communication with other isolated minds". Molecules in a cell or in a solution do not align in a string according to the experimenter's writing habits, from left to right, right to left, or vertically. They just don't wait for an external clock to set the rhythm of their ballet, nor for the experimenter to draw his familiar coordinate system. These are all limitations of the method, not of nature.


Further in this post I shall use "synapse" instead of "connection". Now that we know we can implement synapses by a TRUE-FALSE mechanism, let's see if we can do it more simply, and whether it is possible to make the "synapses control other synapses" concept more symmetric.

Instead of the little magenta triangles used in the Teaser I, II posts (which are not the same as the magenta triangles used in the chemical concrete machine post), I shall use a clearer notation:


The design of a synapse proposed in Teaser (II) involved two switches and a zipper, justified by the direct translation of the TRUE-FALSE lambda calculus terms in the freedom sector of graphic lambda calculus. Think of it now as in fact two synapses, one controlling the other. The zipper between them is only a way of binding them together unambiguously.

A simplified form of this idea of a pair of synapses is the following:


In the upper part of the figure you see the pair of synapses, taking a form like this: ><. There are two synapses there; one is >, which is controlled by the other, <. The connection between the blue 1 and the red 3′ is controlled by the other synapse. In order to see how, let's perform a graphic beta move and obtain the graph from the lower part of the figure.

The red 3 can connect, in the sense explained by the switch mechanism from Teaser (II), with blue 1′ or blue 2′, all three of them belonging to the controlling synapse. Say red 3 connects with blue 1′, and look what happens:


OK, so we obtain a connection between blue 1 and red 3′. Suppose now that red 3 connects with blue 2′, look:


Yes, blue 1 does not connect with red 3′; they just exchanged a pair of termination gates. That's how the pair of synapses works.

Finally, let me remark that the use of termination gates is a bit ugly; it breaks the symmetry and is not really needed.