Animals in lambda calculus

Here is, in more detail, how the graphic beta rule acts in the lambda calculus sector of the graphic lambda calculus.

Let us meet an animal:

[Figure: animal_1]

It is a graph in GRAPH, i.e. a trivalent or univalent, locally planar graph, with the decorated nodes corresponding to the operations of lambda abstraction, application and FAN-OUT (but, attention, not really a fan-out gate, unless the GLOBAL FAN-OUT move is used), and with the termination node (corresponding to the univalent nodes).

Moreover, it has a “mouth”, named in the figure “IN”, which is continued by a tree of FAN-OUT gates (this tree may be trivial, i.e. just an arrow). The green discs represent the insertion points of the FAN-OUT tree into the rest of the graph.

After the animal eats, it spills out through the OUT arrow. There are no other OUT arrows.

Animals have the following nice property:  an animal which eats a graph in the lambda calculus sector (i.e. an untyped lambda calculus term) spills out a graph in the lambda calculus sector (a term).

You may think about an animal as being a term A in untyped lambda calculus, with a marked variable x (the green discs now correspond to the places where the variable appears in the term). When the animal eats another term B, the variable x is replaced by B in A. In other words, an animal is an expression like A[x:= ].
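For readers who prefer code to pictures, here is a minimal sketch of this reading in Python (the term encoding and the function name are mine, purely for illustration): an animal is a term A with a marked variable x, and eating a term B is the substitution A[x:=B].

```python
# A minimal sketch of the "animal" reading: terms of untyped lambda
# calculus with a marked variable x, where "eating" a term B means
# substituting B for every marked occurrence of x.
# Hypothetical encoding, for illustration only:
#   ("var", name) | ("lam", name, body) | ("app", fun, arg)

def eat(animal, x, B):
    """Return animal[x := B]: naive substitution, assuming no free
    variable of B is captured by a lambda inside `animal`."""
    tag = animal[0]
    if tag == "var":
        return B if animal[1] == x else animal
    if tag == "lam":
        name, body = animal[1], animal[2]
        if name == x:       # x is rebound here: no marked occurrences below
            return animal
        return ("lam", name, eat(body, x, B))
    if tag == "app":
        return ("app", eat(animal[1], x, B), eat(animal[2], x, B))
    raise ValueError(tag)

# The animal A[x := _] with A = (x x): its FAN-OUT tree has two green discs.
A = ("app", ("var", "x"), ("var", "x"))
print(eat(A, "x", ("lam", "y", ("var", "y"))))
# -> ('app', ('lam', 'y', ('var', 'y')), ('lam', 'y', ('var', 'y')))
```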

Animals are not graphs in the lambda calculus sector.

Let us now apply the graphic beta move: we are going to cross the IN and OUT arrows. It goes like this: first identify the places where the graphic beta move will be applied. This is explained in the next figure:

[Figure: animal_2]

Then apply the graphic beta move and get:

[Figure: animal_3]

This is another animal, with a trivial FAN-OUT tree after the IN arrow.

So the graphic beta move, as seen acting on the lambda calculus sector, looks like a transformation of animals. Untyped lambda calculus is so complex because animals are complex objects, defined by GLOBAL rules, not because the beta move has any intrinsic complexity of its own (here I use the word “complex” in the vague, usual sense, not in any technical sense).

Integru.org review: “Ecaterina Andronescu (minister of research), 2003 – plagiarism and falsification of data”

Concerning one article of the Romanian Minister of Education Ecaterina Andronescu, according to the review of integru.org, “6 international experts in the field confirm that the work constitutes plagiarism and falsification of data”.

UPDATE: A new review concerning a series of 4 papers just appeared.

The Ministry of Education and other, equivalent, bodies are to be held responsible for the damage done to the image of many innocent Romanian researchers.

SEE ALSO: When credibility goes down the drain: the grotesque case of two politicians, Ecaterina Andronescu and Victor Ponta

___

Otherwise, this new case confirms my previous analysis (and advice):

A 3-parts system for scientific credibility.

in the sense that this new case is again about an OLD DISCRETE and power-hungry person (see the analysis for the definitions of “OLD” and “DISCRETE”).

Congratulations to integru.org.

UPDATE: Andronescu has the following argument against any accusation of plagiarism (source here and here, in Romanian). Here is an English translation:

“I have built all my professional career on hard work and honesty. In this career there are several steps that you go through, starting from the bottom: from the first position of junior assistant, to lecturer, then associate professor and PhD supervisor. For each stage one submits a dossier and a committee votes, passing through the Professional Council, the academic Senate, and then CNATDCU [i.e. the National Council for the Attestation of University Titles, Diplomas and Certificates], which gives the verdict.”

She has a point, indeed. Maybe the Sherlock Holmes dictum applies here:

“When you have eliminated the impossible, whatever remains, however improbable, must be the truth”

Neural darwinism, large scale homunculus

Neural darwinism, wiki entry:

“Neural Darwinism, a large scale theory of brain function by Gerald Edelman, was initially published in 1978, in a book called The Mindful Brain (MIT Press). It was extended and published in the 1989 book Neural Darwinism – The Theory of Neuronal Group Selection.”

“The last part of the theory attempts to explain how we experience spatiotemporal consistency in our interaction with environmental stimuli. Edelman called it “reentry” and proposes a model of reentrant signaling whereby a disjunctive, multimodal sampling of the same stimulus event correlated in time leads to self-organizing intelligence. Put another way, multiple neuronal groups can be used to sample a given stimulus set in parallel and communicate between these disjunctive groups with incurred latency.”

Here is my (probably very sketchy) understanding of this mysterious “reentry”.

Say X is the collection of neurons of the brain, a discrete set with large cardinality N. At any moment the “state” of the brain is partially described by an N \times N matrix of weights: the number w_{ij} is the weight of the connection of the neuron i with the neuron j (a non-negative real number).

We may imagine such a state of the brain as being the trivial groupoid X \times X with a weight function defined on arrows with values in [0,+\infty).

Instead of neurons and weights of connections we may easily imagine more complex situations (for example, take the trivial groupoid generated by connections: an arrow between two connections is the neuron incident to both connections, and so on; moreover, the weights could be enriched, …), so let’s just say that a state of the brain is a weighted groupoid.

With a dynamics of weights.

Define a “neuronal group” as being a sub-groupoid with a weight function.  Take now two neuronal groups (A,w_{A}) and (B,w_{B}).  How similar are they, in the brain?

For this we need a cost function which applies to any “weighted relation” (in the brain, i.e. in the big weighted groupoid) from A to B and spills out a non-negative number. The similarity between two neuronal groups (with respect to a particular state of the brain) is the minimum of these numbers over all the possible connectomes (weighted relations) between A and B.
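To make this at least half-concrete, here is a toy Python sketch of such a similarity (my own naive formalization, not Edelman’s: it replaces general weighted relations by mere bijections, uses an ad-hoc cost, and brute-forces the minimum):

```python
# Toy sketch: neurons are indices into an N x N weight matrix w; a
# neuronal group is a list of neurons; a "weighted relation" from A to B
# is simplified here to a bijection A -> B. The cost of a bijection
# compares the internal weights of A with the corresponding internal
# weights of B; the similarity is the minimal cost over all bijections.
from itertools import permutations

def cost(w, A, B, sigma):
    # sigma maps positions in A to positions in B
    return sum(abs(w[A[i]][A[j]] - w[B[sigma[i]]][B[sigma[j]]])
               for i in range(len(A)) for j in range(len(A)))

def similarity(w, A, B):
    assert len(A) == len(B)
    # brute force: fine for tiny groups, factorial in the group size
    return min(cost(w, A, B, sigma)
               for sigma in permutations(range(len(B))))

w = [[0.0, 0.9, 0.1],
     [0.9, 0.0, 0.2],
     [0.1, 0.3, 0.0]]
print(similarity(w, [0, 1], [1, 2]))  # minimum over the 2 bijections
```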

My feeble understanding of this reentry is that, as time passes, the state of the brain evolves in a way which increases the similarity (in fact, decreases the cost) of neuronal groups “encoding” aspects of the same “stimulus” which are correlated in time.

We may then imagine a “large scale homunculus” as being a similar, but strict, neural sub-group of the whole brain. The reentry weighted relation will then have the structure of an emergent algebra.

Indeed, there is a striking similarity between this formalization (probably very naive w.r.t. the complexity of the problem, and also totally ignoring dynamical aspects) and the characterization of emergent algebras as being related to the problem of exploring space and matching collections of maps, as described in “Maps of metric spaces”, see also these slides.

Just replace distances with matrices of weights or, equivalently, reinterpret the previous image in those terms.

I shall come back to this with all details later.

Gold Open Access: enter the mouth!

I heard this joke about two of the last dinosaurs, discussing some time after the meteorite crash:

– Carl, I dunno, there are these sneaky rats everywhere, they seem to enjoy this post-apocalyptic world of ours. They are good food, but they are so small and they move so fast, totally un-dinosaur-like. But I’m hungry, Carl, I’d love to chew some, now that the big prey seem to be gone. What to do?

Carl says:

– Let’s convince them that it’s totally cool to enter our mouths of their own will. That way, we could stay still, with our mouths wide open, and they are going to come to us and we are going to wink at them, telling them “that’s right, look, you’re totally dinosaur material, sonny, come on, enter the mouth, that’s the goal, yes, it has always been like this and it will be like this till the end of times”.

And the other dinosaur says:

– That’s a golden idea, Carl, golden!

___________________

The dinosaur named Carl appeared for the first time in “The Purposeful Life” by Abstruse Goose.

The three Moirai, continued

UPDATE: The problem of connecting two gates, as explained in this post, is equivalent to the oriented Reidemeister move R2c, itself equivalent to R3a, for the untyped lambda calculus crossing macro. Therefore, in graphic lambda calculus, we cannot solve the problem of gate connections without at least the dual of the graphic beta move.

_________

In the post “Ancient Turing machines (I): the three Moirai” I explained how Clotho, Atropos and Lachesis may build together a Lisp-like Turing machine, in terms of the graphic lambda calculus.

Clotho creates new threads by inserting FAN-OUT gates (the move CREA), Atropos cuts the thread (the move GARB) and Lachesis performs graphic beta reduction. Either they have an infinite reservoir of loops, or Lachesis may also use the Reidemeister move R1a. (We discussed oriented Reidemeister moves in the post “Generating set of Reidemeister moves for graphic lambda crossings”; the names of the moves are those from the paper by Michael Polyak, “Minimal generating sets of Reidemeister moves”, except that I use the letter “R” (from “Reidemeister”) instead of Polyak’s “\Omega”.)

There is something missing, though, namely how to connect gates once they are created. I shall explain this below. After that I shall finish with a reminder of the real goal of these posts, essentially mentioned in my comment on the last post.

Recall that the three Moirai know how to create the lambda abstraction gate, the application gate, the FAN-OUT gate and the termination gate; the question now is how they connect two gates, once they have them. A solution is given in the next figure.

So, the problem is this: we have two threads, marked 1-2 and 3-4, and we want to obtain a thread from 1 to 4. For this we add a loop and Lachesis performs a graphic beta move (alternatively, without adding a loop, Lachesis does an R1a move). Lachesis continues by doing a second graphic beta move, as indicated in the figure. Finally, she performs a number of beta moves equivalent to the oriented Reidemeister move R2c (see the mentioned Polyak paper for notation). I have not counted how many moves are needed for R2c, but the number can be inferred from the generation of the move R2c from the moves R1a, R1b, R2a, R3a.

Now the construction is finished, let us leave the Moirai to do their job.

Finally, I shall recall my real goal, which I have never forgotten. The real goal is to pass from understanding the power of the lambda calculus sector of graphic lambda calculus to the real deal, called “computing with space”: to understand space from a computational perspective, not as a given receptacle, but as a small list of procedures along with some impossible-to-verify assertions (like the assertion that we may rescale space indefinitely), see “emergent algebras”, which can always be eliminated a posteriori, by a kind of finitization procedure.

Ancient Turing machines (I): the three Moirai

This is a first post about interpreting the Turing machine in ancient terms (I have at least another interpretation in mind, which I shall explain later).

It’s your choice to interpret it as tongue-in-cheek or verbatim. Here are the facts.

_______

Go to the tutorial “Introduction to graphic lambda calculus” if you want to understand the graphic conventions and the moves.

_______

 

1. The three Moirai, quoted from their wiki page:

In Greek mythology, the Moirai (Ancient Greek: Μοῖραι, “apportioners”, Latinized as Moerae)—often known in English as the Fates—were the white-robed incarnations of destiny (Roman equivalent: Parcae, euphemistically the “sparing ones”, or Fata; also equivalent to the Germanic Norns). Their number became fixed at three: Clotho (spinner), Lachesis (allotter) and Atropos (unturnable). […]

  • Clotho (Greek Κλωθώ [klɔːˈtʰɔː] – “spinner”) spun the thread of life from her distaff onto her spindle. Her Roman equivalent was Nona (the ‘Ninth’), who was originally a goddess called upon in the ninth month of pregnancy.
  • Lachesis (Greek Λάχεσις [ˈlakʰesis] – “allotter” or drawer of lots) measured the thread of life allotted to each person with her measuring rod. Her Roman equivalent was Decima (the ‘Tenth’).
  • Atropos (Greek Ἄτροπος [ˈatropos] – “inexorable” or “inevitable”, literally “unturning”, sometimes called Aisa) was the cutter of the thread of life. She chose the manner of each person’s death; and when their time was come, she cut their life-thread with “her abhorred shears”. Her Roman equivalent was Morta (‘Death’).

2. Let’s interpret their activity as something equivalent to a Turing machine. I shall use untyped lambda calculus, which has the same computational power as Turing machines. Better, I choose to work with graphic lambda calculus (tag archive, first paper), which has a sector equivalent to untyped lambda calculus.

The challenge is to generate all graphs in GRAPH by using the three Moirai, specifically by formalizing their activity in terms of graphic lambda calculus.

The following figure contains this; let’s contemplate it and then pass to the explanations.

CLOTHO is creating the thread, namely performing the new move called “CREA” (from “creation”):

Basically she introduces a FAN-OUT gate into the thread. In order to make this gate function as a FAN-OUT, she also needs from graphic lambda calculus the moves CO-COMM (which allows her to permute the outputs) and CO-ASSOC (which allows her not to care about the order of application of a cascade of FAN-OUT gates).

ATROPOS cuts the thread, namely she is performing a move which I shall call “GARB” (from “garbage”), which is a new move introduced in graphic lambda calculus:

She picks, from the moves of graphic lambda calculus, LOCAL PRUNING and ELIMINATION OF LOOPS, which are kind of her style.

LACHESIS does only one move, the graphic beta move, described here (and see the paper) as a braiding move, when seen in the knot diagrams macro. (She might actually also be able to do the oriented Reidemeister 1a move, see further.)

This is a graphic form of \beta reduction, so you may say that LACHESIS  is performing something akin to \beta reduction.

3. How does it work? The Moirai have a thread to start from. Their first goal is to produce the gates. They can easily obtain two gates, one appearing after GARB, the other appearing after CREA. They still need the application gate (corresponding to the application operation in lambda calculus) and the lambda abstraction gate.

They also need to have enough threads to play with. Here are two ways of getting them. The first one is using only GARB and CREA moves. The dashed green curves represent the input and the output of their activities. The dashed red curves indicate where the moves are applied.

Another way of producing two threads from one, more specifically producing a new thread while also keeping the old one, also uses LOCAL PRUNING:

If the Moirai have only one thread and no loop, then we have to add to LACHESIS’s competences the three Reidemeister moves, or at least the Reidemeister 1a move:

Then LACHESIS may use her graphic beta move in order to get a thread and a loop. ATROPOS has to refrain from using her ELIMINATION OF LOOPS until later!

Now the three Moirai are ready to produce the application and lambda abstraction gates. CLOTHO and LACHESIS start with two threads (which they already have), in order to get to an intermediate step.

From here, with some help from ATROPOS, they  get a lambda gate and an application gate.

From here the Moirai have to be very clever and patient in order to construct the graphs which correspond to the lambda calculus terms needed for something equivalent to a Turing machine. They have to be clever because they want to construct graphs in GRAPH from the lambda calculus sector, and for this they have to cleverly use loops in order to satisfy, at the end, the global conditions which graphs from the lambda calculus sector satisfy (that is, basically, the condition that whatever exits from the right-hand-side exit of a lambda gate has either to end in garbage, or to continue until it enters through the input of the said lambda gate).

Their work could be made easier if they learn a bit of LISP and they follow the indications of  this paper.

That’s it.

We are left with three very vague questions:

1. Could it be that the Moirai take some shortcuts through the maze of constructing a Turing machine and instead thread our fates in an equivalent (or more general?) way, but using less sophisticated building blocks?

2. As they spin the destiny of the Universe, do they do it in a computable fashion?

3. Could the Moirai build Moirai? (I find this hard to believe, by looking at the GLOBAL CONDITIONS they have to achieve by pure wisdom.)

Scattered thoughts about creativity

For a very nice and truthful depiction of the process of (scientific?) discoveries I recommend the article “The colossal pile of jibberish behind discovery, and it’s implications for science funding” by Mark Changizi [title copied as appeared in Discover Magazine].

I shall give some excerpts and then I shall add some comments of my own.

“Far be it from me to debunk the mythical, magician-like qualities sometimes attributed to us scientists, but the dirtiest little secret in science is that our science minds are just as dirty and unbeautiful as everyone else’s… and this has important implications, both for aspiring students and for how science is funded. […]

…how we scientists find our ideas is ugly, and frankly embarrassing to show folks. That’s why we don’t put this part of the process into our journal articles or books.

But the fact that we never show the dirty mental work underlying our discoveries has bothered me, for at least two reasons.

First, students of science would be best prepared for making their own discoveries if they could see more examples of what their older mentors actually did to make theirs. If students mistakenly conclude that their mentors are magical shaman geniuses who can inscrutably channel discoveries at will, they’ll mistakenly conclude that discovery is out of their league. […] And with the discovery process not only demystified but also laid before them, students will be able to absorb brainstorming and idea-finding techniques that others have found successful. […]

Second, if this crucial step in the discovery process is not well appreciated, then the funding mechanisms for science won’t work well. […] “Dear National Science Foundation: I plan on scrawling hundreds of pages of notes, mostly hitting dead ends, until, in year 4, I hit pay dirt.” Yet that’s exactly what’s needed for the big breakthroughs that big scientific advances require […]

… it occurred to me recently that there is a simple way to begin illustrating just how much junk lies in the science trunk. Of the thirty to forty research notebooks I’ve filled with tiny handwriting over the last dozen years, I can show a sample. So, I photographed about a year-and-a-half’s worth of notes. […]

Now, the photographs don’t possess enough resolution to read much from them. My intent here is to indicate just how much of the sort of notes and brainstorming goes on.

And it also shields me from the humiliation of you reading hundreds of pages of my disorganized senselessness.

…That same disorganized senselessness that is vital to the creative process.”

Interesting!

Everybody has loads of notebooks, right? Or maybe not; instead, lots of files with half-written ideas, always in the process of being rewritten and finally abandoned after cannibalizing some useful bits?

I remember that the most worthy part of my first notebook (when I was 10) was about how magnets work. I was fascinated by them (as well as by mirrors). I was trying to understand how a magnet attracts or repels another without touching it. A magnet is basically just a piece of rock; it sits there doing nothing until another magnet (or a piece of iron) comes close and then, by magic, it moves. Not only does a magnet come into motion only when another magnet is close, but its capacity to move is never consumed, not even after ages pass by (I arrived at this conclusion after repeating the experiment for two weeks). If I drop a rock on the ground I have to lift it up again in order to make it move (by letting it fall). But a magnet never loses its energy.

And then, the night after a biology class, I had an epiphany. You see, think about the magnet as being a heart. When interacting with another magnet, by a systole it sends its energy to the other magnet, then it relaxes, having a diastole, and the energy comes back, closing a cycle. It has to be this!

Concerning the “embarrassment factor”, I totally agree that it is better to show the work behind a discovery than to pretend that it never happened. For three reasons:

– it might give new ideas to others, even about unrelated subjects,

– it stimulates the dialogue, which sometimes can be very helpful for the process of discovery,

– it improves the style of presentation; it gives courage to express one’s personal viewpoint in a more sincere way, thus creating variety where there is very little. Without this variety, how could one discern between two scientists: one who systematically explores a tiny patch and writes a hundred papers on the same inequality in functional analysis (say, a random subject!), exhausting the subject like a vacuum cleaner, and another who opens new directions of exploration, without going into the last detail, but giving enough clues so that the idea has a future? Both types of researchers are needed, but the first type is overrepresented, because of the modern risk-aversion policies in science.

Due to the ease of sharing, which is a feature of present-day communication through the net, this is, moreover, feasible.

I close this with an example of my own: I just discovered a draft paper from 2006, called “Deconstructing analysis I.” (there never was any II, or was there?), which contains information comparable to the one from Changizi’s example, only that it is readable. I post it here just for fun; here it is, with its glorious title:

“Deconstructing analysis I. Dilatation structures for  sub-Riemannian and fractal  spaces” (2006)

Carl Einstein on Picasso and the visual brain

The article “Carl Einstein, Daniel-Henry Kahnweiler, Cubism, and the Visual Brain” made me realize that cubism, invented by Picasso and Gris, was probably a logical step further along the path towards the investigation of vision opened by the impressionists.

Indeed, far from being a game of abstraction, multiple viewpoints and other rubbish, it appears that cubism, at least in its first stages, represents the effort of understanding the first stages of vision, as happening in the (artist’s) brain. I find this story amazing, showing how far a brilliant mind (that of Picasso) could go ahead of its time.

Here is a reproduction of the painting “Guitarist”, by Picasso, 1910, taken from the cited article:

It would be interesting to compare the statistics of edges, corners and blobs (actually blobs are higher level features) in cubist paintings from this period with the statistics of the same features in databases of natural images. My bet is that they are very close.

Digital Materialization, Euclideon and Fractal Image Compression

Are these:

Digital Materialization (DM)

Euclideon (Unlimited Detail)

Fractal Image Compression

related? Could this rather simple (from a pure math viewpoint) research program be doable and, moreover, be DONE by anybody?

Let me explain. My wonder came after searching with Google for

– “Digital materialization” AND “euclideon”   – no answers

– “Digital materialization” AND “Fractal image compression” – no answers

– “Fractal image compression” AND “euclideon” – no answers

1. Cite from the Digital Materialization Group homepage:

Digital Materialization (DM) can loosely be defined as two-way direct communication or conversion between matter and information that enable people to exactly describe, monitor, manipulate and create any arbitrary real object. DM is a general paradigm alongside a specified framework that is suitable for computer processing and includes: holistic, coherent, volumetric modeling systems; symbolic languages that are able to handle infinite degrees of freedom and detail in a compact format; and the direct interaction and/or fabrication of any object at any spatial resolution without the need for “lossy” or intermediate formats.

DM systems possess the following attributes:

  • realistic – correct spatial mapping of matter to information
  • exact – exact language and/or methods for input from and output to matter
  • infinite – ability to operate at any scale and define infinite detail
  • symbolic – accessible to individuals for design, creation and modification

As far as I understand, this works based on Function Representation (FREP), see HyperFun.org. The idea is to define an object in \mathbb{R}^{3}, say, by a function F: \mathbb{R}^{3} \rightarrow \mathbb{R}, which describes the object as the set of points x where F(x) \leq 0. Start with a small library of functions (for example polynomials) and then construct other functions by using min, max, and so on. Therefore an object is described by a tree, with leaves decorated by functions and nodes decorated by min, max, …, operations. This is a very simplistic, but fundamentally precise description. The main point is that there is no object (defined for example by a mesh in space), but instead we may check if a point x in space belongs or not to the object by evaluating the sign of F(x). Moreover, translations, rotations, dilations (or any other easy-to-implement change of parametrization of the space) are implemented by composing the function F which describes the object with the corresponding change of coordinates.
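Here is a minimal FREP sketch in Python (the particular primitives and names are my own examples, not HyperFun code): primitives are functions, set operations act on values by min/max, and rigid motions act by precomposition.

```python
# Minimal FREP sketch: an object is a function F with F(p) <= 0 inside.
# Union/intersection are min/max on values; a translation acts by
# precomposing F with the inverse translation (a change of coordinates).
import math

def sphere(cx, cy, cz, r):
    return lambda p: math.sqrt((p[0]-cx)**2 + (p[1]-cy)**2 + (p[2]-cz)**2) - r

def union(F, G):          # p is inside the union iff inside F or inside G
    return lambda p: min(F(p), G(p))

def intersection(F, G):
    return lambda p: max(F(p), G(p))

def translate(F, tx, ty, tz):
    return lambda p: F((p[0]-tx, p[1]-ty, p[2]-tz))

body = union(sphere(0, 0, 0, 1.0), translate(sphere(0, 0, 0, 0.6), 1.2, 0, 0))
print(body((0.0, 0.0, 0.0)) <= 0)   # True: inside the first sphere
print(body((1.2, 0.0, 0.0)) <= 0)   # True: inside the translated sphere
print(body((3.0, 0.0, 0.0)) <= 0)   # False: outside both
```

Note that the “object” never exists as a mesh: membership is decided, point by point, by the sign of the evaluated function.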

In particular, we may easily pass to polar coordinates based on the point of view and stay in this  coordinate system for visualization.

2. Fractal image compression is based on the fact that any compact set (like an image, described as a compact set in the 5D space = 2D (spatial) times 3D (RGB levels)) can be approximated arbitrarily well by the fixed point of an iterated function system (Hutchinson’s theorem gives the fixed point, the collage theorem gives the approximation).
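As a reminder of how such fixed points look in practice, here is the standard chaos game sketch for the Sierpinski triangle (a generic illustration of an IFS attractor, not a compression algorithm):

```python
# Chaos game sketch: the attractor (fixed point) of the IFS made of the
# three contractions x -> (x + v_i)/2 is the Sierpinski triangle.
import random

vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
x, y = 0.3, 0.3                       # any starting point works
points = []
for _ in range(10000):
    vx, vy = random.choice(vertices)  # pick one contraction at random
    x, y = (x + vx) / 2, (y + vy) / 2
    points.append((x, y))
# `points` now samples the attractor; plot it to see the triangle.
print(points[-3:])
```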

This brings me to

PROBLEM 1: Is there any version of the Hutchinson theorem about fixed points of IFS, with the space of compact sets replaced by a space of FREP functions? Correspondingly, is there a “FREP  compression algorithm”?

My guess is that the answer is YES. Let’s assume it is so.

3. In the article “Digital Foundry vs. Unlimited Detail” the following 2008 post by Bruce Dell is reported:

Hi every one , I’m Bruce Dell (though I’m not entirely sure how I prove that on a forum)

Any way: firstly the system isn’t ray tracing at all or anything like ray tracing. Ray tracing uses up lots of nasty multiplication and divide operators and so isn’t very fast or friendly.
Unlimited Detail is a sorting algorithm that retrieves only the 3d atoms (I wont say voxels any more it seems that word doesn’t have the prestige in the games industry that it enjoys in medicine and the sciences) that are needed, exactly one for each pixel on the screen, it displays them using a very different procedure from individual 3d to 2d conversion, instead we use a mass 3d to 2d conversion that shares the common elements of the 2d positions of all the dots combined. And so we get lots of geometry and lots of speed, speed isn’t fantastic yet compared to hardware, but its very good for a software application that’s not written for dual core. We get about 24-30 fps 1024*768 for that demo of the pyramids of monsters. The media is hyping up the death of polygons but really that’s just not practical, this will probably be released as “backgrounds only” for the next few years, until we have made a lot more tools to work with.

Assuming that we take a database representing a very complex 3D object (like a piece of landscape) and convert it, by using a FREP compression algorithm, into a tree as described at point 1, it becomes easy to imagine how the Unlimited Detail algorithm might work.

Problem 2: Given a FREP representation of a collection of 3D objects, describe an efficient sorting algorithm which uses the representation and outputs the part of the union of objects visible from a given point at infinity.
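I have no idea how Euclideon’s actual algorithm works, but as a hint of how a function representation supports per-pixel visibility queries, here is a sphere tracing sketch; it assumes the stronger property that F(p) is a lower bound for the distance from p to the object (true for signed distance functions, not for every FREP function):

```python
# Sphere tracing sketch: march a ray until F reports we reached the surface.
# Caveat: assumes F(p) bounds the distance from p to the object from below.
import math

def trace(F, origin, direction, max_t=100.0, eps=1e-4):
    t = 0.0
    while t < max_t:
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = F(p)
        if d < eps:              # hit: p is (almost) on the surface
            return t
        t += max(d, eps)         # safe step: nothing is closer than d
    return None                  # no hit: background pixel

# One trace per pixel yields exactly one visible "3d atom" per pixel.
unit_sphere = lambda p: math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - 1.0
print(trace(unit_sphere, (0.0, 0.0, -3.0), (0.0, 0.0, 1.0)))  # ~2.0
```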

Conclusion: unless this is utter gibberish, the modus operandi of an “unlimited detail” algorithm could be the following:

1)- start with a database of a collection of 3d objects and compress it into a FREP format

2)- perform a “mass 3d to 2d conversion”, by using a solution of problem 2, in polar coordinates from the viewpoint.

Combinators and stickers

From the wiki page of Combinatory logic:

Combinatory logic is a notation to eliminate the need for variables in mathematical logic. It was introduced by Moses Schönfinkel and Haskell Curry and has more recently been used in computer science as a theoretical model of computation and also as a basis for the design of functional programming languages. It is based on combinators.

For the treatment of combinators in graphic lambda calculus see section 3.1 of Local and global moves on locally planar trivalent graphs, lambda calculus and lambda-Scale, arxiv:1207.0332.

In this post I want to discuss the conversion of the lambda calculus sector of the graphic lambda calculus formalism into the knot diagrams sector plus stickers.

There are two stickers; they are graphs in GRAPH with the following form:

I call them “stickers” because they behave like devices which concentrate the attention on the region encircled by the closed loop. Think about a sticker with nothing written on it. This is a thing which you can stick in some place and you can write something on it. Likewise, take any graph in GRAPH with a marked OUT leaf (for an example, take any combinator in graphic lambda calculus form, which always has a unique OUT leaf) and connect it with one of the stickers (i.e. “write it on the sticker”). The operation analogous to sticking the written sticker on something is to cross the sticker connected with a combinator with another combinator, where “crossing” refers to the knot diagrams sector, with its graphic lambda crossings.

Look how the stickers behave:

They may be converted one into another (by way of graphic beta moves). Moreover, they transform the application gate into a crossing!

Let us see the behaviour of some combinators connected with stickers. Take the combinator I = \lambda x.x. In the next figure, we see it in the upper left corner, in the graphic lambda formalism. (The figure has two layers. One may read only what is written in black first, and then consider what happens when the red drawings are added.)

In black, we see that the graph of I connected with a sticker transforms by a graphic beta move into a loop. If we add the red part, we see that the graph transforms into a loop with a crossing with a graph A. If we perform a second graphic beta move (in red) then we get A. This corresponds to the combinatory logic relation IA \rightarrow A.

Likewise, let’s consider the combinator K = \lambda xy.x, with its associated graph described in the upper left corner of the next figure.

With the added red part, it corresponds to the relation KA \rightarrow \lambda y. A with y a fresh variable for A.
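These reductions can be checked directly in any language with first-class functions; here is a throwaway Python sketch (not graphic lambda calculus itself, just its term-level shadow):

```python
# The combinators I and K as plain functions; applying them performs
# the reductions I A -> A and K A -> (lambda y. A) from the post.
I = lambda x: x
K = lambda x: (lambda y: x)

A = "some term"
print(I(A))         # -> 'some term'   (I A -> A)
partial = K(A)      # K A -> a constant function, i.e. lambda y. A
print(partial(42))  # -> 'some term'   ((K A) B -> A)
```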

I skip the combinator S for a future discussion, but from this post it looks almost clear that by using stickers we may convert the combinatory logic sector of the graphic lambda calculus into the knot diagrams sector enhanced with stickers and maybe with zippers (but we shall see that we may eliminate one or the other, in some sense).

The zipper macro and zipper moves

I continue from the post Generating set of Reidemeister moves for graphic lambda crossings, where the “crossing macros” over graphic lambda calculus were discussed.

Another interesting macro  (over graphic lambda calculus) is the zipper, together with its associated zipper moves.

Let’s take a natural number n \geq 2 and consider the following graph in GRAPH, called the n-zipper:

At the left is the n-zipper graph; at the right is a NOTATION for it, or a macro.  We could as well take n =2, with obvious modifications of the figure, so the 2-zipper exists. Even n=1 makes sense, but the 1-zipper is kind of degenerate, see later.

There is a graphic beta move which we can perform on the n-zipper graph. In the following picture I marked in red the place where the graphic beta move is applied.

In terms of zipper notation this graphic beta move has the following appearance:

We see that an n-zipper transforms into an (n-1)-zipper plus an arrow. We may repeat this move as long as we can. What is the result? An n-zipper move:

The 1-zipper move, called ZIP_{1}, is just the graphic beta move, which transforms the 1-zipper into two arrows.
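In other words, ZIP_{n} costs exactly n graphic beta moves. A trivial bookkeeping sketch in Python (the arrow count is my own reading of the figures, with the 1-zipper producing two arrows as stated above):

```python
# Sketch of the zipper moves: track an n-zipper as a pair (n, arrows).
def zip_move(state):
    """One graphic beta move on an n-zipper (n >= 1)."""
    n, arrows = state
    if n > 1:
        return (n - 1, arrows + 1)   # n-zipper -> (n-1)-zipper + an arrow
    return (0, arrows + 2)           # ZIP_1: the 1-zipper -> two arrows

state, moves = (3, 0), 0
while state[0] > 0:
    state, moves = zip_move(state), moves + 1
print(moves, state)  # 3 moves, (0, 4): unzipping ZIP_3 takes 3 beta moves
```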

Nice, now what can we do with zippers and crossings?

A 3-parts system for scientific credibility

The article “Plagiarism exposed in Romanian grant applications” by Alison Abbott appeared in Nature. This comes after the previous articles “Romanian prime minister accused of plagiarism” by Quirin Schiermeier and “Romanian scientists fight plagiarism” by Alison Abbott.

UPDATE: See the review of integru.org concerning the Romanian Minister of Education Ecaterina Andronescu (and my short comment here).

As a researcher I am concerned about this subject, see the post “On plagiarism scandals in Romania, an easy test“.

Let me explain my views about this subject and, after that, propose a solution for the real problem which lurks beneath: that of scientific credibility.

Here are several aspects about the problem of credibility of Romanian research.

1. In an ideal world, there should be no such thing as Romanian research, there are only individuals, freely communicating across the world via the internet. However, in the real world things are far more complex.

2. Romanian researchers come in several blends. You have mathematicians and hard scientists alike, who communicate their research in a “universal language” (and mostly in some dialect of broken English); let’s call them the “LOUD researchers”.

You also have researchers in the humanities, who may feel cornered by the specificity of the language and by regional subjects. However, researchers in branches of engineering, for example, or in the medical sciences, who are far more “discrete” about spreading their research results freely on the net, also fall into the second category of “DISCRETE researchers”.

3. When it comes to plagiarism and other bad practices, the bad news for the bad guys is that it is now rather easy to detect bad behaviours, simply by using search engines and by communicating with other fellow researchers. Because the net is a new thing (historically speaking), we have to take into consideration the age of the researchers (or a kind of “virtual age”), splitting the previous 2 classes into pairs: “YOUNG” for those who use the net to cite, to search, to communicate their research; “OLD” for those who think that paper publication is everything and a lawyer is a good replacement for morals, even if they don’t mind searching a bit on the net to fish for new ideas.

So we have:

A – YOUNG LOUD researcher

B – YOUNG DISCRETE researcher

C – OLD LOUD researcher

D – OLD DISCRETE researcher

4. Age is important because in Romania, a more traditionalist society, the managers and the bosses tend to be old. So power is biased towards the old guys.

5. Also by tradition, the humanities and the “practical” fields (i.e. medical, engineering, …) have a much stronger cultural influence, which means that in Romanian culture, power is biased towards the DISCRETE.

In conclusion, the “discrete”, i.e. the B and D categories, are more prone, very vaguely speaking (and not pointing to individuals), to contain abusers, but they are also more likely to have more power in the Romanian research system.

This is supported by the fact that the plagiarism scandals, up to now at least, were so obvious and so easy to expose. This shows, in my opinion, that it was always about OLD DISCRETE guys (according to the definition of “old” I consider here) who have power.

There is now the beginning of a power struggle between the DISCRETE and the LOUD factions. Indeed, the YOUNG DISCRETE guys are scared of being exposed as not as worthy as their more international YOUNG LOUD fellows, therefore this is not a fight between the YOUNG and OLD categories.

Moreover, the situation is aggravated by the fact that, in the last 15 years, say, a lot of people got positions and diplomas which do not reflect their level. The fault, in my opinion, is entirely on the power side, be it the Ministry of Education, various commissions and bodies, the managers of universities, etc. This produced a significant inflation of the DISCRETE class, be they old scientific mediocrities (with political talents instead) or their young opportunistic clones.

That is my clinical diagnosis.

If you are not too bored by now, then maybe you will find some interest in the solution (or “a solution”) which I propose. The purpose is to bring the DISCRETE class to the same level of exposure as the LOUD class.

The solution is not to force researchers to do anything, nor to form a scientific police. The solution is to NOT rely on authority arguments but, instead, to look for CREDIBILITY and COLLABORATION.

I propose a 3-parts system:

– the researcher, willing to get credibility, has a web page with his results, in an international language, freely offering his papers or links to open access repositories of his papers and other achievements,

– institutions (be they universities, associations of scientists, ministries of education, academies, or smart IT guys offering a platform for communication) which are willing to give credibility AOC STAMPS list the criteria they use on their respective web sites, in an international language,

– the public, that is, anybody willing to learn if the researcher is good for anything, can form their own opinion by comparing the page of the researcher with the list of criteria of the institution, independently of both previous parts.

The advantages are the following:

– if an institution has stupid, or corrupt, or otherwise bad criteria, then it loses credibility, by exposure to the public, as do the researchers who pride themselves with AOC stamps from the said institution,

– the results (articles, etc.) of researchers can be more easily subjected to scrutiny for plagiarism or other bad practices. (There are many other bad practices, among them selective citations. In “Boring mathematics, artistes pompiers and impressionists” I propose a Google Scholar Maffia tool which could group researchers in the same subject into small “churches”, citing one another but not the ones outside the “church”.)

– the system is international, which is always a good thing in research,

– there is no reliance on authority. Indeed, suppose that a Romanian power institution issues its AOC stamps based on unreasonable criteria; then the stamps may become a bad mark, hurting both the institution and the researchers who satisfy them,

– there is no one-size-fits-all measure. Maybe some researchers desire to be less internationally exposed, but to have more time for teaching. They will take AOC stamps from institutions which prize teaching more than research, and so on.

– being dispersed, the system is less prone to manipulation by a small group of individuals. (It is more sensitive to “popularity contest” behaviours, though.)

There are already efforts towards net-based collaboration against plagiarism, like the site integru.org. There is much more to be done, but my advice for the international “public” is, for the moment, to consider the credibility criterion of having one’s research papers freely available in open access repositories.

Generating set of Reidemeister moves for graphic lambda crossings

UPDATE 2: It turns out there is an error in this post, concerning the move R3a. The oriented Reidemeister moves which are not respected by the untyped lambda calculus crossings are R2c, R2d, R3a, R3h. All the other moves are allowed. [Added: see why by looking at the post Big sets of oriented Reidemeister moves which don’t generate all moves.]

________

UPDATE: The paper “Graphic lambda calculus and knot diagrams”, arXiv:1211.1604, contains a part of what is discussed here; see also the tag “graphic lambda calculus”. There will be follow-ups and applications.

________

The last post related to this subject is “3D crossings in graphic lambda calculus”. I shall use the notations from there; see also some earlier posts indicated there.

For oriented link diagrams the list of Reidemeister moves is large (24 moves). The list of the moves and a choice of a minimal generating set (with proofs) can be found in the paper by Michael Polyak “Minimal generating sets of Reidemeister moves“.

The following four Reidemeister moves generate all the others.

– R1a:

– R1b:

– R2a:

– R3a:

Let’s use the crossings given by graphic lambda calculus (see the post mentioned at the beginning) and let us interpret the moves as composites of graphic beta moves. Is this possible? Yes, I’ll show you in a moment.

In terms of the graphic lambda calculus, the move R1a is this:

which is just a degenerate graphic beta move with elimination of loops (short name “elimination of loops”).

The move R1b becomes this:

which is another elimination of loops move.

So, the first two moves are just degenerate graphic beta moves with elimination of loops. We may say that we used 0 graphic beta moves, because degenerate moves are much weaker than the full graphic beta move.

Let’s see how R2a looks in terms of graphic lambda calculus:

That is a combination of two beta moves!

Finally, the move R3a looks like this:

That is a combination of 6 beta moves. The red arrows are those which correspond to the over-under relation of the respective crossings. (See UPDATE 2 for corrections.)

Conclusion: the Reidemeister moves, as interpreted in graphic lambda calculus, are combinations of an even number of graphic beta moves and any number of degenerate graphic beta moves with elimination of loops.