Carl Einstein on Picasso and the visual brain

The article “Carl Einstein, Daniel-Henry Kahnweiler, Cubism, and the Visual Brain” made me realize that cubism, invented by Picasso and Braque, was probably a logical step further along the path towards the investigation of vision opened by the Impressionists.

Indeed, far from being a game of abstraction, multiple viewpoints and other rubbish, it appears that cubism, at least in its early stages, represents an effort to understand the first stages of vision, as they happen in the (artist’s) brain. I find this story amazing, showing how far a brilliant mind (Picasso’s) could go ahead of its time.

Here is a reproduction of the painting “Guitarist”, by Picasso, 1910, taken from the cited article:

It would be interesting to compare the statistics of edges, corners and blobs (actually blobs are higher-level features) in cubist paintings from this period with the statistics of the same features in databases of natural images. My bet is that they are very close.
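For concreteness, here is a minimal sketch of such a comparison, assuming two hypothetical folders of images (“cubist/” and “natural/”) and standard detectors from scikit-image; the folder names and parameter choices are mine, and this is only an illustration of the proposed statistics, not a validated methodology.

```python
# A minimal sketch: per-image edge, corner and blob statistics for two
# image collections. "cubist/" and "natural/" are hypothetical paths.
import numpy as np
from glob import glob
from skimage import io, feature, transform

def feature_stats(path_pattern, size=256):
    """Edge density, corner density and blob count for each image."""
    stats = []
    for f in glob(path_pattern):
        img = transform.resize(io.imread(f, as_gray=True), (size, size))
        edges = feature.canny(img)                        # edge map
        corners = feature.corner_peaks(feature.corner_harris(img))
        blobs = feature.blob_log(img, max_sigma=20)       # higher-level blobs
        stats.append((edges.mean(), len(corners) / size**2, len(blobs)))
    return np.array(stats)

cubist = feature_stats("cubist/*.jpg")
natural = feature_stats("natural/*.jpg")
print("cubist  mean (edge, corner, blob):", cubist.mean(axis=0))
print("natural mean (edge, corner, blob):", natural.mean(axis=0))
```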

Digital Materialization, Euclideon and Fractal Image Compression

Are these:

Digital Materialization (DM)

Euclideon (Unlimited Detail)

Fractal Image Compression

related? Is this rather simple (from a pure math viewpoint) research program doable, and moreover, has it been DONE by anybody?

Let me explain. My wondering came after searching Google for:

– “Digital materialization” AND “euclideon” – no results

– “Digital materialization” AND “Fractal image compression” – no results

– “Fractal image compression” AND “euclideon” – no results

1. A quote from the Digital Materialization Group homepage:

Digital Materialization (DM) can loosely be defined as two-way direct communication or conversion between matter and information that enable people to exactly describe, monitor, manipulate and create any arbitrary real object. DM is a general paradigm alongside a specified framework that is suitable for computer processing and includes: holistic, coherent, volumetric modeling systems; symbolic languages that are able to handle infinite degrees of freedom and detail in a compact format; and the direct interaction and/or fabrication of any object at any spatial resolution without the need for “lossy” or intermediate formats.

DM systems possess the following attributes:

  • realistic – correct spatial mapping of matter to information
  • exact – exact language and/or methods for input from and output to matter
  • infinite – ability to operate at any scale and define infinite detail
  • symbolic – accessible to individuals for design, creation and modification

As far as I understand, this works based on Function Representation (FREP), see HyperFun.org. The idea is to define an object in \mathbb{R}^{3}, say, by a function F: \mathbb{R}^{3} \rightarrow \mathbb{R}, which describes the object as the set of points x where F(x) \leq 0. Start with a small library of functions (for example polynomials) and then construct other functions by using min, max, and so on. Therefore an object is described by a tree, with leaves decorated by functions and nodes decorated by min, max, …, operations. This is a very simplistic, but fundamentally precise description. The main point is that there is no object (defined, for example, by a mesh in space); instead, we may check whether a point x in space belongs to the object by evaluating the sign of F(x). Moreover, translations, rotations, dilations (or any other easy-to-implement change of parametrization of the space) are implemented by composing the function F which describes the object with the inverse transformation.
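A minimal sketch of this tree-of-functions idea, under the convention above that a point x belongs to the object iff F(x) \leq 0 (the helper names below are mine, not HyperFun’s API):

```python
# FREP-style modeling: objects are functions, operations build a tree.
import numpy as np

sphere = lambda c, r: (lambda x: np.linalg.norm(x - c) - r)  # leaf function
union        = lambda f, g: (lambda x: min(f(x), g(x)))      # min node
intersection = lambda f, g: (lambda x: max(f(x), g(x)))      # max node

def translate(f, t):
    """Move the object by t: compose F with the inverse translation."""
    return lambda x: f(x - t)

# A small tree: the union of two spheres, then translated.
F = translate(union(sphere(np.zeros(3), 1.0),
                    sphere(np.array([1.5, 0.0, 0.0]), 0.7)),
              np.array([0.0, 2.0, 0.0]))
print(F(np.array([0.0, 2.0, 0.0])) <= 0)  # True: inside the translated object
```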

In particular, we may easily pass to polar coordinates centered at the viewpoint and stay in this coordinate system for visualization.

2. Fractal image compression is based on the fact that any compact set (like an image, described as a compact set in the 5D space = 2D (spatial) times 3D (RGB levels)) can be approximated arbitrarily well by the fixed point (attractor) of an iterated function system.
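The fixed-point phenomenon is easy to demonstrate: iterating randomly chosen contractions drives any starting point onto the attractor. A minimal sketch (the three maps below generate the Sierpinski triangle; this illustrates Hutchinson’s fixed-point theorem, not a compression algorithm):

```python
# Chaos game: points accumulate on the attractor of the IFS.
import random

maps = [lambda p: (0.5 * p[0],        0.5 * p[1]),        # three contractions
        lambda p: (0.5 * p[0] + 0.5,  0.5 * p[1]),
        lambda p: (0.5 * p[0] + 0.25, 0.5 * p[1] + 0.5)]

p = (0.0, 0.0)
points = []
for _ in range(10000):
    p = random.choice(maps)(p)   # apply a random contraction
    points.append(p)             # the points trace the Sierpinski triangle
```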

This brings me to

PROBLEM 1: Is there any version of the Hutchinson theorem about fixed points of IFS, with the space of compact sets replaced by a space of FREP functions? Correspondingly, is there a “FREP compression algorithm”?

My guess is that the answer is YES. Let’s assume it is so.

In the article “Digital Foundry vs. Unlimited Detail”, the following 2008 post by Bruce Dell is reported:

Hi every one , I’m Bruce Dell (though I’m not entirely sure how I prove that on a forum)

Any way: firstly the system isn’t ray tracing at all or anything like ray tracing. Ray tracing uses up lots of nasty multiplication and divide operators and so isn’t very fast or friendly.
Unlimited Detail is a sorting algorithm that retrieves only the 3d atoms (I wont say voxels any more it seems that word doesn’t have the prestige in the games industry that it enjoys in medicine and the sciences) that are needed, exactly one for each pixel on the screen, it displays them using a very different procedure from individual 3d to 2d conversion, instead we use a mass 3d to 2d conversion that shares the common elements of the 2d positions of all the dots combined. And so we get lots of geometry and lots of speed, speed isn’t fantastic yet compared to hardware, but its very good for a software application that’s not written for dual core. We get about 24-30 fps 1024*768 for that demo of the pyramids of monsters. The media is hyping up the death of polygons but really that’s just not practical, this will probably be released as “backgrounds only” for the next few years, until we have made a lot more tools to work with.

Assuming that we take a database representing a very complex 3D object (like a piece of landscape) and convert it, by using a FREP compression algorithm, into a tree as described at point 1, it becomes easy to imagine how the Unlimited Detail algorithm might work.

Problem 2: Given a FREP representation of a collection of 3D objects, describe an efficient sorting algorithm which uses the representation and outputs the part of the union of the objects visible from a given point at infinity.

Conclusion: unless this is utter gibberish, the modus operandi of an “unlimited detail” algorithm could be the following:

1) start with a database of a collection of 3D objects and compress it into a FREP format,

2) perform a “mass 3d to 2d conversion”, by using a solution of Problem 2, in polar coordinates centered at the viewpoint.
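For contrast only, here is the naive baseline that Problem 2 asks to improve upon: a per-pixel march along the view ray, answered directly by the sign of F. Note that Bruce Dell explicitly says Unlimited Detail is not ray tracing, so this sketch is not his algorithm; all names here are hypothetical.

```python
# Naive per-pixel visibility from a FREP function F (inside iff F(x) <= 0).
import numpy as np

def first_hit(F, origin, direction, t_max=100.0, step=0.05):
    """March along origin + t*direction, return the first t inside an object."""
    t = 0.0
    while t < t_max:
        if F(origin + t * direction) <= 0:
            return t               # nearest visible point on this view ray
        t += step
    return None                    # the ray misses every object

# Example: a unit ball centered at the origin, viewed from z = -5.
F = lambda x: np.linalg.norm(x) - 1.0
print(first_hit(F, np.array([0.0, 0.0, -5.0]), np.array([0.0, 0.0, 1.0])))  # ~4.0
```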

Combinators and stickers

From the wiki page of Combinatory logic:

Combinatory logic is a notation to eliminate the need for variables in mathematical logic. It was introduced by Moses Schönfinkel and Haskell Curry and has more recently been used in computer science as a theoretical model of computation and also as a basis for the design of functional programming languages. It is based on combinators.

For the treatment of combinators in graphic lambda calculus see section 3.1 of Local and global moves on locally planar trivalent graphs, lambda calculus and lambda-Scale, arXiv:1207.0332.

In this post I want to discuss the conversion of the lambda calculus sector of the graphic lambda calculus formalism into the knot diagrams sector plus stickers.

There are two stickers; they are graphs in $GRAPH$ with the following form:

I call them “stickers” because they behave like devices which concentrate the attention on the region encircled by the closed loop. Think about a sticker with nothing written on it: it is a thing which you can stick in some place and on which you can write something. Likewise, take any graph in $GRAPH$ with a marked OUT leaf (for an example, take any combinator in graphic lambda calculus form, which always has a unique OUT leaf) and connect it with one of the stickers (i.e. “write it on the sticker”). The operation analogous to sticking the written sticker on something is to cross the sticker connected with a combinator with another combinator, where “crossing” refers to the knot diagrams sector, with its graphic lambda crossings.

Look how the stickers behave:

They may be converted one into another (by way of graphic beta moves). Moreover, they transform the application operation gate into a crossing!

Let us see the behaviour of some combinators connected with stickers. Take the combinator I = \lambda x.x. In the next figure, we see it in the upper left corner, in graphic lambda formalism. (The figure has two layers: one may first read only what is written in black, then consider what happens when the red drawings are added.)

In black, we see that the graph of I connected with a sticker transforms by a graphic beta move into a loop. If we add the red part, we see that the graph transforms into a loop with a crossing with a graph A. If we perform a second graphic beta move (in red) then we get A. This corresponds to the combinatory logic relation IA \rightarrow A.

Likewise, let’s consider the combinator K = \lambda xy.x, with the associated graph described in the upper left corner of the next figure.

With the added red part, it corresponds to the relation KA \rightarrow \lambda y. A, with y a fresh variable for A.
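For readers who prefer text to diagrams, the two relations used above can be transcribed in ordinary lambda calculus terms (this is plain Python, not the graphic formalism itself):

```python
# The two combinatory logic relations used above: I A -> A and
# K A -> (lambda y. A), where y is fresh (the body ignores it).
I = lambda x: x
K = lambda x: (lambda y: x)

A = "some term"
assert I(A) == A              # the relation I A -> A
assert K(A)("anything") == A  # K A behaves as the constant function lambda y. A
```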

I skip the combinator S for a future discussion, but from this post it looks almost clear that by using stickers we may convert the combinatory logic sector of the graphic lambda calculus into the knot diagrams sector enhanced with stickers and maybe with zippers (but we shall see that we may eliminate one or the other, in some sense).

The zipper macro and zipper moves

I continue from the post “Generating set of Reidemeister moves for graphic lambda crossings”, where the “crossing macros” over graphic lambda calculus were discussed.

Another interesting macro (over graphic lambda calculus) is the zipper, together with its associated zipper moves.

Let’s take a natural number n \geq 2 and consider the following graph in GRAPH, called the n-zipper:

At the left is the n-zipper graph; at the right is a NOTATION for it, or a macro. We could as well take n = 2, with obvious modifications of the figure, so the 2-zipper exists. Even n = 1 makes sense, but the 1-zipper is kind of degenerate; see later.

There is a graphic beta move which we can perform on the n-zipper graph. In the following picture I marked in red the place where the graphic beta move is applied.

In terms of zipper notation this graphic beta move has the following appearance:

We see that an n-zipper transforms into an (n-1)-zipper plus an arrow. We may repeat this move as long as we can. What is the result? An n-zipper move:

The 1-zipper move, called ZIP_{1}, is just the graphic beta move, which transforms the 1-zipper into two arrows.
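In other words, unzipping is a simple recursion: each graphic beta move lowers n by one, and ZIP_{1} is itself a beta move, so ZIP_{n} costs exactly n graphic beta moves. A minimal bookkeeping sketch:

```python
# Count the graphic beta moves needed to unzip an n-zipper: one beta move
# turns an n-zipper into an (n-1)-zipper plus an arrow, and ZIP_1 is a
# single beta move producing two arrows.
def zip_moves(n):
    """Number of graphic beta moves in the n-zipper move ZIP_n."""
    if n == 1:
        return 1                   # ZIP_1: one graphic beta move
    return 1 + zip_moves(n - 1)    # one beta move, then unzip the (n-1)-zipper

assert zip_moves(5) == 5
```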

Nice, now what can we do with zippers and crossings?

A 3-part system for scientific credibility

The article “Plagiarism exposed in Romanian grant applications” by Alison Abbott appeared in Nature. It comes after the previous articles “Romanian prime minister accused of plagiarism” by Quirin Schiermeier and “Romanian scientists fight plagiarism” by Alison Abbott.

UPDATE: See the integru.org review concerning the Romanian Minister of Education Ecaterina Andronescu (and my short comment here).

As a researcher, I am concerned about this subject; see the post “On plagiarism scandals in Romania, an easy test”.

Let me explain my views on this subject and, after that, propose a solution for the real problem which lurks beneath: that of scientific credibility.

Here are several aspects of the problem of credibility of Romanian research.

1. In an ideal world, there should be no such thing as Romanian research; there would be only individuals, freely communicating across the world via the internet. However, in the real world things are far more complex.

2. Romanian researchers come in several blends. You have mathematicians and hard scientists alike, who communicate their research in a “universal language” (and mostly in some dialect of broken English); let’s call them the “LOUD researchers”.

You also have researchers in the humanities, who may feel cornered by the specificity of the language and by regional subjects. They, together with branches such as engineering or the medical sciences, which are far more “discrete” about spreading their research results freely on the net, fall into the second category of “DISCRETE researchers”.

3. When it comes to plagiarism and other bad practices, the bad news for the bad guys is that it is now rather easy to detect bad behaviour, simply by using search engines and by communicating with fellow researchers. Because the net is a new thing (historically speaking), we have to take into consideration the age of the researchers (or a kind of “virtual age”), splitting the previous two classes into pairs: “YOUNG” for those who use the net to cite, to search, and to communicate their research; “OLD” for those who think that paper publication is everything and that a lawyer is a good replacement for morals, even if they don’t mind searching the net a bit for fishing out new ideas.

So we have:

A – YOUNG LOUD researcher

B – YOUNG DISCRETE researcher

C – OLD LOUD researcher

D – OLD DISCRETE researcher

4. Age is important because in Romania, a more traditionalist society, the managers and bosses tend to be old. So power is biased towards the old guys.

5. Also by tradition, the humanities and the “practical” fields (i.e. medical, engineering, …) have a much stronger cultural influence, which means that in Romanian culture, power is biased towards the DISCRETE.

In conclusion, the “discrete” categories, i.e. B and D, are, very vaguely speaking (and not pointing to individuals), more likely to contain abusers, but they are also more likely to hold power in the Romanian research system.

This is supported by the fact that the plagiarism scandals, up to now at least, were so obvious and so easy to expose. This shows, in my opinion, that it was always about OLD DISCRETE guys (according to the definition of “old” I consider here), who have power.

There is now the beginning of a power struggle between the DISCRETE and the LOUD factions. Indeed, the YOUNG DISCRETE guys are scared of being exposed as less worthy than their more international YOUNG LOUD fellows; therefore it is not a fight between the YOUNG and OLD categories.

Moreover, the situation is aggravated by the fact that, in the last 15 years, say, a lot of people got positions and diplomas which do not reflect their level. The fault, in my opinion, lies entirely on the power side, be it the Ministry of Education, various commissions and bodies, the managers of universities, etc. This produced a significant inflation of the DISCRETE class, be they old scientific mediocrities (with political talents instead) or their young opportunistic clones.

That is my clinical diagnosis.

If you are not too bored by now, then maybe you will find some interest in the solution (or “a solution”) which I propose. The purpose is to expose the DISCRETE class to the same level of visibility as the LOUD class.

The solution is not to force researchers to do anything, nor to form a scientific police. The solution is to NOT rely on authority arguments but, instead, to look for CREDIBILITY and COLLABORATION.

I propose a 3-part system:

– the researcher, willing to get credibility, has a web page with his results, in an international language, freely offering his papers or links to open access repositories of his papers and other achievements,

– institutions, be they universities, associations of scientists, ministries of education, academies, or smart IT guys offering a platform for communication, which are willing to grant credibility AOC STAMPS, list the criteria they use on their respective websites, in an international language,

– the public, that is, anybody willing to learn whether the researcher is good for anything, can form an opinion by comparing the page of the researcher with the list of criteria of the institution, independently of both previous parties.

The advantages are the following:

– if an institution has stupid, corrupt, or otherwise bad criteria, then it loses credibility, by exposure to the public, as do the researchers who pride themselves on AOC stamps from the said institution,

– the results (articles, etc.) of researchers can be more easily subjected to scrutiny for plagiarism or other bad practices. (There are many other bad practices, among them selective citation. In “Boring mathematics, artistes pompiers and impressionists” I propose a Google Scholar Maffia tool which could group researchers in the same subject into small “churches”, citing one another but not the ones outside the “church”.)

– the system is international, which is always a good thing in research,

– there is no reliance on authority. Indeed, suppose that a Romanian power institution issues its AOC stamps based on unreasonable criteria; then the stamps may become a bad mark, hurting both the institution and the researchers who hold them,

– there is no one-size-fits-all measure. Maybe some researchers desire to be less internationally exposed, but to have more time for teaching. They will take AOC stamps from institutions which prize teaching more than research, and so on.

– being dispersed, the system is less prone to manipulation by a small group of individuals. (It is more sensitive to “popularity contest” behaviours, though.)

There are already efforts towards net-based collaboration against plagiarism, like the site integru.org. There is much more to be done, but my advice for the international “public” is, for the moment, to treat having one’s research papers freely available in open access repositories as the credibility criterion.

Generating set of Reidemeister moves for graphic lambda crossings

UPDATE 2: It turns out there is an error in this post, concerning the move R3a. The oriented Reidemeister moves which are not respected by the untyped lambda calculus crossings are R2c, R2d, R3a, R3h. All other moves are allowed. [Added: see why by looking at the post Big sets of oriented Reidemeister moves which don’t generate all moves.]

________

UPDATE: The paper “Graphic lambda calculus and knot diagrams”, arXiv:1211.1604, contains a part of what is discussed here, at the tag “graphic lambda calculus”. There will be follow-ups and applications.

________

The last post related to this subject is “3D crossings in graphic lambda calculus”. I shall use the notations from there; see also some earlier posts indicated there.

For oriented link diagrams the list of Reidemeister moves is large (24 moves). The list of the moves and a choice of a minimal generating set (with proofs) can be found in the paper by Michael Polyak, “Minimal generating sets of Reidemeister moves”.

The following four Reidemeister moves generate all the others.

– R1a:

– R1b:

– R2a:

– R3a:

Let’s use the crossings given by graphic lambda calculus (see the post mentioned at the beginning) and interpret the moves as composites of graphic beta moves. Is this possible? Yes, as I’ll show in a moment.

In terms of the graphic lambda calculus, the move R1a is this:

which is just a degenerate graphic beta move with elimination of loops (short name “elimination of loops”).

The move R1b becomes this:

which is another elimination of loops move.

So, the first two moves are just degenerate graphic beta moves with elimination of loops. We may say that we used 0 graphic beta moves, because degenerate moves are much weaker than the full graphic beta move.

Let’s see how R2a looks in terms of graphic lambda calculus:

That is a combination of two beta moves!

Finally, the move R3a looks like this:

That is a combination of 6 beta moves. The red arrows are those which correspond to the over-under relation of the respective crossings. (See UPDATE 2 for corrections.)

Conclusion: the Reidemeister moves, as interpreted in graphic lambda calculus, are combinations of an even number of graphic beta moves and any number of degenerate graphic beta moves with elimination of loops.
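The counts established above can be summarized in a few lines (a bookkeeping sketch using the generating set and the counts from this post; degenerate moves count as 0):

```python
# Full graphic beta moves used by each generating Reidemeister move,
# as counted above (R1a and R1b use only degenerate moves).
beta_moves = {"R1a": 0, "R1b": 0, "R2a": 2, "R3a": 6}

# Every generator uses an even number of full beta moves,
# hence so does any composite of them.
assert all(n % 2 == 0 for n in beta_moves.values())
```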