Straw-man argument against comments in epijournals

This is a comment which has been awaiting moderation (for some time) at Gowers’ “Good guys” post, so I post it here. Here it is, with some links added:

After reading the rather heated exchanges around the subject of comments in epijournals, I am surprised that the best argument against comments that people here were able to find is one that conflates comments in epijournals with comments in blogs.

I cannot imagine who would like to have comments in epijournals (or any other OA model) of the same quality as those on the average blog.

Therefore my impression is that much of the discussion here is just an example of a straw-man fallacy.

It is enough to look around and see that there are models that could inspire us.

In several comments and posts, like this one or the other, I have proposed to consider comments in OA journals on a par with the talk pages of Wikipedia, and peer-reviews as wiki pages.

Others have proposed mathoverflow or reddit as models. Any of those proposals is stellar compared to comments in blogs.

Besides, I doubt very much that there is a majority against comments; I believe that Mike Taylor is simply more vocal than others, and for this he deserves some congratulations (and some respect, as a fellow scientist).

_______________

On peer-review and the big value it may have for epijournals, or even as a publishing business model, see also the posts:

The three main sectors of graphic lambda calculus

For a web tutorial on graphic lambda calculus go here.  Detailed explanations concerning the three main sectors of graphic lambda calculus will appear there; for the moment, here is a brief description of them.

Sectors.

A sector of graphic lambda calculus consists of:

  • a set of graphs, defined by a local or global condition,
  • a set of moves from the list of all moves available.

The name “graphic lambda calculus” comes from the fact that it has untyped lambda calculus as a sector. In fact, there are three important sectors of graphic lambda calculus:

Teaser: 2D UD

Here are two images which may (or may not) give an idea about another fast algorithm for real-time rendering of 3D point cloud scenes (but beware: the images are drawn for the baby model of 2D point cloud scenes).  The secret lies in the database.

I have a busy schedule in the next weeks and I have to get this out of my system. Therefore, if anybody gets it, please send me a comment here. Has this been done before? Does it work?

The images now: the first has no name

eucludeon

The second image is a photo of the Stoa Poikile, taken from here:

Stoa_Poikile

Hint: this is a solution for the ray shooting problem (read it) which eliminates trigonometry, ray shooting and intersection computations, and uses only the addition operation (once the database is well prepared); moreover, the database organized as in the pictures cannot be bigger than the original one (thus it is also a compression of the original database).

_______________

See the solution of an unlimited detail algorithm given by JX here and here.

Peer-reviews don’t protect against plagiarism and articles retraction. Why?

After reading one more post from the excellent blog Retraction Watch, this question dawned on me: if classical peer-review is such a good thing, then why is it rather inefficient when it comes to detecting flaws or plagiarism cases which are later exposed on the net?

Because I have seen implicit and explicit blaming of:

  • authors, seeking to publish as many papers as possible (because only the number of them counts, not their contents)
  • journals, seeking to fill their pages at any cost, and also failing to protect the authors who gave them the copyrights.

There is a missing link in this chain: what about the peer-reviews? I bet that many articles submitted for publication are not accepted as a consequence of peer-review reports which detect flaws or plagiarism attempts. However, so many other papers are published after they pass the peer-review filter, only to be found later to be flawed or plagiarized.

I think this is the strongest argument against the old-ways, “let’s talk in private” practice.  It shows that even if the great majority of researchers are honest and committed to the best practices in the field, the very few who try to trick, to “boost” their CVs, escape undetected during the classical peer-review process because of the tradition of talking in private about research, of following the authority paths, and so on. This practice was not bad at all before the net era; it was simply a part of the immune system of the research community. On the other side, there is no reason to believe that flawed or plagiarized articles are more frequent now than before. The difference which makes such articles easier to detect is the net, which allows public expressions of doubt and fast communication of evidence (“don’t believe me, here is the link to the evidence, make up your own mind”).

Don’t you think that peer-review could get better, not worse, by becoming a public activity which results from the contribution of (few or many) peers?

_______________

On peer-review and the big value it may have for epijournals, or even as a publishing business model, see also the posts:

Diorama, Myriorama, Unlimited detail-orama

Let me retell in plain words the explanation by JX about how a UD algorithm might work (it is not just an idea, it is supported by proofs and experiments, go and see this post).

It is too funny! It is the computer version of a diorama. It is an unlimited-detail-orama.

Before giving the gist of JX’s explanation, let’s think: have you ever seen a totally artificial construction which, when you look at it, tricks your mind into believing you are looking at an actual, vast piece of landscape, full of infinite detail? Yes, right? This is a serious thing, actually; it poses a lot of questions about how much the 3D visual experience of a mind-bogglingly huge database of 3D points can be compressed.

Indeed, JX explains that his UD type algorithm has two parts:

  • indexing: start with a database of 3D points, like a laser scan. Then produce another database of cubemaps, centered at a net of equally spaced “centerpoints” which cover the 3D scene. The cubemaps are made at screen resolution, obtained as a projection of the scene onto a reasonably small cube centered at the centerpoint. You may keep these cubemaps in various ways; one of them is by linking the centerpoint with the visible 3D points. Compress (several techniques suggested). For this part of the algorithm there is no time constraint, since it is done before the real-time rendering part.
  • real-time rendering: take as input where the camera is, get only the points seen from the closest centerpoint, get the cubemap, improve it by using previous cubemaps and/or neighbouring cubemaps. Take care of filling the holes which appear when you change the point of view. (A rough code sketch of these two stages follows right after this list.)
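Here is a minimal sketch in C of how such a two-stage pipeline could look. Everything in it (the names Point, Snapshot, build_snapshot, nearest_snapshot and the stubbed visibility test) is my own illustration of the description above, not JX’s actual code.

#include <math.h>     /* INFINITY */
#include <stdlib.h>   /* malloc */

typedef struct { float x, y, z; unsigned char r, g, b; } Point;

typedef struct {
    float  cx, cy, cz;   /* the centerpoint of this cubemap/snapshot    */
    Point *visible;      /* the 3D points visible from the centerpoint  */
    int    count;
} Snapshot;

/* Stub: a real implementation projects the scene onto a small cube at
   screen resolution and keeps only the points that win the depth test. */
static int is_visible(const Point *p, float cx, float cy, float cz)
{
    (void)p; (void)cx; (void)cy; (void)cz;
    return 1;
}

/* Indexing stage (offline, no time constraint): one snapshot per
   centerpoint of the net; compression would follow. */
Snapshot build_snapshot(const Point *cloud, int n, float cx, float cy, float cz)
{
    Snapshot s = { cx, cy, cz, malloc((size_t)n * sizeof(Point)), 0 };
    for (int i = 0; i < n; i++)
        if (is_visible(&cloud[i], cx, cy, cz))
            s.visible[s.count++] = cloud[i];
    return s;
}

/* Real-time stage, reduced to its core: find the snapshot whose
   centerpoint is closest to the camera, then reproject its points
   (reprojection, blending with neighbouring snapshots and hole
   filling are not shown). */
const Snapshot *nearest_snapshot(const Snapshot *snaps, int m,
                                 float camx, float camy, float camz)
{
    const Snapshot *best = &snaps[0];
    float bestd = INFINITY;
    for (int i = 0; i < m; i++) {
        float dx = snaps[i].cx - camx, dy = snaps[i].cy - camy, dz = snaps[i].cz - camz;
        float d = dx * dx + dy * dy + dz * dz;
        if (d < bestd) { bestd = d; best = &snaps[i]; }
    }
    return best;
}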

Now, let me show you that this has been done before, in meatspace. And even more, with animation! Go and read this, it is too funny:

  • The Daguerre Dioramas. Here’s (actually an improved version of) your cubemap JX: (image taken from the linked wiki page)

Diorama_diagram

  • But maybe you don’t work in the geospatial industry and you don’t have render farms and huge data available. Then you may use a Myriorama, with palm trees, gravel, statues, themselves rendered as dioramas. (image taken from the linked wiki page)

Myriorama_cards

  • Would you like to do animation? Here it is, look at the nice choo-choo train (polygon-rendered, at a scale)

ExeterBank_modelrailway

(image taken from this wiki page)

Please, JX, correct me if I am wrong.

Comments in epijournals: we may learn from Wikipedia

I think comments in epijournals (or whatever other form of Open Access from A to Z) should be considered a service to the community. Don’t take my word for it; please form your own opinion, considering the following adaptation of Wikipedia:Core content policies.

The motivation of this post is to be found in the dispute over the value of commenting, happening in the comments to the post “Why I’ve also joined the good guys” by Tim Gowers. There you may find both pros and cons for allowing comments to articles “published” in epijournals.  Among the cons were comparisons of such comments to comments in blogs, fear that comments will actually damage the content, fear that they will add too much noise and so on.

In reply, in one comment, I mentioned Wikipedia, because Wikipedia is one big example of a massively networked collaboration which does provide quality content, even if it is not hierarchically regulated. Please consider this: Wikipedia has ways to deal with vandalism, noise, propaganda and many other negative phenomena which, in the opinion of some, may damage those epijournals willing to provide the service of commenting on published articles.

I shall therefore try to learn from Wikipedia’s experience. The wikipedians evolved a set of principles, guidelines and policies which may be adapted to solve this problem our community of mathematicians has.

In fact, maybe Wikipedia rules could also improve the peer-review system. After a bit of thinking: if we are after a system which selects informed comments, made by our peers, then we are talking about a kind of peer-review.

What is the purpose of comments? Are they the same as peer-review?

These are questions I have not seen addressed; please provide me links to any relevant sources where such questions were considered.

Here is my proposal, rather sketchy at this moment (it should be like this; only public discussion can improve it, or kill it if it is inappropriate).

We may think about peer-reviews and comments as if they were wiki pages. Taking this as a hypothesis, they must conform at least to the Wikipedia:Core content policies:

  • Neutral point of view: “All comments and peer-reviews must be written from a neutral point of view, representing significant views fairly, proportionately and without bias”.
  • Verifiability: “Material challenged in comments and peer-reviews, and all quotations, must be attributed to a reliable, published source. Verifiability means that people reading and editing the epijournal can check that information comes from a reliable source.”
  • No original research: “All material in comments and peer-reviews must be attributable to a reliable, published source. Comments and peer-reviews may not contain any new analysis or synthesis of published material that serves to advance a position not clearly advanced by the sources.”

To those who would discard this proposal by saying that it is not possible to achieve these policies in practice, I recall: Wikipedia exists. Let’s learn from it.

To comment or not to comment, that is the question?

Some comments to Gowers’ post “Why I’ve also joined the good guys” made me write a third reaction note. I want to understand why there is so much discussion around the utility of comments to articles “published” (i.e. selected from arXiv or other free OA repositories) in epijournals.

UPDATE: For epijournals see Episciences.org and also the blog post  Episciences: de quoi s’agit-il?.

UPDATE 2: Read “Comments in epijournals: we may learn from Wikipedia” for a constructive proposal concerning comments (and peer-reviews as well).

I take as examples the comments by Izabella Laba  and  Mike Taylor.  Here they are:

Izabella Laba, link to comment:

I would not submit a paper to a journal that would force me to have a mandatory comment page on every article. I have written several long posts already on this type of issues, so here I’ll only say that this is my well considered opinion based on my decades of experience in mathematics, several years of blogging, and following (and sometimes commenting on) blogs with comment sections of varying quality. No amount of talk about possible fixes etc. will make me change my mind.

Instead, I want to mention a few additional points.

1) A new journal needs to develop a critical mass of authors. While having comment pages for articles may well attract some authors, making them mandatory pages will likely turn off just as many. In particular, the more senior and established authors are less likely to worry about the journal being accepted by promotion committees etc, but also less likely to have the time and inclination to manage and moderate discussion pages.

2) It is tempting to think that every paper would have a lively, engaging and productive comment page. In reality, I expect that this would only happen for a few articles. The majority of papers might get one or two lazy comments. The editors would have to spend time debating whether this or that lazy comment is negative enough or obnoxious enough to be removed, in response to the inevitable requests from the authors; but the point is that no greater good was achieved by having the comment page in the first place.

3) It is also tempting that such comment pages would contain at least a reasonably comprehensive summary of follow-up work (Theorem 1 was extended to a wider class of functions in [A], Conjecture 2 was proved in [B], and the range of exponents in Theorem 3 was proved to be sharp in [C]). But I don’t believe that this will happen. When I write an article, it is my job to explain clearly and informatively how my results relate to existing literature. It is *not* my job to also post explanations of that on multiple comment pages for cited articles, I certainly would not have the time to do that, and I’m not convinced that we could always count on the existence of interested and willing third parties.

A better solution would be to allow pingbacks (say, from the arXiv), so that the article’s journal page shows also the list of articles citing it. Alternatively, authors and editors might be allowed to add post-publication notes of this type (separate from the main article).

4) Related to this, but from a broader perspective: what is it that journals are supposed to accomplish, aside from providing a validation stamp? The old function of disseminating information has already been taken over by the internet. I believe that the most important thing that journals should be doing now is consolidating information, improving the quality of it, raising the signal to noise ratio.

I can see how this goal would be served by having a small number of discussion pages where the commenters are knowledgeable and engaged. In effect, these pages would serve as de facto expository papers in a different format. I do not think that having a large number of comment pages with one or two comments on them would have the same effect. It would not consolidate information – instead, it would diffuse it further.

On a related note, since I mentioned expository papers – it would be excellent to have a section for those. Right now, the journal market for expository papers is very thin: basically, it’s either the Monthly (limited range of topics) or the AMS Bulletin (very small number of papers, each one some sort of a “big deal”). But there is no venue, for instance, for the type of expository papers that researchers often write when they try to understand something themselves. (Except maybe for conference proceedings, but this is not a perfect solution, for many reasons.)

I will likely have more thoughts on it – if so, I’ll post a longer version of this on my own blog.

Mike Taylor, link to comment:

“I would not submit a paper to a journal that would force me to have a mandatory comment page on every article … No amount of talk about possible fixes etc. will make me change my mind.”

I am sorry to hear that. Without in the slightest expecting or intending to change your mind, I’ll say this: I can easily imagine that within a few more years, I will be refusing to submit to journals that do not have a comment page on my article. From my perspective, the principal purpose of publishing an article is to catalyse discussion and further work. I am loath to waste my work on venues that discourage this.

“It is tempting to think that every paper would have a lively, engaging and productive comment page. In reality, I expect that this would only happen for a few articles. The majority of papers might get one or two lazy comments.”

The solution to this is probably for us to write more interesting papers.

I totally agree with Mike Taylor and I am tempted to add that authors not willing to accept comments to their articles will deserve a future Darwin award for publication policies.  But surely it is their right to lower the chances of their research producing descendants.

Say you are a film maker. What do you want?

  • a) to not allow your film to be seen because some of the critics may not appreciate it
  • b) to disseminate your film as much as possible and to learn from the critics and the public about its possible weak points and good points

If the movie world were like the current academic world, then most film makers would choose a), because it would not matter whether the film is good or bad; what would matter is only how many films you made and, among them, how many were supported by governmental grants.

A second argument for allowing comments is Wikipedia.  It is clear to (almost) anybody that Wikipedia would not be what it is if it were based only on the 500-1000 regular editors (see the wiki page on Aaron Swartz and Wikipedia). Why is it then impossible to imagine that we can make comments to articles a very useful feature of epijournals? Simply by importing some of the well-proven rules from Wikipedia concerning contributors!

On the reasons for such reactions, which disregard reality, another time. I shall just point to the fact that it is still difficult to accept models of thinking based not on pyramidal bureaucratic organizational structures but on massive networked collaboration.  Pre-internet, the pyramidal organization was the most efficient. Post-internet it makes no sense, because the cost of organizing (the Coase cost) went to almost nil.

But thought reflexes are still alive, because we are only humans.

Discussion about how an UD algorithm might work

I offer this post for discussions around UD type algorithms. I shall update this post, each time indicating the original comment with the suggested updates.

[The rule concerning comments on this blog is that the first time you comment, I have to approve it. I reserve the right to reject or delete comments which are not constructive]

For other posts here on the subject of UD see the dedicated tag unlimited detail.

I propose that we start from this comment by JX, then we may work on it to make it clear (even for a mathematician). Thank you, JX, for this comment!

I rearranged the comment a bit [what is written between brackets is my commentary]. I numbered each paragraph, for ease of reference.

Now I have worked and thought enough to reveal all the details, lol. [see this comment by JX]
I may disappoint you: there’s not much mathematics in what I did. JUST SOME VERY ROUGH BRUTE-FORCE TRICKS.

1) In short: I render cubemaps, but not cubemaps of pixels – cubemaps of 3D points visible from some center.

2) When the camera is at that cubemap’s center, all points are projected and no holes are visible. When the camera moves, the world realistically changes in perspective, but the hole count increases. I combine a few snapshots at a time to decrease the hole count, and I also use a simple hole-filling algorithm. My hole-filling algorithm sometimes gives the same artifacts as in the non-cropped UD videos (bottom and right sides).

[source JX #2]  (link to the artifacts image) These artifacts can appear after applying the hole-filling algorithm from left to right and then from top to bottom; this is why they appear only on the right and bottom sides. Another case is viewport clipping of groups of points arranged into a grid: link from my old experiment with such groups.

This confirms that UD has holes too, and that its claim of “exactly one point for each pixel” isn’t true.

3) I used words like “special”, “way”, “algorithm” etc. just to fog the truth a bit. And there are some problems (with disk space) which don’t really bother UD, as I understand it. [that’s why they moved to the geospatial industry] So probably my idea is very far from UD’s secret. Yes, it allows rendering huge point clouds, but it is stupid and I’m sure now that it has been done before. Maybe there is a possibility to take some ideas from my engine and improve them, so here is the explanation:
4) Yes, I too started this project with this idea: “indexing is the key”. You say to the database: “the camera position is XYZ, give me the points”. And there are files in the database with separated points; the database just picks up a few files and gives them to you. It just can’t be slow. It may only be very heavy-weight (impossible to store so many “panoramas”).

5) I found that instead of keeping _screen pixels_ (like for panoramas) for each millimeter of camera position, it is possible to keep actual _point coordinates_ (like a single laser scanner frame) and project them again and again while the camera moves, filling holes with other points; and the camera step between those files may be far bigger than millimeters (like for stereo pairs: to see a volumetric image you only need two distant “snapshots”).

6) By “points linked with each other” I meant a bunch of points linked to some central points (by points I mean points _visible_ from the central point).

7) What is a central point? Think of it as a laser scanner frame. The scanner is static and captures points around itself. The point density near the scanner is high, and vice versa.

8) So again: my engine just switches gradually between virtual “scanner” snapshots of points relative to some center. During the real-time presentation, for each frame a few snapshots are projected: more points from the nearest snapshots, fewer from the far ones.

9) The total point count isn’t very big, so real time isn’t impossible. Some holes appear; a simple algorithm fills them using only color and z-buffer data.

10) I receive frames (or snapshots) by projecting all the points using a perspective matrix; I use a fov of 90 and a 256×256 or 512×512 point buffer (like a z-buffer, but it stores the point position XYZ relative to the scanner).

11) I do this six times to obtain a cubemap. The maximum number of points in the frame is 512x512x6. I can easily do color interpolation for the overlapped points; I don’t pick the color of a point from one place only. This makes the data interleaved and repeated.

12) The next functions allow me to compress the point coordinates in snapshots to 16-bit values. Why it works: because we don’t need high precision for distant points, since they often don’t change screen position when moved by small steps.

#include <stdint.h>   /* int16_t, int32_t */
#include <math.h>     /* sqrtf */

/* Decompress a 16-bit stored coordinate x back to a 32-bit value along
   the curve z = x + y*x^2 (y controls how fast precision falls off
   with distance). */
int32_t expand(int16_t x, float y)
{
    int8_t sign = 1;
    if (x < 0) { sign = -1; x = -x; }
    return (x + x * (x * y)) * sign;
}

/* Compress a 32-bit coordinate z to 16 bits by inverting the curve
   above: solve x + y*x^2 = z for x with the quadratic formula. */
int16_t shrink(int32_t z, float y)
{
    int8_t sign = 1;
    if (z < 0) { sign = -1; z = -z; }
    return ((sqrtf(4 * y * z + 1) - 1) / (2 * y)) * sign;
}
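To check my reading of point 12): for a fixed y > 0, shrink maps a large coordinate z to a 16-bit value x with z approximately equal to x + y*x^2, and expand recovers z up to an error which grows with the distance. Here is a tiny usage example (the main function and the value y = 0.001f are my own choices, for illustration only), to be appended after the two functions above:

#include <stdio.h>

int main(void)
{
    float   y = 0.001f;                  /* arbitrary example value     */
    int32_t z = 200000;                  /* original 32-bit coordinate  */
    int16_t packed = shrink(z, y);       /* fits in 16 bits             */
    int32_t back   = expand(packed, y);  /* approximately z again       */
    printf("%d -> %d -> %d\n", (int)z, (int)packed, (int)back);
    return 0;
}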

13) I also compress colors to 16 bits. I also compress normals to one 24-bit value. I also add a shader number (8 bits) to the point. So one point in a snapshot consists of: 16bit*3 position + 24bit normal + 16bit color + 8bit shader.
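Pinning this down (my own guess at a possible layout, not JX’s actual one): a packed record with 3 x 16-bit positions, a 24-bit normal, a 16-bit color and an 8-bit shader index takes 12 bytes per point, which is exactly what gives the snapshot sizes quoted in point 14) below.

#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    int16_t  pos[3];     /* position relative to the snapshot center,
                            compressed with shrink() above              */
    uint8_t  normal[3];  /* 24-bit packed normal                        */
    uint16_t color;      /* 16-bit color, e.g. RGB565                   */
    uint8_t  shader;     /* shader index                                */
} PackedPoint;           /* 6 + 3 + 2 + 1 = 12 bytes                    */
#pragma pack(pop)

/* 512*512*6 points * 12 bytes = 18,874,368 bytes, i.e. the ~18 MB per
   uncompressed snapshot of point 14); 256*256*6 points give ~4.5 MB.  */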

14) There must be some ways to compress it more (store colors in a texture (lossy jpeg), make some points share shader and normals). An uncompressed snapshot full of points (this may be an indoor snapshot) takes 512x512x6 = 18 MB, 256x256x6 = 4.5 MB.

Of course, after lzma compression (the engine reads directly from the ulzma output, which is fast) it can be up to 10 times smaller, but sometimes only 2-3 times. AND THIS IS A PROBLEM. I’m afraid UD has a smarter way to index its data.

For 320×240 screen resolution 512×512 is enough, 256×256 too, but there will be more holes and quality will suffer.

To summarize the engine’s workflow:

15) Snapshot building stage. Render all scene points (any speed-up may be used here: octrees or, what I currently use, dynamic point skipping according to the last point’s distance to the camera) to snapshots and compress them. The choice of step between snapshots affects data weight AND rendering time AND quality. There’s not much sense in making the step as small as 1 point, or even 100 points. After this, the scene is no longer needed, or I should say the scene won’t be used for real-time rendering.

16) Rendering stage. Load the snapshots nearest to the camera and project points from them (more points for closer snapshots, fewer for distant ones; 1 main snapshot + ~6-8 additional ones used at a time; I am still not sure about this scheme and I change it often). Backface/point culling applied. Shaders applied. Fill holes. Constantly update the snapshot array according to the camera position.
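As a hedged reading of points 15)-16), the per-frame loop might look roughly like the sketch below. The names, the point-budget heuristic and the stubbed projection and hole-filling passes are my own illustration, not JX’s code.

#include <math.h>     /* INFINITY */
#include <stddef.h>   /* size_t */

typedef struct { float cx, cy, cz; /* + the packed visible points */ } Snapshot;

/* stand-ins for the real passes */
static void project_points(const Snapshot *s, size_t budget, const float cam[3])
{ (void)s; (void)budget; (void)cam; /* z-buffered splatting of s's points */ }
static void fill_holes(void)
{ /* simple pass using only color and z-buffer data, as in point 9)       */ }

static float dist2(const Snapshot *s, const float c[3])
{
    float dx = s->cx - c[0], dy = s->cy - c[1], dz = s->cz - c[2];
    return dx * dx + dy * dy + dz * dz;
}

void render_frame(const Snapshot *all, size_t n, const float cam[3])
{
    enum { MAX_ACTIVE = 9 };                 /* 1 main + ~8 extra snapshots */
    size_t k = (n < MAX_ACTIVE) ? n : MAX_ACTIVE;
    size_t chosen[MAX_ACTIVE];

    for (size_t j = 0; j < k; j++) {         /* pick the k nearest snapshots */
        size_t best = 0;
        float  bestd = INFINITY;
        for (size_t i = 0; i < n; i++) {
            int taken = 0;
            for (size_t m = 0; m < j; m++) taken |= (chosen[m] == i);
            if (!taken && dist2(&all[i], cam) < bestd) {
                bestd = dist2(&all[i], cam);
                best = i;
            }
        }
        chosen[j] = best;
    }
    for (size_t j = 0; j < k; j++) {         /* closer snapshots: more points */
        size_t budget = (size_t)(512u * 512u * 6u) >> j;
        project_points(&all[chosen[j]], budget, cam);
    }
    fill_holes();
}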

17) If I restrict camera positions, it is possible to “compress” a huge point cloud level into a relatively small database. But in other cases my database will be many times greater than the original point cloud scene. [See comments JX#2, JX#3, chorasimilarity#4, chorasimilarity#5. Here is an eye-candy image of an experiment by JX, see JX#2:]

eye_candy_by_JX

Next development steps may be:

18) dynamic camera step during snapshot building (it may be better to take more steps when more points are closer to the camera (simple to count during projection) and fewer steps when the camera is in the air above the island, for example),

19) better snapshot compression (jpeg, maybe delta-coding for points), octree involvement during snapshot building.

20) But as I realized the disk memory problems, my interest is fading.

Any questions?

UD question

I try to formulate the question about how Unlimited Detail works like this:

Let D be a database of 3D points, containing information about  M points. Let also S be the image on the screen, say with N pixels. Problem:

  • reorganize the database D to obtain another database D’ with at most O(M) bits, such that
  • starting from D’ and a finite (say 100 bytes) word there exists an algorithm which finds the image on the screen in O(N log(M)) time.

Is this reasonable?

For example, take N=1. The finite word encodes the position and orientation of the screen in the 3D world of the database. If the M points admitted a representation as a number (a Euclidean-invariant hash function?) of order M^a (i.e. polynomial in the number of points), then it would be reasonable to expect D’ to have size of order O(log(M)), so in this case simply by traversing D’ we get the time O(log(M)) = O(N log(M)). Even if we cannot make D’ be O(log(M)) large, maybe the algorithm still takes O(log(M)) steps, simply because M is approximately the volume, so the diameter in 3D space is roughly between M^(1/3) and M, or, due to the scaling of the perspective, the algorithm may still hop through D’ in geometric, not arithmetic, steps.
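Spelling out the arithmetic of the N = 1 case (this is only a restatement of the estimate above, under the assumed hash-function representation of order M^a):

\[
|D'| = O(\log M^{a}) = O(a \log M), \qquad \text{time} = O(|D'|) = O(\log M) = O(N \log M) \quad (N = 1).
\]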

The second remark is that there is no restriction on the time necessary for transforming D into D’.

AZ open access

Instead of Diamond OA (as mentioned in Tim Gowers’ very interesting “Why I’ve also joined the good guys”), I suggest that a better and more inspiring name for this yet mysterious idea of epijournals would be

AZ open access

or open access from A to Z. There is another, better justification for this name, see the end of the post!

The Diamond and Gold names just betray that many people don’t get the idea that in the age of the net it is not good to base one’s business model on the SCARCITY OF GOODS. Gold and diamonds are valuable because they are scarce. Information, on the contrary, is abundant and it thrives on being shared. Google got it, for example; they are not doing badly, right? Therefore, why base the publishing business model on the idea of making information scarce in order to have value? You already have value, because value itself is just a kind of carrier of information.

The name AZ OA is a tribute. It means:

Aaron SwartZ Open Access.

Good news from the good guys

The very recent post by Gowers, “Why I’ve also joined the good guys”, is good news! It is about a platform for “epijournals”, which in common (broken, in my case) English means a system for peer-reviewing arXiv articles.

UPDATE: For epijournals see Episciences.org and also the blog post  Episciences: de quoi s’agit-il?.

If you have seen previous posts here on this subject, then you can imagine that I am very excited about this! I immediately posted a comment, which was awaiting moderation and has just appeared, so here it is for posterity:

Congratulations, let’s hope that it will work (however, I don’t understand the secrecy behind the idea). For some time I have been trying to push an idea which emerged from several discussions, described here in Peer-review turned on its head has market value (also see Peer-review is Cinderella’s lost shoe), with very valuable contributions from readers, showing that the model may be viable, as a sort of relative of the pico-publication idea.

Secrecy (if there is any, or maybe I am just uninformed) is not a good idea, because no matter how smart someone is, there is always a smarter idea waiting to germinate in someone else’s head. It is obvious that:

  • a public discussion about this new model will improve it beyond the imagination of the initiators, or it will show its weakness (if any), just like in the case of a public discussion about an encryption protocol, say. If you want the idea to stand, then discuss it publicly,
  • the model has to provide an incentive for researchers to do peer-reviews. There are two aspects to this: 1) researchers are doing peer-reviews for free anyway, for the old-time journals, so maybe the publishers themselves could consider the idea of organizing the peer-review process; 2) anything is possible once you persuade enough people that it’s a good idea.
  • any association with expired reflexes (like vanity publication, or counting the first few bits of articles, like ISI, for the sake of HR departments) will harm the project. In this respect see the excellent post MOOCs teach OA a lesson by Eric Van de Velde, where it is discussed why the idea of Massively Open Online Courses (MOOCs) had much more success in such a short time than the OA movement.

Enough for now, I am looking forward to hear more about epijournals.

UPDATE: There is no technical reason to ignore some of the eprints which are already on arXiv. By this I mean the following question: are epijournals considering only peer-reviewing new arXiv eprints, or is there any interest in peer-reviewing existing eprints?

UPDATE 2: This comment by Benoît Régent-Kloeckner    clarifies who is the team behind epijournals. I reproduce the comment here:

I can clarify a bit the “epi-team” composition. Jean-Pierre Demailly tried to launch a similar project some years ago, but it had much less institutional support and did not work out. More recently, Ariane Rolland heard about this attempt and, having contacts at CCSD, made them meet with Jean-Pierre. That’s the real beginning of the episciences project, which I joined a bit later. The names you should add are the people involved in the CCSD: Christine Berthaud, head of CCSD, Laurent Capelli who is coding the software right now, and Agnès Magron who is working on the communication with Ariane.

Gnomonic cubes: a simultaneous view of the extended graphic beta move

Recall that the extended beta move is equivalent to the following pair of moves:

beta_beta_star

where the first move is the graphic beta move and the second move is the dual of the beta move, where duality is (still loosely) defined by the following diagram:

correspondence_1

In this post I want to show you that it is possible to view these two moves simultaneously. For that I need to introduce the gnomonic cube. (Gnomons have appeared several times on this blog, in expected or unexpected places; consult the dedicated tag “gnomon”.)

From the wiki page about the gnomon,     we see that

A three dimensional gnomon is commonly used in CAD and computer graphics as an aid to positioning objects in the virtual world. By convention, the X axis direction is colored red, the Y axis green and the Z axis blue.

3DGraphicsGnomon

(image taken from the mentioned wiki page, then converted to jpg)

A gnomonic cube is then just a cube with colored faces. I shall color the faces of the gnomonic cube with symbols of the gates from graphic lambda calculus! Here is the construction:

gnomonic_cube_2

So, to each gate a color is associated, for drawing convenience. The upper part of the picture describes how the faces of the cube are decorated. (Notice the double appearance of the \Upsilon gate, the one used as a FAN-OUT.) The lower part of the picture gives 4 different views of the gnomonic cube. Each face of the cube is associated with a color, and each color with a gate.

Here comes the simultaneous view of the pair of moves which form, together, the extended beta move.

gnomonic_cube_3

This picture describes a kind of 3D move: the pair of gnomonic cubes connected with the blue lines can be replaced by the pair connected with the red lines, and conversely.

If you project to the UP face of the dotted big cube then you get the graphic beta move. The UP view is the viewpoint from lambda calculus (metaphorically speaking).

If you project to the RIGHT face then you get the dual of the graphic beta move. The RIGHT view is  the viewpoint from emergent algebras (…).

Instead of 4 gates (or 5 if we count \varepsilon^{-1} as different than \varepsilon), there is only one: the gnomonic cube. Nice!

Animals in lambda calculus II. The monoid structure and patterns

This is a continuation of “Animals in lambda calculus“. You may need to consult the web tutorial on graphic lambda calculus.

Animals are particular graphs in GRAPH. They were introduced in the post mentioned above. I wrote there that animals have the meaning of a kind of transformation of terms in lambda calculus. I shall come back to this at the end of this post, and a future post will develop the subject.

The anatomy of an animal is described in the next figure.

animal_4

The animal has therefore a body, which is a graph in GRAPH with only one output, decorated in the figure with “OUT”.  The allowed gates in the body are: the \lambda gate, the \curlywedge gate, the termination and the \Upsilon gates.

Onto the body is grafted a \Upsilon tree, with its root decorated in the figure with “IN”. This tree is called “the spine” of the animal, and the grafting points are called “insertions”; they are shown as small green disks in the figure.

An animal may have a trivial spine (no \Upsilon gate, only a wire). The most trivial animal is the wire itself, with the body containing just a wire and with no insertion points. Let’s call this animal “the identity”.

Animals may be composed with one another, simply by grafting the “OUT” of one animal to the “IN” of the other. The body, the spine and the insertions of the composed animal are described in the next figure.

animal_5

The moral is that the spine and the insertions of the composed animal are inherited from the first animal in the composition, except in the case when the first animal has a trivial spine (not depicted in the figure).

The pleasant thing is that the set of animals is a monoid under composition, with the identity animal as neutral element. That is: the composition of animals is associative, and the identity animal is the neutral element.

In the first post about animals it is explained that the graphic beta move transforms an animal into an animal.  Indeed, the animal

animal_1

is transformed into the animal

animal_3_p

The graphic beta move goes in both directions.  For example the identity animal is transformed by the graphic beta move into the following animal

animal_3_pp

With this monoid structure in mind, we may ask if there is any category structure behind it, such that the animals are (decorations of) arrows. In other words, is there any interesting category which admits a functor from it to the monoid of animals, seen as a category with one object and with animals as arrows?

An interesting candidate is the category of “patterns”; the subject will be developed in a future post. But here I shall explain, in a not totally rigorous way, what a pattern is.

The category of patterns has the untyped lambda terms as objects (with some care about the use of alpha equivalence) and patterns as arrows. A category may be defined only in terms of its arrows, therefore let me say what a pattern should be.

A bit loosely speaking, a pattern is a pair (A[u:=B], B), where A, B are untyped lambda calculus terms and u is a variable. I call it a pattern because it really is one. Namely, start from the following data: a term A, a variable u (which appears in A …) and another term B. They generate a new term A[u:=B] in which all the occurrences of the term B which were caused by the substitution u:=B are marked somehow.

If you think a bit in graphic lambda terms, that is almost an animal with the “IN” decorated by B and the “OUT” decorated by A[u:=B].

Composition of animals corresponds to composition of patterns, namely (C[v:= A[u:=B]], A[u:=B]) (A[u:=B], B) = (D[u:=B], B), where D is a term which is defined (with care) such that D[u:=B] = C[v:= A[u:=B]].
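Written as a display (the \circ notation and the reading of a pattern as an arrow are my own guesses at the intended formalization):

\[
(C[v:=A[u:=B]],\; A[u:=B]) \;\circ\; (A[u:=B],\; B) \;=\; (D[u:=B],\; B), \qquad D[u:=B] = C[v:=A[u:=B]],
\]

so a pattern (A[u:=B], B) would play the role of an arrow from the object B to the object A[u:=B], and this composition mirrors the grafting of animals described above.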

Clearly, the definition of patterns and their composition is not at all rigorous, but it will be made so in a future post.

Second thoughts on Gowers’ “Why I’ve joined the bad guys”

This post, coming after the “Quick reaction…“, is the second dedicated to the post “Why I’ve joined the bad guys” by Tim Gowers.

Let’s calm down a bit. I could discuss at length the multiple reasons why the arguments from the mentioned post are wrong, or twisted, or otherwise. Maybe another time; for now it is enough to say that it looks like a piece of not very well designed PR for gold open access. PR is a profession in itself, with its own techniques and means to achieve its goal, but here the stellar mathematician Gowers just shows that PR is not among his strengths.

It is clear that the crux of the matter is disappointment.  Gowers, who was the initiator of the Cost of Knowledge movement and of the polymath project, is now trying to sell us gold open access?

Maybe it means that there is a need for public figures to support this shaky construction.

On second thought, FoM is not the end of the world as we knew it. It is just yet another journal which tries to salvage what it can from the old publication model, which was once essential for the research community but is now obsolete because the net is here.

The real matter, though, is not FoM, or Gowers’ “betrayal”, but the fact that we have to look for new models of publication. Once such a model is found, any FoM will naturally decay into oblivion.

Take for example the business of publishing encyclopedias. Enter Wikipedia, which proved it is scalable and is sustained by millions of enthusiasts, btw, and now the encyclopedia business is no longer viable. The same will happen with the publication of research articles.

It is better to try to think about a good model.  Consider for example two related ideas, discussed here:

Quick reaction on Gowers’ “Why I’ve joined the bad guys”

Here are some quick comments on the post “Why I’ve joined the bad guys” by Timothy Gowers.  For starters, don’t read only Gowers’ post, but also go and read Orr Shalit’s Worse than Elsevier.

[UPDATE: See also Second thoughts on Gowers’ “Why I’ve joined the bad guys”, it’s more constructive.]

 

I really think this is the worst moment to discuss such subjects.

The long, but not heavy post by Gowers is curious.  Let’s see:

Re: “It is just plain wrong to ask authors to pay to get their articles published“.

Let me begin with the “it is just plain wrong” part. A number of people have said that they find APCs morally repugnant. However, that on its own is not an argument. It reminds me of some objections to stem cell research. Many people feel that that is wrong, regardless of any benefits that it might bring. Usually their objections are on religious grounds, though I imagine that even some non-religious people just feel instinctively that stem-cell research is wrong.

If that is the level of the discussion, then here is an answer on a par with it: what do you think Aaron Swartz would say about such an argument in favour of APCs?

[Who is Aaron Swartz: (wikipedia), (official website), (blog) .]

Re: APC vs APC.

In my previous post about Forum of Mathematics I made a bad mistake, which was to suggest that APC stood for “author publication charge” rather than “article processing charge”.

Ah, OK. So the author pays after, not before.

Forum of Mathematics will not under any circumstances expect authors to meet APCs out of their own pockets, and I would refuse to be an editor if it did. (I imagine the same holds for all the other editors.) Of course, it is one thing to say that authors are not expected to pay, and another to make sure that that never happens. Let me describe the safeguards that will be put in place.

If this is true, then it would amount to the same thing to do it like this: don’t expect authors to pay. If they want to pay in order to help the journal, then they can make a Paypal contribution.

Re: “What??!! How can it cost £500 to process an article?

So how can the costs reach anything like £500? I’ll talk in general terms here, and not specifically about Forum of Mathematics. There are many things that an academic journal does to a paper once it has gone through the refereeing process and been accepted. It does copy-editing, typesetting, addition of metadata, and making sure the article appears on various bibliographic databases.

Short answer: LaTeX and Google Scholar. Organizing peer-review is the only worthwhile service today.

Re: Forum of Mathematics is even worse than Elsevier.

Please tell me where in his post Orr Shalit claims that FoM is worse than Elsevier.

Re: “Authors are doing a service to the world, so making them pay is ridiculous“.

…that service is already done the moment they put their paper on the arXiv or their home page (assuming they do). So why do they bother to publish? As I think everybody agrees, now that we have the internet, the main function left for journals is providing a stamp of quality.

… for money.  Yes, this is the truth; actually, everybody agrees.  These stamps are needed for a reason which has nothing to do with math or science, see below.

There is a big question about whether we actually need journals for that, but that question is independent of the question of who benefits from the service provided by journals.

Let me parse this: the questions

  1. “do we need journals for providing quality stamps?”
  2. “who benefits from this service provided by journals?”

are independent. Say question 1 has the answer “yes”; then it does not matter who benefits by providing a needed service. Say question 1 has the answer “no”; then again it does not matter who benefits from providing a useless service. Hm…

The main person who benefits from the stamp of quality is the author, who boosts his or her CV and has a better chance when applying for jobs and so on.

Yes, everybody knows that this is the reason why researchers feel forced to publish in the old way. So let me translate: the real reason for the existence of journals is to simplify the work of HR departments.

If you feel that APCs are wrong because if anything you as an author should be paid for the wonderful research you have done, I would counter that (i) it is not journals who should be paying you — they are helping you to promote yourself, and (ii) if your research is good, then you will be rewarded for it, by having a better career than you would have had without it.

(i) the purpose of journals used to be the dissemination of knowledge; (ii) the same argument applies to green open access journals.

Re: Maybe a typical article costs around £500 to process under the current system, but do we need what we get for that money?

This is a much more serious question. While I’m discussing it, let me also highlight another misconception, which is that the editors of FoM regard it as a blueprint for the future of all of mathematical publishing.

… well, that is almost enough for a quick reaction. Let’s stop and think about:

“is a misconception [that] the editors of FoM regard it as a blueprint for the future of all of mathematical publishing.”

Here is a last one, though.

Re: I don’t want traditional-style journals with APCs. I want much more radical change.

I basically agree with this, but as I argued in the previous section, I think that there is a case for having APCs at least as a transitional arrangement.

This reminds me of that dinosaur joke.

Unlimited detail and 3D portal engines, or else real-time path tracing

Here are two new small pieces which might, or might not, add to the understanding of how the Unlimited Detail – Euclideon algorithm works. (The last post on this subject is Unlimited detail, software point cloud renderers; you may want to read it.)

3D-portal engines: From this 1999 page “Building a 3D portal engine“, several quotes (boldfaced by me):

Basically, a portal based engine is a way to overcome the problem of the incredible big datasets that usually make up a world. A good 3D engine should run at a decent speed, no matter what the size of the full world is; speed should be relative to the amount of detail that is actually visible. It would of course be even better if the speed would only depend on the number of pixels you want to draw, but since apparently no one has found an algorithm that does that, we’ll go for the next best thing.

A basic portal engine relies on a data set that represents the world. The ‘world’ is subdivided in areas, that I call ‘sectors’. Sectors are connected through ‘portals’, hence the name ‘Portal Engine’. The rendering process starts in the sector that the camera is in. It draws the polygons in the current sector, and when a portal is encountered, the adjacent sector is entered, and the polygons in that sector are processed. This would of course still draw every polygon in the world, assuming that all sectors are somehow connected. But, not every portal is visible. And if a portal is not visible, the sector that it links to doesn’t have to be drawn. That’s logical: A room is only visible if there’s a line of sight from the camera to that room, that is not obscured by a wall.

So now we have what we want: If a portal is invisible, tracing stops right there. If there’s a huge part of the world behind that portal, that part is never processed. The number of polygons that are actually processed is thus almost exactly equal to the number of visible polygons, plus the inserted portal polygons.

By now it should also be clear where portals should be inserted in a world: Good spots for portals are doors, corridors, windows and so on. That also makes clear why portal engines suck at outdoor scenes: It’s virtually impossible to pick good spots for portals there, and each sector can ‘see’ virtually every other sector in the world. Portal rendering can be perfectly combined with outdoor engines though: If you render your landscape with another type of engine, you could place portals in entrances of caves, buildings and so on. When the ‘normal’ renderer encounters a portal, you could simply switch to portal rendering for everything behind that portal. That way, a portal engine can even be nice for a ‘space-sim’…

So let’s dream and ask whether there is any way to construct the database for the 3D scene such that the rendering process becomes an algorithm for finding the right portals, one for each pixel maybe. Something to think about.  The database is not a tree, but, from the input given by the position of the viewer, the virtually available portals (which could be just pointers attached to the faces of octree cells, say, pointing to the faces of smaller cubes which are visible from the bigger face, seen as a portal) organize themselves into a tree. Therefore the matter of finding what to put on a screen pixel could be solved by a search algorithm.
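For reference, here is a minimal hedged sketch of the classical sector-and-portal traversal described in the quoted page. The types and helper functions are my own illustration; a real engine clips the view frustum against each portal polygon (which both limits the work and makes the recursion terminate naturally), while here that test is only stubbed and a depth guard stands in for it.

#include <stddef.h>

typedef struct Sector Sector;

typedef struct Portal {
    const Sector *to;        /* the sector seen through this portal */
    /* + the portal polygon itself */
} Portal;

struct Sector {
    const Portal *portals;
    size_t        portal_count;
    /* + the sector's own polygons */
};

/* stand-ins for the real work */
static void draw_sector_polygons(const Sector *s) { (void)s; }
static int  portal_is_visible(const Portal *p)    { (void)p; return 1; }

/* Start in the sector containing the camera; draw it, then recurse only
   through visible portals, so hidden parts of the world are never touched. */
static void render_sector(const Sector *s, int depth)
{
    if (depth > 64) return;                  /* guard against portal cycles */
    draw_sector_polygons(s);
    for (size_t i = 0; i < s->portal_count; i++)
        if (portal_is_visible(&s->portals[i]))
            render_sector(s->portals[i].to, depth + 1);
}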

As a small bonus, here is the link to a patent of Euclideon Pty. Ltd. : An alternate method for the child rejection process in regards to octree rendering – AU2012903094.

Or else real-time path tracing. Related to Brigade 2, read here, and  a video:

Aaron Swartz

I just learned about this via this post on G+ by , which I reproduce here, with some links and short comments added by myself in order to provide more information.  The parts of the mentioned post will appear as quoted text, and my additions will appear unquoted, but [between brackets].

[Who is Aaron Swartz: (wikipedia), (official website), (blog) .]

This is terribly sad news: Aaron Swartz committed suicide in New York City on Friday, according to this MIT Tech article:
http://tech.mit.edu/V132/N61/swartz.html

Aaron was facing possible life in prison because he allegedly tried to make journal articles public. The Justice Department’s indictment accuses Aaron of connecting his computer to MIT’s network (without authorization — he was at Harvard, not MIT) and sucking down over 1 million already published academic journal articles from the JSTOR database, which the professors and other authors who wrote them probably would have liked to be free to the public anyway. Here’s the indictment:
http://www.scribd.com/doc/60362456/Aaron-Swartz-indictment

Aaron’s scraper wasn’t that well-programmed, and it’s true that he allegedly did this without authorization from MIT. But he downloaded no confidential data from JSTOR — again, these are academic journal articles — and released no data at all. Because Harvard had a JSTOR subscription, Aaron actually had the right to download any of the JSTOR articles for his own use (but not to redistribute or perform a bulk download). Demand Progress compared it to “trying to put someone in jail for allegedly checking too many books out of the library.”

If JSTOR was upset, this seems like the type of wrong that could have been remedied through civil litigation. But JSTOR decided against it, with its general counsel Nancy Kopans telling the New York Times that “we are not pursuing further action” against Aaron:
http://www.nytimes.com/2011/07/20/us/20compute.html?_r=0

[Maybe it is worth mentioning a passage from the linked article: “In 2008, Mr. Swartz released a “Guerrilla Open Access Manifesto,” calling for activists to “fight back” against the sequestering of scholarly papers and information behind pay walls.”]

The especially sad thing is that JSTOR announced this week that it is now making “more than 4.5 million articles” available to the public at no cost:
http://lj.libraryjournal.com/2013/01/academic-libraries/many-jstor-journal-archives-now-free-to-public/

BTW, Aaron helped to create RSS, founded Demand Progress [link to the website], which was active on the anti-SOPA front, and sold Infogami to Reddit (now part of Conde Nast):
http://news.cnet.com/8301-31921_3-20128166-281/copyright-bill-controversy-grows-as-rhetoric-sharpens/

Perhaps Aaron should have been punished for trespassing, which he did do if the DOJ has its facts right. But last fall the Feds instead slapped him with a superseding indictment featuring 13 felony counts that would mean a worst-case scenario of $4M in fines and possible life in prison (I think we can safely say that 50+ years in prison for someone in their late 20s is life):
http://ia700504.us.archive.org/29/items/gov.uscourts.mad.137971/gov.uscourts.mad.137971.53.0.pdf
http://www.techdirt.com/articles/20120917/17393320412/us-government-ups-felony-count-jstoraaron-swartz-case-four-to-thirteen.shtml

I’d be shocked if Congress ever intended for computer crime law to be used this way; I wonder what Lanny Breuer, the head of the DOJ’s criminal division, would say if asked about it the next time he testifies on Capitol Hill. Perhaps Breuer would say this case is a model of restraint: after all, Aaron wasn’t charged with violating the No Electronic Theft Act, which would have added yet another set of felony charges! Paging Harvey Silverglate…
http://www.harveysilverglate.com/Books/ThreeFeloniesaDay.aspx

I never met Aaron, and don’t know what led him to this point. Perhaps it was unrelated to the criminal charges. But it’s very sad news, and I can’t help thinking that the possibility of life behind bars, because of alleged bulk downloading that many Americans might be surprised even qualifies as a crime, led to Aaron’s decision to commit suicide. His criminal trial was scheduled to begin in two months.

Zork’s bloogorithm is counterfactual thinking

Yes, I am speaking about Scott Aaronson’s post “Zork’s bloogorithm”, which comes after his excellent “Happy New Year! My response to M. I. Dyakonov” (with an outstanding and funny list of comments, a very good read).

In order to avoid any misunderstanding, here is the structure of the post. Readers may go directly to the part which interests them most, according to personal preference.

  • Proof that Scott’s argument is counterfactual thinking, (dry, not funny)
  • Steampunk and the belief in universal mathematics, (funny hopefully, but controversial)
  • My innocent opinion on QC. (?)
  • Proof that Scott’s argument is counterfactual thinking.  Here is the argument which I claim is counterfactual:

Let me put it this way: if we ever make contact with an advanced extraterrestrial civilization, they might have three sexes and five heads. But they, too, will have encountered the problem of factoring integers into primes. Indeed, because they’ll inhabit the same physical universe as we do, they’ll even have encountered the problem of simulating quantum physics. And therefore, putting the two together, they’ll almost certainly have discovered something like Shor’s algorithm — though they’ll call it “Zork’s bloogorithm” or whatever.

This passage can be put in the following form:

  1. Aliens  inhabit the same physical universe as we do
  2. We encountered the problem of simulating quantum physics
  3. Therefore the aliens almost certainly have discovered something like Shor’s algorithm — though they’ll call it “Zork’s bloogorithm” or whatever.

This is counterfactual because in this world we don’t know whether aliens exist; therefore we cannot turn this into an argument that quantum computing is one of those ideas which are so strong because they are unavoidable on the path to understanding the universe. Mind you, it might be so or not, but the “Zork’s bloogorithm” argument does not hold; that’s my point here.

  • Steampunk and the belief in universal mathematics.

Preceding “Zork’s bloogorithm” argument is the belief of Scott (and many others) that (boldfaces are mine):

How do I know that the desire for computational power isn’t just an arbitrary human quirk?

Well, the reason I know is that math isn’t arbitrary, and computation is nothing more or less than the mechanizable part of solving math problems.

To this, bearing also in mind that the argument by aliens is counterfactual, I responded

By the same argument steampunk should be history, not alternative history.

Read also comments Vadim #47 and Scott #49, which fail to notice that steampunk and Scott’s argument by aliens are BOTH examples of counterfactual thinking.

To this, I pretended to be an alien and responded (comment #52 still in limbo, awaiting moderation):

…let’s take the first line from the wiki page on steampunk: “Steampunk is a sub-genre of science fiction that typically features steam-powered machinery, especially in a setting inspired by industrialized Western civilization during the 19th century.”
Now, as an alien, I just extended my tonsils around the qweeky blob about hmm, let me translate with gooble: “alternative reality based fiction on the past 5^3 years greatest idea”, and it tastes, excuse me, reads, as:
“Turinkpunk is a sub-genre of science fiction that typically features computing machinery, especially in a setting inspired by industrialized Western civilization during the 20th century.”
[I localised some terms, for example 1337=industrial Western civilization and OMG=20th century]

(Funny, right?)

Besides arguing about thinking fallacies: what do YOU think, is math arbitrary, i.e. a cultural construct, or is it unavoidable? Contrary to many, I think that it is a construct; for example, I think that Euclidean geometry is based on viral Pythagorean ideas (see the posts about gnomons everywhere, here and here), that curvature is a notion not yet completely understood and culturally charged, and that the Baker-Campbell-Hausdorff formula is still too commutative, to give three examples.

  • My innocent opinion on QC. 

I am completely outside the QC realm but, in order to dispel any misunderstanding, here is what I think about quantum computation.

First of all, I believe that the research around the idea of computation is so profound that it represents the third great advance in the history of humankind, after the ancient Greek philosophers and Newton (and the founding fellows of the Royal Society).

As a part of this, quantum computation is a natural thing to explore. I believe that at some point a device will be constructed which everybody will agree to call a quantum computer.

But, being an optimistic person, I don’t believe that the Turing machine is the last great idea in the history of humankind (hence my comparison between steampunk and those arguments for QC: steam engines were the great idea of the industrial revolution, but not the last great idea, so let us not assume now that QC is the last great idea).

Arguments that there is life after computation are to be found in the life sciences, where mysteries abound and computation-based ideas are ineffective. Available mathematics seems ineffective as well, in stark contradiction with the well-known old opinion about “The Unreasonable Effectiveness of Mathematics in the Natural Sciences”.

This is not meant as an appeal to use magic instead of mathematics; it is only a sign that many more new ideas in mathematics (and computation) await discovery in the future.

Can you turn Nalebinding into a Turing machine?

Is Nalebinding (suitable to use as) another ancient Turing machine? In a previous series of posts I argued that the three Moirai (aka the Fates) have this capability, see:

From the Wikipedia page about Nalebinding:

“Nålebinding (Danish: literally “binding with a needle” or “needle-binding”, also naalbinding, nålbinding or naalebinding) is a fabric creation technique predating both knitting and crochet. Also known in English as “knotless netting,” “knotless knitting,” [1] or “single needle knitting,” the technique is distinct from crochet in that it involves passing the full length of the working thread through each loop, unlike crochet where the work is formed only of loops, never involving the free end.”

“Nålebinding works well with short pieces of yarn; based on this, scholars believe that the technique may be ancient, as long continuous lengths of yarn are not necessary. The term “nålebinding” was introduced in the 1970s.[1]

The oldest known samples of single-needle knitting include the color-patterned sandal socks of the Coptic Christians of Egypt (4th century CE), and hats and shawls from the Paracas and Nazca cultures in Peru, dated between 300 BCE and 300 CE.[2][3]”

Here is a pair of ancient Egyptian socks (courtesy of this wiki page) made by the nalebinding technique.

socks

OK, so this is an ancient technique which bears a recent Danish name. I googled for math related to nalebinding and got some links pointing to hyperbolic surfaces and other nice math visualisations (google for your own pleasure), but I refrain from giving the said links, because nalebinding has a huge fan base which might be attracted to this post by inadvertence. This post is not, I repeat, NOT about the nalebinding hobby.

According to what I could grasp from this very interesting page, there exists a scientific notation for nalebinding stitches, which was introduced in the article (quoted from the said page):

Hansen, Egon H. “Nalebinding: definition and description.” Textiles in Northern Archaeology: NESAT III Textile Symposium in York 6-9 May 1987, ed. Penelope Walton and John P. Wild, pp. 21-27. London: Archetype Publications, 1990.

Presents a notational system for describing nålebinding stitches that is based on the course the thread takes in traversing each stitch. No historical information included, but lots of technical plates of interlacement variants.

[I need a copy of this article. I have not been able to obtain it so far; please, could anybody send me a pdf?]

I think Nalebinding could be turned into a Turing machine. Here comes the more technical part.  Indeed, in graphic lambda calculus the main move is the graphic beta move (which corresponds to beta reduction in lambda calculus).

betar
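For readers who want the lambda calculus counterpart spelled out (this is standard untyped lambda calculus, recalled here only as a reminder, not a claim about the graphic version): the move above plays the role of beta reduction,

(\lambda x . A) \, B \rightarrow A[x:=B]

where A[x:=B] denotes the substitution of B for the free occurrences of x in A. The graphic beta move acts on graphs, with no variable names and no substitution operation.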

Now, this graphic beta move can be put into the form of a braiding move. First we define the crossing macro:

betar_dif_11

then we re-write the graphic beta move as:

betar_dif12

Once untyped lambda calculus is put into nalebinding notation, it is at least in principle possible to construct a Turing machine. In practice, it might be more feasible to directly construct one.

Anybody picking up the (nalebinded) glove?

Unlimited detail, software point cloud renderers

This is a continuation of the discussion from the previous posts dedicated to this subject; only this time I want to comment briefly on the Euclideon algorithm side.

In one comment by JX there is a link to this image:

description

Let me copy the relevant bits:

“There is no relation between scene size or complexity and framespeed. Only window resolution affects performance. There is no profit from repetitive models, bigger scenes just use more disk space, no matter how unique models it has. Search algorithm is used, but it is NOT raytracing, there’s no rays.”

In a previous comment JX writes:

“I also don’t use octree or any tree for now. Neither DM (don’t even understood it fully).”

Well, that’s a big difference (if true) between JX (and Unlimited Detail) on one side and other software point cloud renderers on the other. I shall present two examples in the next videos. Their authors use octrees, and their demonstrations are much more impressive than JX’s picture.

UPDATE: Check out the comment by JX, answering some implicit questions.
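For readers who wonder what “using an octree” means in this context, here is a minimal sketch of a sparse octree over a point cloud, with a coarse level-of-detail query. This is only my own illustration: all names (OctreeNode, insert, collect) are made up, and this is not JX’s algorithm, not the Euclideon algorithm, and not the code behind the videos.

# Minimal sparse octree over a 3D point cloud (illustration only; assumed names).
# Each node covers a cube; points are pushed down to a fixed maximum depth.

class OctreeNode:
    def __init__(self, center, half_size):
        self.center = center          # (cx, cy, cz), center of the cube
        self.half_size = half_size    # half of the cube edge length
        self.children = {}            # octant index (0..7) -> OctreeNode
        self.point = None             # one representative point stored per finest cell

    def octant(self, p):
        # Index of the child cube containing point p.
        cx, cy, cz = self.center
        return (p[0] >= cx) | ((p[1] >= cy) << 1) | ((p[2] >= cz) << 2)

    def child_center(self, idx):
        q = self.half_size / 2.0
        cx, cy, cz = self.center
        return (cx + (q if idx & 1 else -q),
                cy + (q if idx & 2 else -q),
                cz + (q if idx & 4 else -q))

    def insert(self, p, depth, max_depth=8):
        if depth == max_depth:
            self.point = p            # keep one sample per finest cell
            return
        idx = self.octant(p)
        if idx not in self.children:
            self.children[idx] = OctreeNode(self.child_center(idx), self.half_size / 2.0)
        self.children[idx].insert(p, depth + 1, max_depth)

    def collect(self, depth):
        # Level-of-detail query: one representative per occupied node at the given depth.
        if depth == 0 or not self.children:
            yield self.point if self.point is not None else self.center
        else:
            for child in self.children.values():
                yield from child.collect(depth - 1)


if __name__ == "__main__":
    import random
    root = OctreeNode(center=(0.0, 0.0, 0.0), half_size=1.0)
    for _ in range(1000):
        root.insert((random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1)), depth=0)
    # Coarse view of the scene: far fewer points than the original cloud.
    print(len(list(root.collect(depth=3))))

The only point of the sketch is that an octree gives a cheap way to trade detail for speed (render one representative per coarse cell when the scene is far away), which is precisely the kind of data structure JX claims not to need.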

 

Let’s use the zipper

I shall use the zipper macro to improve upon the computation done in “Emergent sums and differences in graphic lambda calculus”, as an example of computation in graphic lambda calculus.

We have to glue the following graphs, by respecting the red labels

emer_zipper_2

in order to obtain the graph which is the subject of manipulation in  the mentioned post:

emer_zipper_3

This is what zippers are for. We shall use a 4-zipper.

emer_zipper_4

I shall let you glue all the graphs in the right places, as indicated by the red labels. It is amusing to see here a mixture of lambda gates, application gates and emergent algebra dilation gates.
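For readers coming from lambda calculus, here is a rough analogy (my paraphrase, not the definition; see the tutorial page for the precise statement of the zipper): unzipping a k-zipper corresponds to performing the chain of k beta reductions

(\lambda x_{1} . \lambda x_{2} \ldots \lambda x_{k} . A) \, B_{1} \, B_{2} \ldots B_{k} \rightarrow A[x_{1}:=B_{1}, \ldots, x_{k}:=B_{k}]

(up to the usual caveats about variable capture), done in one combined step. In the computation above, the 4-zipper bundles four graphic beta moves in the same way.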

Peer-review is Cinderella’s lost shoe

Scientific publishers are in some respects like Cinderella. They used to provide an immense service to the scientific world, by disseminating  new results and archiving old results into books. Before the internet era, like Cinderella at the ball, they were everybody’s darling.

Enter the net. At the last moment, Cinderella tries to run from this new, strange world.

954930-opera-queensland-cinderella

(image taken from here)

Cinderella does not understand what happened so fast. She was used to scarcity (of economic goods), to the point that she believed everything would stay like this all her life!

What to do now, Cinderella? Will you sell open access for gold? [UPDATE: or will you appeal to the courts?]

cinderella-disneyscreencaps-com-7165

(image found here)

But wait! Cinderella forgot something. Her lost shoe, the one she discarded when she ran away from the ball.

In the scientific publishers’ world, peer-review is the lost shoe. (We may as well say that, up to now, researchers who write peer-reviews are like Cinderella too: their work is completely unrewarded and neglected.)

In the internet era the author of a scientific research paper is free to share her/his results with the scientific world by archiving a preprint version of the paper in free access repositories. Moreover, the author HAS to do this, because the net offers a much better dissemination of results than any old-time publisher. Making a research paper scarce by constructing pay-walls around it is clearly a very bad idea if the author’s ideas are to survive. The only thing which gold open access does better than green open access is that the authors pay the publisher for doing the peer review (while in the case of arxiv.org, say, the archived articles are not peer-reviewed).

Let’s face it: the publisher cannot artificially make the articles scarce; it is a bad idea. What a publisher can do is to let the articles be free and to offer the peer-review service.

At this moment, like Cinderella discarding her shoe, the publisher throws away the peer-reviews (made gratis by fellow researchers) and tries to sell the article which has acceptable peer-review reports.

Why not the inverse? The same publisher, using the infrastructure it has, may try to sell the peer-review reports of freely archived articles AFTERWARDS. There is a large quantity of articles which are freely available in open access repositories like arxiv.org. They are already “published”, according to the new rules of the game. Only they are not reviewed.

Let the publishers do this! It would be a service that is needed, unlike the dissemination-of-knowledge service, which is clearly obsolete. (See also Peer-review turned on its head has market value.)

Emergent sums and differences in graphic lambda calculus

See the page Graphic lambda calculus for background.

Here I want to discuss the treatment of one identity concerning approximate sums and differences in emergent algebras. The identity is the following:

\Delta^{x}_{\varepsilon}(u, \Sigma^{x}_{\varepsilon}(u,v)) = v

The approximate sum (maybe emergent sum would be a better name) \Sigma^{x}_{\varepsilon}(u,w) has the following associated graph in GRAPH:

emer_sum_1

The letters in red “x, u, w, \Sigma” are there only for the convenience of the reader.

Likewise, the graph in GRAPH which corresponds to the approximate difference (or emergent difference) \Delta^{x}_{\varepsilon}(u,w) is the following:

emer_diff_1

The graph which corresponds to \Delta^{x}_{\varepsilon}(u, \Sigma^{x}_{\varepsilon}(u,v)) is this one:

emer_dif_sum_1

By a succession of CO-ASSOC moves we arrive at this graph:

emer_dif_sum_2

We are ready to apply an R2 move to get:

emer_dif_sum_3

We now use an ext2 move at the node marked by “1”:

emer_dif_sum_4

followed by local pruning

emer_dif_sum_5

Here comes the funny part! We cannot continue unless we work with a graph where at the edges marked by the red letters “x, u” we put two disjoint (not connected by edges) graphs in GRAPH, say X, U:

emer_dif_sum_6

Let us suppose that from the beginning we had X, U connected at the edges marked by the red letters x, u, and let us proceed further. My claim is that by three GLOBAL FAN-OUT moves we can perform the following move:

emer_dif_sum_7

We use this move and we obtain:

emer_dif_sum_8

As previously, we use an R2 move and another ext2 move to finally obtain this:

emer_dif_sum_9

which is the answer we were looking for. We could use GLOBAL PRUNING to get rid of the part of the graph which ends with a termination gate.
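As an independent sanity check (not part of the graphic computation above, and done in the simplest possible model), one can verify the identity numerically in the Euclidean case, where the dilation is \delta^{x}_{\varepsilon} u = x + \varepsilon(u-x). The dilation-based formulas for the approximate sum and difference used below are the conventions I am working with; treat the exact formulas as an assumption of this sketch and check them against the graphs above.

# Numerical check of Delta^x_eps(u, Sigma^x_eps(u, v)) = v in the Euclidean model.
# Assumed conventions (an assumption of this sketch, not taken verbatim from the post):
#   dilation:      delta(x, eps, u)    = x + eps*(u - x)
#   approx. sum:   Sigma(x, eps, u, w) = delta(x, 1/eps, delta(delta(x, eps, u), eps, w))
#   approx. diff:  Delta(x, eps, u, w) = delta(delta(x, eps, u), 1/eps, delta(x, eps, w))
import random

def delta(x, eps, u):
    return x + eps * (u - x)

def approx_sum(x, eps, u, w):
    return delta(x, 1.0 / eps, delta(delta(x, eps, u), eps, w))

def approx_diff(x, eps, u, w):
    return delta(delta(x, eps, u), 1.0 / eps, delta(x, eps, w))

random.seed(0)
for _ in range(5):
    x, u, v = (random.uniform(-10, 10) for _ in range(3))
    eps = random.uniform(0.01, 1.0)
    lhs = approx_diff(x, eps, u, approx_sum(x, eps, u, v))
    print(abs(lhs - v) < 1e-9)   # expected: True every time

Of course the Euclidean model is commutative and the check is almost trivial there; the interest of the graphic computation is that the identity follows from the moves alone, without assuming any linear structure.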

Approximate groupoids again

This post is for future (google) reference for my project of relating approximate groups with emergent algebras. I would appreciate any constructive comment which could validate (or invalidate) this path of research.

Here is the path I would like to pursue further. The notion of approximate groupoid (see here for the definition) is not complete, because it is flattened, i.e. the set of arrows K should be seen as a set of variables. What I think is that the correct notion of approximate groupoid is a polynomial functor over groupoids (precisely, a specific family of such functors). The category Grpd is cartesian closed, so it has an associated model of (typed) lambda calculus. Using this observation, I could apply emergent algebra techniques (in the form of my graphic lambda calculus, which was developed with, and partially funded by, this application in mind) to approximate groupoids, in the hope of obtaining streamlined proofs of Breuillard-Green-Tao type results.

The origin of emergent algebras

In the last section, “Why is the tangent space a group?” (section 8), of the great article by A. Bellaiche, The tangent space in sub-riemannian geometry*, the author tells a very interesting story, in which the names of Gromov and Connes appear; it is the first place, to my knowledge, where the idea of emergent algebras appears.

In a future post I shall comment at more length on the math, but this time let me give you the relevant passages.

[p. 73] “Why is the tangent space a group at regular points? […] I have been puzzled by this question. Drawing a Lie algebra from the bracket structure of some X_{i}‘s did not seem to me the appropriate answer. I remember having, at last, asked M. Gromov about it (1982). The answer came under the form of a little apologue:

Take a map f: \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}. Define its differential as

(79)              D_{x} f(u) = \lim_{\varepsilon \rightarrow 0} \varepsilon^{-1} \left[ f(x+\varepsilon u) - f(x) \right],

provided convergence holds. Then D_{x}f is certainly homogeneous:

D_{x}f(\lambda u) = \lambda D_{x}f(u),

but it need not satisfy the additivity condition

D_{x}f(u+v) = D_{x}f(u) + D_{x}f(v).

[…] However, if the convergence in (79) is uniform on some neighbourhood of (x,0) […] then D_{x}f is additive, hence linear. So, uniformity was the key. The tangent space at p is a limit, in the [Gromov-]Hausdorff sense, of pointed spaces […] It certainly is a homogeneous space — in the sense of a metric space having a 1-parameter group of dilations. But when the convergence is uniform with respect to p, which is the case near regular points, in addition, it is a group.

Before giving the proof, I want to tell of another, later, hint, coming from the work of A. Connes. He has made significant use of the following observation: The tangent bundle TM to a differentiable manifold M is, like M \times M, a groupoid. […] In fact TM is simply a union of groups. In [8], II.5, it is stated that its structure may be derived from that of M \times M by blowing up the diagonal in M \times M. This suggests that, putting metrics back into the picture, one should have

(83)          TM = \lim_{\varepsilon \rightarrow 0} \varepsilon^{-1} (M \times M)

[…] in some sense to be made precise.

There is still one question. Since the differentiable structure of our manifold is the same as in Connes’ picture, why do we not get the same abelian group structure? One can answer: The differentiable structure is strongly connected to (the equivalence class of) Riemannian metrics; differentiable maps are locally Lipschitz, and Lipschitz maps are almost everywhere differentiable. There is no such connection between differentiable maps and the metric when it is sub-riemannian. Put in another way, differentiable maps have good local commutation properties with ordinary dilations, but not with sub-riemannian dilations \delta_{\lambda}.

So, one should not be abused by (83) and think that the algebraic structure of T_{p}M stems from the absolutely trivial structure of M \times M! It is concealed in dilations, as we shall now prove.

*) in the book Sub-riemannian geometry, eds. A. Bellaiche, J.-J. Risler, Progress in Mathematics 144, Birkhauser 1996
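A small worked example of the phenomenon Bellaiche points at, namely homogeneity without additivity when the convergence in (79) is only pointwise (this is my own illustration, not taken from the article): take f: \mathbb{R}^{2} \rightarrow \mathbb{R} defined by

f(u_{1}, u_{2}) = u_{1}^{2} u_{2} / (u_{1}^{2} + u_{2}^{2}) for (u_{1},u_{2}) \neq (0,0), and f(0,0) = 0.

Since f(\varepsilon u) = \varepsilon f(u) for every \varepsilon \neq 0, the limit (79) at x = 0 exists and equals D_{0}f(u) = f(u), which is homogeneous. But D_{0}f(1,0) + D_{0}f(0,1) = 0 while D_{0}f(1,1) = 1/2, so D_{0}f is not additive, hence not linear; by the quoted criterion, the convergence in (79) cannot be uniform on any neighbourhood of (0,0).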

One more argument for open access publication

… and why some scientists might dislike it.

specialist

The “fugitive idiot” is inspired by Clifford Truesdell’s book “An Idiot’s Fugitive Essays on Science: Methods, Criticism, Training, Circumstances”, Springer-Verlag, 1984, which is a must-read for the history of classical thermodynamics in particular.

Extensionality in graphic lambda calculus

This is part of the Tutorial:Graphic lambda calculus.

I want to discuss here the introduction of extensionality in the graphic lambda calculus. In some sense, extensionality is already present in the emergent algebra moves. Have you noticed the move “ext2”?

However, the eta-reduction from untyped lambda calculus needs a new move. I called it the ext1 move in arXiv:1207.0332 [cs.LO], paragraph 2.7. It is a global move, because in order to use it one has to check a global condition (a condition without a bound on the number of nodes and edges involved). In the mentioned paper I stated that the move applies only to graphs which represent lambda calculus terms, but now I see no reason why it has to be confined to this sector, therefore here I shall formulate the ext1 move in more generality.

Move ext1.  If there is no oriented path from “2” to “1” outside the left hand side of the picture below, then one may replace this left hand side by an edge. Conversely, if there is no oriented path connecting “2” with “1”, then one may replace the edge with the graph from the left hand side of the following picture:

ext1r

This move acts like eta-reduction when translated back from graphs to lambda calculus terms. “Ext” comes from “extensionality”.
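For reference, the eta-reduction that ext1 mimics is (standard untyped lambda calculus, recalled here only as a reminder):

\lambda x . (A \, x) \rightarrow A, provided x does not occur free in A.

As far as I read it, the side condition “x not free in A” is what the global condition about oriented paths from “2” to “1” encodes on the graph side.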

Let us see why we need to formulate the move like this. Suppose that we eliminate the global condition that there is no oriented path from “2” to “1” outside the left hand side of the previous picture. In particular, let us suppose that there is an edge from “2” to “1” which completes the graphs from the LHS of the previous picture. For this graph, which appears at the left in the next figure, we may use the graphic beta move like this (numbers in red indicate how we use the graphic beta move):

non_ext1_move

If we could apply the ext1 move to this graph, then the result would be the following:

non_ext1_move2

Not only is the result of this move one loop, but it is the “wrong” loop, in the sense that this loop is obtained by closing the edge which vanishes into thin air when the graphic beta move is applied. There is no contradiction though, unless we wish to decorate the edges (i.e. to evaluate the result of the computation). It is just strange.

There is one question left: why is the move called “ext2”, which is one of the emergent algebra moves, also an extensionality move? A superficial answer is that the move ext2 says that the dilation of coefficient “1” is the identity function.

_____________________________

Return to Tutorial:Graphic lambda calculus