Vote! Pro or con about having comments in future OA journals?

The vote is anonymous.

UPDATE: Amazing, there are so many views and almost nobody votes.

See the discussion of this subject at Gowers’ “Good guys” post.  See also the tag “comments in epijournals” on this blog.

Straw-man argument against comments in epijournals

This is a comment which has been awaiting moderation (for some time) at Gowers’ “Good guys” post, so I post it here, with some links added:

After reading the rather heated exchanges around the subject of comments in epijournals, I am surprised that the best argument against comments that people here were able to find relies on conflating comments in epijournals with comments on blogs.

I cannot imagine anyone who would want comments in epijournals (or any other OA model) of the same quality as those on the average blog.

Therefore my impression is that much of the discussion here is just an example of a straw-man fallacy.

It is enough to look around and see that there are models that could inspire us.

I have proposed, in several comments and posts like this one or the other, to consider comments in OA journals on a par with the talk pages of Wikipedia, and peer-reviews as wiki pages.

Others have proposed MathOverflow or Reddit as models. Any of these proposals is stellar compared to comments on blogs.

Besides, I doubt very much that there is a majority against comments and I believe that Mike Taylor is only more vocal than others and for this he deserves some congratulations (and some respect, as a fellow scientist).


On peer-review and the big value it may have for epijournals, or even as a publishing business model, see also the posts:

The three main sectors of graphic lambda calculus

For a web tutorial on graphic lambda calculus go here.  It will contain detailed explanations concerning the three main sectors of graphic lambda calculus; for the moment, here is a brief description of them.


A sector of the graphic lambda calculus is:

  • a set of graphs, defined by a local or global condition,
  • a set of moves from the list of all moves available.

The name “graphic lambda calculus” comes from the fact that it has untyped lambda calculus as a sector. In fact, there are three important sectors of graphic lambda calculus:
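To fix the shape of the definition above, here is a minimal illustrative sketch in Python. The class name, the predicate, and the toy moves are mine, not part of the formalism: a sector simply pairs a membership condition on graphs with a chosen subset of the available moves.

```python
# Illustrative sketch only: a sector = (set of graphs given by a condition,
# subset of the available moves). Names and the toy example are mine.

class Sector:
    def __init__(self, name, contains, moves):
        self.name = name          # e.g. "untyped lambda calculus"
        self.contains = contains  # predicate graph -> bool (local or global condition)
        self.moves = moves        # subset of the list of all available moves

# A toy sector that admits every graph and allows two (hypothetical) moves.
toy = Sector("toy", contains=lambda graph: True, moves={"beta", "fan-out"})

assert toy.contains(object())
assert "beta" in toy.moves
```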

Teaser: 2D UD

Here are two images which may (or may not) give an idea about another fast algorithm for real-time rendering of 3D point cloud scenes (but note that the images are drawn for the baby model: 2D point cloud scenes).  The secret lies in the database.

I have a busy schedule in the coming weeks and I had to get this out of my system. Therefore, if anybody gets it, please send me a comment here. Has this been done before? Does it work?

Now, the images: the first has no name


The second image is a photo of the Stoa Poikile, taken from here:


Hint: this is a solution to the ray shooting problem (read it) which eliminates trigonometry, ray shooting, and intersection computations, using only addition (once the database is properly built). Moreover, the database organized as in the pictures cannot be bigger than the original one, so it is also a compression of the original database.
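For what it is worth, here is one possible reading of the hint in the 2D baby model; this is my guess only, not the author’s algorithm. The idea: precompute, for each viewpoint on a grid, the nearest point in each of a fixed set of direction bins. The trigonometry is paid once, while building the database; at render time there are only lookups and comparisons, and the index stores at most one entry per direction bin per viewpoint, so it cannot exceed the size of the original point set per viewpoint.

```python
import math

# Hypothetical sketch of the hint, for a 2D point cloud (not the author's method).

def build_index(points, viewpoints, n_dirs=360):
    """Offline: for each viewpoint, record the nearest point per direction bin."""
    index = {}
    for v in viewpoints:
        seen = [None] * n_dirs
        dist = [float("inf")] * n_dirs
        for p in points:
            dx, dy = p[0] - v[0], p[1] - v[1]
            # trigonometry happens only here, during preprocessing
            b = int(n_dirs * (math.atan2(dy, dx) % (2 * math.pi)) / (2 * math.pi)) % n_dirs
            d = dx * dx + dy * dy
            if d < dist[b]:
                dist[b], seen[b] = d, p
        index[v] = seen          # at most n_dirs entries per viewpoint
    return index

def render(index, camera):
    """Real time: no trig, no intersections; pick the nearest indexed viewpoint."""
    v = min(index, key=lambda w: (w[0] - camera[0]) ** 2 + (w[1] - camera[1]) ** 2)
    return index[v]

pts = [(1, 0), (0, 2), (-3, 0)]
idx = build_index(pts, [(0, 0)], n_dirs=4)
assert idx[(0, 0)][0] == (1, 0)   # bin 0 looks along +x
assert render(idx, (0.1, 0.1))[1] == (0, 2)
```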


See the solution for an unlimited detail algorithm given by JX here and here.

Peer review doesn’t protect against plagiarism and article retractions. Why?

After reading one more post from the excellent blog Retraction Watch, this question dawned on me: if classical peer review is such a good thing, then why is it rather inefficient at detecting flaws or plagiarism cases which are later exposed on the net?

Because I have seen implicit and explicit blaming of:

  • authors, seeking to publish as many papers as possible (because only their number counts, not their content)
  • journals, seeking to fill their pages at any cost, also failing to protect the authors who gave them their copyrights.

There is a missing link in this chain: what about the peer-reviews? I bet that many articles submitted for publication are rejected as a consequence of peer-review reports which detect flaws or plagiarism attempts. However, many other papers pass the peer-review filter and are published, only to be found flawed or plagiarized later.

I think this is the strongest argument against the old-ways, let’s-talk-in-private practice.  It shows that even if the great majority of researchers are honest and committed to best practices in the field, the very few who try to trick, to “boost” their CVs, escape undetected during the classical peer-review process because of the tradition of talking in private about research, of following the authority paths, and so on. This practice was not bad at all before the net era; it was simply a part of the immune system of the research community. On the other hand, there is no reason to believe that flawed or plagiarized articles are more frequent now than before. The difference which makes such articles easier to detect is the net, which allows public expressions of doubt and fast communication of evidence (“don’t believe me, here is the link to the evidence, make your own opinion”).

Don’t you think that peer-review could get better, not worse, by becoming a public activity which results from the contribution of (few or many) peers?



Diorama, Myriorama, Unlimited detail-orama

Let me tell in plain words the explanation by JX about how a UD algorithm might work (it is not just an idea; it is supported by proofs and experiments, go and see this post).

It is too funny! It is the computer version of a diorama. It is an unlimited-detail-orama.

Before giving the gist of JX’s explanation, let’s think: have you ever seen a totally artificial construction which, when you look at it, tricks your mind into believing you are looking at an actual, vast piece of landscape, full of infinite detail? Yes, right? This is a serious thing, actually; it raises many questions about how much the 3D visual experience of a mind-bogglingly huge database of 3D points can be compressed.

Indeed, JX explains that his UD type algorithm has two parts:

  • indexing: start with a database of 3D points, like a laser scan. Then, produce another database of cubemaps centered in a net of equally spaced “centerpoints” which cover the 3D scene. The cubemaps are done at screen resolution, obtained as a projection of the scene on a reasonably small cube centered at the centerpoint. You may keep these cubemaps in various ways, one of these is by linking the centerpoint with the visible 3D points. Compress (several techniques suggested).   For this part of the algorithm there is no time constraint, it is done before the real-time rendering part.
  • real-time rendering: given the camera position, take only the points seen from the closest centerpoint, get its cubemap, and improve it by using previous and/or neighbouring cubemaps. Take care to fill the holes which appear when you change the point of view.
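The two steps above can be sketched as follows. This is a toy stand-in under my own assumptions (visibility reduced to a nearest-points heuristic, hole filling left as a comment), not JX’s implementation:

```python
# Skeletal sketch of the two-part pipeline: offline indexing, then cheap lookups.
# All names are mine; the "cubemap" here is just the set of points a centerpoint sees.

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def index_scene(points, centerpoints, resolution):
    """Offline (no time constraint): one 'cubemap' per centerpoint."""
    cubemaps = {}
    for c in centerpoints:
        # toy visibility: keep the `resolution` points nearest to c
        cubemaps[c] = sorted(points, key=lambda p: dist2(c, p))[:resolution]
    return cubemaps

def render_frame(cubemaps, camera):
    """Real time: look up the cubemap of the closest centerpoint."""
    c = min(cubemaps, key=lambda cp: dist2(cp, camera))
    return cubemaps[c]   # then fill holes using neighbouring cubemaps

pts = [(0, 0, 1), (5, 5, 5), (1, 1, 1)]
maps = index_scene(pts, centerpoints=[(0, 0, 0)], resolution=2)
frame = render_frame(maps, camera=(0.1, 0, 0))
assert frame == [(0, 0, 1), (1, 1, 1)]
```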

Now, let me show you that this has been done before, in meatspace.  And even more, with animation! Go and read this; it is too funny:

  • The Daguerre Dioramas. Here’s (actually an improved version of) your cubemap, JX: (image taken from the linked wiki page)


  • But maybe you don’t work in the geospatial industry and you don’t have render farms and huge data available. Then you may use a Myriorama, with palm trees, gravel, statues, themselves rendered as dioramas. (image taken from the linked wiki page)


  • Would you like to do animation? Here it is; look at the nice choo-choo train (polygon-rendered, at scale)


(image taken from this wiki page)

Please, JX, correct me if I am wrong.

Comments in epijournals: we may learn from Wikipedia

I think comments in epijournals (or whatever other form of Open Access from A to Z) should be considered a service to the community. Don’t take my word for it; please form your own opinion, considering the following adaptation of Wikipedia:Core content policies.

The motivation of this post is to be found in the dispute over the value of commenting, happening in the comments to the post “Why I’ve also joined the good guys” by Tim Gowers. There you may find both pros and cons for allowing comments to articles “published” in epijournals.  Among the cons were comparisons of such comments to comments in blogs, fear that comments will actually damage the content, fear that they will add too much noise and so on.

In reply, I mentioned Wikipedia in one comment, because Wikipedia is one big example of a massively networked collaboration which does provide quality content, even though it is not hierarchically regulated. Please consider this: Wikipedia has a way to deal with vandalism, noise, propaganda and many other negative phenomena which, in the opinion of some, may damage those epijournals willing to provide the service of commenting on published articles.

I shall try therefore to learn from Wikipedia’s experience. The wikipedians have evolved a set of principles, guidelines and policies which may be adapted to solve this problem our community of mathematicians has.

In fact, maybe Wikipedia’s rules could also improve the peer-review system. After all, if we are after a system which selects informed comments made by our peers, then we are talking about a kind of peer review.

What is the purpose of comments? Are they the same as peer-review?

These are questions I have not seen asked; please send me links to any relevant sources where such questions were considered.

Here is my proposal, rather sketchy at this moment (it should be like this; only public discussion can improve it, or kill it if inappropriate).

We may think about peer-reviews and comments as if they were wiki pages. Taking this as a hypothesis, they must conform at least to the Wikipedia:Core content policies:

  • Neutral point of view: “All comments and peer-reviews must be written from a neutral point of view, representing significant views fairly, proportionately and without bias.”
  • Verifiability: “Material challenged in comments and peer-reviews, and all quotations, must be attributed to a reliable, published source. Verifiability means that people reading the epijournal can check that information comes from a reliable source.”
  • No original research: “All material in comments and peer-reviews must be attributable to a reliable, published source. Comments and peer-reviews may not contain any new analysis or synthesis of published material that serves to advance a position not clearly advanced by the sources.”

To those who will discard this proposal by saying that it is not possible to achieve these policies in practice, I recall: Wikipedia exists. Let’s learn from it.