Tag Archives: open peer review

Open peer review is something others should do, Open science is something you could do

This post follows Peer review is not independent validation, where it is argued that independent validation is one of the pillars of the scientific method. Peer review is only a part of the editorial process. Of course peer review is better than nothing, but it is only a social form of validation, much less rigorous than what the scientific method demands.

If the author follows the path of Open science, then the reader has the means to perform an independent validation. This is great news, here is why.

It is much easier to do Open science than to change the legacy publishing system.

Many interesting alternatives to legacy publishing have been proposed already. There is green OA, there is gold OA (gold is for $), there is arXiv.org. There are many other versions, but the main problem is that research articles are not considered really serious unless they are peer reviewed. Legacy publishing provides this; it is actually the only service it provides. People are used to reviewing for established journals, and any alternative publishing system has to be able to compete with that.

So, if you want to make an OA platform, it’s not serious unless you find a way to make other people peer review the articles. This is hard!

People are slowly understanding that peer review is not what we should aim for. We are so used to the idea that peer review is that great thing which is part of the scientific method. It is not! Independent validation is the thing; peer review is an old, unscientific way (very useful, but not useful enough to allow research findings to pass the validation filter).

The alternative, which is Open science, is that the authors of research findings make open all the data, procedures, programs, etc., everything they have. In this way, any other group of researchers, anybody else willing to try, can validate those research findings.

The comparison is striking. The reviewers of the legacy publishing system don’t have magical powers: they just read the article, they browse the data provided by the very limited article format and they form an opinion about the credibility of the research findings. In the legacy system, the reviewer does not have the means to validate the article.

In conclusion, it is much simpler to do Open science than to invent a way to convince people to review your legacy articles. It is enough to open your data, your programs, etc. It is something that you, the author, can do.

You don’t have to wait for the others to do a review for you. Release your data, that’s all.

Peer review is not independent validation

People tend to associate peer review with science. As an example, even today there are still many scientists who believe that an arXiv.org article is not a true article unless it has been peer reviewed. They can’t trust the article, without reading it first, unless it has passed peer review as part of the publishing process.

Just because a researcher puts a LaTeX file on arXiv.org (I continue with the example), it does not mean that the content of the file has been independently validated, as the scientific method demands.

The part which slips from the attention is that peer review is not independent validation.

Which means that a peer reviewed article is not necessarily one which passes the scientific method filter.

This simple observation is, to me, the key for understanding why so many research results communicated in peer reviewed articles can not be reproduced, or validated, independently. The scale of this peer reviewed article rot is amazing. And well known!

Peer review is a part of the publishing process. By itself, it is only a social validation. Here is why: the reviewers don’t try to validate the results from the article because they don’t have the means to do it in the first place. They only have access to a story told by the authors. All the reviewers can do is read the article and express an opinion about its credibility, based on the reviewers’ experience, competence (and biases).

From the point of view of legacy publishers, peer review makes sense. It is the equivalent of the criteria used by a journalist in order to decide to publish something or not. Not more!

That is why it is very important for science to pass from peer review to validation. This is possible only in an Open Science frame. Once more (in this Open(x) fight) the medical science editors lead. From “Journal Editors To Researchers: Show Everyone Your Clinical Data” by Harlan Krumholz, a quote:

“[…] last Wednesday, the editors of the leading medical journals around the world made a proposal that could change medical science forever. They said that researchers would have to publicly share the data gathered in their clinical studies as a condition of publishing the results in the journals. This idea is now out for public comment.

As it stands now, medical scientists can publish their findings without ever making available the data upon which their conclusions were based.

Only some of the top journals, such as The BMJ, have tried to make data sharing a condition of publication. But authors who didn’t want to comply could just go elsewhere.”

This is much more than simply saying “peer review is bad” (because it is not; it is just not a part of the scientific method, it is a part of the habits of publishers). It is a right step towards Open Science. I repeat here my opinion about OS, in the shortest way I can:

There are two parties involved in a research communication: A (author, creator, the one who has something to disseminate) and R (reader). The legacy publishing process introduces a B (reviewer). A puts something in a public place, B expresses a public opinion about this, and R uses B’s opinion as a proxy for the value of A’s thing, in order to decide if A’s thing is worthy of R’s attention or not. Open Access is about the direct interaction of A with R, Open Peer-Review is about the transparent interaction of A with B, as seen by R, and Validation (as I see it) is improving the format of A’s communication so that R can make a better decision than the social one of counting on B’s opinion.

That’s it! The reader is king and the author should provide everything to the reader, for the reader to be able to independently validate the work. This is the scientific method at work.

 

One of the first articles with means for complete validation by reproducibility

I have not stressed this aspect enough. The article

M. Buliga, Molecular computers

is one of the first articles which comes with complete means of validation by reproducibility.

This means that along with the content of the article, which contains animations and links to demonstrations, comes a github repository with the scripts which can be used to validate (or invalidate, of course) this work.

I can’t show you here what the article looks like, but I can show you a gif created from this video of a demonstration which also appears in the article (however, with simpler settings, in order not to punish the browser too much).

[gif: animation from the video of the demonstration]

This is a chemistry-like computation of the Ackermann(2,2) function.

In itself, it is intended to show that if autonomous computing molecules can be created by the means proposed in the article, then impressive feats can be achieved.
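For reference, here is a minimal sketch (in Python, not part of the article or of its repository) of the standard Ackermann–Péter recursion, just to check the value the demo is supposed to compute: Ackermann(2,2) = 7.

```python
# Minimal sketch, not from the article's repository: the standard
# Ackermann-Peter recursion, used only to check the value that the
# chemlambda demo is supposed to reach.
def ackermann(m: int, n: int) -> int:
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

if __name__ == "__main__":
    # Ackermann(2, n) = 2*n + 3, so Ackermann(2, 2) = 7.
    print(ackermann(2, 2))  # prints 7
```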

This is part of the discussion about peer review and the need to pass to a more evolved way of communicating science. There are several efforts in this direction, like for example PeerJ’s paper-now, commented in this post. See also the post Fascinating: micropublications, hypothes.is for more!

Presently one of the most important pieces of this is peer review, the social practice consisting of declarations by one, two, four, etc. anonymous professionals that they have checked the work and consider it valid.

Instead, the ideal should be the article which runs in the browser, i.e. one which comes with the means to allow anybody to validate it, up to external resources like the works of other authors.

(For example, if I write in my article that “According to the work [1], A is true. Here we prove that B follows from A.”, then I should provide the means to validate the proof that A implies B, but it would be unrealistic to ask me to provide the means to validate A.)
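To make the scope of validation concrete, here is a toy sketch in Lean (my own illustration, nothing from the article or its repository): the external result A from [1] is declared as an axiom, i.e. taken on trust, while the implication A → B is the part that ships with a machine-checkable proof. B here is just a stand-in proposition chosen so the proof is trivial.

```lean
-- Hypothetical sketch: the result of reference [1] is taken on trust
-- (as axioms), while the article's own step, A → B, is the part that
-- comes with a machine-checkable proof.
axiom A : Prop
axiom A_holds : A          -- "according to the work [1], A is true"

-- A stand-in for the article's claim B, chosen so the proof is trivial.
def B : Prop := A ∧ A

-- The article's contribution: a checkable proof that B follows from A.
theorem A_implies_B : A → B :=
  fun h => And.intro h h

-- B then holds, but only modulo the trusted axiom A_holds; validating A
-- itself stays outside the article's scope, exactly as in the text.
theorem B_holds : B := A_implies_B A_holds
```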

This is explained in more detail in Reproducibility vs peer review.

Therefore, if you care about evolving the form of the scientific article, then you have a concrete, short example of what can be done in this direction.

Mind that I am stubborn enough to cling to this form of publication, not because I am afraid to submit these beautiful ideas to legacy journals, but because I want to promote new ways of sharing research by using the best content I can make.

_________________________________________

Harsh assessment

I need a hard, objective and harsh assessment of the demos, the moves pages, all this effort I make. I am looking for funding and I don’t have any at present, so there might be something wrong in what I do.
Please be as harsh as possible. Thank you!

I am waiting for your comments. If you want to make a private comment then add in your message the following string

pe-240v

and the comment will go to the moderation queue.

If you have not commented here before, then by default the comment goes to moderation.

So, please mention in the comment if you want to keep it private.

Assessment of what? The demos and the moves/help pages, or anything about chemlambda.

This is a big project. I see that people are interested in more advanced stuff, like distributed computing, but they usually fail to understand the basics.

On the other hand, I am a mathematician learning to program. So I’m lousy at that (for the moment), but I hope I make my point about the basics with these demos and help pages.

_________________________________________________________________________

Open notebook science for everyone, done by everyone

I am deeply impressed by the post:

Jean Claude Bradley Memorial Symposium; July 14th; let’s take Open Notebook Science to everyone

Here are some quotes:

Jean-Claude Bradley was one of the most influential open scientists of our time. He was an innovator in all that he did, from Open Education to bleeding edge Open Science; in 2006, he coined the phrase Open Notebook Science. His loss is felt deeply by friends and colleagues around the world.

“Science, and science communication is in crisis. We need bold, simple visions to take us out of this, and Open Notebook Science (ONS) does exactly that. It:

  • is inclusive. Anyone can be involved at any level. You don’t have to be an academic.
  • is honest. Everything that is done is Open, so there is no fraud, no misrepresentation.
  • is immediate. The science is available as it happens. Publication is not an operation, but an attitude of mind
  • is preserved. ONS ensures that the record, and the full record, persists.
  • is repeatable or falsifiable. The full details of what was done are there so the experiment can be challenged or repeated at any time
  • is inexpensive. We waste 100 Billion USD / year of science through bad practice so we save that immediately. But also we get rid of paywalls, lawyers, opportunity costs, nineteenth century publishing practices, etc.”

Every word is true!

This is the future of research communication. Or at least the beginning of it. ONS has open, perpetual peer review as a subset.

Personal notes. Look at the upper left corner of this page; it reads:

chorasimilarity | computing with space | open notebook.

Yay! The time is coming! The weirdos who write on arXiv, now figshare, who use open notebooks, all as a replacement for legacy publication, will soon be mainstream 🙂

Now, seriously, let’s put some gamification into it, so those who ask “what IS a notebook?” can play too. They ARE the future. I hope that soon the Game of Research and Review, aka playing MMORPG games at the knowledge frontier, will emerge.

There are obvious reasons for that:

  • the smartphone frees us from staying in one physical place while we surf the virtual world
  • which has the effect that we rediscover that physical space is important for our interactions, see Ingress
  • gamification of human activities is replacing industrial era habits (pyramidal, static organizations, uniformization, identification of humans with their functions (worker, consumer, customer, student) and with their physical location: this or that country or city, students in the benches, professors at the chair, payment for working hours and for staying at the counter, legacy publishing).

See also Notes for “Internet of Things not Internet of Objects”.

 

_________________________________________________

 

 

The tone goes up on the OPEN front

This post has a collection of savory quotes and further comments about the psychological changes which are ongoing around new ways of disseminating and communicating scientific research.

Aka OPEN …

  • access
  • peer review
  • data
  • notebooks

We are getting close to a change, a psychological change, from indifference and disdain from the majority of (more or less established) researchers to a public acknowledgement of the stupidity and immorality of the procedure which is still in force.

[Rant, jump over if not interested into personal stuff.

Please take into consideration that even if I embrace these changes with my full heart, I don’t have any merit or real contribution to them, excepting modest posts here at chorasimilarity, under the tags cost of knowledge and open peer review. More than this, I suffered, like probably some of my colleagues, by choosing to publish mostly through arXiv and not playing the stupid game, which led to a very damaged career; but unfortunately I did not have the opportunity to create change through participation in the teams which are now shaping the future of OPEN whatever. Bravo for them, my best wishes for them, why not sometimes an honest criticism from my small point of view, and thanks for the feeling of revenge which I have, the “I was right” feeling which I hope will grow and grow, because really the research world is damaged to the bones by this incredible stupidity, maybe cupidity, and surely by the lack of competence and care for the future manifested by a majority of leaders.

The second thing I want to mention is that even if I refer to “them”, to a “majority”, all these generalizations have to be nuanced by saying that, as always, as everywhere, the special ones, the creative ones, the salt and pepper of the research world are either excused or completely innocent. They are also everywhere, maybe many of them not in any position of strong influence (as in music, for example, where the most well known musicians are almost never the best, though surely they are among the most hard working), but creating their stuff and possibly not really caring about these social aspects, because they are too deep into the platonic realm. None of them are the subject or part of any “majority”; they are not “them” in any way.

The third point is that there may be a sloppy use of “young” and “old”. This has nothing to do with physical age. It is true that every old moron was a young moron before. Every old opportunist was a young one some years earlier. Their numbers are continually replenished and we find them everywhere, much more present than the salt and pepper of the research community, and more among those who are good hard workers but not really, seriously creative. No, young or old refers to the quality of the brain, not to physical age.

End of rant]

Back to the subject. From timid or rather lonely comments, we have now passed to stronger ones.

And the words are harder.

From Causes of the persistence of impact factor mania, by Arturo Casadevall and Ferric C. Fang,

“Science and scientists are currently afflicted by an epidemic of mania manifested by associating the value of research with the journal where the work is published rather than the content of the work itself. The mania is causing profound distortions in the way science is done that are deleterious to the overall scientific enterprise. In this essay, we consider the forces responsible for the persistence of the mania and conclude that it is maintained because it disproportionately benefits elements of the scientific enterprise, including certain well-established scientists, journals, and administrative interests.”

I fully agree with them; besides this, I find very interesting their explanation that we face a manifestation of the tragedy of the commons.

From Academic self-publishing: a not-so-distant-future, here is a big quote; it is too beautiful to crop:

A glimpse into the future
Erin is driving back home from the laboratory with a big smile on her face. After an exciting three-hour brainstorming session discussing the intracranial EEG data from her last experiment, she can’t wait to get her hands back on the manuscript. A new and unexpected interpretation of the findings seems to challenge a popular assumption about the role of sleep in declarative memory consolidation. She had been looking over the figures for more than a month without seeing a clear pattern. But now, thanks to a moment of insight by one of her colleagues, the pieces finally fit together and a new logic is emerging. She realizes it will be hard for the community to accept these new findings, but the methodology is solid and she is now convinced that this is the only reasonable explanation. She is so anxious to see what Axell’s group thinks about new evidence that refutes its theoretical model.

After a week’s hard work, the first draft is ready. All the figures and their long descriptive legends are in place, the literature review is exhaustive, the methodology is clear as a bell, and the conclusions situate the finding in the general context of the role of sleep in memory consolidation. Today, the group had a brief morning meeting to decide which colleagues they will ask to review their draft. Of course, they will ask Axell for his opinion and constructive criticism, but they also agree to invite Barber to confirm that the application of independent component analysis on the data was performed correctly, and Stogiannidis to comment on the modification of the memory consolidation scale. For a review of the general intracranial EEG methodology, the group decides to first approach Favril herself and, if she declines, they will ask Zhang, who recently reviewed the subject for Nature.

After the lunch break, Erin submits the manuscript to the university’s preprint repository that provides a DOI (digital object identifier) and an open attribution licence. When she hits the submit button, she feels a chill running down her spine. More than a year’s hard work is finally freely available to her peers and the public. The next important step is to invite the reviewers. She logs in to her LIBRE profile and inserts the metadata of the manuscript with a hyperlink to the repository version (see LIBRE, 2013). She then clicks the invite reviewer button and writes a quick personal message to Axell, briefly summarizing the main result of the study and why she thinks his opinion is vital for the debate this manuscript will spark. She then invites Stogiannidis to comment on the modification of the memory consolidation scale, and Barber, specifically asking him to check the application of independent component analysis, and also letting him know that all data are freely and openly available at Figshare. After finishing with the formal invitations, Erin tweets the LIBRE link to her followers and sends it as a personal message to specific colleagues from whom she would like to receive general comments. She can now relax. The word is out!

A couple of weeks later, Erin is back at work on the project. Both Favril and Zhang refused to review because of heavy work schedules, but Stogiannidis wrote an excellent report totally approving the modification of her scale. She even suggested a future collaboration to test the new version on a wider sample. Barber also submitted a brief review saying that he doesn’t find any caveats in the analysis and approves the methodology. As Erin expected, Axell didn’t take the new result lightly. He submitted a harsh critique, questioning both the methodology and the interpretation of the main findings. He even mentioned that there is a new paper by his group currently under journal review, reporting on a similar experiment with opposite results. Being pipped to the post and being second to report on this innovative experimental design, he must be really peeved, thinks Erin. She grins. Maybe he will learn the lesson and consider self-publishing next time. Anyway, Erin doesn’t worry too much as there are already two independent colleagues who have marked Axell’s review as biased on LIBRE. Last night, Xiu, Erin’s colleague, finished retouching one of the figures based on a very insightful comment by one of LIBRE’s readers, and today she will upload a new version of the manuscript, inviting some more reviewers.

Two months later, Erin’s paper is now in version number 4.0 and everyone in the group believes it is ready for submission to a journal and further dissemination. The issues raised by seven reviewers have now been adequately addressed, and Axell’s review has received six biased marks and two negative comments. In addition, the paper has attracted a lot of attention in the social media and has been downloaded dozens of times from the institutional repository and has been viewed just over 300 times in LIBRE. The International Journal for the Study of the Role of Sleep in Memory Consolidation has already been in touch with Erin and invited her to submit the paper to them, but everybody in the group thinks the work is of interest to an even wider audience and that it should be submitted to the International Journal for the Study of Memory Consolidation. It charges a little more – 200 euros – but it is slightly more esteemed in the field and well worth the extra outlay. The group is even considering sending the manuscript in parallel to other journals that embrace a broader neuroscience community, now that the group’s copyright and intellectual property rights have been protected. Anyway, what is important (and will count more in the grant proposal Erin plans to submit next year) is that the work has now been openly approved by seven experts in the field. She is also positive that this paper will attract ongoing reviews and that she may even be invited as an expert reviewer herself now that she is more visible in the field. A debate has started in her department about how much the reviewer’s track record should weigh in how future tenure decisions are evaluated, and she has been invited to give a talk on her experience with LIBRE and the versioning of the group’s manuscript, which has now become a dynamic paper (Perakakis et al., 2011).”

I love this, in all its details! I consider it among the most well written apologies for open peer review in particular. [See also, if you care, my post Open peer review as a service.]

From Your university is paying too much for journals, by Bjorn Brembs:

“Why are we paying to block public access to research, when we could save billions by allowing access?”

Oh, I’m sure that those in charge with these decisions have their reasons.

From the excellent We have met the enemy: part I, pusillanimous editors, by Mark C. Wilson

“My conclusions, in the absence of further information: senior researchers by and large are too comfortable, too timid, too set in their ways, or too deluded to do what is needed for the good of the research enterprise as a whole. I realize that this may be considered offensive, but what else are the rest of us supposed to think, given everything written above? I have not even touched on the issue of hiring and promotions committees perpetuating myths about impact factors of journals, etc, which is another way in which senior researchers are letting the rest of us down”…

Read also the older, but great, We have met the enemy and it is us by Mark Johnston. I commented on it here.

What is your opinion about all this? It’s getting hotter.

_________________________________________

My first NSF experience and the future of GLC

Just learned that the project “Secure Distributed Computing with Graphic Lambda Calculus” will not be funded by NSF.

I read the reviews and my conclusion is that they are well done. The 6 reviewers all make good points and do a good job of detecting the strong points and weaknesses of the project.

Thank you NSF for this fair process. As the readers of this blog know, I don’t have the habit to hide my opinions about bad reviews, which sometimes may be harsh. Seen from this point of view, my thanks look, I hope, even more sincere.

So, what was the project about? Distributed computing, like in “GLC actors, artificial chemical connectomes, topological issues and knots”, arXiv:1312.4333 [cs.DC], which was branded as useful for secure computing. The project was submitted in January to the Secure and Trustworthy Cyberspace (SaTC) NSF program.

The point was to get funding which would allow the study of Distributed GLC, which is for the moment fundamental research. There are reasons to believe that distributed GLC may be good for secure computing, principal among them being that GLC (and chemlambda, actually the main focus of research) is not based on the IT paradigm of gates and wires, but instead on something which can be described as signal transduction, see How is different signal transduction from information theory? There is another reason, now described by the words “no semantics”.

But basically, this is not naturally a project in secure computing. It may become one, later, but for the moment the project consists of understanding asynchronous, decentralized computations performed by GLC actors and their biological-like behaviour. See What is new in distributed GLC?

Together with Louis Kauffman, we are about to study this; he will present our paper Chemlambda, universality and self-multiplication, arXiv:1403.8046, at the ALIFE 14 conference.

There is much more to tell about this, parts were told already here at chorasimilarity.

From this moment I believe that instead of thinking security and secrecy, the project should be open to anybody who wishes to contribute, to use or to criticize. That’s the future anyway.

______________________________________________________

 

 

Gamification of peer review with Ingress

It seems possible to adapt Ingress in order to play the Game of Research and Review.

In the post MMORPGames at the knowledge frontier I propose a gamification of peer review which is, I see now, very close to the Ingress game:

“… we could populate this world and play a game of conquest and exploration. A massively multiplayer online game.  Peer-reviews of articles decide which units of places are wild and which ones are tamed. Claim your land (by peer-reviewing articles), it’s up for grabs.  Organize yourselves by interacting with others, delegating peer-reviews for better management of your kingdoms, collaborating for the exploration of new lands.

Instead of getting bonus points, as mathoverflow works, grab some piece of virtual land that you can see! Cultivate it, by linking your articles to it or by peer-reviewing other articles. See the boundaries of your small or big kingdom. Maybe you would like to trespass them, to go into a near place? You are welcome as a trader. You can establish trade with other near kingdoms by throwing bridges between the land, i.e. by writing interdisciplinary articles, with keywords of both lands. Others will follow (or not) and they will populate the new boundary land you created.”

In Ingress (from the wiki source):

“The gameplay consists of establishing “portals” at places of public art, landmarks, cenotaphs, etc., and linking them to create virtual triangular fields over geographic areas. Progress in the game is measured by the number of Mind Units, i.e. people, nominally controlled by each faction (as illustrated on the Intel Map).[7][8] The necessary links between portals may range from meters to kilometers, or to hundreds of kilometers in operations of considerable logistical complexity.[9] International links and fields are not uncommon, as Ingress has attracted an enthusiastic following in cities worldwide[10] amongst both young and old,[11] to the extent that the gameplay is itself a lifestyle for some, including tattoos. ”

 

Instead of public art, Portals could be open access articles (from the arXiv, for example, not from the publishers).

 

“A portal with no resonators is unclaimed; to claim a portal for a faction, a player deploys at least one resonator on it.”

Resonators are reviews.

Links between portals are keywords.

 

Something to think about!
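To make the mapping concrete, here is a purely hypothetical sketch (nothing here comes from Ingress or from any existing platform; the class and field names are invented for illustration) of how portals, resonators and links could be represented, with the arXiv ids taken from the papers mentioned above:

```python
# Hypothetical sketch of the proposed mapping, just to make it concrete;
# all names are invented for illustration, nothing is a real API.
from dataclasses import dataclass, field

@dataclass
class Review:              # a "resonator": deploying one claims the portal
    reviewer: str
    text: str

@dataclass
class Portal:              # an open access article, e.g. an arXiv preprint
    arxiv_id: str
    keywords: set[str]
    reviews: list[Review] = field(default_factory=list)

    @property
    def claimed(self) -> bool:
        # "a portal with no resonators is unclaimed"
        return len(self.reviews) > 0

def link(a: Portal, b: Portal) -> set[str]:
    # a "link" between two portals is the set of keywords they share
    return a.keywords & b.keywords

# usage sketch (keywords are illustrative only)
p1 = Portal("1312.4333", {"distributed computing", "lambda calculus"})
p2 = Portal("1403.8046", {"lambda calculus", "artificial chemistry"})
p1.reviews.append(Review("anonymous peer", "checked the actor model section"))
print(p1.claimed, link(p1, p2))   # True {'lambda calculus'}
```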

____________________________________________________

 

 

Sometimes an anonymous review is “a tale told by an idiot …”

… “full of sound and fury, signifying nothing.” And the editor believes it, even if it is self-contradictory, after sitting on the article for half a year.

There are two problems:

  • the problem of time: you write a long and dense article, which may be hard to review, and the referee, instead of declining to review it, keeps it until the editor presses him to write a review; then he writes some fast, crappy report, much below the quality of the work required.
  • the problem of communication: there is no two-way communication with the author. After waiting a considerable amount of time, the author has nothing left to do but re-submit the article to another journal.

Both problems could be easily solved by open peer-review. See Open peer-review as a service.

The referee can well be anonymous, if he wishes, but a dialogue with the author and, more importantly, with other participants could only improve the quality of the review (and, as a consequence, the quality of the article).

I reproduce below such a review, with comments. It is about the article “Sub-riemannian geometry from intrinsic viewpoint”, arXiv:1206.3093. You don’t need to read it, except maybe the title, abstract and table of contents, which I reproduce here:

Sub-riemannian geometry from intrinsic viewpoint
Marius Buliga
Institute of Mathematics, Romanian Academy
P.O. BOX 1-764, RO 014700
Bucuresti, Romania
Marius.Buliga@imar.ro
This version: 14.06.2012

Abstract

Gromov proposed to extract the (differential) geometric content of a sub-riemannian space exclusively from its Carnot-Caratheodory distance. One of the most striking features of a regular sub-riemannian space is that it has at any point a metric tangent space with the algebraic structure of a Carnot group, hence a homogeneous Lie group. Siebert characterizes homogeneous Lie groups as locally compact groups admitting a contracting and continuous one-parameter group of automorphisms. Siebert result has not a metric character.
In these notes I show that sub-riemannian geometry may be described by about 12 axioms, without using any a priori given differential structure, but using dilation structures instead.
Dilation structures bring forth the other intrinsic ingredient, namely the dilations, thus blending Gromov metric point of view with Siebert algebraic one.
MSC2000: 51K10, 53C17, 53C23

1 Introduction       2
2 Metric spaces, groupoids, norms    4
2.1 Normed groups and normed groupoids      5
2.2 Gromov-Hausdorff distance     7
2.3 Length in metric spaces       8
2.4 Metric profiles. Metric tangent space      10
2.5 Curvdimension and curvature     12

3 Groups with dilations      13
3.1 Conical groups     14
3.2 Carnot groups     14
3.3 Contractible groups   15

4 Dilation structures  16
4.1 Normed groupoids with dilations     16
4.2 Dilation structures, definition    18

5 Examples of dilation structures 20
5.1 Snowflakes, nonstandard dilations in the plane    20
5.2 Normed groups with dilations    21
5.3 Riemannian manifolds    22

6 Length dilation structures 22
7 Properties of dilation structures    24
7.1 Metric profiles associated with dilation structures    24
7.2 The tangent bundle of a dilation structure    26
7.3 Differentiability with respect to a pair of dilation structures    29
7.4 Equivalent dilation structures     30
7.5 Distribution of a dilation structure     31

8 Supplementary properties of dilation structures 32
8.1 The Radon-Nikodym property    32
8.2 Radon-Nikodym property, representation of length, distributions     33
8.3 Tempered dilation structures    34
9 Dilation structures on sub-riemannian manifolds   37
9.1 Sub-riemannian manifolds    37
9.2 Sub-riemannian dilation structures associated to normal frames     38

 

10 Coherent projections: a dilation structure looks down on another   41
10.1 Coherent projections     42
10.2 Length functionals associated to coherent projections    44
10.3 Conditions (A) and (B)     45

11 Distributions in sub-riemannian spaces as coherent projections    45
12 An intrinsic description of sub-riemannian geometry    47
12.1 The generalized Chow condition     47
12.2 The candidate tangent space    50
12.3 Coherent projections induce length dilation structures  53

Now the report:

 

Referee report for the paper

Sub-riemannian geometry from intrinsic viewpoint

by Marius Buliga

for the New York Journal of Mathematics (NYJM).

One of the important theorems in sub-riemannian geometry is a result
credited to Mitchell that says that Gromov-Hausdorff metric tangents
to sub-riemannian manifolds are Carnot groups.
For riemannian manifolds, this result is an exercise, while for
sub-riemannian manifolds it is quite complicate. The only known
strategy is to define special coordinates and using them define some
approximate dilations. With this dilations, the rest of the argument
becomes very easy.
Initially, Buliga isolates the properties required for such dilations
and considers
more general settings (groupoids instead of metric spaces).
However, all the theory is discussed for metric spaces, and the
groupoids leave only confusion to the reader.
His claims are that
1) when this dilations are present, then the tangents are Carnot groups,
[Rmk. The dilations are assumed to satisfy 5 very strong conditions,
e.g., A3 says that the tangent exists – A4 says that the tangent has a
multiplication law.]
2) the only such dilation structures (with other extra assumptios) are
the riemannian manifolds.
He misses to discuss the most important part of the theory:
sub-riemannian manifolds admit such dilations (or, equivalently,
normal frames).
His exposition is not educational and is not a simplification of the
paper by Mitchell (nor of the one by Bellaiche).




The paper is a cut-and-past process from previous papers of the
author. The paper does not seem reorganised at all. It is not
consistent, full of typos, English mistakes and incomplete sentences.
The referee (who is not a spellchecker nor a proofread) thinks that
the author himself could spot plenty of things to fix, just by reading
the paper (below there are some important things that needs to be
fixed).


The paper contains 53 definitions – fifty-three!.
There are 15 Theorems (6 of which are already present in other papers
by the author of by other people. In particular 3 of the theorems are
already present in [4].)
The 27 proofs are not clear, incomplete, or totally obvious.

The author consider thm 8.10 as the main result. However, after
unwrapping the definitions, the statement is: a length space that is
locally bi-lipschitz to a commutative Lie group is locally
bi-lipschitz to a Riemannian manifold. (The proof refers to Cor 8.9,
which I was unable to judge, since it seems that the definition of
“tempered” obviously implies “length” and “locally bi-lipschitz to the
tangent”)


The author confuses the reader with long definitions, which seems very
general, but are only satisfied by sub-riemannian manifolds.
The definitions are so complex that the results are tautologies, after
having understood the assumptions. Indeed, the definitions are as long
as the proofs. Just two examples: thm 7.1 is a consequence of def 4.4,
thm 9.9 is a consequence of def 9.7.

Some objects/notions are not defined or are defined many pages after
they are used.



Small remarks for the author:

def 2.21 is a little o or big O?


page 13 line 2. Which your convention, the curvdim of a come in infinite.
page 13 line -2. an N is missing in the norm


page 16 line 2, what is \nu?

prop 4.2 What do you mean with separable norm?

page 18 there are a couple of “dif” which should be fixed.
in the formula before (15), A should be [0,A]

pag 19 A4. there are uncompleted sentences.

Regarding the line before thm 7.1, I don’t agree that the next theorem
is a generalisation of Mitchell’s, since the core of his thm is the
existence of dilation structures.

Prop 7.2 What is a \Gamma -irq

Prop 8.2 what is a geodesic spray?

Beginning of sec 8.3 This is a which -> This is a

Beginning of sec 9 contains a lot of English mistakes.

Beginning of sec 9.1 “we shall suppose that the dimension of the
distribution is globally constant..” is not needed since the manifold
is connected

thm 9.2 rank -> step

In the second sentence of def 9.4, the existence of the orthonormal
frame is automatic.

 

Now, besides some of the typos, the report is simply crap:

  • the referee complains that I’m doing it for groupoids, then says that what I am doing applies only to subriemannian spaces.
  • before, he says that in fact I’m doing it only for riemannian spaces.
  • I never claim that there is a main result in this long article, but somehow the referee mentions one of the theorems as the main result, while I am using it only as an example showing what the theory says in the trivial case, the one of riemannian manifolds.
  • the referee says that I don’t treat the sub-riemannian case. He should decide which of his various claims is true; take a look at the contents to get an opinion.
  • I never claim what the referee thinks are my two claims, both of which are of course wrong,
  • in claim 1) (of the referee) he does not understand that the problem is not the definition of an operation, but the proof that the operation is a Carnot group one (I pass over the whole story that in fact the operation is a conical group one, which for regular sub-riemannian manifolds translates into a Carnot group operation by using Siebert; too subtle for the referee)
  • claim 2) is self-contradictory just by reading the report alone.
  • 53 definitions (it is a very dense course), 15 theorems and 27 proofs, which are, with no argument given, “not clear, incomplete, or totally obvious”
  • but he goes on hunting the typos, thanks, that’s essential to show that he did read the article.

There is a part of the text which is especially perverse: “The paper is a cut-and-past process from previous papers of the author.”

Mind you, this is a course based on several papers, most of them unpublished! Moreover, every contribution from previous papers is mentioned.

Tell me what to do with these papers: being unpublished, can I use them for a paper submitted for publication? Or can they be safely ignored because they are not published? Hmm.

This shows to me that the referee knows what I am doing, but he does not like it.

Fortunately, all the papers, published or not, are available on the arXiv with the submission dates and versions.

 

________________________________________

 

 

The price of publishing with arXiv

This is a very personal post. It is emotionally triggered by looking at the old question Downsides of using the arXiv and by reading the recent The coming Calculus MOOC Revolution and the end of math research.

What do I think? That a more realistic reason for a possible end (read: shrinking) of math research comes from thinking that there are any downsides of using the arXiv. That there are any downsides of using an open peer review system. It comes from those who are moderately in favour of open research until they participate in a committee, or until it comes to protecting their own little church from strange ideas.

And from others: an army of good but not especially creative researchers, a high mediocracy (high because selected, however) who will probably sink research for a time, because in the long term a lot of mediocre research results add up to noise. But in the short term this is a very good business: write many mediocre, correct articles, hide them behind a paywall and influence research policy to favour the number (and not the content) of those.

What I think is that it will happen exactly as it happened with the academic painters, a while ago.

You know that I’m right.

Now, because the net is not subtle enough, and in order to show you that indeed these people are right from a social point of view, that there is a price for not behaving as they expect, indulge me while I explain the price I paid for using the arXiv as my principal means of publication.

The advantage: I had a lot of fun. I wrote articles which contain more than one idea, or which use more than one field of research. I wrote articles on subjects which genuinely interest me, or articles which contain more questions than answers. I wrote articles which were not especially designed to solve problems, but to open them. I changed fields about once every 3-4 years.

The price: I was told that I don’t have enough published articles. I lost a lot of citations, either because the citation was done incorrectly, or because the databases (like ISI) don’t count those well (not that I care, really). Because I change fields (for those who know me, it’s clear that I don’t do this randomly, but because there are connections between fields) I seem to come from nowhere and go nowhere. Socially and professionally, it is very bad for the career to do what I did. Most of the articles I sent for publication (to legacy publishers) spent incredible amounts of time there, and most of the refusals were of the type “seems OK but maybe another journal” or “is OK but our journal …”. I am incredibly (i.e. statistically incredibly, under the null hypothesis) unlucky at publishing in legacy journals.

But, let me stress this, I survived. And I still have lots of ideas, better than before, and I’m using dissemination tools (like this blog) and I am still having a lot of fun.

So, it’s your choice: recall why you started to do research, what dreams you had. I don’t believe that you dreamed, as a kid, of writing a lot of ISI papers about other people’s arcane problems, in order to attract grant financing from bureaucrats who count your social influence.

_____________________________________________

Who wins from failed peer reviews?

The recent retraction of 120 articles from non-OA journals, coming after the attack on OA by the John Bohannon experiment, is the subject of Predatory Publishers: Not Just OA (and who loses out?). The article asks:

Who Loses Out Under Different “Predator” Models?

and an answer is proposed. In the following I want to comment on this.

First, I remark that the results of the Bohannon experiment (which is biased because it was done only on a selected list of OA journals) show that the peer review process may be deeply flawed for some journals (i.e. those OA journals which accepted the articles sent by Bohannon) and at least for some articles (i.e. those articles sent by Bohannon which were accepted by the OA journals).

The implication of that experiment is that maybe there are other articles which were published by OA journals after a flawed peer review process.

On the other hand, Cyril Labbé discovered 120 articles in some non-OA journals which were nonsense automatically generated by SCIgen. It is clear that the publication of these 120 articles shows that the peer review process (for those articles and for those journals) was flawed.

The author of the linked article suggests that the one who loses from the publication of flawed articles, in OA or non OA journals, is the one who pays! In the case of legacy publishers this is the reader. In the case of Gold OA publishers this is the author.

This is correct. The reason why the one who pays loses is that the one who pays is cheated by the flawed peer review. The author explains this very well.

But this is an incomplete view. Indeed, the author recognizes that the main service offered by publishers is well done peer review. Before discussing who loses from the publication of flawed articles, let’s recognize that this is what the publisher really sells.

At least in a perfect world, because the other thing a publisher sells is vanity soothing. Indeed, let’s return to the pair of discoveries made by Bohannon and Labbé and see that while in the case of the Bohannon experiment the flawed articles were made up for the purpose of the experiment, Labbé discovered articles written by researchers who tried to publish something for the sake of publishing.

So, maybe before asking who loses from flaws in the peer review, let’s ask who wins?

Obviously, unless there has been a conspiracy going on for some years, the researchers who submitted automatically generated articles to prestigious non-OA publishers did not want their papers to be well peer reviewed. They hoped their papers would pass this filter.

My conclusion is:

  • there are two things a publisher sells: peer review as a service, and vanity
  • some Gold OA journals and some legacy journals turned out to have a flawed peer review service
  • indeed, the one who pays and does not receive the service loses
  • but also, the one who exploits the flaws of the badly done peer review service wins.

Obviously green OA will lead to fewer losses and open peer review will lead to fewer wins.

Open peer review as a service

The recent discussions about the creation of a new Gold OA journal (Royal Society Open Science) made me write this post. What follows is a concentrate of what I think about legacy publishers, Gold OA publishers and open peer review as a service.

Note: the idea is to put in one place the various bits of this analysis, so that it is easy to read. The text is assembled from slightly edited parts of several posts from chorasimilarity.

(Available as a published google drive doc here.)

Open peer review as a service   

Scientific publishers are in some respects like Cinderella. They used to provide an immense service to the scientific world, by disseminating  new results and archiving old results into books. Before the internet era, like Cinderella at the ball, they were everybody’s darling.

Enter the net. At the last moment, Cinderella tries to run from this new, strange world.

Cinderella does not understand what happened so fast. She was so used to scarcity (of economic goods) that she believed everything would be like this all her life!

What to do now, Cinderella? Will you sell open access for gold?

But wait! Cinderella forgot something. Her lost shoe, the one she discarded when she ran out from the ball.

In the scientific publishers’ world, peer-review is the lost shoe. (We may also say that, up to now, researchers who write peer-reviews are like Cinderella too: their work is completely unrewarded and neglected.)

In the internet era the author of a scientific research paper is free to share her/his results with the scientific world by archiving a preprint version of the paper in free access repositories. The author, moreover, HAS to do this, because the net offers much better dissemination of results than any old-time publisher. In order for the author’s ideas to survive, making a research paper scarce by constructing pay-walls around it is clearly a very bad idea. The only thing which gold open access does better than green open access is that the authors pay the publisher for doing the peer review (while in the case of arxiv.org, say, the archived articles are not peer-reviewed).

Let’s face it: the publisher cannot make the articles artificially scarce; it is a bad idea. What a publisher can do is let the articles be free and offer the peer-review service.

Like Cinderella’s lost shoe, in this moment the publisher throws away the peer-reviews (made gratis by fellow researchers) and tries to sell the article which has acceptable peer-review reports.

Context. Peer-review is one of the pillars of the current practice of research publication. But the whole machine of traditional publication is going to suffer major modifications, most of them triggered by its perceived inadequacy with respect to the needs of researchers in this era of massive, cheap, abundant means of communication and organization. In particular, peer-review is going to suffer transformations of the same magnitude.

We are living interesting times, we are all aware that internet is changing our lives at least as much as the invention of the printing press changed the world in the past. With a difference: only much faster. We have an unique chance to be part of this change for the better, in particular  concerning  the practices of communication of research.

In the face of such a fast evolution of behaviours, it is natural for a traditionalist attitude to appear, based on the argument that the slower we react, the better the solution we may find. This is, however, in my opinion at least, an attitude better left to institutions, to big, inadequate organizations, than to individuals.

Big institutions need big reaction times because the information flows slowly through them, due to their principle of pyramidal organization, which is based on the creation of bottlenecks for information/decision, acting as filters. Individuals are different in the sense that for them, for us, the massive, open, not hierarchically organized access to communication is a plus.

The bottleneck hypothesis. Peer-review is, traditionally, one of those bottlenecks. Its purpose is to separate the professional from the unprofessional. The hypothesis that peer-review is a bottleneck explains several facts:

  • peer-review gives a stamp of authority to published research. Indeed, those articles which pass the bottleneck are professional, therefore more suitable for using them without questioning their content, or even without reading them in detail,
  • the unpublished research is assumed to be unprofessional, because it has not yet passed the peer-review bottleneck,
  • peer-reviewed publications give a professional status to their authors. Obviously, if you are the author of a publication which passed the peer-review bottleneck then you are a professional. The more professional publications you have, the more of a professional you are,
  • it is the fault of the author of the article if it does not pass the peer-review bottleneck. As in many other fields of life, recipes for success and lore appear, concerning means to write a professional article, how to enhance your chances to be accepted in the small community of professionals, as well as feelings of guilt caused by rejection,
  • the peer-review is anonymous by default, as a superior instance which extends gifts of authority or punishments of guilt upon the challengers,
  • once an article passes the bottleneck, it becomes much harder to contest its value. In the past it was almost impossible, because any professional communication had to pass through the filter. In the past, the infallibility of the bottleneck was a kind of self-fulfilling prophecy, with very few counterexamples, themselves known only to a small community of enlightened professionals.

This hypothesis also explains the fact that lately peer-review has been subjected to critical scrutiny by professionals. Indeed, in particular, the wave of detected plagiarism among peer-reviewed articles led to the questioning of the infallibility of the process. This is shattering the trust in the stamp of authority which is traditionally associated with it. It makes us suppose that the steep rise of retractions is a manifestation of an old problem which is now revealed by the increased visibility of the articles.

From a cooler point of view, if we see peer-review as designed to be a bottleneck in a traditionally pyramidal organization, it is therefore questionable whether peer-review as a bottleneck will survive.

Social role of peer-review. There are two other uses of peer-review which are going to survive and which, moreover, are going to be the main reasons for its existence:

  • as a binder for communities of peers,
  • as a time-saver for the researchers.

I shall take them one-by-one.

On communities of peers. What is strange about the traditional peer-review is that although any professional is a peer, there is no community of peers. Each researcher does peer-reviewing, but the process is organized in such a manner that we are all alone.

To see this, think about the way things work: you receive a request to review an article from an editor, based usually on your publication history, which qualifies you as a peer. You do your job anonymously, which has the advantage of letting you be openly critical of the work of your peer, the author. All communication flows through the editor, so the process is designed to be unfriendly to communication between peers. Hence, no community of peers.

However, most of the researchers who ever lived on Earth are alive today. The main barrier to the spread of ideas is poor means of communication. If peer-review becomes open, it could then foster the appearance of dynamical communities of peers, dedicated to the same research subject.

As it is today, traditional peer-review favours the contrary, namely the fragmentation of the community of researchers interested in the same subject into small clubs, which compete for scarce resources instead of collaborating. (As an example, think about a very specialized research subject which is taken hostage by one or a few such clubs which peer-review favourably only the members of the same club.)

Time-saver role of peer-review. From the sea of old and new articles, I cannot read all of them. I have to filter them somehow in order to narrow the quantity of data which I am going to process for doing my research.

The traditional way was to rely on the peer-review bottleneck, which is a kind of pre-defined, one size for all solution.

With the advent of communities of peers dedicated to narrow subjects, I can choose the filter which serves best my research interests. That is why, again, an open peer-review has obvious advantages. Moreover, such a peer-review should be perpetual, in the sense that, for example, reasons for questioning an article should be made public, even after the “publication” (whatever such a word will mean in the future). Say, another researcher finds that an older article, which passed once the peer-review, is flawed for reasons the researcher presents. I could benefit from this information and use it as a filter, a custom, continually upgrading filter of my own, as a member of one of the communities of peers I am a member of.

All the steps of the editorial process used by legacy publishers are obsolete. To see this, it is enough to ask “why?”.

  1. The author sends the article to the publisher (i.e. “submits” it). Why? Because in the old days the circulation and availability of research articles was done almost exclusively by the intermediary of the publishers. The author had to “submit” (to) the publisher in order for the article to enter through the processing pipe.
  2. The editor of the journal seeks reviewers based on hunches, friends’ advice, basically thin air. Why? Because, in the days when we could pretend we couldn’t search for every relevant bit of information, there was no other way to feed our curiosity but from the publishing pipe.
  3. There are 2 reviewers who write reports. (With the author, that makes 3 readers of the article, statistically more than 50% of the readers the article will have once published.) Why? Because the pyramidal way of organization was, before the net era, the most adapted. The editor on top delegates the work to reviewers, who call back the editor to inform him first, and not the author, about their opinion. The author worked, let’s say, for a year, and the statistically insignificant number of 2 other people form an opinion on that work in … hours? days? maybe a week of real work? No wonder then that what exits through the publishing pipe is biased towards immediate applications, conformity of ideas and glorified versions of school homework.
  4. The editor, based solely on the opinion of 2 reviewers, decides what to do with the article. He informs the author, in a non-conversational way, about the decision. Why? Because again of the pyramidal organization way of thinking. The editor on top, the author at the bottom. In the old days, this was justified by the fact that the editor had something to give to the author, in exchange of his article: dissemination by the means of industrialized press.
  5. The article is published, i.e. a finite number of physical copies are typed and sent to libraries and particulars, in exchange for money. Why? Nothing more to discuss here, because this is the step the most subjected to critics by the OA movement.
  6. The reader chooses which of the published articles to read based on authority arguments. Why? Because there was no way to search, firsthand, for what the reader needs, i.e. research items of interest in a specific domain. There are two effects of this.

(a) The rise in importance of the journal over that of the article.

(b) The transformation of research communication into vanity chasing.

Both effects were (again, statistically) reinforced by poor science policy and by the private interests of those favoured by the system, who were not willing to rock the boat which served them so well.

Given that the entire system is obsolete, what to do? It is, frankly, not our business, as researchers, to worry about the fate of legacy publishers, any more than about, say, umbrella repair specialists.

Does Gold OA sell the peer-review service? It is clear that the reader is not willing to pay for research publications, simply because the reader does not need the service which is classically provided by a publisher: dissemination of knowledge. Today the researcher who puts his article in an open repository achieves much better dissemination than legacy publishers do with their old tricks.

Gold OA is the idea that if we can’t force the reader to pay, maybe we can try with the author. Let’s see exactly what service Gold OA publishers offer to the author (in exchange for money).

1.  Is the author a customer of a Gold OA publisher?

I think so.

2. What is the author paying for, as a customer?

I think the author pays for the peer-review service.

3. What does the Gold OA publisher offer for the money?

I think it offers only the peer-review service, because dissemination can be done by the author by submitting to open repositories, like arXiv.org, for free. There are opinions that the Gold OA publisher offers much more, for example the service of assembling an editorial board, but who wants to buy an editorial board? No, the author pays for the peer-review process, which is managed by the editorial board, true, which is assembled by the publisher. So the end product is the peer review, and the author pays for that.

4. Is there any other service sold to the author by the Gold OA publisher?

Almost fully automated services, like formatting, citation-web services and hosting the article, are very low-value services today.

However, it might be argued that the Gold OA publisher offers also the service of satisfying the author’s vanity, as the legacy publishers do.

Conclusion.  The only service that publishers may provide to the authors of research articles is the open, perpetual peer-review.  There is great potential here, but Gold OA sells this for way too much money.

______________________________________

Good news: Royal Society Open Science has what is needed

[Source]

Royal Society Open Science will be the first of the Royal Society’s journals to cover the entire range of science and mathematics. It will provide a scalable publishing service, allowing the Society to publish all the high quality work it receives without the restrictions on scope, length or impact imposed by traditional journals. The cascade model will allow the Royal Society to make more efficient use of the precious resource of peer review and reduce the duplication of effort in needlessly repeated reviews of the same article.

The journal will have a number of distinguishing features:

• objective peer review (publishing all articles which are scientifically sound, leaving any judgement of importance or potential impact to the reader)
• it will offer open peer review as an option
• articles will embody open data principles
• each article will have a suite of article level metrics and encourage post-publication comments
• the Editorial team will consist entirely of practicing scientists and draw upon the expertise of the Royal Society’s Fellowship
• in addition to direct submissions, it will accept articles referred from other Royal Society journals

Looks great!  That is important news, for two reasons:

  • it has some key features: “objective peer review”, “open peer review as an option”, “post-publication comments”
  • it is handled by a learned society.

It “will launch officially later in 2014”. I believe them. (And if not, then another learned society should take the lead, because it’s just the right time.)

For me, as a mathematician, it is also important that it covers math.

After reading it one more time, I realize that the announcement says nothing about the OA colour: green or gold?

What I hope is that, in the worst case, they will choose a PeerJ green (the one with the discreet, humanly pleasant shade of gold, see Bitcoin, figshare, dropbox, open peer-review and hyperbolic discounting). If not, they will anyway be the first, not the last, academic society (true?) which embraces an OA system with the important features mentioned above.

___________________________________

UPDATE: Graham Steel asked and quotes: “The APC will be waived for the first year. After this it will be £1000”.

Disappointing!  I am a naive person.

So, I ask once more: What’s needed to make a PeerJ like math journal?

___________________________________

Bitcoin, figshare, dropbox, open peer-review and hyperbolic discounting

Thinking out loud about models of OA publication

  1. which are also open peer-review friendly,
  2. which work in the real world,
  3. which offer an advantage to the researchers using them,
  4. which have  small costs for the initiators.

PeerJ is such an example; I want to understand why it works and find a way to emulate its success, as quickly as possible.

You may wonder what the difference is between 2 (works in the real world) and 3 (gives an advantage to the user). If it gives an advantage to the user then it should work in real life, right? I don’t think so, because the behaviour of real people is far from rational.

A hypothesis for achieving 2 is to exploit hyperbolic discounting. I believe that one of the reasons PeerJ works is not only that it is cheaper than PLOS, but also that it exploits this human behaviour.

It motivates the users to review and to submit, and it also finances the site (buys the cloud time, etc.).

How much of problem 4 can be solved by using the trickle of money which comes from exploiting hyperbolic discounting? Some experiments can be made.

What else? Let’s see, there is more which intrigues me:

  • the excellent figshare of Mark Hahnel. It’s a free repository which provides a DOI and collects some citation and use data.
  • there is a possibility to make blogs on dropbox. I have to understand it well, but it seems that Scriptogr.am offers this service, which is an interesting thing in many ways. For example, could one use a dropbox blog for sharing the articles, making it easy to collect reactions to them (as comments), in parallel with using figshare for getting a DOI for the article and for depositing versions of the article?
  • tools like git.macropus.org/twitter-viewer for collecting twitter reactions to the articles (and possibly writing other tools like this one)
  • what is a review good for? A service which an open review could bring to the user is to connect the user with other people interested in the same thing. Thus, by collecting “social mentions” of the article, the author of the article might contact the interested people.
  • finally, coming back to the money subject (and hyperbolic discounting), if you think about it, there is some resemblance between the references of an article and the block chain of Bitcoin. Could this be used? (A rough sketch of this analogy follows the lists below.)

I agree that these are very vague ideas, but it looks like there may be several sweet spots in this 4-dimensional space:

  • (behavioural pricing, citing as a block chain)
  • (stable links like DOIs, free repository)
  • (editor-independent blog as open article)
  • (APIs for collecting social mentions as reviews)
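
To make the block-chain bullet a bit less vague, here is a minimal sketch, under my own assumptions, of what “references as a block chain” could mean: an article’s record commits to the hashes of the records it cites, so the citation graph becomes tamper-evident, a bit like blocks referencing previous block hashes. Nothing below comes from an existing system; the names and format are invented only for illustration.

```python
# Hypothetical illustration only: an article record commits to the hashes of
# the records it cites, so changing an old record breaks the commitments
# stored in every record that cites it. No existing system is implied.

import hashlib
import json
from typing import Dict, List


def record_hash(record: Dict) -> str:
    """Hash a canonical JSON serialization of an article record."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()


def make_record(title: str, cited_records: List[Dict]) -> Dict:
    """Create an article record that commits to the records it cites."""
    return {"title": title, "cites": [record_hash(r) for r in cited_records]}


# Example: B cites A, C cites A and B. Changing A afterwards would change its
# hash and break the commitments stored in B and C.
a = make_record("Article A", [])
b = make_record("Article B", [a])
c = make_record("Article C", [a, b])
print(len(c["cites"]))  # 2
```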

__________________________________

What’s needed to make a PeerJ like math journal?

I want to see, or even participate in, the making of a PeerJ-like journal for mathematics. Is anybody interested in that? What is needed to start such a thing?

Here is the motivation: it works and it has open peer-review. It is not exactly green OA, but it is a valid model.
https://peerj.com/pricing/  You pay $99 for one article per year, up to $299 for an unlimited number of articles, for life. But one also has to keep up a reviewing activity in order to keep these publishing-plan privileges: one has to submit a review at least once per year (a review can even be a comment on an article). That’s a very clever mechanism which takes human nature into account 🙂

In my opinion we, mathematicians, are in dire need of something like this!

Speaking for myself, I am tired of waiting for others to do what they suggested they would do.
(Only cricket noise so far, as a response to my questions here https://chorasimilarity.wordpress.com/2014/02/11/questions-about-epijournals-and-the-spnetwork/ )

Also, I believe that mathematicians form a rather big community today and they deserve better publication models than the ones they have. Free from ego battles and from who’s got the biggest citation count.

We do have the arXiv, which is the oldest (true?) and greatest math and physics repository ever.

But it looks like, after an early and very beneficial adoption of this invention of the physicists, we are losing the pace.

Moreover, if there is any reason to mention this, I also think that such a PeerJ-like publication vehicle will not harm, in the long term, the interests of the mathematical learned societies.

 

The same post is here too.

Questions about epijournals and the spnetwork

I start the post by asking you to prove me wrong.

Episciences.org (with their epijournal concept) and The Selected Papers Network are the only new initiatives in new ways of publication and refereeing in mathematics  (I deliberately ignore Gold OA).

It looks to me they are dead.

Compare with the appearance of new vehicles of research communication in (other) sciences, like PeerJ, which is almost green OA and which has a system of open peer-review!

Are mathematicians … too naive?

There is only one initiative in mathematics which is really interesting: the writing of the HoTT book.

I would be glad to be wrong, that is why I ask some questions about them.

1. Episciences.    Almost a year ago, on Feb 17 2013, I wrote the post  Episciences-Math, let’s talk about this , asking for a discussion about the almost opaque creation of epijournals.

What is new in this initiative? Nothing, besides the fact that some of the articles in arXiv will be refereed, which is a great thing in itself.

They have not started yet. In one of the comments, I am instructed to look, for discussions, at publishing.mathforge.org.

In the post I wrote:

Finally, maybe I am paranoid, but from the start (I can document by giving links to previous comments) I saw the potential of this project as an excuse for more delay until real changes are done. I definitely don’t believe that your project is designed for that purpose, I am only afraid that your project might be used for that, for example by stifling any public discussion about new OA models in math publishing, because you know, there are these epijournals coming, let’s wait and see.

Here is what I found about this, almost a year later: progress in 2014?

[Mark C. Wilson] I am surprised at the low speed of change in mathematical publishing since early 2012. The Episciences project is now advertised as starting in 2014, but I recall it being April 2013 originally. No explanation is given for the delay. Forum of Mathematics seems to have  a few papers now, at least. SCOAP3 seems to moving at a glacial pace.

Researchers in experimental fields have reasons to be concerned about changing peer review, but surely arXiv is good enough for most mathematicians. Yet it is very far from being universally used. Gowers’ latest idea (implemented by Scott Morrison) of cataloguing free versions of papers in “important” math journals on a wiki seems useful, and initial results do seem to show that some kind of arXiv overlay would suffice for most needs.

Staying in the traditional paradigm, in 2013 I helped completely revamp an existing electronic journal (analytic-combinatorics.org) and it is now in pretty good shape. We could certainly scale up in number of submissions by a factor of 10 (not sure about 100) without any extra resources. I have had a few emails from Elsevier editors explaining how they get resources to help them do their job. I still remain completely unconvinced that free tools like OJS can’t duplicate this easily. Why is it so hard to get traction with editors, and get them to bargain hard with the “owners”?

[Benoit Kloeckner] Just about Episciences: it is true that the project has been delayed and that the communication about this has been scarce, to say the least. The reason for the delay has been the time needed to develop the software, which includes some unconventional feature (notably importation from arXiv and HaL of pdf and more importantly metadata). The development has really started later than expected and we chose not to rush into opening the project, in order to get a solid software. Things have really progressed now, even if it is not perceptible from the outside. The support of partners is strong, and I am confident the project will open this year, probably closer to now than December.

I thought it was already clear to everybody that “software” is a straw man; the real problem is psychological. Why does nobody try to make a variant of PeerJ for math, or another project which already works in other sciences?

2. Spnetwork. Do you see great activity related to the spnetwork project, hailed by John Baez? I don’t, although I wish I did, because at the moment it was the only “game in [the mathematical] town”.

But maybe I am wrong, so I looked for usage statistics of the spnetwork.

Are there any, publicly available? I was not able to find them.

What I did was to log into the spnetwork and search for comments with “a” inside. There are 1578, from the start of the spnetwork. I looked for people with “a” in the name; there are 1422. By randomly clicking on their comments in the last 20 days, it appears that practically none of them made any comment.

________________________________

So, please prove me wrong. Or else, somebody start a PeerJ-like site for math!

________________________________

Peer-review: is it good or bad?

I shall state my beliefs about this, along with my advice for you to form your own, informed opinion:

  • Peer-review as a bottleneck on the road to legacy publication is BAD.
  • Peer-review as an authority argument (I read the article because it is published in a peer-reviewed journal) is UNSCIENTIFIC.
  • Open, perpetual peer-review offers a huge potential for scientific communication, thus it is GOOD.
  • It is, though, the author’s option to choose to submit the article to public attention; this should not be mandatory.
  • Moreover, smart editors should jump at the possibility to exploit open peer-review, instead of the old way of throwing the peer-reviews into the wastebasket once the article is accepted or rejected, which is BAD.
  • Finally, there is NO OBLIGATION for youngsters to peer-review, contrary to the folklore that it is somehow their duty to the community to do this. No, this is only a perverse way to keep legacy publishing going, as long as the publishers use them only as an anonymous filter. On the contrary, youngsters, and everybody honest in fact, should be encouraged to use, and rewarded for using, any of the abundant new means of communication for the benefit of research.

This post is motivated by Mike Taylor’s Why peer-review may be worth persisting with, despite everything and by comments at the post Two pieces of all too obvious propaganda.

See also the post Journal of uncalled advices (and links therein).

An experiment in open writing and open peer-review

I shall try the following experiment in open writing/open peer-review, which uses only freely available software and tools.

No technical  knowledge is needed to do this.

No new platform is needed for this.

The idea is the following. I take an article (written by me) and I copy-paste it as text + figures into a publicly shared Google document with comments allowed.

At the top of the document I mention the source (where the article is from), then I add a CC-BY licence.

This is all. If anybody wishes to comment on the article, it can be done precisely, by pointing to the controversial paragraphs.

Links are allowed in the comments, of course, therefore there is no limit to the quantity of data which can be put in such a comment.

There could be comment replies.

In conclusion, this is a very cheap way to do both a (limited) way of open writing and to allow open peer-review.

For the moment I started not with articles directly, but with edited content from this open notebook. Until now I have made two “archives”.

Even better would be to make a copy of the doc and put it on figshare, to get a DOI. Then you stick the DOI link in the doc.

Public shared chemlambda archive

Let’s try this experiment.  I prepared and shared publicly the

Chemlambda archive

which is an all-in-one document about the building of chemlambda (or the chemical concrete machine), from the vague beginnings to the moment when the work on Distributed GLC started.

The sources are from this open notebook.

I hope it makes an interesting reading, but what I hope more is that you shall comment on it and

  • help to make it better
  • identify new ideas which have potential
  • improve the presentation
  • ask questions!

Thanks!

Graphic lambda calculus published, and some questions

The  first article on graphic lambda calculus (GLC) arXiv:1305.5786  was published: the reference is M. Buliga, Graphic lambda calculus. Complex Systems 22, 4 (2013), 311-360.

This is good news for the dissemination of the article.

You know my opinions about publishing… I wish there were open, perpetual peer reviews which in fact develop as discussions (maybe as creative as the articles) around research subjects.

Still don’t know if to send to publication the article Chemical concrete machine arXiv:1309.6914 , where chemlambda is introduced.  I tried to see what happens if I put it on figshare too, here is it with a doi.

Here is the most recent article on the subject:

M. Buliga, L. H. Kauffman, GLC actors, artificial chemical connectomes, topological issues and knots

We welcome an open peer-review and discussion about this game-changing model of distributed computing here:

Open peer-review call for arXiv:1312.4333

What do you think about this?

Should we publish? Should we try to go completely open?

____________________________

Open peer-review call for arXiv:1312.4333

Open peer-review is a healthy alternative to classical peer review. If there is any value in the peer-review process — and there is — it comes from it being open and dynamically changing.

Peer-review should be used for communication, for improving one’s and others work.  Not for authority stamps, nor for alpha – omega male monkey business.

With all this in mind, but also with a clear, declared interest into communication of research, I make this experimental call for open peer review to the readers of this blog. The inspiration comes from this kind post by David Roberts.

_______________________

Useful material for the discussion:

Coming from a collaboration which was previously mentioned (Louis Kauffman, a team from ProvenSecure Solutions, me), we want to develop and also explore the possibilities given by the GLC actor model of distributed computing.

A real direction of research is that of endowing the Net (or parts of it, or, … there are even stranger variants, like no particular part of it) with an artificial chemical connectome, thus mimicking the way real brains (we think) work.

If you think “consciousness” then hold your horses, cowboy! Before that, we really have to understand (and exploit to our benefit) all kinds of much more basic aspects, all kinds of (hundreds of) autonomous mechanisms and processes which are part of how the brain works, which structure our world (view), which help and also limit our thinking, and which are, most of them, ignored by logicians but already explored by neuroscience and cognitive science.

So, yes, indeed, we want to change the way we think about distributed computing, make it more biology-like, but we don’t want to fall into the trap of thinking we have the ultimate solution toward consciousness, nor do we want to build a Skynet, or believe we can. Instead, we want to take it slowly, in a scientific way.

Here we need your help! The research, up to now reported in arXiv:1312.4333 (with links to other sources) and in this open notebook, is based on some nontrivial ideas which are easy to formulate, but hard to believe.

Peer-review them, please! Show us where we need to improve, contradict us where we are wrong, contribute in an open way! By being open, you will automatically be acknowledged.

Suggestions about how this peer-review can be done are welcome!

UPDATE: Refurio Anachro linked the article to the spnetwork.  And moreover started a thread, with this post, about lambda calculus! Thank you!

Two pieces of all too obvious propaganda

Lately I have not posted about the changes in the academia concerning communication of research. There were many occasions to comment, many pieces of propaganda which I interpret as the beginning of a dark period, but, hey, also as a clear sign that the morning light is near.

Having a bit of time to spare, I shall react to two recent pieces of a slightly more subtle propaganda. Only slightly more subtle, that is my opinion. You don’t have to believe me, make your own opinion instead!

Please consider also the point of view that the following two pieces are involuntary propaganda, accidentally produced by ignorance.

Make your own opinion, that’s the most important.

Piece 1: How to become good at peer review: A guide for young scientists by Violent metaphors. The post starts with the following:

Peer review is at the heart of the scientific method. Its philosophy is based on the idea that one’s research must survive the scrutiny of experts before it is presented to the larger scientific community as worthy of serious consideration.

I have seen this nonsense before, that peer review has something to do with the scientific method. It has not, because the scientific method says nothing about peer review. Probably the author confuses the need to be able to reproduce a scientific result with peer review? I don’t know, but I recommend first learning what the scientific method is.

Peer review is a recent procedure which has to do with the communication of science through journals. I will not discuss the value peer review brings to research (a value which certainly exists), but instead I shall just comment that:

  • as it is done today, peer review is that piece of paper the legacy publisher throws into the wastebasket before making your work, dear researcher, his,
  • peer review is an idea based on authority, not on science, so that you don’t have to understand why a piece of research is valuable, instead you just have to lazily accept it if it appeared in a peer-reviewed journal,
  • peer review needs you, young researcher, because almost everybody else is too busy with other stuff. Nobody will thank you; it is your duty (why? nobody really knows, but they want you to believe this).

The second part of the quote mentions that “one’s research must survive the scrutiny of experts before it is presented to the larger scientific community as worthy of serious consideration”, which would be just sad, dinosaurish talk if it came from an old person who did not understand that today there is, or there should be, free access to information. This freedom does not come without obligations: if you want to survive this deluge of information, then you have to work hard for it and make responsible choices, instead of lazily relying on anonymous experts and on filtered channels of information. Your choice: do something like religion and believe the authority, or do some science and use your head. Which do you pick?

UPDATE (20.10.14): I can’t explain to myself why Mike Taylor does not detect this behind the bland formulations.
He does, however, make good points here.

Piece 2. Unexpected, but I think a bit more subtle, is this post at Not Even Wrong: Latest on abc. The main idea, as far as I understand it, is that Mochizuki’s work is not mathematics unless accepted by the community. Here “accepted” means to pass a peer review, which Mochizuki does not oppose, of course, only that apparently he worked too much for the “experts” to be able to digest it. So it is Mochizuki’s fault, because many months of understanding, if not years, seem to be needed on the part of the experts. This is an effort that very few people are willing to make, unfortunately. Somehow this is Mochizuki’s fault, if I understand well. I posted the following comment:

This looks to me as a social problem, not a mathematical one. On one side, there are no “experts” in Mochizuki field, because he made it all. On the other side, the idiotic pressure to publish, which is imposed in academia (the legacy publishers being only opportunistic parasites, in my opinion), makes people not willing to spend time to understand, even if Mochizuki past achievements would imply that there might be worthy to do this.
To conclude, is a social problem, even an anthropological one, like a foreign ape which shows to the local tribe how to design a pulley system, not at all believable to spend time on this. Or it is just nonsense, who knows without trying to understand?

Peter Woit replied by sending me to read a very interesting, well known text, thank you!

For some great wisdom on this topic, I urge everyone who hasn’t done so to read Bill Thurston’s “On proof and progress in mathematics”
http://arxiv.org/abs/math/9404236
For Mochizuki’s proof to be accepted, other members of the community are going to have to understand his ideas, see how they are supposed to work and get convinced that they do work. This is how mathematics makes progress, not just by one person writing an insight down, but by this insight getting communicated to others who can then use it. Right now, this process is just starting a bit, with the best bet for it to move along whatever Yamashita is writing. It would be best if Mochizuki himself could better communicate his ideas (telling people they just need to sit down and devote six months of time to trying to puzzle out 1000 pages of disparate material is not especially helpful), but it’s sometimes the case that the originator of ideas is not the right person to explain them to others.

What is the propaganda here? Well, it is the same, in favor of legacy publishers, but hidden behind some  universal law that a piece of math is not math unless it has been processed by the classical peer-review mill. Please send us small chunks, don’t hit us with big chunks of math, because the experts will not be able to digest them.

______________________

On John Bohannon article in Science

The article Who’s afraid of peer review? by John Bohannon appeared in Science. There have already been many reactions to this article; I’ll add mine.

I shall politely pretend that the article is not a piece of propaganda against OA. (Remark, ironically, that in order to enhance its dissemination Science did not hide it behind a paywall.)

Then, it’s like a gun, which can be used by anybody for shooting in whatever direction they like.  Pick your line:

  • Gold OA journals are afraid of peer review (because Bohannon used a list made up exclusively of Gold OA journals)
  • Traditional peer review is a joke, should be improved by making it more open, and perpetual, (cf Michael Eisen)
  • DOAJ is a joke (because Bohannon used DOAJ as one of the sources for building his list and some of the DOAJ listed journals accepted the flawed articles)
  • DOAJ is not a joke (not all DOAJ listed journals from Bohannon list accepted the flawed articles)
  • Beall’s list is good (a lot of predatory publishers from Beall’s list accepted the flawed articles)
  • Beall’s list is not entirely good (however, some of the publishers listed by Beall did not accept the flawed articles)
  • Study flawed, pay-back by Science after the arsenic life story (cf Mike Taylor)
  • All OA journals are bad quality (yes, and salami-slicing publishing is ethical, provided it is published by legacy publishers)
  • We should trust only ISI journals (sure, boss, but I like DORA)

What if, when faced with spin, we instead declared that anybody has the right to form their own opinion, based on the abundant evidence available? Instead of looking at DOAJ through an authority lens, why not acknowledge that DOAJ is a useful tool, not an authority argument. Same for Beall’s list. They don’t have to show us perfect lists, they just help us with some information. We have to use our brains, even if the legacy publishers don’t like that. We don’t have to believe what somebody says based only on the authority of that person. For example, look at Peter Suber’s post: you don’t have to believe him. It’s up to you to read Bohannon’s article, then Suber’s reaction, or Eisen, or Taylor, whatever you want to look at, and then form your own opinion.

The same goes for legacy publishers, ISI lists and research articles. Read them, use as much information as you can gather and form your own opinion. Don’t rely on authority; there’s something better these days, it’s called free access to information. If you are lazy and don’t want to make up your own mind, well, it’s not my problem, because, like never before in history, information is not scarce.

Journal of uncalled advices

All the steps of the editorial process used by legacy publishers are obsolete. To see this, it is enough to ask “why?”.

  1. The author sends the article to the publisher (i.e. “submits” it). Why? Because in the old days the circulation and availability of research articles happened almost exclusively through the intermediary of the publishers. The author had to “submit” (to) the publisher in order for the article to enter the processing pipe.
  2. The editor of the journal seeks reviewers based on ___________ [please add your suggestions], which amounts to hunches, friends’ advice, basically thin air. Why? Because, in the days when we could pretend we can’t search for every relevant bit of information, there was no other way to feed our curiosity but from the publishing pipe.
  3. There are 2 reviewers who write reports. (With the author, that makes 3 readers of the article, statistically more than 50% of the readers the article will have once published.) Why? Because the pyramidal way of organization was, before the net era, the best adapted. The editor, on top, delegates the work to reviewers, who report back to the editor first, and not to the author, about their opinion. The author worked, let’s say, for a year, and the statistically insignificant number of 2 other people form an opinion on that work in … hours? days? maybe a week of real work? No wonder then that what exits through the publishing pipe is biased towards immediate applications, conformity of ideas and glorified versions of school homework.
  4. The editor, based solely on the opinion of 2 reviewers, decides what to do with the article. He informs the author, in a non-conversational way, about the decision. Why? Because, again, of the pyramidal way of thinking. The editor on top, the author at the bottom. In the old days, this was justified by the fact that the editor had something to give to the author in exchange for the article: dissemination by means of the industrialized press.
  5. The article is published, i.e. a finite number of physical copies are printed and sent to libraries and individuals, in exchange for money. Why? Nothing more to discuss here, because this is the step most criticized by the OA movement.
  6. The reader chooses which of the published articles to read based on authority arguments. Why? Because there was no way to search, firsthand, for what the reader needs, i.e. research items of interest in a specific domain. There are two effects of this. (a) The rise in importance of the journal over that of the article. (b) The transformation of research communication into vanity chasing. Both effects were (again, statistically) reinforced by poor science policy and by the private interests of those favoured by the system, who were not willing to rock the boat which served them so well.

Given that the entire system is obsolete, what to do? It is, frankly, not our business, as researchers, to worry about the fate of legacy publishers, any more than about, say, umbrella repair specialists.

But what to do in these times of transition? It is in my power to laugh a bit, at least, and maybe to make others, with real decision power, think.

That is why I propose a Journal of Uncalled Advices, which would work like the spnetwork, only driven by publishers, as a journal.

  1. The editor searches in the arXiv, or elsewhere, for articles he likes or considers important.
  2. He makes a public call for reviews of the selected articles. He manages the place of the discussion.
  3. At some point a number of technical reports appear (the uncalled advices), collaboratively.
  4. The editor uses his nose again to separate opinion from technical reports and produces (writes) two final (for the journal) articles about the research article. The opinion part could as well serve as a popularization of the research article; the technical part could serve the specialists and the author.
  5. The two articles are sold by the piece for 6 months and then they are made public.
  6. The reader uses the journal articles as evidence and makes up his own mind about the research article.

____________________

UPDATE:  The following older posts are relevant

 

____________________

My post ended here; in case there is something added after the end of the post, it’s an example of uncalled ads.

Xanadu rules for OA publishing

Any new project in OA publishing meets high expectations. That is why most of them fail. The truth is that, until now and with few exceptions, no such project has met the expectations of us, anonymous creators of net content.

I am interested in research and in communicating it, particularly in mathematical research (but math is everywhere and the best challenges are now in the sciences of the living, so it does not matter much what kind of research we are discussing).

Discussions about new OA publication proposals, I noticed, quickly turn to some sensible points, which are usually not considered by the creators. Central among those is: we need more interactive forms of communicating research. The notion of the article as the main communication vehicle is challenged, because if we are willing to allow the article to bear online, perpetual examination and commenting then, at some point, we obtain a complex product, with many contributors, with many levels of complexity, something which is no longer an article, but something else.

For simplicity, let’s call such an object a “document”, which may have various versions in space and in time (i.e. a version 3 in London and a version 2 in Singapore).

Contributors to documents are called “creators”, be them authors, reviewers or commenters.

We have a kind of a mess, right? Something not quite easy to handle by the www, something which will entropically turn to chaos.

Not quite. It would be so in the world of the html link.

There is an old attempt, the Project Xanadu, with its tumblers and enfilades, which looks to me like an ideal system of meaningful organisation of scientific communication. We don’t need the whole net to devolve in time to the ideas of the ’60s and then re-evolve with tumblers instead of html links. Instead, for those of us who are nerdy, have long attention spans and want to communicate original, creative research, the rules of the Project Xanadu could be taken as an example of what would be nice to have.

In the following I reproduce those rules, but with some words replaced, like “server” with “journal” (that’s a bad word, journal, but for the moment I don’t have another) and “user” with “creator”. Also, “document” is to be understood in the sense explained previously. Finally, I shall strike through the rules which I don’t think apply (or which I don’t support, because they are the kind of evil thing a legacy publisher would adore).

Here they are (source for the original rules); a small data-structure sketch of the “document” idea follows the list:

  1. Every journal is uniquely and securely identified.
  2. Every journal can be operated independently or in a network.
  3. Every creator is uniquely and securely identified.
  4. Every creator can search, retrieve, create and store documents.
  5. Every document can consist of any number of parts each of which may be of any data type.
  6. Every document can contain links of any type including virtual copies (“transclusions“) to any other document in the system accessible to its owner.
  7. Links are visible and can be followed from all endpoints.
  8. Permission to link to a document is explicitly granted by the act of publication.
  9. Every document can contain a royalty mechanism at any desired degree of granularity to ensure payment on any portion accessed, including virtual copies (“transclusions”) of all or part of the document.
  10. Every document is uniquely and securely identified.
  11. Every document can have secure access controls.
  12. Every document can be rapidly searched, stored and retrieved without creator knowledge of where it is physically stored.
  13. Every document is automatically moved to physical storage appropriate to its frequency of access from any given location.
  14. Every document is automatically stored redundantly to maintain availability even in case of a disaster.
  15. Every journal provider can charge their creators at any rate they choose for the storage, retrieval and publishing of documents.
  16. Every transaction is secure and auditable only by the parties to that transaction.
  17. The client-server communication protocol is an openly published standard. Third-party software development and integration is encouraged.
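
To make the “document” idea a bit more concrete, here is a small, hypothetical data-structure sketch of rules 5, 6 and 10: uniquely identified documents made of typed parts, where a part can be a transclusion, i.e. a live reference to a span of another document instead of a copy. This is only my illustration of the rules, not the Xanadu software, and all the names are made up.

```python
# Hypothetical sketch of rules 5, 6 and 10 above; not the Xanadu software.
import uuid
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class Transclusion:
    source_doc_id: str   # unique id of the quoted document
    start: int           # offset where the quoted span starts
    length: int          # length of the quoted span


@dataclass
class Part:
    kind: str                                    # e.g. "text" or "transclusion"
    text: str = ""                               # used when kind == "text"
    transclusion: Optional[Transclusion] = None  # used when kind == "transclusion"


@dataclass
class Document:
    doc_id: str = field(default_factory=lambda: str(uuid.uuid4()))  # rule 10
    version: int = 1
    parts: List[Part] = field(default_factory=list)                 # rule 5


def resolve(part: Part, store: Dict[str, Document]) -> str:
    """Render a part; a transclusion is fetched live from its source document."""
    if part.kind == "transclusion" and part.transclusion is not None:
        src = store[part.transclusion.source_doc_id]
        full_text = "".join(p.text for p in src.parts if p.kind == "text")
        t = part.transclusion
        return full_text[t.start:t.start + t.length]
    return part.text


# Example: document B transcludes a span of document A (rule 6).
a = Document(parts=[Part(kind="text", text="Open, perpetual review.")])
b = Document(parts=[Part(kind="transclusion",
                         transclusion=Transclusion(a.doc_id, 0, 4))])
store = {a.doc_id: a, b.doc_id: b}
print(resolve(b.parts[0], store))  # -> "Open"
```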

___________________

UPDATE: Among the Xanadu Projects, as listed on this page of Xanadu Australia, there is

Committed (persistent) online publishing

Quick reaction on spnetwork part 4

Triggered by the spnetwork 4 post by Christopher Lee, hosted at John Baez’ Azimuth.

Only a very brief reaction, written from a beach; I will come back to it later.

I am intrigued by this:

Think about it: that’s what search engines do all the time—a search engine pulls material out of all the worlds’ walled gardens, and gives it a new life by unifying it based on what it’s about. All selectedpapers.net does is act as a search engine that indexes content by what paper and what topics it’s about, and who wrote it.

There seems to be a huge potential here.

OK, I am thinking about it and I’m having the usual conversation with the regular naysayer, who tells me that in order to switch to the spnetwork, or to ANY alternative to the current publishing system, you need to have an incentive for that. What’s wrong with the current system, besides the double monopoly (monopoly and monopsony, hence a banana republic situation) of greedy publishers hand in hand with managers in academia? Not much (provided that you pay the publisher with OTHER PEOPLE’S MONEY). The researcher writes an article, which is peer-reviewed, everything is verified and works nicely, so why change that?

The regular naysayer tells me that the real problem is the huge number of articles. Which ones to read and which not? Which one to check to the bones, even if already peer-reviewed? The answer is this: it is statistically better to read the articles which appeared in good journals.

Any system aiming to improve  the old one should solve this problem of picking articles from the huge pile which is produced every day.

Apparently, the name, “Selected Papers Network”, suggests that Lee’s project tries to solve this. But now here he comes with a really interesting and different idea!

Forget the incentive, let’s think about articles. The truth is that even if there are many articles, too many to read, there are very few readers of an article  chosen at random. As authors, we all want our articles to be read. As readers, we long for fewer, more interconnected articles.

There are too many articles either because the article is written for ISI points, or because there are too many article writers, or even because the article is not reaching the readers who might do something with it, because they read other articles or, rarely, because they are not yet born (sorry, but I can’t stop myself from mentioning again the comparison of what is happening now in research publishing with the impressionist revolution, so why not accept that there are already articles which don’t yet have readers, like, say, Van Gogh paintings during his lifetime).

Or, an article, as it is written today, with the manifold stupid conventions which are repellent now but had reasons in the past, is a very bad vector of information. There are a lot of articles, each trying to get a bit of brain time on its own, without trying to collaborate with others. In this respect, I believe this is a distant consequence of the Cartesian method, which I think is obsolete in some respects, because it is “designed as a technique for understanding performed by one mind in isolation, severely handicapped by the bad capacity of communication with other isolated minds. It was a very efficient technique, which is now challenged by two effects of its material outcomes:

  • better communication channels provided by the www,
  • mechanical, or should I say digital, applications of the method which largely surpass the capacity of understanding of one human mind, as witnessed for example by the first computer aided mathematical proofs, or for another example by the fact that we can numerically model physical phenomena, without understanding rigorously why the method works.”

As far as I understand the new idea of Christopher Lee, the tagging system proposed by the spnetwork could be a part of a solution for the problem of having too many articles not communicating with one another (by grouping them).

Another part of the solution could be using other vehicles than articles for communicating science; I am thinking about open notebooks. There are not too many open notebooks, but they have the following advantages over articles:

  • more honesty, be it about negative results, apparent dead ends, clearer background data or real motivations for research,
  • more lively, welcoming discussions than the dry and often hidden peer review,
  • naturally interactive.

Articles are like movies, open notebooks are more like games.

Therefore, to conclude, it seems to me that Christopher Lee’s federated ecosystem would have better chances if it allowed open notebooks (besides articles, which are still necessary) to join the party.

Academic Spring and OA movement just a symptom, not cause of change

… a reaction to profound changes which  question the role of universities and scholars. It’s a symptom of an adaptation attempt.

The OA movement, which advances so slowly because of the resistance of the scholars (willingly lulled by the propaganda machine of the association between the legacy publishing industry and the rulers of universities), is just an opening for asking more unsettling questions:

  • is the  research article as we know it a viable vehicle of communication?
  • what is the difference between peer-reviewing articles and writing them?
  • should review be confined to scholars, or do informed netizens (for example those detecting plagiarism) have their place in the review system?
  • is an article a definite piece of research from the moment of publishing it (in whatever form, legacy or open), or is it forever an evolving project, due to contributions from a community of interested people, and if the latter is the case, then who is its author?
  • is it fair to publish an article inspired (in the moral sense, not the legal one) by information freely shared on the net, without acknowledging it, because it is not in the form of an article?
  • is an article the goal of the research, as the test is the goal of studying?

What is our place, as researchers? Are we like the scholars of medieval universities, becoming increasingly irrelevant, less and less creative, with our modern version of rhetoric and theological studies, now called problem solving and grant-proposal writing?

If you look at the timing of the end of the medieval universities and the flourishing of the early modern ones, there are some patterns. We see that (wiki source on early modern universities):

At the end of the Middle Ages, about 400 years after the first university was founded, there were twenty-nine universities spread throughout Europe. In the 15th century, twenty-eight new ones were created, with another eighteen added between 1500 and 1625.[33] This pace continued until by the end of the 18th century there were approximately 143 universities in Europe and Eastern Europe, with the highest concentrations in the German Empire (34), Italian countries (26), France (25), and Spain (23) – this was close to a 500% increase over the number of universities toward the end of the Middle Ages.

Compare with the global spread of the printing press. Compare with the influence of the printing press on the Italian Renaissance (read about Demetrios Chalkokondyles).

The scientific revolution is

Traditionally held to have begun in 1543, when were first printed the books De humani corporis fabrica (On the Workings of the Human Body) by Andreas Vesalius, which gave a new confidence to the role of dissection, observation, and mechanistic view of anatomy,[59] and also De Revolutionibus, by Nicolaus Copernicus. [wiki quote]

Meanwhile, medieval universities faced more and more problems, like [source]

Internal strife within the universities themselves, such as student brawling and absentee professors, acted to destabilize these institutions as well. Universities were also reluctant to give up older curricula, and the continued reliance on the works of Aristotle defied contemporary advancements in science and the arts.[36] This era was also affected by the rise of the nation-state. As universities increasingly came under state control, or formed under the auspices of the state, the faculty governance model (begun by the University of Paris) became more and more prominent. Although the older student-controlled universities still existed, they slowly started to move toward this structural organization. Control of universities still tended to be independent, although university leadership was increasingly appointed by the state.[37]

To finish with a quote from the same wiki source:

The epistemological tensions between scientists and universities were also heightened by the economic realities of research during this time, as individual scientists, associations and universities were vying for limited resources. There was also competition from the formation of new colleges funded by private benefactors and designed to provide free education to the public, or established by local governments to provide a knowledge hungry populace with an alternative to traditional universities.[53] Even when universities supported new scientific endeavors, and the university provided foundational training and authority for the research and conclusions, they could not compete with the resources available through private benefactors.[54]

So, just a symptom.

______________

UPDATE:  Robin Osborne’s article is a perfect illustration  of the confusion which reigns in academia. The opinions of the author, like the following one [boldfaced by me]

When I propose to a research council or similar body that I will investigate a set of research questions in relation to a particular set of data, the research council decides whether those are good questions to apply to that dataset, and in the period during which I am funded by that research council, I investigate those questions, so that at the end of the research I can produce my answers.

show more than enough that today’s university is the medieval university reloaded. How can anybody decide a priori which questions will turn out to be good, a posteriori? Where is the independence of the researcher? How is it possible to think that a research council may have anything other than a mediocre glimpse into the eventual value of a line of research, based on bureaucratic past evidence? And for a reason: because research is supposed to be an exploration, a creation of a new territory; it is not done yet at the moment of the grant application. (Well, that’s something everybody knows, but nevertheless we pretend it does not matter; isn’t that sick?) Instead, conformity reigns. Mike Taylor spends a post on this article, exposing its weakness as concerns OA.

______________

UPDATE 2: Christopher Lee takes a view somewhat opposite to the one in this post, here:

In cultured cities, they formed clubs for the same purpose; at club meetings, particularly juicy letters might be read out in their entirety. Everything was informal (bureaucracy to-science ratio around zero), individual (each person spoke only for themselves, and made up their own mind), and direct (when Pierre wrote to Johan, or Nikolai to Karl, no one yelled “Stop! It has not yet been blessed by a Journal!”).

To use my nomenclature, it was a selected-papers network. And it worked brilliantly for hundreds of years, despite wars, plagues and severe network latency (ping times of 10^9 msec).

Even work we consider “modern” was conducted this way, almost to the twentieth century: for example, Darwin’s work on evolution by natural selection was “published” in 1858, by his friends arranging a reading of it at a meeting of the Linnean Society. From this point of view, it’s the current journal system that’s a historical anomaly, and a very recent one at that.

 

I am very curious about what Christopher Lee will tell us about solutions to escape walled gardens, and I wholeheartedly support the Selected Papers Net.

But in defense of my opinion that the main problem resides in the fact that today’s academia is the medieval university reloaded, this quote (taken out of context?) is an example of survivorship bias. I think that the historical anomaly is not the dissemination of knowledge by using the most efficient technology, but sticking to old ways when revolutionary new possibilities appear. (In the past it was the journal, and at that time scholars cried “Stop! it is published before being blessed by our authority!”, exactly like the scholars of today who cry against OA. Of course, we know almost nothing today about those medieval scholars who formed the majority at that time, proving again that history has a way of punishing stupid choices.)

More details about the Game of Research and Review

What  would you get by combining gamification with visual representations of the peer-review process? A Game of Research and Review.

That’s a follow-up to the post MMORPGames at the knowledge frontier, with more details about how it could work, as a possible solution for making people want, by themselves, to do peer review. (See also the posts Gamifying peer-review? and We, researchers, just need a medium for social interaction, and some apps.)

 

I propose a rewarding mechanism other than points, a more visual one. First, the articles, according to their keywords, produce a 2D landscape, in principle by the same procedure as this old clickable map of mathematics: http://www.math.niu.edu/~rusin/known-math/index/mathmap.html

Then, reviewing an article is like claiming property over a piece of land in this world. The value of the claim itself depends on others’ opinion of your review.

Instead of getting points, you own (shares, say, of) a territory.

Finally, there should be a sort of market for selling and buying property (i.e. shares of some piece of land), which is also automatically mediated by setting a minimal value of a piece of land as a function of how connected it is (for example, if you “own” shares in 5 articles, the minimal value of that composite piece of land increases with the number of connections from these articles to other articles you don’t own or sell). This gives a mechanism for increasing the value of a piece of land simply by adding articles which connect previously unconnected articles; a toy computation of this rule is sketched below.
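
Here is the toy computation announced above, under my own assumptions: each article you hold shares in is a node in a citation/keyword graph, and the minimal value of the composite holding grows with the number of links from your articles to articles outside the holding. The function and the numbers are invented, only to show the shape of the rule.

```python
# Toy illustration only; the formula and the numbers are invented for this post.
from typing import Dict, Set


def minimal_value(holding: Set[str], links: Dict[str, Set[str]],
                  base: float = 1.0, bonus: float = 0.1) -> float:
    """Base value per owned article plus a bonus per link to an article
    outside the holding."""
    outward = sum(len(links.get(article, set()) - holding) for article in holding)
    return base * len(holding) + bonus * outward


# Owning shares in 5 articles; "a" and "e" link to articles outside the
# holding, which raises the minimal value of the composite piece of land.
links = {
    "a": {"b", "x"},   # "x" is outside the holding
    "b": {"a", "c"},
    "c": {"b"},
    "d": set(),
    "e": {"y", "z"},   # both outside the holding
}
print(minimal_value({"a", "b", "c", "d", "e"}, links))  # 5.0 + 0.1 * 3 = 5.3
```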

The software needed for this exists, I suppose. Moreover, a visual representation is much more impressive than a number (of points) and it triggers more primitive reactions in the users’ brains.

SelectedPapers.net launched!

As many people know, academia fights to break free from the chains of legacy publishers. And apparently it is on its way to succeeding. Unlike the fully automated Academic Skynet, which is still in a very early draft version, the human-edited SelectedPapers.net is now launched.

OK, fun aside, this could prove to be an excellent initiative, DEPENDING ON YOU. The limit of this system is only your imagination. Try it, play with it, disseminate it.

Congratulations to Christopher Lee and John Baez for this project. Here are two posts (at Baez blog) explaining what this is about:

In a few words, it is a tag system for articles in arXiv or PubMed (or with a DOI) which is designed to work, in principle, with any existing social network. For the moment it works in association with G+ (i.e. posts on G+ with the hashtags #spnetwork  arxiv:1234.5678 are automatically retrieved by the spnetwork).
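
To make the mechanism concrete, here is a toy parser for this kind of tags, written only as an illustration; it is not the actual spnetwork code. It looks for the #spnetwork hashtag and collects arXiv-style identifiers from the text of a post.

```python
# Toy parser, for illustration only; this is not the actual spnetwork code.
import re

ARXIV_ID = re.compile(
    r"\barxiv:(\d{4}\.\d{4,5}|[a-z\-]+(?:\.[A-Z]{2})?/\d{7})",
    re.IGNORECASE,
)


def extract_spnetwork_tags(post_text: str) -> list:
    """Return the arXiv ids mentioned in a post that carries the #spnetwork tag."""
    if "#spnetwork" not in post_text.lower():
        return []
    return [m.group(1) for m in ARXIV_ID.finditer(post_text)]


print(extract_spnetwork_tags(
    "Nice read! #spnetwork arXiv:1234.5678 and arxiv:math.GT/0309123"))
# -> ['1234.5678', 'math.GT/0309123']
```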

But even at this stage, one can use a G+ post to connect, say, an arXiv article with something from a third place. I used this trick in this post, in order to signal that arXiv:1305.5786 has an associated web tutorial page on this blog. It worked smoothly!

So, what about open peer reviews? Other ideas?

__________

UPDATE: Timothy Gowers has a post where he explains what he intends to do with/for the spnetwork. An especially interesting part:

But the other reason for writing the post is that I hope it will encourage others to do similar things: even if 1000 mathematicians each wrote just one review, that would already create a site worth exploring, and in principle it could happen very quickly.

__________

UPDATE 2:  I just put on my web page the following:

READ THIS: I recommend the use of The Selected Papers Network. Look at these two posts by John Baez to understand how it works: spnetwork (Part 1) , spnetwork (Part 2). If you wish to notice me about your recent (or older) arxiv articles which you think I might benefit from and comment about them then send me a mail or connect with me on G+.

__________

UPDATE 3: Would it be possible to blend the human-edited spnetwork with an automatic service like NewSum? After all, the start of this post is only half a joke.

Feelings about impact factors and journals

Can anybody explain (without falling into ridicule) why, simultaneously:

Btw, this is a link to an excellent article by Björn Brembs, coming just days after another great article, “We have met the enemy and it is us” by Mark Johnston.

I have wanted to write a short post on the use of “feelings” in publishing and peer reviews for a long time. Now is an occasion to finally write one, using Björn’s posting as an example. In his article, he reproduces “feelings” of editors, like (my boldfaces):

… we will decline to pursue [your manuscript] further as we feel we have aired many of these issues already in our pages recently …

we feel that the scope and focus of your paper make it more appropriate for a more specialized journal …

Isn’t it striking that such “feelings” always appear when there is no rational argument (to be overtly mentioned) against accepting an article? I think everybody has at least one example from personal experience. Is this true? Check your files for examples. If you find a referee report or an editor decision which contains feelings but no arguments, try to read it while listening to this classic:

This “feelings” subject is related to the conclusion of the Brembs et al. article Deep Impact: Unintended consequences of journal rank, which is:

Therefore, we suggest that abandoning journals altogether, in favor of a library-based scholarly communication system, will ultimately be necessary. This new system will use modern information technology to vastly improve the filter, sort and discovery functions of the current journal system.

… because a new scholarly communication system has to rely on a technical (and not “feelings”-based), perpetual (i.e. ever-improving), open pre- and post-publication peer review system. (Which, incidentally, is also the only service a publisher can still offer to the author, but which, strangely, publishers do not want to acknowledge.)

What is an author buying from a Gold OA publisher?

Questions/answers  about Gold OA: (please add your answers and other questions)

1. Is the author a customer of a Gold OA publisher?

I think so.

2. What is the author paying for, as a customer?

I think the author pays for the peer-review service.

3. What does the Gold OA publisher offer for the money?

I think it offers only the peer-review service, because
– dissemination can be done by the author by submitting to arXiv, for example,
– +Mike Taylor says that the Gold OA publisher offers the service of assembling an editorial board, but who wants to buy an editorial board? No, the author pays for the peer-review process, which is managed by the editorial board, true, which is assembled by the publisher. So the end product is the peer review and the author pays for that,
– almost 100% automated services, like formatting, citation-web services and hosting the article, are very low-value services today.

However, it might be argued that the Gold OA publisher also offers the service of satisfying the author’s vanity, as the legacy publishers do.

4. Why does no Gold OA publisher present itself as a seller of the peer-review service?

I have no idea.

5. Why is the peer-review service valuable?

Because:
– it saves time for the reader, who will more likely select a peer-reviewed paper to read,
– it is a filter for the technical quality of the articles,
– it helps authors to write better articles, as an effect of the referees’ comments,
– it is also a tool for influencing the opinions of the community, by spinning up some research subjects and downplaying others.

Also on G+ here.

We, researchers, just need a medium for social interaction, and some apps

… so that we can freely play the game of research. Because it is a game, i.e. it is driven by curiosity and the desire to learn, it does not depend on goals and tasks, and it is an extension of a child’s attitude, lost by the majority of adults. Leave vanity aside and just play and interact with other researchers, on an equal footing. Let creativity manifest freely.

Two    Three  Four examples:

Rap Genius is a very well-loved and well-used online tool for annotating rap songs.  Only, not so surprisingly, people are starting to use it to annotate other things.  Like scientific papers.

  • Olivier Charbonneau writes

    Actually, that’s an interesting take on mass data visualization – imagine creating an algorithm that could parse a dataset of bibliographic information into minecraft (for example) – what would that research “world” look like?

  • Hermann Hesse’s   Das Glasperlenspiel (aka Magister Ludi)
  • Timothy Gowers, some time ago, in this post, writes:

What I think could work is something like a cross between the arXiv, a social networking site, Amazon book reviews, and Mathoverflow.

_________

Context:

I have a question about this idea of mixing games with peer-reviews

I don’t get it, therefore I ask, hoping for your input. It looks like the Gamifying peer-review post has found some attentive ears, but the Game on the knowledge frontier has not. This is very puzzling to me, because:

  • the game on the frontier seems feasible in the immediate future,
  • it has two ingredients – visual input instead of bonus points and peer-review as a “conquest” strategy – which have not been tried before and I consider them potentially very powerful,
  • the game on the frontier idea is more than a proposal for peer-review.

My question is: why is the game on the frontier idea less attractive?

Looking forward to your open comments. Suggestions for improving such ideas are also especially welcome.

_______________

UPDATE:  Olivier Charbonneau writes:

Actually, that’s an interesting take on mass data visualization – imagine creating an algorithm that could parse a dataset of bibliographic information into minecraft (for example) – what would that research “world” look like?


MMORPGames at the knowledge frontier

I think we can use the social nature of the web in order to physically construct the knowledge boundary. (In the 21st century, “physical” means into the web.)

Most interesting things happen at the boundary. Life on Earth is concentrated at its surface, a thin boundary between the planet and the void. Most people live near a body of water. Researchers are citizens of the boundary between what is known and the unknown. Contrary to the image of knowledge as the interior of a sphere, with an ever-increasing interface (boundary) where active research is located, no: knowledge, old or new, is always on the boundary, evolving like life does, into deeply interconnected, fractal-like niches.

All this to say that we need an interesting boundary where we, researchers, can live, not impeded by physical or commercial constraints. We need to build the knowledge boundary into the web, at least as much as the real Earth was rebuilt into Google Earth.

A game seems to be a way, because a game is both social and an instrument of exploration. We all love games, especially researchers. Despite the folklore describing nerds as socially inept, we were the first adopters of Role Playing Games, which later evolved into the virtual worlds of Massively Multiplayer Online Role Playing Games. Why not make the knowledge frontier into one of these virtual worlds?

It looks doable; we almost have all we need. Keywords of research areas could be the countries, the places. The physics of this world is ruled by forces with article citation lists as force-carrying bosons. Once the physics is done, we could populate this world and play a game of conquest and exploration. A massively multiplayer online game. Peer reviews of articles decide which units of place are wild and which ones are tamed. Claim your land (by peer-reviewing articles), it’s up for grabs. Organize yourselves by interacting with others, delegating peer reviews for better management of your kingdoms, collaborating on the exploration of new lands.

Instead of getting bonus points, as on mathoverflow, grab some piece of virtual land that you can see! Cultivate it, by linking your articles to it or by peer-reviewing other articles. See the boundaries of your small or big kingdom. Maybe you would like to cross them, to go into a nearby place? You are welcome as a trader. You can establish trade with other nearby kingdoms by throwing bridges between the lands, i.e. by writing interdisciplinary articles, with keywords of both lands. Others will follow (or not) and they will populate the new boundary land you created.

After some time, you may be living in complex, multiply-connected kingdom cities, because you are doing peer-reviewed research in an established, knowledge-rich field. On the fringes of such rich kingdoms a strange variety of creatures live. Some are crackpots, living in the wild territory, which grows wilder with the passage of time. Others are explorers, living between your respectable realm and wild, but slowly taming, territory. From time to time some explorer (or some crackpot, sometimes it is not easy to tell one from the other) makes a breakthrough and suddenly a bright avenue connects two far kingdoms. By the tectonic plate movement of this world, ruled by citations, these kingdoms are now near each other. Claim new land! Build new bridges! During this process some previously rich, lively kingdoms might become derelict. Few people pass by, but nothing is lost: as happened in Rome, the marble of ancient temples was later used for building cathedrals.

If you are not a professional researcher, never mind, you may visit this world and contribute. Or understand more, by seeing how complex, how alive research is, how everything is interwoven. Because an image speaks a thousand words, you can really walk around and form an idea of your own about the subject you are curious about.

Thinking more about peer reviews: they are like property documents, and as in real life some are good and some are disputable. Some are like spells: “I feel that the article is not compelling enough …”. Some are frivolous nonsense: “I find it off-putting when an author does not use quotation marks as I am used to”. Some are rock-solid: “there’s a gap in the proof” or “I have not been able to find the error in the proof, but here is a counter-example to the author’s theorem 1.2”.

So, how can it be done? We (for example by a common effort at github) could start from what is available, like the keywords and citations which are freely available or easy to harvest from tools like Google Scholar profiles, MathSciNet, you name it. The physics has to be written, the project could initially be hosted for almost nothing, and we could ask for sponsors. We could join efforts with established international organizations which intend to pursue somewhat similar projects. The more difficult part will be the tuning of interactions, so that the game gains more and more adopters.
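
As a minimal sketch of the “write the physics” step, and under my own assumptions (articles with keywords and citation lists already harvested into plain records; networkx used only as a convenient graph container), the world-building could start as simply as this:

```python
import networkx as nx

# Hypothetical harvested data: article id -> (keywords, cited article ids).
articles = {
    "A": ({"quasiconvexity"}, ["B"]),
    "B": ({"quasiconvexity", "majorization"}, []),
    "C": ({"majorization"}, ["B"]),
}

# The world: articles are places, citations are the forces between them.
world = nx.Graph()
for art_id, (keywords, cited) in articles.items():
    world.add_node(art_id, keywords=keywords)
    for target in cited:
        world.add_edge(art_id, target)


def country(keyword: str) -> set:
    """A "country" is the set of places sharing a keyword."""
    return {n for n, data in world.nodes(data=True) if keyword in data["keywords"]}


print(country("majorization"))        # {'B', 'C'}
print(list(world.neighbors("B")))     # the immediate boundary of place 'B'
```

The actual tuning of interactions (taming places by peer review, trading, decay) would sit on top of such a graph.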

After that, as I said, the knowledge frontier will be up for grabs. Many will love it and some will hate it.

______________

Context: The richness of knowledge comes from this web of interactions between human minds, across time and space. This knowledge is not reserved for the statistically few people doing research. We grow with it during school, we live within it no matter what we do as adults, we talk about it and we are curious about it. Even more, immensely more, after knowledge has been liberated by the web.

In a short lapse of time (at the scale of history) it has become obvious that research itself needs to be liberated from outdated habits. Imagine a researcher before the web. She was a dual creature: physically placed somewhere on the physical Earth, living in some moment in time, but mentally interconnected with other researchers all over the world, anytime in history. However, the physical place of living impeded or helped the researcher to reach further in the knowledge world, depending on the availability of virtual connections: books, other physically near researchers, local traditions. We can’t even speculate about how many curious minds did not access the knowledge web, due to the physical place and moment in time where they lived, or due to society’s customs. How many women, for example?

But now we have the web, and we use it, as researchers. It is, in some sense, a physical structure which could support the virtual knowledge web. The www appeared in the research world; we are its first citizens. The most surprising effect of the web was not to allow everybody to access the knowledge boundary. Instead, the most powerful effect was to enhance the access of everybody to everybody else. The web has become social. Much less so the research world.

Due to old habits, we lose the pace. We are still chained by physical demands. Being dual creatures, we have to support our physical living. For example, we are forced by outdated customs to accept the hiding of the results of our research behind paywalls. The younger we are, the greater the pressure to “sell” what we do, or to pervert the direction of our work in order to increase our chances of success in the physical world, in order to get access to physical means, like career advancement and grant money.

Old customs die hard. Some time ago a peasant’s child with a brilliant mind had to give up learning because he needed to help his family, while his sister was seen as a burden, not even considered in principle for eventual higher education. Now brilliant young minds, bored or constrained by the local research overlord or local fashion, would rather go do something rewarding for their minds outside academia than slice a tasteless salami into the atoms of publishable units, as their masters (used to) advise them.

An account of personal motivations concerning research and publication

Motivated by a G+ mention of two posts of mine, I think I need to explain a little the purpose of such posts, also by putting them in the context of my experience. (I don’t know how to avoid this appeal to experience, although it is not at all an authority argument. Authority arguments, I believe, are outside the research realm; they should be ignored entirely.)

Despite my attraction to physics and painting, I was turned into a mathematician by a very special kind of professor. When I was little there was the habit of taking private preparatory classes to increase one’s chances of admission to a good college. So, at some point, although I claimed not to need such classes, one day when I came back home after a soccer game I met a strange old guy, who was speaking in an extremely lively and polite way with my parents. I was wearing my school uniform, which was full of dust gathered in the schoolyard, and I was not at all in the mood to speak with old, strange persons. He explained to my parents that he was going to give me one problem to solve, for him to decide whether to accept me or not as a student. He gave me an inequality to prove, then I spent half an hour in my room and found a solution, which I wrote down. I gave the solution to the professor, he looked at it and started: “Marius, a normal kid would solve this inequality like that (he explained it to me). A clever kid would prove the inequality like this (a shorter, more elegant solution). A genius kid would do it like this (a one-line proof). Now, your proof is none of the above, so I take you.” It was an amazing experience to learn, especially geometry, from him. At some point he announced to my parents that he was willing to do the classes for no pay, on the condition that he could come at any time (with half an hour’s notice). We did mathematics at strange hours for me, like midnight or 5 in the morning, or whenever he wanted. Especially where geometry was concerned, he never let me write anything until I could explain in words the idea of the solution; only then could I start writing the proof. An amazing professor, a math artist, in the dark of a communist country. I have never met anybody as fascinating since.

If someone had come to tell me that doing research exclusively means digging in one narrow area in order to write as big a NUMBER of articles as possible in ISI journals, then I would have thought that was a disgusting perversion of a lovely quest. I would have switched to painting, because at least in that field (as old as, no, older than mathematics) creativity won against vanity a long time ago.

I was young then and I wanted to do research in as many areas as I saw fit. There was no internet at the time, therefore I was filling notebooks with my work. Most of it is just lost, mostly because I had nobody around to share my thoughts with, to learn from and to grow into a real researcher in a welcoming environment (with one exception, the undergraduate experience was a disappointment). In fact I was not willing to show what I was doing, because it was much more rewarding to find out some more about a subject than to lose time explaining it to somebody; moreover, now I know to trust my intuition, which was telling me that there was no point in wasting time on this.

The next important moment in my life as a researcher was the contact with the www, which happened in 1994 at the École Polytechnique in Paris, when I was doing a master’s. I was not interested in the courses, because I already had (slightly better, due to the mentioned exception) ones back home, but, OMG, the www! At that point I had only one published article (The topological substratum of the derivative) — can you imagine? — written on a typewriter, with horrific handmade underlines and other physical constraints of the epoch — so I decided that this had to be the future of doing research and I completely lost interest in the contrived way of communicating research by articles.

I had to write articles, and I did, only that very frequently I had problems getting them published, because I hold the opinion that an interesting article should combine at least three fields and should open more questions than it solves. Foolish, really, you may say. But most of all I am still amazed at how much time it took me to start expressing my viewpoints publicly, through the net.

Which I am finally doing now, in this blog.

In this context, I use personal experience as a tool to stress the obvious belief that the www is changing the (research) world much more, and much faster, than the printing press did. I don’t complain about mean reviewers, but I offer examples which support claims such as: the future of peer review is one which is technical (correct or not?), open for anybody to contribute constructively, not based on unscientific opinions and authority arguments, separated from “publication” (whatever this means today) and perpetually subject to change and improvement with the passage of time.

More on open peer-review in this blog here.

Gamifying peer-review?

The fact is: there are lots of articles on arXiv and only about a third of them are published traditionally (according to their statistics). Contrary to biology and medical science, where researchers are way more advanced in new publishing models (like PLoS and PeerJ, the second being almost green in flavour), in math and physics we don’t have any option other than arXiv, which is great, the greatest in fact, the oldest, but … but only if it had a functional peer-review system attached. Then it would be perfect!

It is hard, though, to come up with a model of peer review for the arXiv. Or for any other green OA publication system; I take the arXiv as an example only because it is the one I am most fond of. It is hard because there has to be a way to motivate researchers to do the peer reviews. For free. This is the main type of psychological argument against having green OA with peer review. It is a true argument, even if peer review is done for free in the traditional publishing model too. The difference is that the traditional publishing model has been working since the 1960s and is now ingrained in people’s minds, while any new model of peer review, for the arXiv or any other green OA publication system, has first to win over a significant portion of researchers.

Such a new model does not have to be perfect, only better than the traditional one. For me, a peer review which is technical, open, pre- and post- “publication” would be perfect. PLoS and PeerJ already have (almost) such a peer review. Meanwhile, we physicists and mathematicians sit on the greatest database of research articles, greener than green and older than the internet, and we have still not found a means to do the damn peer review, because nobody has yet found a viral enough solution, despite many proposals and many brilliant minds.

So, why not gamify the peer-review process? Researchers like to play as much as children do; it’s part of the mindframe required for being able to do research. Researchers are also driven by vanity, because they’re smart and highly competitive humans who value playful ideas more than money.

I am thinking about Google Scholar  profiles. I am thinking about vanity surfing. How to add peer-review as a game-like rewarding activity? For building peer communities? Otherwise? Any ideas?

UPDATE: … suppose that instead of earning points for making comments, asking questions, etc., you are automatically assigned, based on your Google Scholar record and the keywords of your articles, one or several research areas (or keywords, whatever). Automatically, you “own” those, or a part of them, like having shares in a company. But in order to continue to own them, you have to do something about peer-reviewing other articles in the area (or from other areas, if you are an expansionist Bonaparte). Otherwise your shares slowly decay. Of course, if you have a stem article with loads of citations then you own a big domain and probably you are not willing to lose so much time managing it. Then you may delegate others to do this. In this way bonds are created; the others may delegate as well, until the peer-review management process is sustainable. Communities may appear. Say also that the domain you own is like a little country and the citations you get from other “countries” are like wealth transfers: if the other country (domain) which cites you is wealthier, then the value of the citation increases. As you see, until now, with the exception of “delegation”, everything could be done automatically. From time to time, if you want to increase the wealth of your domain, or to gain shares in it, you have to do a peer review for an article where you are competent, according to keywords and citations.
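
Here is a toy sketch of the decay-and-replenish rule described above; the decay rate and the reward per review are arbitrary placeholders of my own, just to make the mechanism concrete.

```python
from dataclasses import dataclass


@dataclass
class Holding:
    keyword: str
    shares: float


DECAY_PER_EPOCH = 0.05   # fraction of shares lost per epoch if the owner does nothing
REWARD_PER_REVIEW = 0.2  # shares gained for one peer review done in the area


def epoch_update(holding: Holding, reviews_done: int) -> Holding:
    """Apply one epoch of decay, then credit the reviews done in this area."""
    shares = holding.shares * (1.0 - DECAY_PER_EPOCH)
    shares += REWARD_PER_REVIEW * reviews_done
    return Holding(holding.keyword, shares)


if __name__ == "__main__":
    h = Holding("emergent algebras", shares=1.0)
    for epoch in range(3):
        # the owner reviews one article in epochs 0 and 2, none in epoch 1
        h = epoch_update(h, reviews_done=1 if epoch != 1 else 0)
        print(epoch, round(h.shares, 3))
```

Delegation would simply mean crediting somebody else’s reviews against your holding, in exchange for whatever bond the two of you agree on.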

MORE: MMORPGames at the knowledge frontier.

Something like this could be tried and it could even be fun.

Writing research articles vs writing blog posts

Is there any difference between writing a research article and writing a blog post on a research subject? This is what I would like to understand.

In my mind research articles should become a subset of … what should I call them, maybe “research posts”. There are obvious advantages on the side of the research posts, as well as some disadvantages. I think it is revealing that the advantages have an objective flavour, while the disadvantages have more of a subjective one, mostly related to the bad image blog posts have among “serious” researchers.

I have already experimented with this idea:

  • the Tutorial on graphic lambda calculus has served as the template for  arXiv:1302.0778   On graphic lambda calculus and the dual of the graphic beta move,
  • the post Geometric Ruzsa triangle inequalities and metric spaces with dilations became  arXiv:1304.3358  Geometric Ruzsa triangle inequality in metric spaces with dilations, with very few modifications,
  • I just submitted on arXiv  the article arXiv:1304.3694 Origin of emergent algebras, based on the posts The origin of emergent algebras, part II and part III, as well as parts of Emergent algebra as combinatory logic (part I),
  • now I am struggling with writing a shorter (and somewhat dumber) version of arXiv:1207.0332  Local and global moves on locally planar trivalent graphs, lambda calculus and \lambda-Scale, because I was too hurried and spoiled by the freedom this blog gives me to do research, so in the first version of the article I just wrote too much, without enough motivation. Therefore I decided (based on a peer review which I appreciated) to concentrate first only on what the graphic lambda calculus can do with the gates corresponding to application, abstraction and fan-out: the lambda calculus part of the graphic lambda calculus, along with the braids formalism part. The emergent algebra part is for later.

The format of the articles in this list is as similar as possible to that of the research posts. As an example I mention the use of links inside the text, including direct links to the (preferably OA) versions of the cited articles. See the post Idiot things that we do in our papers out of sheer habit by Mike Taylor for more examples of the same “habits”, which I have already renounced in some of my papers.

The new habit of giving the exact link to the article, instead of a numbered citation in the bibliography, as well as giving the link to ANY source used in the research article (for example a Wikipedia page for a first encounter of a term, along with the invitation for further study from other sources), is clearly one of the advantages a blog post has over a traditionally written article.

However, it is difficult to find a good balance between the extreme freedom of a blog post and the more constrained form of a research article (although the blog of Terence Tao is a very good example – maybe the best I know of – that such a balance may be attainable).

My guess is that at some point open peer review and this change of habits concerning the writing of research articles will meet.

________________

UPDATE: See the post Blogging as post-publication peer review: reasonable or unfair?  by Dorothy Bishop, as well as the comment by Phillip Lord which I reproduce here:

Why stop there? If Author self-publishing can provide rapid feedback on “properly” published science, then they can also provide dissemination of that science in the first place.
Scientific publishing has too long been about credit and promotion. It’s time it returned to what it really should be and what it originally was: communication.

Peer-reviews: soundness vs interest

I am very intrigued by the following idea, which is not new (see further), but which I have not seen discussed in the small world of mathematics. Peer reviews have two goals which could be separated, for the benefit of better communication of research among scholars:

  • to filter submitted articles as a function of the soundness of the research work,
  • to assess  the level of interest of the research work.

The first goal is a must, the second one opens the gate to abuse and subjective bias.

I learned about the idea of separating these two goals from the presentation by Maria Kowalczuk which I shared in this post. Afterwards, I looked on the net to find out more. It seems this idea was pioneered by PLoS ONE, with PeerJ being the latest adopter. The following citation is taken from “Open and Shut?: UK politicians puzzle over peer review in an open access environment” (2011):

* On splitting traditional peer review into two separate processes: a) assessing a paper’s technical soundness and b) assessing its significance — a model pioneered by open-access publisher PLoS ONE, and now increasingly being adopted by traditional publishers …

Q162 Chair: We have heard that pre-publication peer review in most journals can be split, broadly, into a technical assessment and an impact assessment. Is it important to have both? 

Dr Torkar: … It is fairly straightforward to think about scientific soundness because it should be the fundamental goal of the peer review process that we ensure all the publications are well controlled, that the conclusions are supported and that the study design is appropriate. That is fairly straightforward as a very important aspect which should be addressed as part of the peer review process.

The question of the importance of impact is more difficult. When we think about high impact papers we think about those studies which describe findings that are far reaching and could influence a wide range of scientific communities and inform their next-stage experiments. Therefore, it is quite important to have journals that are selective and reach out to a broad readership, but the assessment of what is important can be quite subjective. That is why it is important, also, to give space to smaller studies that present incremental advances. Collectively, they can actually move fields forward in the long term.

Dr Patterson: … [B]oth these tasks add something to the research communication process. Traditionally, technical assessment and impact assessment are wrapped up in a single process that happens before publication. We think there is an opportunity and, potentially, a lot to be gained from decoupling these two processes into processes best carried out before publication and those better left until after publication.

One way to look at this is as follows. About 1.5 million articles are published every year. Before any of them are published, they are sorted into 25,000 different journals. So the journals are like a massive filtering and sorting process that goes on before publication. The question we have been thinking about is whether that is the right way to organise research. There are benefits to focusing on just the technical assessment before publication and the impact assessment after publication … Online we have the opportunity to rethink, completely, how that works. Both are important, but we think that, potentially, they can be decoupled …

Dr Lawrence: … [I]t is not known immediately how important something is. In fact, it takes quite a while to understand its impact. Also, what is important to some people may not be to others. A small piece of research may be very important if you are working in that key area. Therefore, the impact side of it is very subjective.

Dr Read: … Separating the two is important because of the time scale over which you get your answer. The impact is much longer. I guess the technical peer review is a shorter-term issue.

What intrigues me the most is that, even if the idea comes from the publication of medical research, it looks rather easy to implement in a model of math publication. Indeed, is it not the first purpose of the peer review of a math article to decide the soundness of the mathematical results from within?

In mathematics we have the proof, which is highly optimized for independent checking. True or false, sound or flawed, right? In principle at least, peer review in mathematics should serve mainly as a filter for sound results. The reality is different: I think experiences like the ones described in this post (browse through the provided links too, maybe) are by no means exceptional, but rather common.

As for the interest level, it is well known that in mathematics one never knows the long-term effect of a mathematical result. It is common in mathematics that results have a big latency, that articles may suddenly become relevant decades and even centuries after their publication. It should be common sense in mathematics that one cannot rely on the peer-review assessment of the level of interest. But it is not, and, more often than not, under the umbrella of relevance for the journal lurk darker things, like conflicts of interest, over-protection of one field of work against the intrusion of outside researchers, or exclusion based on club membership. In a few words: the second role of peer review is used for masking power games.

All that being said, let’s consider whether the idea of keeping only the first role of peer review is feasible. It certainly is: if it already works for PeerJ and PLoS ONE, why would it not work for new models of mathematical publication, where it would be easier to apply? As usual, the most difficult part is to start using it.

Questions: how could an open, perpetual peer review with the main goal of assessing the soundness of mathematical research be implemented? Would a system based on comments and manifold contributions from blogs, like Retraction Watch for example, be part of the soundness decision, or part of the level-of-interest decision? Would, for example, the Wikipedia model be better for the soundness part of peer reviews, while the “comments in blogs” dreaded by some mathematicians would be better for assessing the local (in time and space) level of interest?

Your informative or critical comments would be great!

“Future of peer review” by Maria Kowalczuk

Via the post “Peer pressure: the changing role of peer review” at the BioMed Central blog, which I highly recommend as reading for those interested in the problem of peer review. Below I embed the presentation “Future of peer review” by Maria Kowalczuk, because I think it applies exactly to publishing in mathematics. There’s a lot to learn from it, or to discuss.

______________

UPDATE: Here is a rather similar presentation, from December 2011, by Iain Hrynaszkiewicz: (pdf)

Peer-review, what is it for?

An interesting discussion started at Retraction Watch, in the comments of the post Brian Deer’s modest proposal for post-publication peer review. Let me repeat the part which I find interesting: post-publication peer review.

The previous post “Peer-reviews don’t protect against plagiarism and articles retraction. Why?”  starts with the following question:

After reading one more post from the excellent blog Retraction Watch, this question dawned on me: if the classical peer-review is such a good thing, then why is it rather inefficient when it comes to detecting flaws or plagiarism cases which later are exposed by the net?

and then I claimed that retractions of articles which already passed the traditional peer-review process are the best argument for an open, perpetual peer-review.

Which brings me to the subject of this post, namely what is peer-review for?

Context. Peer review is one of the pillars of the current practice of research publication. However, the whole machine of traditional publication is going to undergo major modifications, most of them triggered by its perceived inadequacy with respect to the needs of researchers in this era of massive, cheap, abundant means of communication and organization. In particular, peer review is going to undergo transformations of the same magnitude.

We are living in interesting times; we are all aware that the internet is changing our lives at least as much as the invention of the printing press changed the world in the past. With a difference: much faster. We have a unique chance to be part of this change for the better, in particular concerning the practices of research communication. Faced with such a fast evolution of behaviours, it is natural for a traditionalistic attitude to appear, based on the argument that the slower we react, the better the solution we may find. This is, however, in my opinion at least, an attitude better left to institutions, to big, inadequate organizations, than to individuals. Big institutions need long reaction times because information flows slowly through them, due to their pyramidal organization, which is based on the creation of bottlenecks for information and decisions, acting as filters. Individuals are different in the sense that for them, for us, massive, open, non-hierarchical access to communication is a plus.

The bottleneck hypothesis. Peer review is, traditionally, one of those bottlenecks. Its purpose is to separate the professional from the unprofessional. The hypothesis that peer review is a bottleneck explains several facts:

  • peer review gives a stamp of authority to published research. Indeed, those articles which pass the bottleneck are professional, therefore more suitable for use without questioning their content, or even without reading them in detail,
  • the unpublished research is assumed to be unprofessional, because it has not yet passed the peer-review bottleneck,
  • peer-reviewed publications give a professional status to their authors. Obviously, if you are the author of a publication which passed the peer-review bottleneck then you are a professional. The more professional publications you have, the more of a professional you are,
  • it is the fault of the author of the article if it does not pass the peer-review bottleneck. As in many other fields of life, recipes for success and lore appear, concerning means to write a professional article, how to enhance your chances to be accepted in the small community of professionals, as well as feelings of guilt caused by rejection,
  • the peer-review is anonymous by default, as a superior instance which extends gifts of authority or punishments of guilt upon the challengers,
  • once an article passes the bottleneck, it becomes much harder to contest its value. In the past it was almost impossible, because any professional communication had to pass through the filter. In the past, the infallibility of the bottleneck was a kind of self-fulfilling prophecy, with very few counterexamples, themselves known only to a small community of enlightened professionals.

This hypothesis also explains the fact that lately peer review has been subjected to critical scrutiny by professionals. Indeed, in particular, the wave of detected plagiarism among peer-reviewed articles has led to questioning the infallibility of the process. This is shattering the trust in the stamp of authority which is traditionally associated with it. It makes us suppose that the steep rise in retractions is a manifestation of an old problem which is now revealed by the increased visibility of articles.

From a cooler point of view, if we see peer review as designed to be a bottleneck in a traditionally pyramidal organization, then it is questionable whether peer review as a bottleneck will survive.

Social role of peer review. There are two other uses of peer review which are going to survive and, moreover, are going to be the main reasons for its existence:

  • as a binder for communities of peers,
  • as a time-saver for the researchers.

I shall take them one by one. What is strange about traditional peer review is that although any professional is a peer, there is no community of peers. Each researcher does peer-reviewing, but the process is organized in such a manner that we are all alone. To see this, think about the way things work: you receive a request to review an article from an editor, based usually on your publication history, which qualifies you as a peer. You do your job anonymously, which has the advantage of letting you be openly critical of the work of your peer, the author. All communication flows through the editor, so the process is designed to be unfriendly to communication between peers. Hence, no community of peers.

However, most of the researchers who ever lived on Earth are alive today. The main barrier to the spread of ideas is a poor means of communication. If peer review becomes open, it could then foster the appearance of dynamic communities of peers dedicated to the same research subject. As it is today, traditional peer review favours the contrary, namely the fragmentation of the community of researchers interested in the same subject into small clubs, which compete for scarce resources instead of collaborating. (As an example, think about a very specialized research subject which is taken hostage by one, or a few, such clubs which peer-review favourably only the members of the same club.)

As for the time-saver role of peer review, it is obvious. From the sea of old and new articles, I cannot read everything. I have to filter them somehow in order to narrow the quantity of data which I am going to process for doing my research. The traditional way was to rely on the peer-review bottleneck, which is a kind of pre-defined, one-size-fits-all solution. With the advent of communities of peers dedicated to narrow subjects, I can choose the filter which best serves my research interests. That is why, again, an open peer review has obvious advantages. Moreover, such a peer review should be perpetual, in the sense that, for example, reasons for questioning an article should be made public, even after “publication” (whatever such a word will mean in the future). Say another researcher finds that an older article, which once passed peer review, is flawed, for reasons the researcher presents. I could benefit from this information and use it as a filter, a custom, continually upgrading filter of my own, as a member of one of the communities of peers I belong to.
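
As a toy illustration of such a custom, continually upgrading filter (the data model below is my own assumption, only meant to show that the filter itself is a trivial piece of code once open, perpetual reviews exist as data):

```python
from dataclasses import dataclass, field
from typing import List, Set


@dataclass
class OpenReview:
    reviewer: str
    community: str
    concern: str = ""   # empty string means no objection was raised


@dataclass
class Article:
    title: str
    reviews: List[OpenReview] = field(default_factory=list)


def my_filter(articles: List[Article], my_communities: Set[str]) -> List[Article]:
    """Keep only articles with no concerns raised within the communities I follow."""
    kept = []
    for art in articles:
        concerns = [r for r in art.reviews
                    if r.community in my_communities and r.concern]
        if not concerns:
            kept.append(art)
    return kept


if __name__ == "__main__":
    articles = [
        Article("sound paper", [OpenReview("peer1", "metric geometry")]),
        Article("flawed paper", [OpenReview("peer2", "metric geometry",
                                            "gap in the proof of Theorem 1.2")]),
    ]
    for art in my_filter(articles, {"metric geometry"}):
        print(art.title)   # prints only "sound paper"
```

When a new concern is published about an old article, the filter upgrades itself the next time it runs, which is the perpetual aspect argued for above.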

Multiple peer-reviews, a story with a happy-end

I shall tell you the story of this article, from its inception to its publication. I hope it is interesting and funny. It is an old story, not like this one, but nevertheless it might serve to justify my opinion that open peer review (anonymous or not, this doesn’t matter) is much better than the current peer review, in that, by being open (i.e. peer reviews publicly visible and evolving through contributions by the community of peers), it discourages abusive behaviours which are now hidden under secrecy, motivated by a multitude of reasons, like conflicts of interest, protection of one’s own little group against outside researchers, racism, and so on.

Here is the story.

In 2001, at EPFL, I had the chance to have on my desk two items: a recent article by Bernard Dacorogna and Chiara Tanteri concerning quasiconvex hulls of sets of matrices, and the book A.W. Marshall, I. Olkin, Inequalities: Theory of Majorization and Its Applications, Mathematics in Science and Engineering, 143, Academic Press, (1979). The book was recommended to me by Tudor Ratiu, who said that it should be read as a book of conjectures in symplectic geometry. (Without his suggestion, I would never have decided to read this excellent book.)

At the time I was interested in variational quasiconvexity (I invented multiplicative quasiconvexity, or quasiconvexity with respect to a group), which is still a fascinating and open subject, one which could benefit (but does not) from a fresh eye by geometers. On the other hand, geometers who are competent in analysis are a rare species. Bernard Dacorogna, a specialist in analysis with an outstanding and rather visionary mathematical sense, had been onto this subject for some time, for good reasons; see his article with J. Moser, On a partial differential equation involving the Jacobian determinant, Annales de l’Institut Henri Poincaré. Analyse non linéaire 1990, vol. 7, no. 1, pp. 1-26, which is a perfect example of the mixture of differential geometry and analysis.

Therefore, by chance, I could notice the formal similarity between one of Dacorogna’s results and a pair of theorems (Horn, Thompson) in linear algebra, expressed with the help of the majorization relation. I quickly wrote the article “Majorization with applications to the calculus of variations”, where I show that by using majorization techniques, which are older than the quasiconvexity subject (therefore a priori available to specialists in quasiconvexity), several results in analysis have almost trivial proofs, and I also give several new results.

I submitted the article to many journals, without success. I don’t recall the whole list of journals; among them were the Journal of Elasticity, the Proceedings of the Royal Society of Edinburgh, and Discrete and Continuous Dynamical Systems B.

The reports were basically in the same vein: there is nothing new in the paper, even after I eventually changed the name of the paper to “Four applications of majorization to convexity in the calculus of variations”. Here is an excerpt from such a report:

“Usually, a referee report begins with a description of the goal of the paper. It is not easy here, since Buliga’s article does not have a clear target, as its title suggests. More or less, the text examines and exploits the relationships between symmetry and convexity through the so-called majorization of vectors in Rn , and also with  rank-one convexity. It also comes back to works of Ball, Freede and Thompson, Dacorogna & al., Le Dret, giving a few alternate proofs of old results.

This lack of unity is complemented by a lack of accuracy in the notations and the statements. […] All in all, the referee did not feel convinced by this paper. It does not contain a  striking statement that could attract the attention. Thus the mathematical interest does not balance the weak form of the article. I do not see a good argument in favor of the publication by DCDS-B.”

At some point I gave up submitting it.

After a while I made one more try and submitted it to a journal which was not in the same class as the previous ones (namely applied mathematics and calculus of variations). So I submitted the article to Linear Algebra and its Applications and it was accepted. Here is the published version, Linear Algebra and its Applications 429, 2008, 1528-1545, and here is an excerpt from the first referee report (from LAA):

“This paper starts with an overview of majorization theory (Sections 1-4), with emphasis on Schur convexity and inequalities for eigenvalues and singular values. Then some new results are established, e.g. characterizations of rank one convexity of functions, and one considers applications in several areas as nonlinear elasticity and the calculus of variation. […] The paper is well motivated. It presents new proofs of known results and some new theorems showing how majorization theory plays a role in nonlinear elasticity and the calculus of variation, e.g. based on the the notion of rank one convexity.
A main result, given in Theorem 5.6, is a new characterization of  rank one convexity (a kind of elliptic condition) […]  This result involves Schur convexity.

Some modifications are needed to improve readability and make the  paper more self-contained. […] Provided that these changes are done this paper can be recommended for publication.”

_________________________

PS. The article which, from my experience, took the most time from first submission to publication is this one: first version submitted in 1997, also submitted to many journals, and eventually published in 2011, after finally receiving an attentive, unbiased peer review (the final version can be browsed here). The moral of the story is therefore: be optimistic, do what you like most in the best of ways, and be patient.

PS2. See also the very interesting post by Mike Taylor “The only winning move is not to play“.