Category Archives: discussion

The server has been too busy for 3 days, so…

UPDATE: a version of the collection is on github.

____________

The collection needs a better place. Alternatively, I could temporarily use github (by making the animations smaller, I can cram the collection into 480MB). Or, better, replace the animations with the simulations themselves. These simulations occupy 1GB, but they can be mined in order to extract the right parameters (gravity, force strength, radii and colors, mol source), which can then be reused in the js.
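For instance, if the old simulation files embed their parameters as js variable assignments (an assumption about the file layout on my part, not a fact about the actual scripts), mining could be as simple as a few regular expressions:

```javascript
// Hypothetical sketch: extract parameters from the text of an old
// simulation file. The variable names (gravity, forceStrength, ...)
// and the "var name = value;" layout are assumptions for illustration.
function mineParameters(text) {
  // look for assignments like: var gravity = 0.05;
  const grab = (name) => {
    const m = text.match(new RegExp('var\\s+' + name + '\\s*=\\s*([^;]+);'));
    return m ? m[1].trim() : null;
  };
  return {
    gravity: grab('gravity'),
    forceStrength: grab('forceStrength'),
    radii: grab('radii'),
    colors: grab('colors'),
    mol: grab('mol')
  };
}
```

The mined object could then be fed directly to the js simulation as its configuration, instead of shipping the 1GB of html files.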

Anybody willing? I need to explain what pure see is about.

Also, use this working link to my homepage.

Google+ salvaged collection of animations (III): online again!

UPDATE: For example, the 2 neurons interacting animation can be remade online to look like this:

2neurons

Use the mouse wheel to rescale and the mouse to translate. Notice the gravity slider position. This animation is a screencast of the real thing, which takes 8 minutes to unfold. But in this way you see what is happening beyond the original animation.

Btw, what is such a neuron? It is simply a (vectorial) linear function which is applied to itself, written in lambda calculus. These two neurons are two linear functions, with some inputs and outputs connected.
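A toy numerical picture of that description, under my own assumptions (this is not the lambda-calculus graph itself, and the coefficients are invented): each neuron is a linear map, and each one's output feeds the other's input.

```javascript
// Toy sketch only, not the chemlambda graph: two "neurons", each a
// linear function, with outputs cross-connected to inputs.
// The coefficients a, b are made up for illustration.
function step([u, v], a = 0.9, b = 0.3) {
  // neuron 1 combines its own state u with neuron 2's output v,
  // and vice versa
  return [a * u + b * v, a * v + b * u];
}
```

Iterating `step` gives a coupled evolution of the two states, in spirit only; the real reduction happens by graph rewrites.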

Soon the links will be fixed (internal and external) [done], and soon after that there will be a more complete experience of the larger chemlambda universe. (And then the path is open for pure see.)

___

In Oct 2018 I deleted the G+ chemlambda collection of animations, before G+ went offline. Now a big part of it is online, at this link. For many of the animations you can now run the reduction of the associated molecule live.

The association between posts, animations and source mol files is a best fit.

There are limitations explained in the last post.

There are still internal links to repair and there has to be a way to integrate all in one experience, to pay my dues.

I put on imgur this photo with instructions, easy to share:

Screenshot from 2020-01-12 19:34:17

Use the mouse wheel to zoom, the mouse to move, and the gravity slider to expand.

http://imar.ro/~mbuliga/collection.html

The salvaged collection of animations (II)

UPDATE: much better now, although I seriously consider jumping directly to pure see. However, it is very rewarding to get past blocks.

collection

(Continues the first post.) I forgot how much the first awk chemlambda scripts were honed, and how much the constants of the animations produced were further tuned so as to illustrate a point of view in a visually interesting way. The bad part of the animations first produced is that they are big html files, sometimes taking very long to execute.

The all-in-one js solution built by ishanpm, then modified and enhanced by me, works well and fast for graphs with up to approximately 1000 nodes. The physics is fixed and there are only two controls: the gravity slider, which allows you to expand/contract the graphs, and the rewrites slider, which changes the probabilities of the rewrites which increase/decrease the number of nodes. Although there is randomness (initially, in the ishanpm js solution, there was none), it is a weak and not very physical one (considering the idea that the rewrites are caused by enzymes). It is funny that the randomness is not taken seriously; see for example the short programs of formality.

After I revived the collection of animations from G+ (I kept about 300 of them), I still had to associate the animations with the mol files used (many of them actually not in the available mol library) and to use the js chemlambda version (i.e. this one) with the associated mol files. In this way the user would have the possibility to redo the animations.

It turns out it does not work like this. The result is almost always of much lesser quality than the animation. However, the sources of the animations (obtained from the awk scripts) are available here. But as I said at the beginning of the post, they are hard to play (fast enough for the goldfish attention span); actually this was the initial reason for producing animations, because the first demos, even chosen to be rather short, were still too long…

So this is more of a work of art, which has to be carefully restored. I have to extract the useful info from the old simulations and embed it into a full js solution. Coming back to randomness: in the original version there are random cascades of rewrites, not random rewrites done one at a time like in the new js version… and they extinguish the randomly available pockets of enzymes, according to some exponential laws… and so on. That is why the animations look more impressive than the actual fast solution, at least for big graphs.

It is true that the js tools from the quine graphs repository have many advantages: interaction combinators are embedded, there is a lambda calculus to chemlambda parser… With these tools I discovered that the 10-node quine does reproduce, that the ouroboros is mortal, that there are many small quines (in interaction combinators too), etc.

And it turns out that I forgot that many interesting mols and other stuff were left unsaid or are not publicly available. My paranoid self in action.

In conclusion, I’ll probably make available some 300 commented gifs from the collection and then pass to the scientific part. I’d gladly expose the art part somewhere, but there seems to be no place for this art, technically, as there is no place, technically, for the science part as a whole, beyond just words telling stories.

There will be, I’m sure.

Open access in 2019: still bad for the career

Have you seen this: https://newsroom.publishers.org/researchers-and-publishers-oppose-immediate-free-distribution-of-peer-reviewed-journal-articles

“The American publishing industry invests billions of dollars financing, organizing, and executing the world’s leading peer-review process in order to ensure the quality, reliability, and integrity of the scientific record,” said Maria A. Pallante, President & CEO of the Association of American Publishers. “The result is a public-private partnership that advances America’s position as the global leader in research, innovation, and scientific discovery. If the proposed policy goes into effect, not only would it wipe out a significant sector of our economy, it would also cost the federal government billions of dollars, undermine our nation’s scientific research and innovation, and significantly weaken America’s trade position. Nationalizing this essential function—that our private, non-profit scientific societies and commercial publishers do exceedingly well—is a costly, ill-advised path.”

Yes, well, this is true! It is bad for publishers, like Elsevier, and it is bad for some learned societies which signed this letter, like the ACM.

But it would be a small step towards a more normal, 21st century style of communication among researchers. Because researchers do no longer need scientific publishers of this kind.

What is more important? That a useless industry loses money, or that researchers could discuss normally, without the mediation of this parasite from an older age?

Obviously, researchers have careers, which depend on the quantification of their scientific production. The quantification is made according to rules dictated by academic management. The same management who decides to buy from the publishers something the researchers already have (access).

So, no matter how evil the publishers may be, management is worse. Suppose I make a social media app which charges $1 for each word typed into it. Would you buy it, if you wanted to exchange messages with your colleagues? No, obviously. No matter how evil I am for making this app, I would have no clients. But suppose now that your boss decides that the main criterion of career advancement is the number of words you typed into this app. Would you buy it now? Perhaps.

Why, tell me, why would the boss make such a decision? There has to be a reason!

Who is the most evil? I or the boss?

By coincidence, on the same day I learned about the letter against open access, I also read Scott Aaronson’s post about the all-important problem of the name “quantum supremacy”.

The post starts with some good career news:

“Yay! I’m now a Fellow of the ACM. […] I will seek to use this awesome responsibility to steer the ACM along the path of good rather than evil.”

Then Scott spends more than 3100 words discussing the “supremacy” word. Very important subject. People in the media are concerned about this.

First Robert Rand’s comment, then mine, asked for Scott’s opinion, as a new member of the ACM, concerning the open access letter.

The answer is about 100 words, the gist being:

“Anyone who knows the ACM better than I do: what would be some effective ways to register one’s opposition to this?”

A possible answer to my question concerning bosses is: OA is still bad for the career, in 2019.

 

Lambda calculus to chemlambda parser (2) and more slides

This post has two goals: (1) to explain more about the lambda to chemlambda parser and (2) to talk about slides of presentations which are connected one with another across different fields of research.

(1) There are several incremental improvements to the pages from the quine graphs repository. All pages, including the parser one, have two sliders, each giving you control about some parameters.

The “gravity” slider is kind of obvious. Recall that you can use your mouse (or pinching gestures) to zoom in or out of the graph you see. With the gravity slider you control gravity. This allows you to see the edges of the graph better, for example by moving the gravity slider to the minimum and then zooming out. Or, on the contrary, if you have a graph which is too spread out, you can increase gravity, which will have as effect a more compact-looking graph.

The “rewrites weights” slider has as extrema the mysterious words “grow” and “slim”. It works like this: the rewrites (excepting COMB rewrites, which are done preferentially anyway) are grouped into those which increase the number of nodes (“grow”) and those which decrease the number of nodes (“slim”).

At each step, the algorithm tries to pick a rewrite at random. If there is a COMB rewrite to pick, then it is done. Else, the algorithm will try to pick at random one “grow” and one “slim” rewrite. If only one of these is available, i.e. if there is a “grow” but no “slim” rewrite, then that rewrite is done. Else, if there is a choice between two randomly chosen “grow” and “slim” rewrites, we flip a coin to choose among them. The coin is biased towards “grow” or “slim” with the rewrites weights slider.
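A minimal sketch of this picking step (the function and variable names are mine, not from the actual page; `weight` plays the role of the rewrites weights slider, from 0 = always “slim” to 1 = always “grow”):

```javascript
// Sketch of the rewrite-picking step described above.
// combs, grows, slims are lists of available rewrites of each kind;
// rng is a source of uniform random numbers in [0,1).
function pickRewrite(combs, grows, slims, weight, rng = Math.random) {
  // COMB rewrites are done preferentially
  if (combs.length > 0) return combs[rng() * combs.length | 0];
  // pick one random candidate of each kind, if any
  const grow = grows.length ? grows[rng() * grows.length | 0] : null;
  const slim = slims.length ? slims[rng() * slims.length | 0] : null;
  if (grow && !slim) return grow;   // only a "grow" is available
  if (slim && !grow) return slim;   // only a "slim" is available
  if (!grow && !slim) return null;  // nothing to do
  // both available: flip a coin biased by the slider
  return rng() < weight ? grow : slim;
}
```

Moving the slider only changes `weight`, i.e. the bias of the final coin flip, which is why the same graph can behave very differently under the two regimes.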

This is interesting to use, for example with the graphs which come from lambda terms. Many times, but not always, we are interested in reducing the number of nodes as fast as possible. A strategy would be to move the slider to “slim”.

In the case of quines, or quine fights, it is interesting to see how they behave under “grow” or “slim” regime.

Now let’s pass to the parser. It works well now: you can write lambda terms in a human way, but mind that “xy” will be seen as a variable, not as the application of “x” to “y”. Application is “x y”. Otherwise, the parser understands correctly terms like

(\x.\y.\z.z y x) (\x.x x)(\x. x x)\x.x
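To illustrate the convention, here is a toy lexer of my own (not the actual parser’s code) which treats a run of letters as a single variable token, so application has to be written with a space:

```javascript
// Toy lexer sketch: "xy" is one variable token, "x y" is two.
// Tokens: lambda (\), lparen, rparen, dot, var.
function lex(s) {
  const tokens = [];
  const re = /\\|\(|\)|\.|[A-Za-z][A-Za-z0-9]*/g;
  let m;
  while ((m = re.exec(s)) !== null) {
    const t = m[0];
    if (t === '\\') tokens.push({ type: 'lambda' });
    else if (t === '(') tokens.push({ type: 'lparen' });
    else if (t === ')') tokens.push({ type: 'rparen' });
    else if (t === '.') tokens.push({ type: 'dot' });
    else tokens.push({ type: 'var', name: t });
  }
  return tokens;
}
```

So `lex('xy x')` yields the variable “xy” applied to “x” (two tokens), while “xy” alone is a single variable.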

Then I followed the suggestion of my son Matei to immediately do the COMB rewrites, thus eliminating the Arrow nodes given by the parser.

About the parser itself. It is not especially short, for several reasons. One reason is that it is made as a machine with 3 legs, moving along the string given by the lexer, just like the typical 3-valent node. That is why it will be interesting to see it in action, visually. Another reason is that the parser first builds the graph without fanout FO and termination T nodes, then adds the FO and T nodes. Finally, the lambda term is not prepared in advance by any global means (excepting the check for balanced parentheses). For example, no de Bruijn indices.

Another reason is that it allows one to understand what the edges of the (mol) graph are, or more precisely what the port variables (edge variables) correspond to. The observation is that the edges are in correspondence with the positions of the items (lparen, rparen, operation, variable) in the string. We need at most N edge names at this stage, where N is the length of the string. Finally, the second stage, which adds the FO and T nodes, needs at most N new edge names; in practice much fewer: the number of duplicates of variables.

This answers the question: how can we efficiently choose edge names? We could use as edge name the piece of the string up to the item, and we can double this supply of names by using an extra special character. Or, if we want to be secretive, now that we know how to constructively choose names, we can try to use and hide this procedure.
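A sketch of this naming scheme as I read it (the choice of '#' as the extra special character is my assumption): the candidate name for the edge at a given position is the prefix of the string up to that item, and the marked variant serves the second stage which adds the FO and T nodes.

```javascript
// Edge-name sketch: prefixes of the input string are distinct for
// distinct positions, so a string of length N yields at most N names,
// doubled by the '#' marker for the second (FO/T) stage.
function edgeName(str, position, secondPass = false) {
  const base = str.slice(0, position + 1);
  return secondPass ? base + '#' : base;
}
```

Since prefixes of different lengths can never collide, no global bookkeeping is needed to keep names unique.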

Up to now there is no “decorator”, i.e. the inverse procedure which obtains a lambda term from a graph, when that is possible. This is almost trivial and will be done.

I close here this subject, by mentioning that my motivation was not to write a parser from lambda to chemlambda, but to learn how to make a parser from a programming language in the making. You’ll see and hopefully you’ll enjoy 🙂

(2) Slides, slides, slides. I have not considered slides very interesting as a means of communication before. But hey, slides are somewhere on the route to an interactive book, article, etc.

So I added to my page links to 3 related presentations, which, together with a 4th available and popular (?!) on this blog, give a more rounded image of what I try to achieve.

These are:

  • popular slides of a presentation about hamiltonian systems with dissipation, in the form baptized “symplectic Brezis-Ekeland-Nayroles”. Read them in conjunction with arXiv:1902.04598; see further why
  • (Artificial physics for artificial chemistry)   is a presentation which, first, explains what chemlambda is in the context of artificial chemistries, then proceeds with using a stochastic formulation of hamiltonian systems with dissipation as an artificial physics for this artificial chemistry. An example about billiard ball computers is given. Sure, there is an article to be written about the details, but it is nevertheless interesting to infer how this is done.
  • (A kaleidoscope of graph rewrite systems in topology, metric geometry and computer science)  are the most technically evolved slides, presenting the geometrical roots of chemlambda and related efforts. There are many things to pick from there, like: what is the geometrical problem, how it is related to emergent algebras, what is computation, knots, why standard frames in categorical logic can’t help (but perhaps they can if they start thinking about it), who was the first programmer in chemlambda, live pages where you can play with the parser, closing with an announcement that indeed anharmonic lambda (in the imperfect form of kali, or kaleidoscope) solves the initial problem after 10 years of work. Another article would be most satisfactory, but you see, people rarely really read articles on subjects they are not familiar with. These slides may help.
  • and for a general audience my old (Chemlambda for the people)  slides, which you may appreciate more and you may think about applications of chemlambda in the real world. But again, what is the real world, else than a hamiltonian system with dissipation? And who does the computation?

 

 

Quine graphs (3), ouroboros, hapax and going public

Several news:

I decided that, progressively, I’m going to go public, with a combination of arXiv, Github and Zenodo (or Figshare), and publication. But there is a lot of stuff I have to publish, and that is why this will happen progressively. Which means it will be nice to watch, because it is interesting, for me at least, to answer the question:

What the … does a researcher when publishing? What is this for? Why?

Seriously, the questions are not at all directed against classical publication, nor are they biased against OA. When you publish serially, like a researcher, you often tell again and again a story which evolves in time. To make a comparison, it is like a sequence of frames in a movie.

Only it is not as simple. It is not quite like a sequence of frames; it is like a sequence of pictures, each one with its repeating tags, again and again.

Not at all compressed. And not at all like an evolving repository of programs which get better with time.

6 months since my first javascript only

… program, this one: How time flows: Gutenberg time vs Internet time. Before that I used js only for the latest stage, written (clumsily, I admit) by other programs. Since then I wrote hapax and I modified other scripts to fit my needs, mainly, but this corrected a gap in my education 🙂

Oh, btw, if anybody is interested to see/interact on this talk I’d like to propose: [adapted from a pdf (sigh) for my institution’s management, though they are in the process of reverting to the pre-internet era and they managed to nuke all mail addresses @imar.ro, a domain which was rock solid for at least 20 years; that’s why I post it here]

A kaleidoscope of graph rewrite systems in topology, metric geometry and computer science
Graph rewrite systems are used in many research domains; two among many examples are Reidemeister moves in knot theory and Interaction Combinators in computer science. However, the use of graph rewrite systems is often domain dependent. Indeed, for the knot theory example we may use the Reidemeister moves in order to prove that the Kauffman bracket is a knot invariant, which means that it does not change after the graph is modified by any rewrite. In the other example, Interaction Combinators are interesting because they are Turing universal: any computation can be done with IC rewrite rules, and the rewrites are seen as the computational steps which modify the graphs in a significant way.

In this talk I want to explain, for a general audience, the occurrence of and relations among several important graph rewrite systems. I shall start with lambda calculus and the Church-Turing thesis, then I shall describe Lafont’s Interaction Combinators [1]. After that I shall talk about graphic lambda calculus [2] and about joint work with Louis Kauffman [3] on relations with knot theory. Finally, I explain how I, as a mathematician, came to study applications of graph rewrite systems in computer science, starting from emergent algebras [4], proposed in relation with sub-riemannian geometry, and ending with chemlambda [5], hapax (demo page [6], presentation slides [7]) and em-convex [8] with the associated graph rewrite system [9] (short for “kaleidoscope”).

During the talk I shall use programs which are based on graph rewrites, which are free to download and play with from public repositories.

[1] Y. Lafont, Interaction Combinators, Information and Computation 137, 1, (1997), p. 69-101
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.57.5761&rep=rep1&type=pdf

[2] M. Buliga, Graphic lambda calculus. Complex Systems 22, 4 (2013), p. 311-360
https://www.complex-systems.com/abstracts/v22_i04_a01/

[3] M. Buliga, L.H. Kauffman, Chemlambda, Universality and Self-Multiplication, ALIFE 14: The Fourteenth Conference on the Synthesis and Simulation of Living Systems (2014), p. 490-497
https://www.mitpressjournals.org/doi/pdf/10.1162/978-0-262-32621-6-ch079
[4] M. Buliga, Emergent algebras, arXiv:0907.1520
https://arxiv.org/abs/0907.1520

[5] M. Buliga, Chemlambda, GitHub repository (2017)
https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/README.md
[6] M. Buliga, Hapax, (2019) demo page http://imar.ro/~mbuliga/hapax.html,
Github repository https://github.com/mbuliga/hapax

[7] M. Buliga, Artificial physics of artificial chemistries, slides (2019)
http://imar.ro/~mbuliga/genchem.html

[8] M. Buliga, The em-convex rewrite system, arXiv:1807.02058
https://arxiv.org/abs/1807.02058

[9] M. Buliga, Anharmonic lambda calculus, or kali (2019),
demo page https://mbuliga.github.io/kali24.html

 

I’d like to make this much funnier than it looks by using these js scripts. Also, “kaleidoscope” is tongue-in-cheek, but that’s something only we know. Anyway, kali is on the way to being finished, simplified and documented. And somehow different. For a short while, encouraged by these js scripts and similar attempts, I tried to believe that maybe, just maybe, there is a purely local way to do untyped lambda, right around the corner. But it seems there isn’t, although it was fun to try again to search for it. But then what to do? Maybe be honest with the subject and say that indeed a purely local system, geometry inspired, exists, it is Turing universal, but it is not lambda calculus (although it can be guided by humans into being one, so that’s not the problem)? Maybe go back to my initial goal, which was to understand space computationally, which I do now? Yeah, I know that lambda calculus is fascinating, even more so if untyped, but em is so much better!