# An extension of hamiltonian mechanics

This is an introduction to the ideas of the article arXiv:1902.04598

UPDATE: If you think about a billiard-ball computer, the computer is in the expression of the information gap. The model also applies to chemlambda: molecules have a hamiltonian as well, and the graph rewrites, aka chemical reactions, have a description in the information gap. That’s part of the kaleidos project 🙂

__

Hamiltonian mechanics is the mechanism of the world. Indeed, the very simple equations (here the dot means a time derivative)

$$\dot{q} = \frac{\partial H}{\partial p}, \qquad \dot{p} = - \frac{\partial H}{\partial q}$$

govern everything. Just choose an expression for the function $H$, called the hamiltonian, and then solve these equations to find the evolution in time of the system.
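As a concrete numeric illustration (my sketch, not from the article): a symplectic Euler integrator for the harmonic oscillator $H = (p^2 + q^2)/2$, where Hamilton's equations reduce to $\dot{q} = p$, $\dot{p} = -q$. After one full period the state returns close to where it started.

```python
# Symplectic Euler integration of Hamilton's equations for the
# harmonic oscillator H = (p^2 + q^2)/2, i.e. qdot = p, pdot = -q.

TWO_PI = 6.283185307179586

def evolve(q, p, dt, steps):
    for _ in range(steps):
        p -= dt * q  # pdot = -dH/dq = -q
        q += dt * p  # qdot = dH/dp = p (uses the updated p: symplectic Euler)
    return q, p

dt = 0.001
q1, p1 = evolve(1.0, 0.0, dt, int(TWO_PI / dt))
# after one period (t = 2*pi) the orbit is back near the start (1, 0)
```

The symplectic scheme mirrors the hamiltonian formalism: the energy stays bounded instead of drifting, which a naive Euler step would not guarantee.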

Quantum mechanics is in a very precise sense the same thing. The equations are the same, only the formalism is different. There is a hamiltonian which gives the evolution of the quantum system…

Well, until measurement, which is an addition to the beautiful formalism. So we can say that hamiltonian mechanics, in the quantum version, and the measurement algorithm are, together, the basis of the quantum world.

Going back to classical mechanics, the same happens. Hamiltonian mechanics can be used as is in astronomy, or when we model the behavior of a robotic arm, or any other purely mechanical system. However, in real life there are behaviors which go beyond this. Among them: viscosity, plasticity, friction, damage, unilateral contact…

There is always, in almost all applications of mechanics, this extra ingredient: the system does not only have a hamiltonian, there are other quantities which govern it and which, most of the time, make the system behave irreversibly.

Practically every object, machine or construction made by humans needs knowledge beyond hamiltonian mechanics. Or beyond quantum mechanics. This is the realm of applied mathematics, of differential equations, of numerical simulations.

In this classical mechanics for the real world we need the hamiltonian and we also need to explain in which way the object or material we study is different from all the other objects or materials. This one is viscous, plastic, elasto-plastic, elasto-visco-plastic, there is damage, you name it; these differences are studied and they add to hamiltonian mechanics.

They should add, but in practice they don’t. Instead, what happens is that the researchers interested in such studies choose to renounce the beautiful hamiltonian mechanics formalism and to go back to Newton, adding their knowledge about irreversible behaviours there.

(There is another aspect to be considered if you think about mechanical computers. They are mostly nice thought experiments, very powerful idea generators. Take for example a billiard-ball computer. It can’t be described by hamiltonian mechanics alone, because of the unilateral contact of the balls with the billiard table and of the balls with one another. So we can study it, but we have to add to the hamiltonian mechanics formalism.)
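A toy version of that point (my illustration, not from any article): a one-dimensional "billiard" where free flight follows Hamilton's equations for $H = p^2/2$, while the wall contact is exactly the non-hamiltonian extra, an instantaneous momentum reflection.

```python
# 1D billiard on [0, 1]: hamiltonian free flight (H = p^2 / 2) plus a
# unilateral contact rule at the walls, which H alone does not describe.

def bounce(q, p, dt, steps):
    for _ in range(steps):
        q += dt * p               # hamiltonian part: qdot = dH/dp = p
        if q < 0.0 or q > 1.0:    # unilateral contact with a wall
            q = min(max(q, 0.0), 1.0)
            p = -p                # instantaneous reflection of momentum
    return q, p

# start at the middle moving right; after travelling distance ~1
# the ball has bounced once off the right wall and reversed direction
q, p = bounce(0.5, 1.0, dt=0.01, steps=100)
```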

From all this we see that it may be interesting to study whether there is any information content in the deviation from hamiltonian mechanics.

We can measure this deviation by a gap vector, defined by

$$\eta = \dot{z} - X_{H}(z)$$

where $X_H$ is the hamiltonian vector field (in coordinates, $X_H = (\partial H / \partial p, - \partial H / \partial q)$), and we need new equations for the gap vector $\eta$. Very simple then: suppose we have the other ingredient we need, a likelihood function $\pi \in [0,1]$, and we add that

$$(\dot{z}, \eta) \in \arg\max_{(v, w)} \, \pi(z, v, w)$$

where $z = z(t) = (q(t), p(t))$. That is, we ask that if the system is in the state $z$ then the velocity $\dot{z}$ and the gap vector $\eta$ maximize the likelihood $\pi$.

Still too general: how can we choose the likelihood? We may take the following condition

$$\max_{v} \, \pi(z, v, w) \in \{0, 1\} \quad \mbox{ and } \quad \max_{w} \, \pi(z, v, w) \in \{0, 1\}$$

that is, we can suppose that the algorithm max gives a categorical answer when applied to either the 2nd or the 3rd argument of the likelihood.

(It’s Nature’s business to embody the algorithm max…)

We then define the information content associated to the likelihood as

$$I(z, v, w) = - \log \pi(z, v, w)$$

So now we have a principle of minimal information content of the deviation from hamiltonian evolution: minimize

$$I(z, \dot{z}, \eta) = - \log \pi(z, \dot{z}, \eta)$$
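A toy numeric sketch of the whole scheme (my own illustration; the Gaussian-shaped likelihood below is made up, not the one from the article): for $H = (p^2 + q^2)/2$ the gap vector vanishes on hamiltonian evolutions, so their information content is zero, while a friction term makes it strictly positive.

```python
import math

# Gap vector eta = zdot - X_H(z) for H = (p^2 + q^2) / 2, where in
# coordinates X_H = (dH/dp, -dH/dq) = (p, -q). With the made-up
# likelihood pi = exp(-|eta|^2), the information content -log(pi)
# is simply |eta|^2.

def gap(q, p, qdot, pdot):
    return (qdot - p, pdot + q)

def information(eta):
    pi = math.exp(-(eta[0] ** 2 + eta[1] ** 2))  # likelihood in (0, 1]
    return -math.log(pi)

# pure hamiltonian motion: zero gap, hence zero information content
zero = information(gap(1.0, 0.0, 0.0, -1.0))
# add friction -0.1 * p to pdot: nonzero gap, positive information content
damped = information(gap(1.0, 0.5, 0.5, -1.0 - 0.1 * 0.5))
```

The minimization principle then says: among all admissible evolutions, Nature picks the ones whose gap carries the least information.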

In arXiv:1902.04598 I explain how this extension of hamiltonian mechanics works wonderfully with viscosity, plasticity, damage and unilateral contact.

# Stats of perennial posts

How perennial is this blog? I took the top 20 directly accessed posts in each year, for 2017, 2018 and 2019 up to Feb 10.

Conclusion: from the 665 posts on this blog (666 with this one)

• in each year only 20% of the top 20 posts are from the same year. So this blog is not read as a news source; it ages well.
• 73% of all posts ever available were accessed directly in 2017, 62% in 2018, and already 20% in the first month and a half of 2019. Because 2019 just started, it follows that at least 60% of all posts since 2011 are read every year.
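The 2019 figure checks out arithmetically (a trivial sketch using the post counts quoted in this post):

```python
# 134 posts accessed out of 665 total posts is roughly 20%.
total_posts = 665
accessed_2019 = 134
share_2019 = round(100 * accessed_2019 / total_posts)
print(share_2019)  # prints 20
```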

Also, 2015 and 2016 are not well represented in the top 20, probably because of the chemlambda collection. Sad, because there are many other things here besides chemlambda, for example posts about OA and OS.

Here is the data. Mind that the data probably represents only posts read by people who don’t use blockers, as seen via the stats page of the blog. Alas, I would like to know what the real situation is, while at the same time I advise everybody to use blockers, as I do. As an author, I do need a bit of love though, indulge me.

2019 (up to Feb 10):

• 134 posts accessed, i.e. 20% of all posts up to 2019
• 20% from same year, 30% of posts from same year in the top 20
• 2011 (1), 2012 (4), 2013 (5), 2014 (2), 2015 (0), 2016 (1), 2017 (0), 2018 (3), 2019 (4)
1. (2013) Graphic lambda calculus
2. (2012) Conversion of lambda calculus terms into graphs
3. (2011) The Cartesian Theater: philosophy of mind versus aerography
4. (2012) Introduction to graphic lambda calculus
5. (2019) Graphic lambda calculus and chemlambda (I)
6. (2012) Right angles everywhere (I)
7. (2014) Chemlambda
8. (2019) Universality of interaction combinators and chemical reactions
9. (2018) Diagrammatic execution models (Lambda World Cadiz 2018) compared with chemlambda
10. (2012) Right angles everywhere (II), about the gnomon
11. (2019) Graphic lambda calculus and chemlambda (II)
12. (2014) The price of publishing with arXiv
13. (2016) SciHub and patent wars
14. (2013) Teaser: B-type neural networks in graphic lambda calculus (I)
15. (2013) The Y combinator in graphic lambda calculus and in the chemical concrete machine
16. (2019) Kaleidoscope
17. (2013) A machine for computing the Ackermann function in graphic lambda calculus
18. (2013) Dictionary from emergent algebra to graphic lambda calculus (II)
20. (2018) Projects for 2019 and a challenge

2018:

• 407 posts accessed, i.e. 62% of all posts up to 2018
• 20% from same year, 15% of posts from same year in the top 20
• 2011 (3), 2012 (3), 2013 (5), 2014 (3), 2015 (0), 2016 (1), 2017 (1), 2018 (4)
1. (2013) Graphic lambda calculus
2. (2012) Conversion of lambda calculus terms into graphs
3. (2011) The Cartesian Theater: philosophy of mind versus aerography
4. (2014) Chemlambda
5. (2018) Diagrammatic execution models (Lambda World Cadiz 2018) compared with chemlambda
6. (2011) Gromov’s Ergobrain
7. (2013) Cartesian method, scientific method and counting problems
8. (2013) A machine for computing the Ackermann function in graphic lambda calculus
9. (2012) Right angles everywhere (II), about the gnomon
10. (2012) Introduction to graphic lambda calculus
11. (2014) The price of publishing with arXiv
12. (2013) Teaser: B-type neural networks in graphic lambda calculus (I)
13. (2014) Distributed GLC
14. (2018) John Baez’ Applied Category Theory 2019 post uses my animation without attribution [updated]
16. (2011) How not to get bored, by reading Gromov and Tao
17. (2017) Chemical Sneakernet
19. (2016) Open peer review is something others should do, Open science is something you could do
20. (2013) Example: decorations of S,K,I combinators in simply typed graphic lambda calculus

2017:

• 454 posts accessed, i.e. 73% of all posts up to 2017
• 20% from same year, 23% of posts from same year in the top 20
• 2011 (3), 2012 (3), 2013 (5), 2014 (5), 2015 (0), 2016 (0), 2017 (4)
1. (2013) Graphic lambda calculus
2. (2017) The price of publishing with GitHub, Figshare, G+, etc
3. (2011) The Cartesian Theater: philosophy of mind versus aerography
4. (2012) Conversion of lambda calculus terms into graphs
5. (2014) Chemlambda
6. (2014) Distributed GLC
7. (2013) Cartesian method, scientific method and counting problems
8. (2017) Chemlambda for the people (with context)
9. (2014) The price of publishing with arXiv
10. (2012) Introduction to graphic lambda calculus
11. (2012) Right angles everywhere (II), about the gnomon
12. (2011) Gromov’s Ergobrain
13. (2014) How to use chemlambda for understanding DNA manipulations
14. (2013) A machine for computing the Ackermann function in graphic lambda calculus
15. (2013) Unlimited detail is a sorting algorithm
16. (2011) How not to get bored, by reading Gromov and Tao
17. (2013) Hewitt Actor Model, lambda calculus and graphic lambda calculus
18. (2014) Zipper logic
19. (2017) More experiments with Open Science
20. (2017) Back to the drawing board: all strings

# Scientific publishers take their money from the academic managers, blame them too

Starting with “All this is an excellent ad for sci-hub, which avoids most of the serious drawbacks of publishers like Elsevier. It was interesting how that was relegated to a veiled comment at the end, ‘or finding access in other channels’. But basically if the mainstream publishers can’t meet the need, we do need other channels, and right now sci-hub is the only one that actually works at scale.”

Then the discussion goes to “Blame the academic administrators who demand publications in top tier journals – the same ones who charge a ton for access.”

Or “in market terms the clients (researchers) manifest a strong preference for other products than those offered by the publishers. Why do they still exist? Does not make any sense, except if we recognize also that the market is perturbed.”

Enjoy the thread! It shows that people think better than, take your pick: pirates who fight only for the media corporation rights, gold OA diggers who ask for more money than legacy publishers, etc…

UPDATE: for those who don’t know me, I’m for OA and Open Science. I do what I support. I am not for legacy publishers. I don’t believe in the artificial distinction between green OA, which is said to be for archiving, and gold OA which is said to be for publishing. I’m for arXiv and other really needed services for research communication.

# My first programs, long ago: Mumford-Shah and fracture

A long time ago, in 1995-1997, I dreamed about really fast and visual results in image segmentation with the then-new Mumford-Shah functional, and in fracture. It was my first programming experience. I used Fortran, bash and all kinds of tools available on Linux.

There is still this trace of my page back then, here at the Wayback Machine. (I was away until 2006.) The present day web page is this.

Here is the image segmentation by the M-S functional of a bw picture of a Van Gogh painting.

And here is a typical result of fracture propagation (although I remember having hundreds of frames available…)

The article is here.

# What’s new around Open Access and Open Science? [updated]

In the last year I was not very much interested in Open Access and Open Science. There are several reasons, which I shall explain. But before that: what’s new?

My reasons were that:

• I’m a supporter of OA, but not under the banner of gold OA. You know that I have a very bad impression about the whole BOAI thing, which introduced the false distinction between gold, which is publication, and green, which is archival. They succeeded in delaying the adoption of what researchers need (i.e. basically inventions older than BOAI, like arXiv) and the recognition that the whole academic publication system is actively working against the researchers’ interests. Academic managers are the first to be blamed for this, because they don’t have the excuse of working for a private entity which has to make money no matter the cost. Publishers are greedy, OK, but who gives them the money?
• Practically, for the working researcher, we can now publish in any venue, no matter how closed or anachronistically managed, because we can find anything on Sci-Hub, if we want. So there is no reason to fight for more OA than this. Except for those who make money from gold OA…
• I was very wrong with my efforts and attempts to use corporate social media for scientific communication.
• But still, I believe strongly in the superiority of validation over peer review. Open Science is the future.

I was also interested in the implications for OA and OS of the new EU Copyright Directive. I expressed my concern that again it seems that nobody cares about the needs of researchers (as opposed to publishers and corporations in general) and I asked some questions which interest me and which nobody else seems to ask: will the new EU Copyright Directive affect arXiv or Figshare? The problem I see is related to automatic filters, or to the real ways researchers may use these repositories. See for example here for a discussion. In Sept 2018 I filed requests for answers to arXiv and to Figshare. For me at least the answers will be very interesting, and I hope they will be as bland as possible, in the sense that there is nothing to worry about.

So from my side, that’s about all, not much. I feel like, except for the gold OA money sucking, there’s nothing new happening. Please tell me I’m very wrong, and also what I can do with my research output in 2019.

UPDATE: I submitted two days ago a comment at Julia Reda’s post Article 13 is back on – and it got worse, not better, about the implications for the research article repositories, the big ones, I mean, the ones which are used millions of times by many researchers. I waited patiently, either for the appearance of the comment or for a reaction. Any reaction. For me this is a clear answer: pirates fight for the freedom of the corporation to share in its walled garden the product of a publisher. The rest is immaterial for them. They are pirates, not explorers.

# Graphic lambda calculus and chemlambda (IV)

This post continues with chemlambda v2. For the last post in the series see here.

Instead of putting even more material here, I thought it saner to make a clear page with all the details about the nodes and rewrites of chemlambda v2. Down the page there are examples of conflicts.

Not included in that page is the extension of chemlambda v2 with nodes for Turing machines. The scripts have them, in the particular case of a busy beaver machine. You can find this extension explained in the article Turing machines, chemlambda style.

Turing machines appear here differently from the translation technique of Lafont (discussed here; see also these (1), (2) for other relations between interaction combinators and chemlambda). Recall that he proves that interaction combinators are Turing universal by:

• first proving a different kind of universality among interaction nets, to me much more interesting than Turing universality, because it is purely graph related
• then proving that any Turing machine can be turned into an interaction nets graphical rewrite system.

In this extension of chemlambda v2 the nodes for Turing machines are not translated from chemlambda, i.e. they are not given as chemlambda graphs. However, what’s interesting is that the chemlambda and Turing realms can work harmoniously together, even if based on different nodes.

An example is given in the Chemlambda for the people slides, with the name “Virus structure with Turing machines, builds itself”,

but mind that the source link is no longer available, since I deleted the chemlambda g+ collection. The loops you see are busy beaver Turing Machines, the structure from the middle is pure chemlambda.

# The shuffle trick in Lafont’s Interaction Combinators

For the shuffle trick see The illustrated shuffle trick… In a way, it’s also in Lafont’s Interaction Combinators article, in the semantics part.

It’s in the left part of Figure 14.

In chemlambda the pattern involves one FO node and two FOE nodes. In this pattern there is first a FO-FOE rewrite and then a FI-FOE one. After these rewrites we see that we now have a FOE node instead of the FO node and two FO nodes instead of the previous two FOE nodes. There is also a swap of ports, like in the figure.

You can see it all in the linked post, an animation is this:

For previous posts about Lafont paper and relations with chemlambda see:

If the nodes FO and FOE were dilations of arbitrary coefficients $a$ and $b$, in an emergent algebra, then the equivalent rewrite is possible if and only if we are in a vector space. (Hint: it implies linearity, which implies we are in a conical group; therefore we can use the particular form of dilations in the general shuffle trick and we obtain the commutativity of the group operation. The only commutative conical groups are vector spaces.)

In particular the em-convex axiom implies the shuffle trick, via theorem 8.9 from arXiv:1807.02058. So the shuffle trick is a sign of commutativity. Hence chemlambda alone is still not general enough for my purposes.

You may find interesting the post Groups are numbers (1). Together with the em-convex article, it may indeed be deduced that [the use of] one-parameter groups [in Gleason-Yamabe and Montgomery-Zippin] is analogous to the Church encoding of naturals. One-parameter groups are numbers. The em-convex axiom could be weakened to the statement that 2 is invertible and we would still obtain theorem 8.9. So that’s when the vector space structure appears in the solution of Hilbert’s 5th problem. But if you are in a general group with dilations, where only the “em” part of the em-convex rewrite system applies (together with some torsor rewrites, because it’s a group), then you can’t invert 2, or generally any other number than 1, so you get only a structure of conical group at the infinitesimal level. However, naturals exist even there, but they are not related to one-parameter groups.