Tag Archives: Hamiltonian mechanics

An extension of hamiltonian mechanics

This is an introduction to the ideas of the article arXiv:1902.04598.

UPDATE: If you think about a billiard-ball computer, the computer is in the expression of the information gap. The model applies also to chemlambda: molecules have a hamiltonian as well, and the graph rewrites, aka chemical reactions, have a description in the information gap. That’s part of the kaleidos project 🙂

__

Hamiltonian mechanics is the mechanism of the world. Indeed, the very simple equations (here the dot means a time derivative)

\dot{q} = \frac{\partial H}{\partial p}(q,p), \qquad \dot{p} = - \frac{\partial H}{\partial q}(q,p)

govern everything. Just choose an expression for the function H, called the hamiltonian, and then solve these equations to find the evolution in time of the system.
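
To make this concrete, here is a minimal numerical sketch (my illustration, not from the article): choose the harmonic oscillator hamiltonian H(q, p) = (q^2 + p^2)/2 and solve the equations with the semi-implicit (symplectic) Euler scheme.

```python
# Minimal sketch (illustration only): integrate Hamilton's equations
#   qdot = dH/dp, pdot = -dH/dq
# for the harmonic oscillator H(q, p) = (q**2 + p**2) / 2,
# using the semi-implicit (symplectic) Euler scheme.

def dH_dq(q, p):
    return q

def dH_dp(q, p):
    return p

def evolve(q, p, h=0.01, steps=1000):
    trajectory = [(q, p)]
    for _ in range(steps):
        p = p - h * dH_dq(q, p)   # update the momentum first ...
        q = q + h * dH_dp(q, p)   # ... then the position, with the new momentum
        trajectory.append((q, p))
    return trajectory

# Starting from (q, p) = (1, 0) the trajectory stays close to the circle
# q**2 + p**2 = 1, i.e. the energy H is (approximately) conserved.
trajectory = evolve(1.0, 0.0)
print(trajectory[-1])
```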

Quantum mechanics is in a very precise sense the same thing. The equations are the same, only the formalism is different. There is a hamiltonian which gives the evolution of the quantum system…

Well, until measurement, which is an addition to the beautiful formalism. So we can say that hamiltonian mechanics, in the quantum version, and the measurement algorithm are, together, the basis of the quantum world.

Going back to classical mechanics, the same happens. Hamiltonian mechanics can be used as is in astronomy, or when we model the behavior of a robotic arm, or other purely mechanical system. However, in real life there are behaviors which go beyond this. Among them: viscosity, plasticity, friction, damage, unilateral contact…

There is always, in almost all applications of mechanics, this extra ingredient: the system does not only have a hamiltonian, there are other quantities which govern it and which, most of the time, make the system behave irreversibly.

Practically every  object, machine or construction made by humans needs knowledge beyond hamiltonian mechanics. Or beyond quantum mechanics. This is the realm of applied mathematics, of differential equations, of numerical simulations.

In this classical mechanics for the real world we need the hamiltonian, and we also need to explain in which way the object or material we study is different from all the other objects or materials. This one is viscous, plastic, elasto-plastic, elasto-visco-plastic, there is damage, you name it; these differences are studied and they add to hamiltonian mechanics.

They should add, but in practice they don’t. Instead, what happens is that the researchers interested in such studies choose to give up the beautiful hamiltonian mechanics formalism, go back to Newton, and add their knowledge about irreversible behaviours there.

(There is another aspect to be considered if you think about mechanical computers. They are mostly nice thought experiments, very powerful idea generators. Take for example a billiard-ball computer. It can’t be described by hamiltonian mechanics alone, because of the unilateral contact of the balls with the billiard table and of the balls with one another. So we can study it, but we have to add to the hamiltonian mechanics formalism.)

From all this we see that it may be interesting to study whether there is any information content in the deviation from hamiltonian mechanics.

We can measure this deviation by a gap vector, defined by

\eta = \left( \dot{q} - \frac{\partial H}{\partial p}(q,p), \ \dot{p} + \frac{\partial H}{\partial q}(q,p) \right)

and we need new equations for the gap vector \eta. Very simple: suppose we have the other ingredient we need, a likelihood function \pi \in [0,1], and we add that

\pi(z, \dot{z}, \eta) = \max_{\dot{z}', \eta'} \pi(z, \dot{z}', \eta')

where z = z(t) = (q(t), p(t)). That is, we ask that if the system is in the state z then the velocity \dot{z} and the gap vector \eta maximize the likelihood \pi.

Still too general: how can we choose the likelihood? We may impose the following condition

\max_{\dot{z}'} \pi(z, \dot{z}', \eta) \in \{0, 1\}, \qquad \max_{\eta'} \pi(z, \dot{z}, \eta') \in \{0, 1\}

that is, we can suppose that the algorithm max gives a categorical answer (0 or 1) when applied over either the 2nd or the 3rd argument of the likelihood.
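
For instance (an illustration of mine, not an example taken from the article), the likelihood of pure hamiltonian mechanics is

\pi(z, \dot{z}, \eta) = \begin{cases} 1 & \mbox{ if } \eta = 0 \\ 0 & \mbox{ otherwise} \end{cases}

which depends only on the gap vector and indeed gives a categorical answer, 0 or 1, when maximized over the 2nd or over the 3rd argument.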

(It’s Nature’s business to embody the algorithm max…)

We then define the information content associated to the likelihood as

I(z, \dot{z}, \eta) = - \log \pi(z, \dot{z}, \eta)

So now we have a principle of minimal information content of the difference from hamiltonian evolution: minimize

I(z(t), \dot{z}(t), \eta(t)) = - \log \pi(z(t), \dot{z}(t), \eta(t))
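
Continuing the illustration from before (again mine, not from the article): for the likelihood of pure hamiltonian mechanics the information content is

I(z, \dot{z}, \eta) = \begin{cases} 0 & \mbox{ if } \eta = 0 \\ +\infty & \mbox{ otherwise} \end{cases}

so the principle of minimal information content gives back exactly the hamiltonian equations. Dissipative behaviours appear when the likelihood assigns nonzero values to some nonzero gap vectors.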

 

In arXiv:1902.04598 I explain how this extension of hamiltonian mechanics works wonderfully with viscosity, plasticity, damage and unilateral contact.

[see also this]

More detailed argument for the nature of space built from artificial chemistry

There are some things which were left unclear. For example, I have never suggested using networks of computers as a substitute for space, with computers as nodes, etc. This is one of those ideas which are too trivial. A different thing is proposed in the GLC actors article.

First, to associate to an initial partition of the graph (molecule) another graph, with nodes being the partition pieces (thus each node, called an actor, holds a piece of the graph) and edges being those edges of the original whole molecule which link nodes belonging to different pieces of the partition. This is the actors diagram.
Then, to interpret the existence of an edge between two actor nodes as a replacement for a spatial statement like “these two actors are close”. Then, to remark that the partition can be made such that the edges of the actors diagram correspond to active edges of the original graph (an active edge is one which connects two nodes of the molecule which form a left pattern), so that a graph rewrite applied to a left pattern consisting of a pair of nodes, each in a different actor piece, produces not only a change of the state of each actor (i.e. a change of the piece of the graph which is held by each actor), but also a change of the actors diagram itself. Thus, this very simple mechanism produces, by graph rewrites, two effects:

  • “chemical” where two molecules (i.e. the states of two actors) enter in reaction “when they are close” and produce two other molecules (the result of the graph rewrite as seen on the two pieces held by the actors), and
  • “spatial” where the two molecules, after chemical interaction, change their spatial relation with the neighboring molecules because the actors diagram itself has changed.

This was the proposal from the GLC actors article.
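
Here is a minimal sketch of this mechanism in Python (the data structures and names are mine, chosen for illustration only; the GLC actors article does not prescribe them): each actor holds a piece of the molecule, the actors diagram is computed from the edges which cross between pieces, and a rewrite applied to a crossing edge changes both the actors' states and the actors diagram.

```python
# Illustrative sketch (hypothetical names): actors as pieces of a partitioned
# molecule, with the actors diagram derived from the cross-piece edges.

class Actor:
    def __init__(self, name, nodes):
        self.name = name
        self.nodes = set(nodes)   # the piece of the molecule held by this actor

def actor_diagram(actors, molecule_edges):
    """Actor-diagram edges = molecule edges whose endpoints lie in different pieces."""
    owner = {n: a.name for a in actors for n in a.nodes}
    diagram = set()
    for (u, v) in molecule_edges:
        if owner[u] != owner[v]:
            diagram.add(frozenset((owner[u], owner[v])))
    return diagram

def rewrite(actors, molecule_edges, left_pattern, right_pattern_edges):
    """Apply a graph rewrite on a left pattern which straddles two actors.

    left_pattern: a pair of nodes (u, v), each held by a different actor.
    right_pattern_edges: the edges which replace the edge (u, v).
    Both the actors' local graphs and the actors diagram change as a result.
    """
    u, v = left_pattern
    molecule_edges = [e for e in molecule_edges if set(e) != {u, v}]
    molecule_edges.extend(right_pattern_edges)
    return molecule_edges, actor_diagram(actors, molecule_edges)

# Two actors, each holding two nodes; the edge (2, 3) crosses between them.
a = Actor("a", [1, 2])
b = Actor("b", [3, 4])
edges = [(1, 2), (2, 3), (3, 4)]
print(actor_diagram([a, b], edges))   # {frozenset({'a', 'b'})}

# A rewrite on the crossing edge (2, 3) which removes it: the "chemical" effect
# changes the two local graphs, the "spatial" effect changes the actors diagram.
edges, diagram = rewrite([a, b], edges, (2, 3), [])
print(diagram)                        # set(): the two actors are no longer "close"
```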

Now, the first remark is that this explanation has a global side: we look at a global big molecule which is partitioned. But obviously there is no global state of the system, if we think that each actor resides in a computer and each edge of the actors diagram describes the fact that each actor knows the mail address of the other, used as a port name. For explanatory purposes this is OK, on condition that we know well what to expect from this kind of computation: nothing more than the state of a finite number of actors, say up to 10, known in advance, a priori bounded, as is usual in the local-global philosophy used here.

The second remark is that this mechanism is of course only a very simplistic version of what the right mechanism should be. And here enter the emergent algebras, i.e. the abstract nonsense formalism with trees and nodes and graph rewrites which I found while trying to understand sub-riemannian geometry (and noticing that it does not apply only to the sub-riemannian case, but seems to be something more general, of a computational nature, but which computation, etc.). The closeness, i.e. the neighbourhood relations themselves, are a global, a posteriori, static view of the space.

In the Quick and dirty argument for space from chemlambda I propose the following. Because chemlambda is universal, for any program there is a molecule such that the reductions of this molecule simulate the execution of the program. Or, think about the chemlambda gui, and suppose even that I have as much computational power as needed. The gui has two sides, one which processes mol files and outputs mol files of reduced molecules, and the other (based on d3.js) which visualizes each step. “Visualizes” means that there is a physics simulation of the molecule graphs as particles with bonds which move in space or in the plane of the screen. Imagine that with enough computing power and time we can visualize things in as much detail as we need, of course according to some physics principles which are implemented in the visualization program. Take now a molecule (i.e. a mol file) and run the program with the two sides, reduction/visualization. Then, because of chemlambda universality, we know that there exists another molecule whose chemlambda reductions simulate the reductions of the first molecule AND the running of the visualization program.

So there is no need to have a spatial side different from the chemical side!

But of course, this is an argument which shows something which can be done in principle but maybe is not feasible in practice.

That is why I propose to concentrate a bit on the pure spatial part. Let’s do a simple thought experiment: take a system with a finite number of degrees of freedom, see its state as a point in a space (typically a symplectic manifold) and its evolution described by a 1st order equation. Then discretize this correctly (w.r.t. the symplectic structure) and you get a recipe which describes the evolution of the system and which has roughly the following form:

  • starting from an initial position (i.e. state), interpret each step as a computation of the new position based on a given algorithm (the equation of evolution), which is always an algebraic expression which gives the new position as a function of the old one,
  • throw out the initial position and keep only the algorithm for passing from a position to the next,
  • use the same treatment as in chemlambda or GLC, where all the variables are eliminated, renouncing in this way all reference to coordinates, points of the manifold, etc.,
  • remark that the algebraic expressions which are used always consist of affine (or projective) combinations of points (and notice that the combinations themselves can be expressed as trees or other graphs made of dilation nodes, as in the emergent algebras formalism),
  • indeed, that is because of the differential operators in the evolution equation, which are always limits of conjugations of dilations (see the numerical sketch after this list), and because of the algebraic structure of the space, which is also described as a limit of combinations of dilations (notice that I speak about the vector addition operation and its properties, like associativity, etc., not about the points in the space), and finally because of an a priori assumption that functions like the hamiltonian are themselves computable.
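
Here is a small numerical sketch of the remark about differential operators (mine, written in the Euclidean case, only to illustrate the statement): with the dilation \delta^x_\varepsilon y = x + \varepsilon (y - x), the directional derivative of a map F appears in the limit \varepsilon \to 0 of a composition of dilations around F.

```python
# Sketch (Euclidean case, illustration only): the dilation of coefficient eps
# based at x, and the directional derivative of F obtained as a limit of
# compositions of dilations, in the spirit of the emergent algebras formalism.
import numpy as np

def dil(x, eps, y):
    # delta^x_eps (y) = x + eps * (y - x)
    return x + eps * (y - x)

def F(z):
    # any smooth map of the plane; a hypothetical example
    q, p = z
    return np.array([q * p, q - p**2])

x = np.array([1.0, 2.0])
v = np.array([0.5, -1.0])

for eps in [0.1, 0.01, 0.001]:
    # delta^{F(x)}_{1/eps} ( F( delta^x_eps (x + v) ) )  ->  F(x) + DF(x) v
    approx = dil(F(x), 1.0 / eps, F(dil(x, eps, x + v)))
    print(eps, approx - F(x))   # converges to the directional derivative DF(x) v
```

The printed differences converge to the directional derivative DF(x)v, which is the content of the “limits of conjugations of dilations” remark in the Euclidean setting.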

This recipe itself is like a chemlambda molecule, but consisting not only of A, L, FI, FO, FOE nodes but also of some (two perhaps) dilation nodes, with moves, i.e. graph rewrites, which allow one to pass from a step to the next. The symplectic structure itself is only a shadow of a Heisenberg group structure, i.e. of a contact structure of a circle bundle over the symplectic manifold, as geometric prequantization proposes (but this is a mathematical fact which is, in itself, independent of any interpretation or speculation). I know what has to be added (i.e. which graph rewrites particularize this structure among all possible ones), because it connects precisely to sub-riemannian geometry. You may want to browse the old series on Gromov-Hausdorff distances and the Heisenberg group part 0, part I, part II, part III, or to start from the other end The graphical moves of projective conical spaces (II).

Hence my proposal, which consists in thinking about space properties as embodied into graph rewriting systems, inspired by the abstract nonsense of emergent algebras, combining the pure computational side of A, L, etc. with the spatial computational side of dilation nodes into one whole.

In this sense, space as an absolute or relative vessel does not exist any more than the Marius creature does (what does exist is a twirl of atoms, some going in, some out, which is too complex for my human brain to understand); instead, the fact that all beings and inanimate objects seem to agree collectively when it comes to moving spatially is in reality a manifestation of the universality of this graph rewrite system.

Finally, I’ll get to the main point, which is that I don’t believe it is that simple. It may be, but it may as well be something which contains these ideas only as a small part, the tip of the nose of a monumental statue. What I believe is that it is possible to argue by example that nature could work like this. I mean that chemlambda shows that there exists a formalism which can do this, albeit perhaps in a very primitive way.

The second belief I have is that, regardless of whether nature functions like this or not, chemlambda is at least a proof of principle that brains could process spatial information in this chemical way.

__________________________________________________________________

A symplectic Brezis-Ekeland-Nayroles principle

We submitted to the arXiv the article

Marius Buliga, Gery de Saxce, A symplectic Brezis-Ekeland-Nayroles principle

You can find here the slides of two talks given in Lille and Paris a while ago, where the article was announced.

UPDATE: The article appeared as arXiv:1408.3102

This is, we hope, an important article! Here is why.

The Brezis-Ekeland-Nayroles principle appeared in two articles from 1976, the first by Brezis-Ekeland, the second by Nayroles. These articles appeared too early, compared to the computation power of the time!

We call the principle by the initials of the names of the inventors: the BEN principle.

The BEN principle asserts that the evolution curve of an elasto-plastic body minimizes a certain functional, among all possible evolution curves which are compatible with the initial and boundary conditions.

This opens the possibility of finding the evolution curve all at once, instead of constructing it incrementally in time.
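
To give the flavour of such a principle, here is its simplest ancestor, written schematically (an illustration only, not the elasto-plastic functional from our article): for the gradient flow \dot{u}(t) \in - \partial \phi(u(t)) of a convex potential \phi, the Brezis-Ekeland functional

J(u) = \int_0^T \left[ \phi(u(t)) + \phi^*(-\dot{u}(t)) \right] dt + \frac{1}{2} |u(T)|^2 - \frac{1}{2} |u(0)|^2

is nonnegative on every curve, by the Fenchel inequality \phi(u) + \phi^*(-\dot{u}) \geq \langle -\dot{u}, u \rangle, and it vanishes exactly on the solutions of the gradient flow. Minimizing J over whole curves with the prescribed initial data therefore selects the evolution.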

In 1976 this was SF for the computers of the moment. Now it’s the right time!

Pay attention to the fact that a continuum mechanics system has states belonging to an infinite dimensional space (i.e. it has an infinite number of degrees of freedom), therefore we almost never hope to find, nor need, the exact solution of the evolution problem. We are happy, for all practical purposes, with approximate solutions.

We are not after the exact evolution curve, instead we are looking for an approximate evolution curve which has the right quantitative approximate properties, and all the right qualitative exact properties.

In elasto-plasticity (a hugely important class of materials for engineering applications) the evolution equations are moreover not smooth. Differential calculus is conveniently and beautifully replaced by convex analysis.

Another aspect is that elasto-plastic materials are dissipative, therefore there is no obvious hope to treat them with the tools of hamiltonian mechanics.

Our symplectic BEN principle does this: one principle covers the dynamical, dissipative evolution of a body, in a way which is reasonably amenable to numerical applications.

_______________________________________

 

Hamiltonian systems with dissipation, slides

If you want to see my other, applied math side, then here are the slides for a talk here at Lille, scheduled for Thu 12.06.2014.

Hamiltonian systems with dissipation

This is not what I have been working on these weeks, but a preparation. Soon something called “the symplectic Brezis-Ekeland-Nayroles principle” will be finished, written in collaboration with Gery de Saxce. Together with Gery, we have been working on the rigorous formulation of a generalization of some parts of convex analysis for a number of years already; see the articles which contain the word “bipotential” here.

UPDATE: If you are in Paris next Tue June 17 then you may come to the seminaire D’Alembert where, due to the kind invitation of Djimedo Kondo, I shall give a variant of the talk on Hamiltonian systems with dissipation, with some updates on the work done with Gery.

_____________________________