Category Archives: vision

More detailed argument for the nature of space built from artificial chemistry

There are some things which were left unclear. For example, I have never suggested using networks of computers as a substitute for space, with computers as nodes, etc. That idea is too trivial. The GLC actors article proposes a different thing.

First, associate to an initial partition of the graph (molecule) another graph, whose nodes are the partition pieces (thus each node, called an actor, holds a piece of the graph) and whose edges are those edges of the original molecule which link nodes lying in different pieces of the partition. This is the actor diagram.
Then, interpret the existence of an edge between two actor nodes as a replacement for a spatial statement like “these two actors are close”. Then, remark that the partition can be made such that the edges of the actor diagram correspond to active edges of the original graph (an active edge is one which connects two nodes of the molecule forming a left pattern). A graph rewrite applied to a left pattern whose two nodes lie in different actor pieces then produces not only a change of the state of each actor (i.e. a change of the piece of the graph held by each actor), but also a change of the actor diagram itself. Thus this very simple mechanism produces, by graph rewrites, two effects (a toy sketch of the construction follows the list below):

  • “chemical”, where two molecules (i.e. the states of two actors) enter into reaction “when they are close” and produce two other molecules (the result of the graph rewrite as seen on the two pieces held by the actors), and
  • “spatial”, where the two molecules, after the chemical interaction, change their spatial relation with the neighboring molecules, because the actor diagram itself has changed.
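
To make the construction concrete, here is a minimal sketch (the graph encoding, the function names and the toy molecule are my own illustrative assumptions, not notation from the GLC actors article):

```python
# Hedged sketch: compute the actor diagram from a partition of a molecule
# graph. Each actor holds the piece of the graph assigned to it; an edge
# of the actor diagram records a molecule edge crossing two pieces.

def actor_diagram(edges, partition):
    """edges: pairs of molecule nodes; partition: node -> actor id.
    Returns the set of actor-diagram edges (pairs of distinct actors)."""
    diagram = set()
    for a, b in edges:
        if partition[a] != partition[b]:
            diagram.add(frozenset((partition[a], partition[b])))
    return diagram

# toy molecule: a 4-node path split between two actors
edges = [(1, 2), (2, 3), (3, 4)]
partition = {1: "actor_A", 2: "actor_A", 3: "actor_B", 4: "actor_B"}
print(actor_diagram(edges, partition))  # {frozenset({'actor_A', 'actor_B'})}
```

A rewrite of a left pattern sitting on the edge (2, 3) would change both pieces and, with them, the actor diagram itself.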

This was the proposal from the GLC actors article.

Now, the first remark is that this explanation has a global side, namely that we look at a global big molecule which is partitioned. But obviously there is no global state of the system, if we think that each actor resides in a computer and each edge of the actor diagram records the fact that one actor knows the mail address of the other, used as a port name. For explanatory purposes this is OK, on the condition of knowing well what to expect from this kind of computation: nothing more than the states of a finite number of actors, say up to 10, known in advance, an a priori bound, as is usual in the local-global philosophy used here.

The second remark is that this mechanism is of course only a very simplistic version of what the right mechanism should be. And here enter the emergent algebras, i.e. the abstract-nonsense formalism of trees, nodes and graph rewrites which I found while trying to understand sub-Riemannian geometry (noticing that it does not apply only to the sub-Riemannian case, but seems to be something more general, of a computational nature; which computation, though, etc.). The closeness, i.e. the neighbourhood relations themselves, are a global, a posteriori, static view of the space.

In the Quick and dirty argument for space from chemlambda I propose the following. Because chemlambda is universal, for any program there is a molecule such that the reductions of this molecule simulate the execution of the program. Or, think about the chemlambda GUI, and suppose that I have as much computational power as needed. The GUI has two sides: one which processes mol files and outputs the mol files of the reduced molecules, and another (based on d3.js) which visualizes each step. “Visualizes” means that there is a physics simulation of the molecule graphs, as particles with bonds which move in the space or plane of the screen. Imagine that with enough computing power and time we can visualize things in as much detail as we need, according, of course, to some physics principles implemented in the visualization program. Take now a molecule (i.e. a mol file) and run the program with its two sides, reduction/visualization. Then, because of chemlambda universality, we know that there exists another molecule whose chemlambda reductions simulate the reductions of the first molecule AND the running of the visualization program.

So there is no need to have a spatial side different from the chemical side!

But of course, this is an argument which shows something that can be done in principle, but which maybe is not feasible in practice.

That is why I propose to concentrate a bit on the pure spatial part. Let’s do a simple thought experiment: take a system with a finite number of degrees of freedom, see its state as a point in a space (typically a symplectic manifold) and its evolution as described by a first-order equation. Then discretize this correctly (w.r.t. the symplectic structure) and you get a recipe describing the evolution of the system, which has roughly the following form (a minimal numeric sketch follows the list):

  • starting from an initial position (i.e. state), interpret each step as a computation of the new position by a given algorithm (the equation of evolution), which is always an algebraic expression giving the new position as a function of the old one,
  • throw out the initial position and keep only the algorithm for passing from one position to the next,
  • apply the same treatment as in chemlambda or GLC, where all the variables are eliminated, thereby renouncing all reference to coordinates, points of the manifold, etc.,
  • remark that the algebraic expressions used always consist of affine (or projective) combinations of points (and notice that the combinations themselves can be expressed as trees or other graphs made of dilation nodes, as in the emergent algebras formalism),
  • indeed, that is because of the differential operators in the evolution equation, which are always limits of conjugations of dilations, because of the algebraic structure of the space, which is also described as a limit of combinations of dilations (notice that I speak about the vector addition operation and its properties, like associativity, etc., not about the points of the space), and finally because of an a priori assumption that functions like the Hamiltonian are themselves computable.
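
As a minimal numeric illustration of the recipe (my own toy example, not taken from the papers): the symplectic (semi-implicit) Euler discretization of the harmonic oscillator updates the state by expressions which are affine in the old state, with no other data entering.

```python
# Hedged sketch: symplectic Euler step for H(q, p) = (p^2 + q^2)/2.
# The new position is an affine combination of the old one; keeping only
# this update rule (and discarding the initial position) is the second
# bullet of the recipe above.

def step(q, p, dt):
    p_new = p - dt * q        # dH/dq = q
    q_new = q + dt * p_new    # dH/dp = p
    return q_new, p_new

q, p = 1.0, 0.0
for _ in range(1000):
    q, p = step(q, p, dt=0.01)
print(q, p)  # stays close to the circle q^2 + p^2 = 1, as expected
```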

This recipe itself is like a chemlambda molecule, but consisting not only of A, L, FI, FO, FOE nodes but also of some (two, perhaps) dilation nodes, with moves, i.e. graph rewrites, which allow passing from one step to the next. The symplectic structure itself is only a shadow of a Heisenberg group structure, i.e. of a contact structure on a circle bundle over the symplectic manifold, as geometric prequantization proposes (this is a mathematical fact which is, in itself, independent of any interpretation or speculation). I know what is to be added (i.e. which graph rewrites particularize this structure among all possible ones), because it connects precisely to sub-Riemannian geometry. You may want to browse the old series on Gromov-Hausdorff distances and the Heisenberg group, part 0, part I, part II, part III, or to start from the other end with The graphical moves of projective conical spaces (II).
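
For readers who have not met dilations: in the vector-space case (a hedged special case only; the general emergent algebra axioms are not captured here) a dilation of coefficient eps based at x sends y to x + eps(y − x), and compositions of dilations recover, in the limit eps → 0, the vector addition which the formalism treats as emergent:

```python
# Hedged sketch: the "approximate sum" built from three dilations in R^n
# converges, as eps -> 0, to y + z - x, i.e. to vector addition with base
# point x. This is only the vector-space shadow of the emergent algebras
# construction; the function names are mine.

def delta(x, eps, y):
    return tuple(xi + eps * (yi - xi) for xi, yi in zip(x, y))

def approx_sum(x, eps, y, z):
    u = delta(x, eps, y)            # contract y toward x
    v = delta(u, eps, z)            # contract z toward u
    return delta(x, 1.0 / eps, v)   # re-inflate from x

x, y, z = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)
for eps in (0.5, 0.1, 0.001):
    print(eps, approx_sum(x, eps, y, z))  # tends to (1.0, 1.0) = y + z - x
```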

Hence my proposal, which consists in thinking about space properties as embodied into graph rewriting systems, inspired by the abstract nonsense of emergent algebras, combining the pure computational side of A, L, etc. with the spatial computational side of dilation nodes into one whole.

In this sense space as an absolute or relative vessel does not exist, any more than the Marius creature does (what does exist is a twirl of atoms, some going in, some going out, too complex for my human brain to understand); instead, the fact that all beings and inanimate objects seem to agree collectively when it comes to moving spatially is in reality a manifestation of the universality of this graph rewrite system.

Finally, I’ll go to the main point, which is that I don’t believe that it is that simple. It may be, but it may as well be something which contains these ideas only as a small part, the tip of the nose of a monumental statue. What I believe is that it is possible to make the argument, by example, that it is possible that nature works like this. I mean that chemlambda shows that there exists a formalism which can do this, albeit perhaps in a very primitive way.

The second belief I have is that, regardless of whether nature functions like this or not, chemlambda is at least a proof of principle that it is possible that brains process spatial information in this chemical way.

__________________________________________________________________


Quick and dirty argument for space from chemlambda

One of the least understood ideas of chemlambda is related to this question: what is the space where these artificial molecules live?

There are two different possible applications of chemlambda, each having a different answer for this question. By confusing these two applications we arrive at the confusion about the conception of space in chemlambda.

Application 1 concerns real chemistry and biology. It is this: suppose there exist real chemical molecules which realize the chemlambda graphs, reacting with other real substances which play the role of the enzymes for the moves (invisible in chemlambda). Then, from the moment these real molecules and real enzymes are identified, we get *for free* a chemical computer, if we think small. If we think big, then we may hope that the real molecules are ubiquitous in biochemistry, and, once the chemical reactions which represent the chemlambda moves are identified, we get for free a computational interpretation of big parts of biochemistry. Thinking big, this would mean that we come to grasp a fundamental manifestation of computation in biochemistry, one which has nothing at all to do with numbers, or bits, or boolean gates, or channels and processes, all this garbage we carry from the (historically very limited) experience we have had with computation until now.

In this application 1, space is no mystery: it is the well known 3d space, the vessel where real molecules roam. The interest here is not in “what is space?”, but in “is life, in some definite, clear way, a computational thing?”.

Application 2 resembles physics more than biochemistry. It aims to answer the question: what is space? Ironically, from neuroscience we know clearly that living brains don’t relate to space in any way which involves coordinates and crunching numbers. However, the most fundamental physics has never escaped the realm of coordinates and implicit assumptions about backgrounds.

Until now. The idea proposed by application 2 of chemlambda is that space is nothing but a sort of a program.

I try to make this clear by using emergent algebras, and I will continue on this path, but here is the quick and dirty argument, which appears not to use emergent algebras, that chemlambda can explain space as a program.

(It does use them, but this is a detail; pay attention to the main line.)

OK, so the artificial molecules in chemlambda are graphs. As graphs, they don’t need any space to exist, because everybody knows that a graph can be described in various ways (it is a data structure) and only embeddings of a graph in a space need, ahem… space.

Just graphs, encoded in .mol files, as used by the chemlambda visualiser I work on these days.
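
For the curious, here is a hedged sketch of how little structure such an encoding needs (the sample content and the parser are my simplified assumptions; see the chemlambda repository for the actual mol format):

```python
# Hedged sketch: a mol file lists nodes, one per line, as a node type
# followed by its port names; an edge exists wherever two ports share a
# name. No coordinates appear anywhere. Sample content and parser are my
# simplified assumptions, not the official chemlambda tooling.

from collections import defaultdict

mol = """\
L a a b
A b c d
"""

ports = defaultdict(list)   # port name -> [(node index, port slot)]
nodes = []
for i, line in enumerate(mol.splitlines()):
    node_type, *port_names = line.split()
    nodes.append(node_type)
    for slot, name in enumerate(port_names):
        ports[name].append((i, slot))

edges = [tuple(ends) for ends in ports.values() if len(ends) == 2]
print(nodes)   # ['L', 'A']
print(edges)   # each edge joins two (node, slot) endpoints by port name
```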

What you see on the screen when you use the visualiser is chemlambda as the main engine, with some javascript salt and pepper in order to impress our visually based monkey brains.

But, you see, chemlambda can do any computation, because it can do combinatory logic. The precise statement is that chemlambda, with the reduction strategy which I call “the most stupid”, is a universal computer.

That means that for any chain of reductions of a chemlambda molecule, there is another chemlambda molecule whose reductions describe the first mentioned reductions AND the javascript (and whatnot) computation which represents the said first chain of reductions on the screen.

What do you think about this bootstrapping?

__________________________

From a stain on the wall to five visual languages

Do you know about the “stain on the wall”  creativity technique of Leonardo da Vinci? Here is a quote [source used]:

I will not forget to insert into these rules, a new theoretical invention for knowledge’s sake, which, although it seems of little import and good for a laugh, is nonetheless, of great utility in bringing out the creativity in some of these inventions.    This is the case if you cast your glance on any walls dirty with such stains or walls made up of rock formations of different types.  If you have to invent some scenes, you will be able to discover them there in diverse forms, in diverse landscapes, adorned with mountains, rivers, rocks, trees, extensive plains, valleys, and hills. You can even see different battle scenes and movements made up of unusual figures,  faces with strange expressions,  and myriad things which you can  transform into a complete and proper form constituting part of similar walls and rocks. These are like the sound of bells, in whose tolling, you hear names and words that your imagination conjures up.

I propose to you  five graphical formalisms, or visual languages, towards the goal of “computing with space”.

They all come from a “stain on the wall”, reproduced here (it is the beginning of the article What is a space? Computations in emergent algebras and the front end visual system, arXiv:1009.5028), with some links to more detailed explanations and related material which I invite you to follow.

Or, better, treat them as a stain on the wall. To share, to dream about, to create, to discuss.

In mathematics “spaces” come in many flavours. There are vector spaces, affine spaces, symmetric spaces, groups and so on. We usually take such objects as the stage where the plot of reasoning is laid. But in fact what we use, in many instances, are properties of particular spaces which, I claim, can be seen as coming from a particular class of computations.

There is though a “space” which is “given” almost beyond doubt, namely the physical space where we all live. But as regards the perception of this space, we know now that things are not so simple. As I am writing these notes, here in Baixo Gavea, my eyes are attracted by the wonderful complexity of a tree near my window. The nature of the tree is foreign to me, as are the other smaller beings growing on or around the tree. I can make some educated guesses about what they are: some are orchids; there is a smaller, iterated version of the big tree. However, somewhere in my brain, at a very fundamental level, the visible space is constructed in my head, before the stage where I am capable of recognizing and naming the objects or beings that I see.

__________________________________

The five visual languages are to be used with the decentralized computing model called Distributed GLC.  They point to different aspects, or they try to fulfil different goals.

They are:

__________________________________

 

What is new in distributed GLC?

We have seen that several parts or principles of distributed GLC are well anchored in previous, classical research.  There are three such ingredients:

There are several new things, which I shall try to list.

1.  It is a clear, mathematically well formulated model of computation. There is a preparation stage and a computation stage. In the preparation stage we define the “GLC actors”; in the computation stage we let them interact. Each GLC actor interacts with others, or with itself, according to 5 behaviours. (Not part of the model is the choice among behaviours when several are possible at the same moment. The default is to impose on the actors to first interact with others (i.e. behaviours 1, 2, in this order) and, if no interaction is possible, to proceed with the internal behaviours 3, 4, in this order. As for behaviour 5, the interaction with external constructs, this is left to particular implementations. A hedged sketch of this default ordering follows the list.)

2.  It is compatible with the Church-Turing notion of computation. Indeed,  chemlambda (and GLC) are universal.

3. Evaluation is not needed during the computation (i.e. in stage 2). This is the embodiment of the “no semantics” principle. The “no semantics” principle actually means something precise; it is a positive thing, not a negative one. Moreover, the dissociation between computation and evaluation is new in many ways.

4.  It can be used for doing functional programming without the eta reduction (the rule which turns λx.(f x) into f when x is not free in f). This is a more general form of functional programming, which in fact is so general that it does not use functions. That is because the notion of a function makes sense only in the presence of eta reduction.

5. It has no problem going, at least apparently, outside the Church-Turing notion of computation. This is not a vague statement, it is a fact, meaning that GLC and chemlambda have sectors (i.e. parts) which are used to represent lambda terms, but also sectors which represent other formalisms, like tangle diagrams, or, in the case of GLC, also emergent algebras (which are the most general embodiment of a space having a very basic notion of differential calculus).
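
Here is the hedged sketch promised at point 1: a toy priority loop for one actor, with all five behaviours reduced to stubs. The names and the encoding are mine; the GLC actors article defines the behaviours precisely, and this only illustrates the default ordering.

```python
# Hedged sketch of the default choice among behaviours: cross-actor
# interactions (1, 2) are tried first, in this order, then the internal
# behaviours (3, 4), then behaviour 5 (external constructs), which is
# left to particular implementations. All names are illustrative stubs.

def behaviour_1(actor): return False  # stub: interact with another actor
def behaviour_2(actor): return False  # stub: second cross-actor behaviour
def behaviour_3(actor): return False  # stub: internal behaviour
def behaviour_4(actor): return False  # stub: second internal behaviour
def behaviour_5(actor): return False  # stub: external constructs

def actor_step(actor):
    for behaviour in (behaviour_1, behaviour_2,
                      behaviour_3, behaviour_4,
                      behaviour_5):
        if behaviour(actor):
            return True   # a behaviour fired this step
    return False          # nothing applicable: the actor is idle

print(actor_step({"graph": []}))  # False: every stub declines
```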

__________________________________________

All these new things are also weaknesses of distributed GLC because they are, apparently at least, against some ideology.

But the very concrete formalism of distributed GLC should counter this.

I shall use the same numbering for enumerating the ideologies.

1.  Actors à la Hewitt vs process calculi. The GLC actors are like the Hewitt actors in this respect. But they are not as general as Hewitt actors, because they can’t behave just anyhow. On the other side, it is not very clear whether they are Hewitt actors, because there is no clear correspondence between what a Hewitt actor can do and what a GLC actor can do.

This is an evolving discussion. It seems that people have very big problems coping with distributed, purely local computing without jumping to the use of global notions of space and time. But, on the other side, biologists may have an intuitive grasp of this (unfortunately, they are not very much in love with mathematics, but this is changing very fast).

2.  Distributed GLC is a programming language vs it is a machine. Is it a computer architecture or a software architecture? Neither. Both. Here the biologists are almost surely lost, because many of them (excepting those who believe that chemistry can be used for lambda calculus computation) think in terms of logic gates when they consider computation.

The preparation stage, when the actors are defined, is essential. It resembles choosing the right initial condition in a computation using automata. But it is not the same, because there is no lattice, grid, or preferred topology of cells where the automaton performs.

The computation stage does not involve any collision-between-molecules mechanism, be it stochastic or deterministic. That is because the computation is purely local, which means in particular that (if well designed in the first stage) it evolves without needing this stochastic or lattice support. During the computation the states of the actors change and the graph of their interactions changes, in a way which is compatible with being asynchronous and distributed.

That is why those working in artificial chemistry may feel lost here, because the model is not stochastic.

There is no chemical reaction network which concerts the computation, simply because a CRN is a GLOBAL notion, so it is not really needed. This computation is concurrent, not parallel (because parallelism needs a global simultaneity relation to make sense).

In fact there is only one molecule which is reduced, therefore distributed GLC looks more like an artificial one-molecule computer (see C. Joachim, Bonding More Atoms Together for a Single Molecule Computer). Only it is not a computer, but a program which reduces itself.

3.  The no semantics principle is against a strong ideology, of course. The fact that evaluation may not be needed for computation is outrageous (although it might cure the cognitive dissonance in functional programming concerning “side effects”; see Another discussion about math, artificial chemistry and computation).

4.  Here we clash with functional programming, apparently. But I hope only superficially, because actually functional programming is the best ally; see Extreme functional programming done with biological computers.

5.  Claims about going outside the Church-Turing notion of computation are very badly received. But when it comes to distributed, asynchronous computation, things are much less clear. My position here is simply that there are very concrete ways to do geometric or differential-like “operations” without having to convert them first into a classical computational frame (and the onus is on the classical computation guys to prove that they can do it, which, as a geometer, I highly doubt, because they neglect or don’t understand space; and then the distributed, asynchronous aspect comes and hits them when they expect it least).

______________________________________________

Conclusion: distributed GLC is great and it has big potential; come and use it. Everybody interested knows where to find us. Internet of things? Decentralized computing? Maybe cyber-security? You name it.

Moreover, there is a distinct possibility to use it not on the Internet, but in the real physical world.

______________________________________________

The front end visual system performs like a distributed GLC computation

In this post I want to explain why the Distributed GLC  model of computation can be seen as a proof of principle that it is possible to describe rigorously some complex functioning of the brain as computation.

If you are not aware that this is a problem, then please learn that the question whether what brains do is computation is very controversial. On one side there are rigorous notions of computation (expressed in terms of Turing machines, or in terms of lambda calculus, for example) which are used with full competence in CS. On the other side, in (some parts of) neuroscience the word “computation” is used in a non-rigorous sense, not because the neuroscience specialists are incapable of understanding computation in the rigorous CS sense, but because in real brains matters are far more complex than in paper computers. Nevertheless, (some) CS specialists believe (without much real evidence) that brains compute in the CS sense, and (some) neuroscience specialists believe that their vague notions of computation deserve to bear this name, even if they do not look like computation in the rigorous CS sense.

OK, I shall concentrate on a particular example which I think is extremely interesting.

In the article by Kappers, A.M.L.; Koenderink, J.J.; van Doorn, A.J., Basic Research Series (1992), pp. 1-23,

Local Operations: The Embodiment of Geometry

the authors introduce the notion of the “Front End Visual System”. From section 1, quotes indexed by me with (1), (2), (3).

(1) “Vision […] is sustained by a dense, hierarchically nested and heterarchically juxtaposed tangle of cyclical processes.”

(2) “In this chapter we focus upon the interface between the light field and those parts of the brain nearest to the transduction stage. We call this the ‘visual front end’.”

(3) “Of course, the exact limits of the interface are essentially arbitrary, but nevertheless the notion of such an interface is valuable.”

Comments:

  • (2) is the definition of the front end
  • (3) is a guard against a possible entry path of the homunculus in the brain
  • (1) has this very nice expression, “dense tangle of cyclical processes”; I will come back to this!

Let’s pass to the main part of interest: what does the front end do? Quotes from section 1, indexed by me with (a), …, (e):

  • (a) the front end is a “machine” in the sense of a syntactical transformer (or “signal processor”)
  • (b) there is no semantics (reference to the environment of the agent). The front end merely processes structure
  • (c) the front end is precategorical,  thus – in a way – the front end does not compute anything
  • (d) the front end operates in a bottom up fashion. Top down commands based upon semantical interpretations are not considered to be part of the front end proper
  • (e) the front end is a deterministic machine […]  all output depends causally on the (total) input from the immediate past.

Comments and reformulations, indexed by (I), …, (IV):

  • (I) the front end is a syntactical transformer, it processes structure [from (a), (b)]
  • (II) there is no semantics [from (b)]; semantical interpretations are not part of the front end [from (d)]
  • (III) the front end does not compute, in the sense that there is no categorical-like chasing-diagrams type of computing [not formulated in terms of signals processed by gates?] [from (c)]
  • (IV) there is a clear mechanism, based on something like a “dense tangle of cyclical processes” which processes the total input (from the light field) from the immediate past [from (e) and (1)]

These (I)-(IV) are exactly the specifications of a distributed computation with GLC actors, namely:

  • a distributed, asynchronous, rigorously defined computation
  • based on local graph rewrites which are purely syntactic transformers, a correspondent of both the “dense tangle of cyclical processes” and of “processes structure”
  • there is no semantics, because there are no names or values which decorate the arrows of the GLC graphs, nor do they travel through the nodes of such graphs. There is no evaluation procedure needed for the computation with GLC actors
  • the computation with GLC actors starts from an initial graph (structure), and may also use external constructs (the cores are equivalents of the light field which triggers chemical reactions in the retina, which are then processed by the front end)

This is no coincidence! One of the reasons for building GLC was exactly that of making sense of the front end visual system.

In conclusion:

  • yes, there is a way to rigorously describe what the front end does as computation in the CS sense, although
  • this notion of computation has some unique features: no evaluation; graph-reduction based; asynchronous, distributed, purely local. No semantics needed, no global notions or global controllers, neither in space nor in time.

Before aiming to explain consciousness

… you need to explain awareness, in particular all these things ignored by non-geometrical minds.

If the following is in any way a result of computing, it would be “computing with space”, I think and hope.

Enjoy reading Experimental Phenomenology: Art & Science , by Jan Koenderink,  published by  The Clootcrans Press!  Quotes from the beginning of the e-book:

The contents of this eBook are the slides of an invited talk held by me in Alghero (Sardinia) in the VSAC (Visual Science of Art Conference) 2012. The talk was scheduled for an hour and a half, thus there are many slides.

Judging from the responses (discounting polite remarks such as “nice pictures”, and so forth) most of the audience didn’t get the message. Most hinted that they were surprised that I apparently “didn’t believe in reality”, thus showing that the coin didn’t drop.

The topic of the talk are the relations between life, awareness, mind, science and art. The idea is that these are all ways of creating alternative realities. The time scales are vastly different, ranging all the way from less than a tenth of  a second (the microgenesis of visual awareness), to evolutionary time spans (the advent of a new animal species). The processes involved play on categorically different levels, basic physicochemical process (life), pre-conscious processes (awareness), reflective thought (mind), to the social level (art and science). Yet the basic processes, like taking perspective (predator versus gatherer in evolution, sense modality in awareness, language in reflective thought, style in art, geometry versus algebra in science), selection, analogy, consolidation, construction, are found on all levels, albeit (of course) in different form.

_____________________________________

Diorama, Myriorama, Unlimited detail-orama

Let me tell, in plain words, the explanation by JX about how a UD algorithm might work (it is not just an idea; it is supported by proof and experiments, go and see this post).

It is too funny! It is the computer version of a diorama. It is an unlimited-detail-orama.

Before giving the zest of JX’s explanation, let’s think: did you ever see a totally artificial construction which, when you look at it, tricks your mind into believing you look at an actual, vast piece of landscape, full of infinite detail? Yes, right? This is a serious thing, actually; it poses a lot of questions about how much the 3D visual experience of a mind-bogglingly huge database of 3D points can be compressed.

Indeed, JX explains that his UD type algorithm has two parts (a toy sketch follows the list):

  • indexing: start with a database of 3D points, like a laser scan. Then produce another database, of cubemaps centered in a net of equally spaced “centerpoints” which cover the 3D scene. The cubemaps are done at screen resolution, obtained as a projection of the scene on a reasonably small cube centered at the centerpoint. You may keep these cubemaps in various ways; one of these is by linking the centerpoint with the visible 3D points. Compress (several techniques suggested). For this part of the algorithm there is no time constraint: it is done before the real-time rendering part.
  • real-time rendering: given where the camera is, get only the points seen from the closest centerpoint, get the cubemap, improve it by using previous cubemaps and/or neighbouring cubemaps. Take care of filling the holes which appear when you change the point of view.
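
To fix ideas, here is the toy sketch announced above, covering only the skeleton of the two parts (a uniform grid of centerpoints and nearest-centerpoint lookup; cubemap projection and hole filling omitted). Everything, names included, is my simplification, not JX’s code:

```python
# Hedged toy sketch of the two halves. Indexing: snap 3D points to a
# uniform grid of centerpoints, keeping per-centerpoint point lists
# (real cubemaps would project these onto six faces at screen
# resolution). Rendering: fetch the bucket of the centerpoint closest
# to the camera. All names are my own.

from collections import defaultdict

def index_points(points, spacing):
    """points: list of (x, y, z); spacing: distance between centerpoints."""
    buckets = defaultdict(list)
    for p in points:
        cell = tuple(round(c / spacing) for c in p)  # nearest centerpoint
        buckets[cell].append(p)
    return buckets

def visible_from(camera, buckets, spacing):
    cell = tuple(round(c / spacing) for c in camera)
    return buckets.get(cell, [])

pts = [(0.1, 0.2, 0.3), (5.1, 0.0, 0.2), (0.2, 0.1, 0.4)]
buckets = index_points(pts, spacing=1.0)
print(visible_from((0.0, 0.0, 0.0), buckets, spacing=1.0))  # two near-origin points
```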

Now, let me show you that this has been done before, in the meatspace. And even more, with animation! Go and read this, it is too funny:

  • The Daguerre Dioramas. Here’s (actually an improved version of) your cubemap, JX (image taken from the linked wiki page):

Diorama_diagram

  • But maybe you don’t work in the geospatial industry and you don’t have render farms and huge data available. Then you may use a Myriorama, with palm trees, gravel, statues, themselves rendered as dioramas. (image taken from the linked wiki page)

Myriorama_cards

  • Would you like to do animation? Here it is: look at the nice choo-choo train (polygon-rendered, at a scale)

ExeterBank_modelrailway

(image taken from this wiki page)

Please, JX, correct me if I am wrong.