Input and output of a GLC actors computation

This is a suggestion for using a GLC actors computation (arXiv:1312.4333) which is easy and has a nice biological feeling.

The following architecture of a virtual being emerged after a discussion with Louis Kauffman. It amounts to thinking that the being  has sensors, a brain, and effectors:

  • sensors are modelled as an IN actor. This actor has a core (which is the outside medium which excites the sensor). The sensor is excited as an “interaction with cores” concerning the IN actor (and its core)
  • the brain is a network of actors which start to interact with the IN actor (after the interaction with cores). These interactions can be of any kind, but I think of them mainly as interactions by graph reduction.
  • the effectors are modelled as an OUT actor. This actor also has a core, which is the outside medium which is changed by the GLC computation. At a certain point in the computation the brain actors interact (by graph reductions) with the OUT actor and the result of the computation is again an interaction with cores, this time in the OUT actor. It is as if the OUT actor deposits the result of the computation in its core. (A toy code sketch of this pipeline follows.)

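For concreteness, here is a toy Python sketch of this architecture. Everything in it (the class names, the string “graphs”, the stubbed reductions) is my own illustration, not part of the GLC formalism; it only shows the order of the interactions: sensing, brain reductions, then the deposit of the result in the OUT core.

    # Toy sketch of the IN -> Brain -> OUT architecture (illustration only).
    # "Graphs" are plain strings here; real GLC actors hold actual graphs
    # and perform graph rewrites instead of these stubs.

    class Core:
        def __init__(self, state):
            self.state = state          # the outside medium

    class Actor:
        def __init__(self, name, graph):
            self.name = name
            self.graph = graph          # the actor's piece of the GLC graph

        def interact_with_core(self, core):
            # "interaction with cores": the actor's graph changes
            # according to the state of its core
            self.graph = f"{self.graph}+sensed({core.state})"

        def interact_with(self, other):
            # stand-in for a graph reduction involving both actors
            self.graph, other.graph = f"red({self.graph})", f"red({other.graph})"

    in_core, out_core = Core("light"), Core("untouched")
    IN, OUT = Actor("IN", "g_in"), Actor("OUT", "g_out")
    brain = [Actor(f"B{i}", f"g_{i}") for i in range(3)]

    IN.interact_with_core(in_core)            # sensing
    brain[0].interact_with(IN)                # Brain actors - IN actor
    for a, b in zip(brain, brain[1:]):        # bulk of the computation
        a.interact_with(b)
    brain[-1].interact_with(OUT)              # Brain actors - OUT actor
    out_core.state = f"written({OUT.graph})"  # OUT deposits the result in its core
    print(out_core.state)                     # the changed outside medium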
With images, now. The architecture of the being is the following:

actor_read_1

The actor IN has a core (the outside medium) and a mask (which is the sensor):

actor_read_2

The sensing means interaction of the IN actor with its core (which is one of the behaviours of a GLC actor):

actor_read_4

The red-green graph has no particular meaning; it is just a graph drawn in the chemlambda convention.

Now, the architecture is designed such that there will be Brain actors – IN actor interactions (by graph reductions, another behaviour of GLC actors)

actor_read_5

… followed by interactions between the brain actors. This is where the bulk of the computation is done.

actor_read_6

This computation sets the stage for interactions between the Brain actors and the OUT actor.

actor_read_7

And finally the OUT actor interacts with its core, producing a change in it (i.e. a change in the outside world)

actor_read_8

__________________________

Comments:

  • it is a matter of good design
  • this is the simplest proposal for reading the result of a GLC computation. There are other possibilities to think about
  • it is not assumed that the IN, Brain or OUT actors survive the computation. (It would be great, though!)
  • the Brain actors are doing a computation for which they were designed, not any computation. So they look like the equivalent of a program
  • the IN actor (which could be many IN actors as well) does something like reading data (from its core)
  • and the OUT actor does something like writing data (in its core).
  • Finally, the whole architecture has to be designed such that what has been described works on its own, without any external control.

Before aiming to explain consciousness

… you need to explain awareness, in particular all these things ignored by non-geometrical minds.

If the following is in any way a result of computing, it would be “computing with space”, I think and hope.

Enjoy reading Experimental Phenomenology: Art & Science, by Jan Koenderink, published by The Clootcrans Press! Quotes from the beginning of the e-book:

The contents of this eBook are the slides of an invited talk held by me in Alghero (Sardinia) in the VSAC (Visual Science of Art Conference) 2012. The talk was scheduled for an hour and a half, thus there are many slides.

Judging from the responses (discounting polite remarks such as “nice pictures”, and so forth) most of the audience didn’t get the message. Most hinted that they were surprised that I apparently “didn’t believe in reality”, thus showing that the coin didn’t drop.

The topic of the talk are the relations between life, awareness, mind, science and art. The idea is that these are all ways of creating alternative realities. The time scales are vastly different, ranging all the way from less than a tenth of  a second (the microgenesis of visual awareness), to evolutionary time spans (the advent of a new animal species). The processes involved play on categorically different levels, basic physicochemical process (life), pre-conscious processes (awareness), reflective thought (mind), to the social level (art and science). Yet the basic processes, like taking perspective (predator versus gatherer in evolution, sense modality in awareness, language in reflective thought, style in art, geometry versus algebra in science), selection, analogy, consolidation, construction, are found on all levels, albeit (of course) in different form.

_____________________________________

Open peer-review call for arXiv:1312.4333

Open peer-review is a healthy alternative to classical peer-review. If there is any value in the peer-review process — and there is — it comes from it being open and dynamically changing.

Peer-review should be used for communication, for improving one’s and others work.  Not for authority stamps, nor for alpha – omega male monkey business.

With all this in mind, but also with a clear, declared interest into communication of research, I make this experimental call for open peer review to the readers of this blog. The inspiration comes from this kind post by David Roberts.

_______________________

Useful material for the discussion:

Coming from a collaboration which was previously mentioned (Louis Kauffman, a team from ProvenSecure Solutions, me), we want to develop and also explore the possibilities given by the GLC actor model of distributed computing.

A real direction of research is that of endowing the Net (or parts of it, or … there are even stranger variants, like no particular part of it) with an artificial chemical connectome, thus mimicking the way real brains (we think) work.

If you think “consciousness” then hold your horses, cowboy! Before that, we really have to understand (and exploit to our benefit) all kinds of much more basic aspects, all kinds of (hundreds of) autonomous mechanisms and processes which are part of how the brain works, which structure our world (view), which help and also limit our thinking, and which are, most of them, ignored by logicians but already explored by neuroscience and cognitive science.

So, yes, indeed, we want to change the way we think about distributed computing, make it more biology-like, but we don’t want to fall into the trap of thinking we have the ultimate solution toward consciousness, nor do we want to build (or believe we can build) a Skynet. Instead, we want to take it slowly, in a scientific way.

Here we need your help! The research, up to now reported in arXiv:1312.4333 (with links to other sources) and in this open notebook, is based on some nontrivial ideas which are easy to formulate, but hard to believe.

Peer-review them, please! Show us where we need to improve, contradict us where we are wrong, contribute in an open way! By being open, you will automatically be acknowledged.

Suggestions about how this peer-review can be done are welcome!

UPDATE: Refurio Anachro linked the article to the spnetwork.  And moreover started a thread, with this post, about lambda calculus! Thank you!

Two pieces of all too obvious propaganda

Lately I have not posted about the changes in academia concerning the communication of research. There were many occasions to comment, many pieces of propaganda which I interpret as the beginning of a dark period, but, hey, also as a clear sign that the morning light is near.

Having a bit of time to spare, I shall react to two recent pieces of a slightly more subtle propaganda. Only slightly more subtle, that is my opinion. You don’t have to believe me, make your own opinion instead!

Please consider also the point of view that the following two pieces are involuntary propaganda, accidentally produced by ignorance.

Make your own opinion, that’s the most important.

Piece 1: How to become good at peer review: A guide for young scientists by Violent metaphors. The post starts with the following:

Peer review is at the heart of the scientific method. Its philosophy is based on the idea that one’s research must survive the scrutiny of experts before it is presented to the larger scientific community as worthy of serious consideration.

I have seen this nonsense before, that peer review has something to do with the scientific method. It does not, because the scientific method says nothing about peer review. Probably the author confuses the need to be able to reproduce a scientific result with peer review? I don’t know, but I recommend first learning what the scientific method is.

Peer review is a recent procedure which has to do with the communication of science through journals. I will not discuss the value peer review brings to research (a value which exists, certainly), but instead I shall just comment that:

  • as it is done today, peer review is that piece of paper the legacy publisher throws into the wastebasket before making your work, dear researcher, his,
  • peer review is an idea based on authority, not on science, so that you don’t have to understand why a piece of research is valuable; instead you just have to lazily accept it if it appeared in a peer-reviewed journal,
  • peer review needs you, young researcher, because almost everybody else is too busy with other stuff. Nobody will thank you; it is your duty (why? nobody really knows, but they want you to believe this).

The second part of the quote mentions that “one’s research must survive the scrutiny of experts before it is presented to the larger scientific community as worthy of serious consideration”, which would be just sad, dinosaurish speaking, if it came from an old person who has not understood that today there is, or there should be, free access to information. This freedom does not come without obligations: if you want to survive this deluge of information, then you have to work hard and make responsible choices, instead of lazily relying on anonymous experts and on filtered channels of information. Your take: do something like religion and believe the authority, or do some science and use your head. Which is your pick?

UPDATE (20.10.14): I can’t explain to myself why Mike Taylor does not detect this behind the bland formulations.
He does, however, make good points here.

Piece 2. Unexpected, but I think a bit more subtle, is this post at Not even wrong: Latest on abc. The main idea, as far as I understand it, is that Mochizuki’s work is not mathematics unless accepted by the community. Here “accepted” means passing a peer review, which Mochizuki does not oppose, of course; only that apparently he worked too much for the “experts” to be able to digest it. So it is Mochizuki’s fault that many months of understanding, if not years, seem to be needed on the part of the experts. This is an effort that very few people are willing to make, unfortunately. Somehow this is Mochizuki’s fault, if I understand well. I posted the following comment:

This looks to me like a social problem, not a mathematical one. On one side, there are no “experts” in Mochizuki’s field, because he made it all. On the other side, the idiotic pressure to publish which is imposed in academia (the legacy publishers being only opportunistic parasites, in my opinion) makes people unwilling to spend the time to understand, even if Mochizuki’s past achievements imply that it might be worth doing this.
To conclude, it is a social problem, even an anthropological one, like a foreign ape showing the local tribe how to design a pulley system: not at all believable, why spend time on this? Or it is just nonsense, who knows without trying to understand?

Peter Woit replied by sending me to read a very interesting, well known text, thank you!

For some great wisdom on this topic, I urge everyone who hasn’t done so to read Bill Thurston’s “On proof and progress in mathematics”
http://arxiv.org/abs/math/9404236
For Mochizuki’s proof to be accepted, other members of the community are going to have to understand his ideas, see how they are supposed to work and get convinced that they do work. This is how mathematics makes progress, not just by one person writing an insight down, but by this insight getting communicated to others who can then use it. Right now, this process is just starting a bit, with the best bet for it to move along whatever Yamashita is writing. It would be best if Mochizuki himself could better communicate his ideas (telling people they just need to sit down and devote six months of time to trying to puzzle out 1000 pages of disparate material is not especially helpful), but it’s sometimes the case that the originator of ideas is not the right person to explain them to others.

What is the propaganda here? Well, it is the same, in favor of legacy publishers, but hidden behind some universal law that a piece of math is not math unless it has been processed by the classical peer-review mill. Please send us small chunks, don’t hit us with big chunks of math, because the experts will not be able to digest them.

______________________

Distributed GLC, discussion (II)

Continues from Distributed GLC, discussion (I), and contains further notes about the distributed computing model with GLC actors from GLC actors, artificial chemical connectomes, topological issues and knots, arXiv:1312.4333, written with Louis Kauffman.

The first part of this post contains more explanations about the fact that we don’t need to use signal passing through gates in this model. This has been explained in Distributed GLC, discussion (I); here I want to insist that an implication of this is that evaluation (in the sense used for expressions, terms, etc., in lambda calculus) is not needed for computation!

This has already been mentioned in an older post, I-don’t-always advantage of the chemical concrete machine. I borrow some parts from it:

WHEN_FAN_OUT

Indeed, usually a FAN-OUT gate is something which has a variable as an input and two copies of it as an output. That is why FAN-OUT gates are not available in every model of computation; for example, they are not available in quantum computing (because of the no-cloning theorem).

But if you don’t use variable (names) and there’s nothing circulating through the wires of your computer model, then you can use the FAN-OUT gate with impunity, on the condition of having something which replaces the FAN-OUT behaviour without its bad sides. Consider graph rewriting systems for your new computer.

This is done in the chemical concrete machine, with the help of DIST enzymes and associated moves (chemical reactions). (“DIST” comes from distributivity.)
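To make the “FAN-OUT without variables” idea concrete, here is a toy Python sketch of a DIST-style local rewrite. It is only an analogy: the tuple encoding and the names below are my own assumptions, and the real chemlambda DIST moves act on port graphs with shared nodes, which this tree notation cannot represent faithfully.

    # Toy illustration of the idea behind DIST moves: duplication happens
    # by small local rewrites, not by copying a whole subgraph at once.

    def dist_step(t):
        """Push a fanout one node deeper, if the DIST-like pattern matches."""
        if isinstance(t, tuple) and t[0] == "fanout":
            inner = t[1]
            if isinstance(inner, tuple) and inner[0] == "app":
                _, f, x = inner
                # only the application node is duplicated; its two inputs
                # now carry fanouts of their own, resolved by later moves
                return ("pair",
                        ("app", ("fanout", f), ("fanout", x)),
                        ("app", ("fanout", f), ("fanout", x)))
        return t

    g = ("fanout", ("app", "F", "X"))
    print(dist_step(g))
    # ('pair', ('app', ('fanout', 'F'), ('fanout', 'X')),
    #          ('app', ('fanout', 'F'), ('fanout', 'X')))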

Btw, maybe it is good to write a few words about the common points and differences between GLC and chemlambda.

  • they are both graph rewriting systems
  • the graphs they use are the same, based on 4 trivalent nodes and one univalent node; only the drawing convention is different
  • they are Turing universal, because both contain combinatory logic and untyped lambda calculus (without eta reduction!)
  • but lambda calculus is just one part of the things they can both do!
  • most of the graph rewrites (moves) are the same
  • but there are some moves which are different: the FAN-IN move from chemlambda applies to a node which is called here “fan-in”, replacing the emergent algebra moves which apply to the node \varepsilon from GLC. Also, the GLOBAL FAN-OUT move from GLC is replaced by DIST moves in chemlambda
  • therefore chemlambda has only local moves, while GLC also has some global moves
  • moreover, the replacement of GLOBAL FAN-OUT by DIST moves does make them different formalisms.

I hope that by now it is clear that lambda calculus is one sector (i.e. one part) of GLC and chemlambda and that, explicitly, both GLC and chemlambda can do other things than lambda calculus.

Mind you that I am not claiming that lambda calculus can’t do everything that GLC or chemlambda can (actually I simply think that this is a badly formulated statement). What is proved is that GLC and chemlambda can be applied to several formalisms; the contrary implications are to be proved (but this is part of a larger discussion, like the one comparing the Actor Model of Hewitt with the Turing Machine, and so on; I don’t want to enter into this).

Now I am going to borrow something from an older post, in order to show you in a very short way that indeed, GLC and chemlambda are different.

The following pictures are taken from the post Metabolism of loops (a chemical reaction network in the chemical concrete machine)

chem_eval_4

With the drawing conventions from chemlambda, but using only the GLC moves, we see in the upper side of this figure the graph associated to the combinator \Omega = (\lambda x . (xx)) (\lambda x . (xx)). In lambda calculus, we may go forever trying to reduce this. The figure shows the same: we turn in place after the application of a graphic beta move, followed by a GLOBAL FAN-OUT.
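For readers who prefer code to pictures, here is a minimal Python sketch (my own toy reducer, not part of GLC or of the paper) showing that \Omega reduces to itself at every beta step, hence the reduction never terminates:

    # Terms: ("var", name), ("lam", name, body), ("app", f, a).

    def subst(t, name, val):
        if t[0] == "var":
            return val if t[1] == name else t
        if t[0] == "lam":
            # no capture-avoidance needed for Omega (val has no free variables)
            return t if t[1] == name else ("lam", t[1], subst(t[2], name, val))
        return ("app", subst(t[1], name, val), subst(t[2], name, val))

    def step(t):
        """One leftmost beta reduction step, or None if t is normal."""
        if t[0] == "app" and t[1][0] == "lam":
            return subst(t[1][2], t[1][1], t[2])
        if t[0] == "app":
            s = step(t[1])
            if s is not None:
                return ("app", s, t[2])
            s = step(t[2])
            return ("app", t[1], s) if s is not None else None
        if t[0] == "lam":
            s = step(t[2])
            return ("lam", t[1], s) if s is not None else None
        return None

    delta = ("lam", "x", ("app", ("var", "x"), ("var", "x")))
    omega = ("app", delta, delta)
    for i in range(5):
        print(i, omega == ("app", delta, delta))  # True, every time
        omega = step(omega)                       # Omega steps to itself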

This is the situation in GLC. Now let’s do the same in chemlambda:

chem_eval_2

There are multiple ways to perform moves; some of them can be done in parallel. There is no effort here to recast this into a GLC actors version; this is simply a drawing which tries to convey what can happen by applying moves, starting from the graph of \Omega.

But, rather amazingly, there is a way to get out of the loop of moves; look at the lower part of the figure. Because of the DIST and FAN-IN moves, we went out of the lambda calculus sector. We can “reduce” the \Omega combinator to a pair of loops and a weird fan-out node with one exit arrow connected to its input arrow.

______________________

Distributed GLC, discussion (I)

This post continues from GLC actors, what are they for and why are they interesting?, which introduced the article GLC actors, artificial chemical connectomes, topological issues and knots, arXiv:1312.4333, written with Louis Kauffman. The article describes a common project of the authors and members of ProvenSecure Solutions, up to now especially (and in no particular order) Stephen P. King, Rao Bhamidipati, Jim Whitescarver, Ken Williams, Keayi Cora, Paul Dube, Allen Francom, Arek Mateusiak, Roman Anderson.

There are three main ideas which are behind GLC actors distributed computing. In this post I want to write in more detail about the first one.

The model is not based on signals circulating through wires, passing through gates.

There is nothing flowing through the arrows of a GLC graph! A stumbling block in the path of absorbing this idea is that Graphic lambda calculus (i.e. GLC) does have a sector which is equivalent to untyped lambda calculus (without eta reduction). The post Conversion of lambda calculus terms into graphs describes an algorithm for associating a GLC graph to a lambda calculus term. The starting point of the algorithm is the syntactic tree of the term.

The syntactic tree is something which is aptly described as a graph with arrows and nodes, such that variables or terms flow through the arrows (or wires), and with the nodes (application and abstraction) acting like gates which take variables or terms as inputs, process them, and spill out terms on the output.
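Here is a minimal sketch of such a syntactic tree (the class names and the printer are my own illustration, not the notation of the GLC paper); the term-to-graph algorithm starts from exactly this kind of tree:

    from dataclasses import dataclass

    @dataclass
    class Var:          # a variable occurrence
        name: str

    @dataclass
    class Lam:          # abstraction node
        var: str
        body: object

    @dataclass
    class App:          # application node
        fun: object
        arg: object

    def show(t):
        """Print the term back from its syntactic tree."""
        if isinstance(t, Var):
            return t.name
        if isinstance(t, Lam):
            return f"(\\{t.var}. {show(t.body)})"
        return f"({show(t.fun)} {show(t.arg)})"

    # syntactic tree of \x. \y. (x y): two abstraction nodes over one
    # application node
    term = Lam("x", Lam("y", App(Var("x"), Var("y"))))
    print(show(term))   # (\x. (\y. (x y)))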

To see a graph in this way (as signals processed by gates) is equivalent to decorating its arrows according to the following rules:

lambda_decor

After all, the graphs obtained this way very much resemble the lambda graphs studied by Wadsworth and Lamping, the differences being small; among them, that these graphs are an oriented version of the lambda graphs, and also that there is something “wrong” with the orientations of the arrows of the abstraction node (one input, two outputs, thus not the graph of an operation). Even the graphic beta move is very much like the beta reduction as described by Lamping.

But this is misleading. GLC graphs cover much more than lambda graphs, i.e. the lambda calculus sector of GLC contains only a part of all GLC graphs. Moreover, the graphic beta move applies everywhere, not only on (the equivalent of) lambda graphs.

A graph which represents a lambda calculus term can be seen as a fancy version of a syntactic tree, thus we may imagine that the graph is made of wires and gates, with signals (variables or terms) flowing in the direction indicated by the arrows.

But there are plenty of other GLC graphs which can’t be seen like this. Look for example at the following figure (which appears as figure 3 in arXiv:1312.4333)

other_beta

If you try to see the graph from (a), left hand side, as one with signals circulating through gates, then you are in trouble. Can you see why? (Something about infinite loops and fixed points.) You may try to decorate it as explained in the first figure of the post, but you will not succeed without introducing spurious relations between the terms used for decoration. The same remark applies to the graph from (b), right hand side.
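To sketch the kind of trouble, reasoning from the decoration rules in general (not from the specific shape of figure 3, so take this as an illustration): if an output arrow of an application node loops back to one of its own inputs, the decoration rules force an equation of the shape u = uv, a fixed-point equation saying that the term u decorating the wire equals its own application to some term v. No lambda calculus term satisfies this as a syntactic equality (u would be a proper subterm of itself), so a decoration exists only at the price of spurious relations between the decorating terms.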

The same remark applies to the graph from the upper part of the following figure, taken from the post Packing and unpacking arrows in graphic lambda calculus.

reg_1

(In this old figure, at the end I use “elimination of loops”, i.e. loops without nodes can be erased or added.)

The thing is that we don’t even need to think in terms of flows of signals.

This is related to something mentioned in a previous post, Model of computation vs programming language in molecular computing, which is about a discussion on a G+ post by John Baez, and which may have looked a bit strange. In fact, what I wrote there was with GLC in mind; see this:

6. Among the specifications for a “programming language for chemistry” are the following: [ … ]

  • (b) purely syntactic (in particular no names management, nor input-output signalling),
  • (c) (maybe an aspect of (b), but not sure)  no use of evaluation in computation (strategy).

Let me come back to the fact that you can’t decorate every GLC graph according to the rules from the first figure of the post. You may say: well, then let’s concentrate only on those GLC graphs which can be decorated; after all, we can decorate all GLC graphs which represent lambda calculus terms.

That would be a big mistake. (This is a beaten path, moreover.) Because there is no local check on a graph which would allow one to decide whether the said graph can be decorated. It would therefore be in contradiction with the second idea, the one of “locality”. The contradiction would be so severe that it would destroy the GLC calculus.

Recall that the only reason for that big mistake would be that you want to keep your image of signals flowing through arrows and gates. Just a bad thinking habit.

___________________

GLC actors, what are they for and why are they interesting?

We decided to go open with our project, documented here in the posts from the category distributed GLC. There is a dynamically changing and evolving team, formed by the authors of this article

GLC actors, artificial chemical connectomes, topological issues and knots

and Stephen P. King, Jim Whitescarver, Rao Bhamidipati, as well as others from ProvenSecure Solutions, which is behind it. This project has several goals, the first one being to explore the possibility of implementing this computing model in reality.

The implications of the proposal (i.e. “what if this is possible?”) are huge. I would like to write instead, in an understated manner, why we think that the means to achieve those implications are interesting. From personal experience with explanations around this subject, it looks like the really difficult part, at this stage of the project at least, is that of understanding what is different in this model.

The reason behind this difficulty is not that the understanding demands highly technical knowledge. Instead, as Louis Kauffman describes it, the project is based on ideas which look completely non-obvious at first but which, once understood, become straightforward.

The most important idea is that the model is not one based on signals circulating through wires, passing through gates. Nor on messages exchanged through channels (though there is a nuance here, coming from the second idea, concerning “locality”).

What then? you might ask. Is this a communication model where the entities involved in communication are not actually communicating? No, on the contrary, the entities (i.e. the GLC actors) are communicating all the time, but not like in a phone call; instead their communication is like a chemical reaction between molecules which are close in space.

(GLC and chemlambda are good for this, because there is no signal circulating through the arrows of these graphs and the nodes are not gates processing these signals!)

Think about this. A chemical reaction network (CRN) is obviously a good description of a collection of chemical reactions. But it is nothing more than a description. The molecules involved in the CRN are not “aware” that there is a CRN. They interact in a purely local way, they are not goal driven, nor is there an upper being which sets them the task of interacting. The “communication” between two molecules is not based on signal passing, the way the mathematical description of a CRN is. It is more like a graph rewrite procedure. Pure syntax, at this level (of molecules), but nevertheless self-sustaining, autonomous, purely local and of course distributed (in space).

The second idea is this “locality”, which means that the interaction between the actors should not be controlled from an upper level; the actors should be autonomous and reactive entities, and their reactions should be a consequence of the local environment only. In the case of the GLC actors, this is a consequence of GLC being based on local graph rewrites (i.e. rewrites acting only on a bounded number of nodes and arrows), with the exception of the GLOBAL FAN-OUT move, where chemlambda comes to help. Chemlambda is purely local.
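To illustrate the flavour of this “locality” (and only that: the class, method and node names below are my own assumptions, not the actual GLC actor protocol from the paper), here is a toy Python sketch in which a rewrite fires purely from the local patterns sitting on the two sides of a link between two actors, with no upper-level control:

    class GLCActor:
        def __init__(self, name, nodes):
            self.name = name
            self.nodes = nodes        # the piece of graph this actor owns
            self.links = {}           # neighbour name -> neighbour actor

        def connect(self, other):
            self.links[other.name] = other
            other.links[self.name] = self

        def try_rewrite(self, other_name):
            # the rewrite fires only if the local patterns on both sides
            # of the link match; nothing above the two actors decides it
            other = self.links.get(other_name)
            if other and "lambda" in self.nodes and "app" in other.nodes:
                self.nodes.remove("lambda")   # a graphic-beta-like move
                other.nodes.remove("app")     # consumes both node patterns
                return True
            return False

    a = GLCActor("A", ["lambda", "fanout"])
    b = GLCActor("B", ["app"])
    a.connect(b)
    print(a.try_rewrite("B"), a.nodes, b.nodes)   # True ['fanout'] []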

The third idea is to use the Hewitt Actor Model as a replacement for space. On the net, we do need a way to define what it means for two GLC actors (i.e. the net correspondents of two molecules) to be in proximity. The GLC actors are designed in such a way as to have these proximity relations as links between actors (with no exterior control or decision over this, past the preparation stage) and to see communication between actors as the graph rewrites. In practice, most of the graph rewrites occur between two actors. As an exception, the GLOBAL FAN-OUT (from GLC) becomes in chemlambda a procedure which is alike binary cell fission in real life. Indeed, this is not explained in detail in the paper, only a small example is given, but compare the picture taken from this post

(about how the W combinator suffers a GLOBAL FAN-OUT with the purely local means of chemlambda) with this picture from the binary cell fission wiki page,

Binary_Fission

______________________