WWW with Metabolism

While I was trying to convince biochemists (I’m still trying) to use the Chemical concrete machine for a variety of goals, from bio-computing to understanding brains, Stephen Paul King came up with an awesome suggestion, which evolved into the following idea:

The WWW is an artificial, human-made network and the Chemical concrete machine (chemlambda) is an artificial, human-made, computing-friendly chemistry. Let’s use the chemical concrete machine to awaken the net by giving it a metabolism.

Together with Louis Kauffman, we are trying to make some fine mathematics with real-world implications out of it. Care to join? Then send me or Stephen a message.

Here is a list of arguments in favor of this idea:

  • it is much easier to work with a made-up, simplified chemistry on a network that is far simpler than a brain
  • both the WWW and the chemical concrete machine (which is Turing universal) belong to the same (computational) universe
  • in silico experiments with WWW + chemlambda correspond to in vivo experiments with wet neural networks
  • it is scalable
  • it may have many real-life CS applications
  • it’s mathematically friendly (come on, pure mathematicians, you are needed)
  • it’s based on lambda calculus, so it’s already incredibly cool, as adepts of functional programming might confirm.

___________________________

Oh, don’t forget the logo of chemlambda and graphic lambda calculus:

[figure: chemlambda4 — the chemlambda logo]

where you can see two lambdas arranged into a double helix. It’s better than this [source]:

[figure: Cyberdyne logo], which features a Y.

______________________

UPDATE: see the more recent post Fraglets, bionets, and the www with metabolism for relevant, already existing research related to the www with metabolism, which could be very useful.


arXiv version of the chemical concrete machine

I’m working on a draft of the Chemical concrete machine paper, and I look forward to receiving your input on it. As you may see, the last part (section 4) is not yet finished, but it will be, tomorrow, along with any corrections or suggestions I receive. There is no acknowledgements section yet, but one will be added soon, hopefully mentioning all the persons who helped me with these ideas. Also, of course, the bibliography is not in its final form.

As you see, it is still in an intermediate form between the paper style and the web tutorial style; I have not decided yet whether it is better to behave as if the net does not exist, or to use a more colloquial form of exposition.

I welcome and ask for your corrections, suggestions, whatever.

UPDATE: I think that’s a stable version; it appeared as arXiv:1309.6914.

Why do we ever need to learn?

I’m talking about math-related subjects. Why do we need to learn to count? Why is it not obvious how to count up to 5, for example? Why is it not obvious how to add 2 and 3? Why do we need to learn to evaluate a boolean expression? Which part is not obvious from the definition? Why do we need to exercise? If, in lambda calculus, the beta reduction is defined as (\lambda x. A) B \rightarrow A[x:=B], then why is it not instantaneously obvious that if Y = \lambda f . (\lambda x. f(xx))(\lambda x. f(xx)), then for any term A we can reduce YA to A(YA)?
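For the record, the check itself is just two beta reductions (this is the standard textbook computation, nothing specific to this blog):

    Y A = (\lambda f . (\lambda x. f(xx))(\lambda x. f(xx))) A
        \rightarrow (\lambda x. A(xx))(\lambda x. A(xx))              [beta, f := A]
        \rightarrow A((\lambda x. A(xx))(\lambda x. A(xx)))           [beta, x := \lambda x. A(xx)]

so YA and A(YA) reduce to a common term, that is YA = A(YA) up to beta equivalence. The question of this post is not these two lines, but why seeing them takes training.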

If you don’t like examples from mathematics learning, then I am puzzled, because mathematician = learner. It’s the name of an ancient Greek branch of the Pythagorean sect, the one made of people dedicated to learning.

Learning is a process which needs time to unfold. What happens in our brains during this time? It must be that we rely too much, without acknowledging this in mathematical approaches, on a collection of simpler, not goal-oriented brain activities, like, for example, pattern recognition. Wait, what IS a pattern? And what does “recognition” mean?

Let’s take it another way. I shall make the dumbest hypothesis. When we learn a new (for us) mathematical definition, we need to transform it into “brain language”. We need to “compile” it into something which happens physically in the brain. (By the way, I am completely skeptical that whatever happens in the brain can be described by a language. Main argument: vision, which occupies a large part of the brain activity of every creature that possesses one.) So, the mathematical definition (say, of the truth value of a boolean expression) has to be translated into something else, comprising a procedure to parse the expression, passing through all sorts of pattern recognition and all kinds of high-level human brain activity. Or even better, we write a program for solving this problem. The computer “learns”, i.e. compiles the program, and then it works as nicely as a mechanical clock. For humans, not so much, because we may forget how to do it, then recall some ideas “behind the definition”, as we say, and reconstruct the procedure from scratch, using memories about the day we learned it for the first time, past mistakes and common sense reasoning.
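To make the “compilation” point concrete, here is a minimal sketch of the hidden work: a toy parser and evaluator for boolean expressions. This is my own illustration; the token names and grammar choices are arbitrary and come from nowhere in particular.

    import re

    TOKEN = re.compile(r"\s*(\(|\)|and|or|not|T|F)")

    def tokenize(s):
        s = s.strip()
        pos, out = 0, []
        while pos < len(s):
            m = TOKEN.match(s, pos)
            if not m:
                raise ValueError("unexpected input: " + s[pos:])
            out.append(m.group(1))
            pos = m.end()
        return out

    # grammar: expr := term ("or" term)* ; term := factor ("and" factor)* ;
    #          factor := "not" factor | "(" expr ")" | "T" | "F"
    def parse_expr(toks):
        val = parse_term(toks)
        while toks and toks[0] == "or":
            toks.pop(0)
            rhs = parse_term(toks)
            val = val or rhs
        return val

    def parse_term(toks):
        val = parse_factor(toks)
        while toks and toks[0] == "and":
            toks.pop(0)
            rhs = parse_factor(toks)
            val = val and rhs
        return val

    def parse_factor(toks):
        tok = toks.pop(0)
        if tok == "not":
            return not parse_factor(toks)
        if tok == "(":
            val = parse_expr(toks)
            assert toks.pop(0) == ")", "missing closing parenthesis"
            return val
        return tok == "T"

    def evaluate(s):
        toks = tokenize(s)
        val = parse_expr(toks)
        assert not toks, "trailing input"
        return val

    print(evaluate("(T and F) or not F"))   # True

The definition of a truth value says nothing about tokenizing, the precedence of “and” over “or”, or what to do with trailing input; all of that is part of the silent “brain language”.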

Brains, let’s take this as part of the dumbest hypothesis, are just networks of neurons, which send signals to one another and which modify themselves and their connectivity according to some simple rules, based on mechanical clock (physics) and chemistry rules, satisfied by any part of the physical universe, not only by the brain. So, whatever happens when we learn a mathematical concept, we transform it into some very simple thing in the brain. Forget about categories, truth values, group operations, definitions and theorems. No, there must be something which mimics a pattern of behaviours associated with the implicit expectations of people writing definitions, theorems, problems and exercises. If we were to write a program for any of the items mentioned in the questions from the first paragraph, we would see that we have to take care of a lot of things not mentioned in the definitions, like properly managing the variable names, again some mechanized form of pattern recognition, along with more or less precise knowledge about the programming language we use and its idiosyncrasies.
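One concrete instance of “things not mentioned in definitions”: beta reduction is written in one line as A[x:=B], but a program has to manage variable names explicitly, renaming bound variables to avoid capture. A hedged sketch follows; the term representation and helper names are my own choices, not anything from the papers mentioned here.

    import itertools

    _fresh = ("v%d" % i for i in itertools.count())

    # terms: ("var", name) | ("lam", name, body) | ("app", fun, arg)

    def free_vars(t):
        if t[0] == "var":
            return {t[1]}
        if t[0] == "lam":
            return free_vars(t[2]) - {t[1]}
        return free_vars(t[1]) | free_vars(t[2])

    def subst(t, x, b):
        """Capture-avoiding substitution t[x := b]."""
        if t[0] == "var":
            return b if t[1] == x else t
        if t[0] == "app":
            return ("app", subst(t[1], x, b), subst(t[2], x, b))
        y, body = t[1], t[2]
        if y == x:
            return t                     # x is shadowed here, nothing to do
        if y in free_vars(b):            # rename y, else b's free y gets captured
            z = next(_fresh)
            while z in free_vars(body) | free_vars(b):
                z = next(_fresh)
            body, y = subst(body, y, ("var", z)), z
        return ("lam", y, subst(body, x, b))

    def beta_root(t):
        """One beta step at the root: (lam x. A) B -> A[x := B]."""
        if t[0] == "app" and t[1][0] == "lam":
            return subst(t[1][2], t[1][1], t[2])
        return t

None of this bookkeeping appears in the one-line definition of beta reduction; it is exactly the kind of silent extra work the paragraph above is about.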

The more we try to approach the realm of the obvious, the more abstract it becomes. And in our brains, learning to ride a bike or learning to evaluate boolean expressions is equally concrete: it’s physical.

________________

Looking for alternatives

They say “on the internet nobody knows you’re a dog“, and I like this a lot. It tells me that communication is enhanced by giving up vanity. Unfortunately, this seems to me the most important stumbling block on the path towards a better research communication system. Because researchers, statistically speaking, have been selected for at least 40 years by vanity criteria. That is why we see better communication among young researchers, not only because they are young, but because their vanity is still low, making them more open to receiving and giving new ideas.

No technical improvement, no OA scheme other than the most idealistic ones, will enhance communication. You don’t believe me? Look then at what is available: does it work? Slowly but surely we shall pass to OA entirely, but so slowly that it is almost at a historical scale. Recently it was officially discovered that gold OA is worse than green OA. Great discovery, but it was obvious from the beginning. I believe that sooner rather than later (but don’t worry, not too soon) we shall abandon the article format for communicating research.

Which brings me to the real subject of this post. I am not sure if this blog/open notebook is the right format for communication. I am looking for collaborations with the rare, but surely existing, creative people who, like me, are more curious than vain. There is a great amount of lurking around this blog. I can see there’s a lot of interest both in the graphic lambda calculus and in the chemical concrete machine. The lack of input I get from these lurkers worries me. This blog documents the evolution of some ideas from the initial vague start to some products, like the ones mentioned here. Don’t you understand that these are just byproducts of a very enjoyable exploration? Byproducts which could be much, much more fun if they were the result of a dialogue.

That is why I am looking for alternatives which might enhance this potential collaboration with other creative dogs around the world.

I-don’t-always advantage of the chemical concrete machine

… over other computing formalisms is contained in the following wise words:

[image: WHEN_FAN_OUT]

Indeed, usually a FAN-OUT gate is something which has a variable as an input and two copies of it as an output. That is why FAN-OUT gates are not available in every model of computation; for example, in quantum computing there is no general way to copy an arbitrary unknown state (the no-cloning theorem).

But if you don’t use variables (names) and there’s nothing circulating through the wires of your computing model, then you can use the FAN-OUT gate with impunity, provided you have something which replaces the FAN-OUT behaviour without its bad sides. Consider graph rewriting systems for your new computer.

This is done in the chemical concrete machine, with the help of DIST enzymes and associated moves (chemical reactions). (“DIST” comes from distributivity.)

In graphic lambda calculus, the parent of the chemical concrete machine, I proved that combinatory logic can be done by using the available local moves and one global move, called GLOBAL FAN-OUT. This global move is the one which most resembles the behaviour of a usual FAN-OUT gate: a graph A connected to the input of a FAN-OUT gate is replaced by two copies of the graph.

That’s bad, I think, so in the chemical concrete machine I managed to prove that GLOBAL FAN-OUT can be replaced, for graphs (or molecules, in the chemical concrete machine formalism) which represent combinators, by successions of local DIST moves (and some other local moves).

This is possible exactly because there are no variable names. Moreover, there’s something almost biological in the succession of moves: we see how combinators reproduce.
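To get an intuition for “local copying instead of a global copy”, here is a toy analogue in code. It is emphatically not the chemlambda DIST moves themselves, just a sketch with the same flavour: a FANOUT marker is pushed towards the leaves of a tree by a local, distributivity-like step, and two copies of the tree appear only at the end, without any single step ever copying the whole tree.

    # Toy analogue only (NOT the chemlambda DIST moves): duplicate a binary tree
    # by local rewrites. A "fanout" marker is pushed towards the leaves by a
    # distributivity-like step; no step ever copies more than one node.

    # states: ("leaf", label) | ("node", l, r) | ("fanout", t) | ("pair", a, b)
    #         | ("dist", a, b)   -- two pending copies waiting to be reassembled

    def step(t):
        """Apply one local rewrite somewhere in t; return (new_t, changed)."""
        kind = t[0]
        if kind == "fanout":
            inner = t[1]
            if inner[0] == "leaf":
                return ("pair", inner, inner), True       # copy a single atom
            if inner[0] == "node":
                # distributivity-like step: push the fanout one level down
                return ("dist", ("fanout", inner[1]), ("fanout", inner[2])), True
        if kind == "dist" and t[1][0] == "pair" and t[2][0] == "pair":
            # reassemble: pairs of copied children -> a pair of copied nodes
            return ("pair",
                    ("node", t[1][1], t[2][1]),
                    ("node", t[1][2], t[2][2])), True
        if kind in ("node", "pair", "dist"):               # search for a redex
            for i in (1, 2):
                new, changed = step(t[i])
                if changed:
                    out = list(t)
                    out[i] = new
                    return tuple(out), True
        return t, False

    def duplicate(tree):
        state, changed = ("fanout", tree), True
        while changed:
            state, changed = step(state)
        return state                                       # a ("pair", copy, copy)

    tree = ("node", ("leaf", "a"), ("node", ("leaf", "b"), ("leaf", "c")))
    print(duplicate(tree))

Each rewrite touches only a bounded piece of the structure, which is the property that makes such moves “chemistry-like”.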

As an illustration, the following is taken from the post Chemical concrete machine, detailed (V):

Here are the four “molecules” which represent the combinators B, C, K, W. (Via the red-green vs black-white change of notation, they can be deduced from their expressions in lambda calculus by the algorithm described here.)

[figure: bckw_2 — the molecules for the combinators B, C, K, W]
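As a reminder (standard combinatory logic, independent of the graphical notation), these combinators and their reduction behaviour are:

    B = \lambda x . \lambda y . \lambda z . x(yz),   so B A_1 A_2 A_3 \rightarrow A_1 (A_2 A_3)
    C = \lambda x . \lambda y . \lambda z . xzy,     so C A_1 A_2 A_3 \rightarrow A_1 A_3 A_2
    K = \lambda x . \lambda y . x,                   so K A_1 A_2 \rightarrow A_1
    W = \lambda x . \lambda y . xyy,                 so W A_1 A_2 \rightarrow A_1 A_2 A_2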

Let’s see how the “molecule” K behaves when connected to a FAN-OUT gate (green node with one input and two outputs):

[figure: bckw_6 — the molecule K connected to a FAN-OUT gate]

The “reproduction” of the molecule B is more impressive:

[figure: bckw_3 — the reproduction of the molecule B]

In the formalism of the chemical concrete machine, \delta^{+} is a distributivity move (or “enzyme” which facilitates the move in one direction, preferentially), and \phi^{+} is a FAN-IN move (facilitated in one direction).

___________________________

See more about this in the Chemical concrete machine tutorial.

___________________________

This makes me believe that, as long as we don’t reason in terms of states (or any other variables), it is possible to have FAN-OUT gates in quantum computation.

A space program in graphic lambda calculus

I want to explain what I intend to do further in the program of “computing with space”, seen in the restricted sense of making sense of space in graphic lambda calculus. The explanation itself, if well done, would amount to achieving the program, therefore, at this stage, it can’t be too exact.

________________________

I think that a good definition of space is relational, based on its computational content, and not on “points” or other geometric objects and axioms over those. Obviously, a good definition of space should not be “spatial” itself, otherwise it would be self-referential.

My belief, expressed mathematically, is that space (places) and types are simply decorations over graphs in graphic lambda calculus, following the rules from the post Types and places.

[figure: bckw_114]

If A is a place and \sigma is a type, then A : \sigma means “there is a \sigma at the place A“; for example, A : CAR means there is a car at A. (This interpretation is what I use to make sense of it in my mind, beyond the mathematical formalism, therefore it is subject to change.) The name A is just a name (of a variable or term); it does not have any other meaning than the one which can be grasped in relation with the other names for variables and terms. Recall that in graphic lambda calculus there are no names for variables and terms.
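For readers who have not seen the Types and places post, here is the flavour of the type part of such decorations, in my own paraphrase (the precise rules are in that post; what follows is just the familiar pattern from typed lambda calculus which those rules follow):

    if the inputs of an application gate are decorated A : \sigma \rightarrow \tau and B : \sigma,
    then its output may be decorated AB : \tau;
    if the bound-variable edge of an abstraction gate is decorated x : \sigma and the body edge A : \tau,
    then its output may be decorated \lambda x . A : \sigma \rightarrow \tau.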

In the post Better than extended beta move I introduced two macros which I think are better than the application and abstraction gates. Here they are again, for an arbitrary “scale parameter” \varepsilon \in \Gamma, where \Gamma is a commutative group (think, for example, about the free abelian group over an alphabet, if you don’t like to think about \Gamma as the group of reals):

[figure: space_301 — the scaled application and abstraction macros]

Graphic lambda calculus can be reformulated by using these “scaled” application and abstraction gates as fundamental gates. This has the following effects:

  • elimination of the move ext2, replaced by the definition of the application and abstraction gates as the “scaled” gates at scale \varepsilon = 1
  • condensation of the graphic beta move and the move R2 into one scaled version of the graphic beta move, which is better than the extended beta move.

The subtle part is that, if we look at the decorations, then the types decoration is left unchanged by the moves, but the place decoration is globally affected by the moves (i.e. “logical” moves, like the graphic beta move, change the space).

The other subtle part consists of extending the

to these scaled application and abstraction gates. You shall see that rather amazing things happen when we look at decorations with places.

I am going to use lessons learned from the Chemical concrete machine. If we use the scaled application and abstraction gates (and the FAN-OUT gate, of course) instead of the four gates of the original graphic lambda calculus, that makes three gates instead of four. There is then enough room to add a FAN-IN gate, as in the chemical concrete machine (while giving up on identifying it with an \varepsilon gate, as indicated in the chemical concrete machine formalism). This way we shall have four gates again, along with some natural distributivity moves, namely the ones from the chemical concrete machine and the one called the “mystery move” in the series on the dictionary from emergent algebras to graphic lambda calculus. In this way, as long as we stay in the “scaled version of the combinatory logic sector” (which will contain BOTH the emergent algebras and combinatory logic, interacting in beautiful ways), we will not need the GLOBAL FAN-OUT move.

After these mildly technical things are done, interpretations in words will follow.

What’s going on in this UD algorithm?

In a comment on the post My idea about UD, completely unrelated to its content, b@b gives a link to a post by “BruceRDell”, reproduced here:

Okay, I see you understand basic principles. Now let’s move to the more complex things. I can’t share exact UD algorithm (you must understand me, I’ve spent years mastering it) but I can express in a code what I have already said in the interviews. The thing I want to say is: you must use sorting and eliminate raycasting.
Sorting algorithm I introduce here is very VERY simple. You can optimize it a lot using octrees and more elegant heuristics than that. But keep it simple.
You’ve mentioned this http://rghost.net/48541594 dataset, I had some problems downloading it, but I did it that and can say it is good for testing, you may use it with my program.

Here is the code I am talking about. If you’re smart enough, you’ll get benefit from it.

Magio, in this comment on My idea about UD, says:

I’m probably way too late for this. But anyway I made this quick edit of the program:
http://pastebin.com/HiDW5tMB [updated version: http://pastebin.com/kW1t5s1c]

___________________

Question: why does it work?

UPDATE: It’s not clear if it really works; the quantitative improvement given by the random permutation trick is unknown. If you care, the BruceRDell thing is a prank.

___________________

I would like to understand the following:

  1. what is the role of the random permutation?
  2. why does the main loop work?
  3. has this already been done? (send links to papers, if any)

So, let’s have a polite and informative discussion in the comments, if you agree. We all write in some variant of broken English, therefore I don’t see why anybody couldn’t participate.

___________________

Not interested in: is this the real Bruce Dell? Is this the real UD algorithm? Only interested in understanding the guts of it.

I finish with the video from b@b’s comment: