Two items

Two items which are compatible with previous discussions here:

  1. Eurisko is guilty of dialogue with the human
  2. Invariants change

Let’s take it one by one.

Eurisko is guilty of dialogue with the human. The source is this, the video is this. During the video of the demo, the Eurisko code is run, with void results. Why? Apparently because back then the machine was much slower than the virtual machine used now, and it had much less RAM. Therefore, back then, the human (Doug Lenat) who created the AI Eurisko had enough time to read all the output of Eurisko, to think about which part is interesting, and to add the interesting part to the Agenda. Guilty of dialogue! Today, even though the AI is trained on vastly more data, we don’t talk with it. You would say we do, all these chats, but no: the AI does not get retrained on the items the user considers interesting… there is no collaboration, there is no dialogue. Discussed many times, the oldest probably in the Morlocks and Eloi post, quote:

“From historical reasons maybe the morlocks (technical nerds) are trained/encouraged/selected to hate discussions, human exchanges and interactions in general. Their dream technology is one like in (1), i.e. one which does not talk with the humans, but quietly optimize (from the morlock pov) the eloi environment.”

Invariants change. The source is the post The problem with invariants is that they change over time. Quote:

  • “Our systems change over time. In particular, we will always make modifications to support new functionality that we could not have foreseen earlier in the lifecycle of the system.
  • Our code often rests on a number of invariants, properties that are currently true of our system and that we assume will always be true.
  • These invariants are implicit: the assumptions themselves are not explicitly represented in the source code. That means there’s no easy way to, say, mechanically extract them via static analysis.
  • A change that violates an assumed invariant can happen arbitrarily far away from the code that depends on the invariant to function properly.

What this means is that these kinds of failure modes are inevitable.”
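
A toy illustration of the quoted failure mode (my example, not from the quoted post): the lookup below silently assumes the table stays sorted, an invariant that is written down nowhere, and the change that violates it sits far away from the code that breaks.

    import bisect

    table = [1, 3, 5, 7]            # implicit invariant: sorted

    def contains(x):
        # binary search: correct only while the invariant holds
        i = bisect.bisect_left(table, x)
        return i < len(table) and table[i] == x

    # ... much later, "arbitrarily far away", new functionality is added:
    table.append(2)                 # violates the invariant, no error here

    print(contains(2))              # False: the failure surfaces elsewhere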

What is the problem here? Well, if you think we live in the deterministic, symmetric and semantic universe from Which way true?, then of course you are a hypocrite who knows well that this can’t be true, but you have to make believe it is. Quote:

“There has to be a formal mistake in this formal game. Despite all claims that the game is well written and everything is top done, there is a secret hope that hacks are possible because the devs were sloppy somewhere. And why not? After all, by symmetry we learn all the time that any particular piece of the universe which looked varied, it is not. Our uniqueness it is not. Why? Because somebody says so? In the simple symmetric and explainable universe everybody knows that the rules are imposed, not natural.”

Which way true?

Two possibilities:

  1. The universe is deterministic, symmetric, and we can describe it globally (it has global semantics). Then we find that it is simpler than it seems. There are Theories of Everything, which are far simpler than the universe itself. The variety that we see around is in fact shallow; the variety is only apparent from our particular and limited point of view. The universe is far simpler than it looks.
  2. The universe is nondeterministic, non symmetric, and there is no global semantics. There is variation everywhere, at any scale. The universe never and nowhere admits a complete description. The behaviour of any of its parts can only be predicted on average, or statistically, and for a limited time frame. The symmetries that we see are not global, therefore they are very precious; we can exploit them for limited parts of the universe, under limited conditions. There are no Theories of Everything which are simpler than the universe. The non symmetric universe translates into infinite variations where the lack of symmetry is used. We have randomness, friction, dissipation, but the universe is forever interesting.

Which way do you think is true?

If the universe were a game, then the two possibilities would be:

  1. The game has to be simple, because it is made by programmers. The variety is limited; after a while the gamers start to see under the skin of the game. They understand that the simple mechanisms result in a poor game universe, which the devs mask with lots of ad hoc tricks. Updates are needed all the time to make the experience look interesting, otherwise everything starts to look boring.
  2. The game is like a casino. Lots of games available, all conceptually simple, and the casino always wins eventually, but the social drama is endless and varied. Heroes and losers exist. All sorts of recipes to win the games exist. None works forever. Very addictive. Rare fortunes are made, most fortunes are lost.

Which way do you think is true?

If you wanted to hack the game, then the two possibilities would be:

  1. There has to be a formal mistake in this formal game. Despite all claims that the game is well written and everything is top done, there is a secret hope that hacks are possible because the devs were sloppy somewhere. And why not? After all, by symmetry we learn all the time that any particular piece of the universe which looked varied, it is not. Our uniqueness it is not. Why? Because somebody says so? In the simple symmetric and explainable universe everybody knows that the rules are imposed, not natural.
  2. In the casino, of course, you attack it scientifically. There are probability rules. Some bad habits of casino employees who are not random enough could be exploited. In the casino universe the scientific attitude pays well.

Which way do you think is true?

EncEnc

If you see this then share with attribution. The genetic code is cracked, up to 4! alternatives. As I suspected, it is a version between chemlambda v2 and this chemSKI.

Chora is a mask. Two masks make a node, two nodes make a mask, rewrite is mask eversion.

This is so elegant that I can’t believe it when I see it.

Based initially on the encoding tried in this picture

which is different from the permutation cube.

UPDATE: Up to a transposition (A and U are swapped), it seems the same as table 2 (the biperiodic table) from M X He, S V Petoukhov, P E Ricci, Genetic code, Hamming distance and stochastic matrices, which I found while searching for the relations between the Hamming [8,4] code and the genetic code. (The Hamming [8,4] code matches perfectly with an encoding of the 8 nodes I use, built for geometric reasons.)

SNK is the sink (with sign, it is the S or South combinator) and SOU is the source (with sign, it is the N or North combinator).

G = ++
C = −−
A = +−
U = −+
Arg, R
CGG CGU AGG CGA CGC AGA
−++ 231 FI  −+− 213 L   +++ 000 SNK −++ 231 FI  −+− 213 L   +++ 000 SNK
−++ 231 FI  −++ 231 FI  −++ 231 FI  −+− 213 L   −+− 213 L   −+− 213 L  
Ser, S
UCU UCG UCC UCA AGU AGC
−−− 111 SOU −−+ 321 FOX −−− 111 SOU −−+ 321 FOX ++− 123 D   ++− 123 D  
+−+ 312 A   +−+ 312 A   +−− 132 FOE +−− 132 FOE −++ 231 FI  −+− 213 L  
Leu, L
CUU CUC UUA CUA UUG (START) CUG (START)
−−− 111 SOU −−− 111 SOU −−+ 321 FOX −−+ 321 FOX −−+ 321 FOX −−+ 321 FOX
−++ 231 FI  −+− 213 L   ++− 123 D   −+− 213 L   +++ 000 SNK −++ 231 FI 
Ala, A
GCA GCU GCC GCG
+−+ 312 A   +−− 132 FOE +−− 132 FOE +−+ 312 A  
+−− 132 FOE +−+ 312 A   +−− 132 FOE +−+ 312 A  
Gly, G
GGU GGA GGG GGC
++− 123 D   +++ 000 SNK +++ 000 SNK ++− 123 D  
+++ 000 SNK ++− 123 D   +++ 000 SNK ++− 123 D  
Pro, P
CCA CCU CCC CCG
−−+ 321 FOX −−− 111 SOU −−− 111 SOU −−+ 321 FOX
−−− 111 SOU −−+ 321 FOX −−− 111 SOU −−+ 321 FOX
Val, V
GUU GUG GUC GUA
+−− 132 FOE +−+ 312 A   +−− 132 FOE +−+ 312 A  
+++ 000 SNK +++ 000 SNK ++− 123 D   ++− 123 D  
Thr, T
ACU ACG ACC ACA
+−− 132 FOE +−+ 312 A   +−− 132 FOE +−+ 312 A  
−−+ 321 FOX −−+ 321 FOX −−− 111 SOU −−− 111 SOU
START
AUG (MET) CUG (LEU) UUG (LEU)
+−+ 312 A   −−+ 321 FOX −−+ 321 FOX
−++ 231 FI  −++ 231 FI  +++ 000 SNK
STOP
UAA UGA UAG
−++ 231 FI  −++ 231 FI  −++ 231 FI 
+−− 132 FOE ++− 123 D   +−+ 312 A  
Ile, I
AUU AUC AUA
+−− 132 FOE +−− 132 FOE +−+ 312 A  
−++ 231 FI  −+− 213 L   −+− 213 L  
Asn, N
AAU AAC
++− 123 D   ++− 123 D  
−−+ 321 FOX −−− 111 SOU
Asp, D  
GAU GAC
++− 123 D   ++− 123 D  
+−+ 312 A   +−− 132 FOE
Cys, C
UGU UGC
−+− 213 L   −+− 213 L  
+++ 000 SNK ++− 123 D  
Gln, Q
CAA CAG
−++ 231 FI  −++ 231 FI 
−−− 111 SOU −−+ 321 FOX
Glu, E
GAA GAG
+++ 000 SNK +++ 000 SNK
+−− 132 FOE +−+ 312 A  
His, H
CAU CAC
−+− 213 L   −+− 213 L  
−−+ 321 FOX −−− 111 SOU
Tyr, Y
UAU UAC
−+− 213 L   −+− 213 L  
+−+ 312 A   +−− 132 FOE
Lys, K
AAA AAG
+++ 000 SNK +++ 000 SNK
−−− 111 SOU −−+ 321 FOX
Phe, F
UUU UUC
−−− 111 SOU −−− 111 SOU
+++ 000 SNK ++− 123 D  
Met, M
AUG (START)
+−+ 312 A  
−++ 231 FI 
Trp, W
UGG
−++ 231 FI 
+++ 000 SNK
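
As far as I can tell from the table, the two sign rows of a codon are obtained componentwise: the first row collects the first signs of its three letters, the second row the second signs. Here is a small script (my reconstruction, under that assumption) which regenerates the rows:

    # each base is a pair of signs; a codon yields two sign triples,
    # each naming a node of the permutation cube
    BASE = {'G': '++', 'C': '--', 'A': '+-', 'U': '-+'}

    NODE = {  # sign triple -> (permutation label, node name)
        '+++': ('000', 'SNK'), '---': ('111', 'SOU'),
        '++-': ('123', 'D'),   '+--': ('132', 'FOE'),
        '-+-': ('213', 'L'),   '-++': ('231', 'FI'),
        '+-+': ('312', 'A'),   '--+': ('321', 'FOX'),
    }

    def codon_to_nodes(codon):
        # return the two (signs, label, node) rows for a 3-letter codon
        pairs = [BASE[b] for b in codon]
        rows = []
        for i in (0, 1):
            signs = ''.join(p[i] for p in pairs)
            rows.append((signs,) + NODE[signs])
        return rows

    # e.g. CGG (Arg) gives (-++, 231, FI) twice, matching the table
    for codon in ('CGG', 'UCU', 'AUG'):
        print(codon, codon_to_nodes(codon))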

UPIM: “we found little need in molecular computing for such potentially nonterminating programs”

This quote is from section 2.1 of the UPIM report 10.

A recent previous post is dedicated to the research project from the 2000s called

Universally Programmable Intelligent Matter

by Bruce J. MacLennan.

This is a remarkable project which predates my proposal of molecular computers based on chemical reactions that work like interaction combinators graph rewrites.

However, it is fair to also mention what my Open Science project brings.

From the beginning, my project has identified graph quines as a good proposal for life.

That is why nonterminating computations may indeed be the most relevant interest for molecular computing.

“Experiments involve, among others, quine graphs. These are new proposals for artificial life. They can reproduce, die and they have a metabolism. Interesting lambda calculus computations, far more complex than simple circuits logic, can be done with local random rewrites algorithms. Lambda calculus, or more precisely graph rewrites systems inspired from it, can be taken as first principles when designing molecular computers. The simulations from the collection of animations show many examples of complex behaviour typical for living creatures, thus suggesting that real life, at molecular level, is the same kind of computation as (graphical versions of) lambda calculus.” [source]

Besides, it is of course fun to play with: it covers interaction combinators, SKI calculus and lambda calculus, and it has hundreds of proposed experiments and molecules/programs.

RNA based combinators

Thank you for notifying me about arXiv:2008.08814, An RNA-Based Theory of Natural Universal Computation.

The author proposes to partially use combinators which are RNA encoded, in such a way that the combinatory logic rewrites are implemented by pervasive RNA editing rules. The idea is like in Molecular computers, namely that for each rewrite there is an enzyme which detects the rewrite pattern and then performs the rewrite using a small repertoire of cleavage and ligation operations.

The author concentrates on the BCKW system, which works well up to the rewrite for the W combinator. At this point the problem of duplication appears (see the sketch after this list), which leads the author to propose the following replacement for duplication:

  • use RNA pseudoknots for term representation, in the sense that parentheses matching corresponds to double-stranded portions of the pseudoknot
  • instead of duplication, tag a term with a label and use the same double strand matching idea to reference the term in another or the same term (pseudoknot)
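
To see why W is the odd one out, here is a toy BCKW reducer (my own sketch, not from the paper). W is the only combinator whose right-hand side uses an argument twice, which is exactly the duplication that is hard to implement chemically:

    # terms are 'B', 'C', 'K', 'W', variable strings, or (f, a) pairs

    def unapply(t):
        # split nested application pairs into head and argument list
        args = []
        while isinstance(t, tuple):
            t, a = t
            args.append(a)
        return t, args[::-1]

    def apply_all(h, args):
        for a in args:
            h = (h, a)
        return h

    RULES = {  # combinator -> (arity, right-hand side builder)
        'B': (3, lambda x, y, z: (x, (y, z))),
        'C': (3, lambda x, y, z: ((x, z), y)),
        'K': (2, lambda x, y: x),
        'W': (2, lambda x, y: ((x, y), y)),  # y appears twice!
    }

    def step(t):
        # one leftmost-outermost reduction step
        h, args = unapply(t)
        if h in RULES:
            n, rhs = RULES[h]
            if len(args) >= n:
                return apply_all(rhs(*args[:n]), args[n:]), True
        for i, a in enumerate(args):
            r, changed = step(a)
            if changed:
                return apply_all(h, args[:i] + [r] + args[i+1:]), True
        return t, False

    # W f x -> (f x) x : the argument x has been copied
    print(step((('W', 'f'), 'x'))[0])  # (('f', 'x'), 'x')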

This is very interesting to pursue, perhaps in combination with chemSKI with tokens, which somehow proposes another mechanism for duplication, or with Combinatory Chemistry: Towards a Simple Model of Emergent Evolution arXiv:2003.07916, where duplication is delegated to the environment.

Partially explored also in Chemlambda strings, Zipper logic, as well as here in Zipper Logic and RNA pseudoknots. The idea is different though; look at how the I and K combinators look:

See also what is apparently the first project to propose the use of combinators in chemistry, UPIM, mentioned locally here.

Universally Programmable Intelligent Matter with chemSKI

I was notified about this project from the 2000s

Universally Programmable Intelligent Matter

by Bruce J. MacLennan.

Compare the earlier MacLennan:

Universally programmable intelligent matter (UPIM) is made from a small set of molecular building blocks that are universal in the sense that they can be rearranged to accomplish any purpose that can be described by a computer program. In effect, a computer program controls the behavior of the material at the molecular level. In some applications the molecules self-assemble a desired nanostructure by “computing” the structure and then becoming inactive. In other applications the material remains active so that it can respond, at the molecular level, to its environment or to other external conditions. An extreme case is when programmable supra-molecular clusters act as autonomous agents to achieve some end.

… with the later Buliga:

Define a molecular computer as one molecule which transforms, by random chemical reactions mediated by a collection of enzymes, into a predictable other molecule, such that the output molecule can be conceived as the result of a computation encoded in the initial molecule.

Compare the earlier MacLennan:

… with the later chemSKI rewrite

In relation to chemSKI with tokens:

Oh I’m so relieved that I was not the first to dream about such things.

Numbers in Pure See notation

Consider any graph with trivalent nodes among the eight of the permutation cube (A, L, D, FOX, FI, FOE, S, N), as well as 1-valent nodes like I, K, T, and with free half edges which will receive 1-valent FRIN or FROUT nodes.

Denote such a graph by G.

Equivalently, think about G as being a mol file. Or, following the conventions of Pure See, think about G as being a list of Pure See commands, because each of the trivalent nodes A, L, D, FOX, FI, FOE is in correspondence with a permutation of (from, see, as), while S is a fanout and N is a fanin.

Now, let’s make it all recursive, or perhaps fractal?

Suppose that the free half edges of G are decorated with 3 tags: from, see, as.

So, from far away, G is like a trivalent node, provided we group the free edges by tag.

The next thing we do is to put a mask on the graph G with decorated free edges.

Each such mask is a permutation of three elements (from, see, as).

We write:

G.from a G.see b G.as c

or simply

G a b c

to describe a node which is G, whose “from” decorated half edges are the 1st port, connected with the rest of the world by the edge with label “a”, whose “see” decorated half edges are the 2nd port, connected by the edge with label “b”, etc.
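
As a tiny illustration of the notation only (my sketch): the node is just G’s tag-grouped free edges, listed in the order given by a mask.

    G = {'from': 'a', 'see': 'b', 'as': 'c'}   # free edges grouped by tag

    def show(G, mask=('from', 'see', 'as')):
        # render the node with its ports ordered by the mask permutation
        return 'G ' + ' '.join(G[tag] for tag in mask)

    print(show(G))  # G a b c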

As in Pure See, we may have

G.see b G.from a G.as c

which is denoted by

(1-G) b a c

Why?

Because now we have a node G whose 1st port is made of the “see” decorated half edges and whose 2nd port is made of the “from” decorated half edges; all the rest is the same.

So we permuted the ports by using the permutation (213), or if you wish (see, from, as).

This corresponds, via the isomorphism of the group of permutations of 3 elements with the anharmonic group, to the element of the anharmonic group

1-z

hence the notation

1-G

Read Space, combinators and life for a more detailed explanation of the correspondence.

All in all we thus may have

G, 1-G, 1/G, (G-1)/G, G/(G-1), 1/(1-G)

which are our numbers! Along with 0, 1 and infinity, which are related to termination nodes (with masks) as explained in Pure See.
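
Here is a minimal sketch (my own check, not part of Pure See) of the correspondence used above: the six masks form the anharmonic group, written as 2x2 integer matrices up to sign, and composing any two of them lands back among the six, mirroring the composition of permutations of (from, see, as).

    import itertools

    MASKS = {  # Moebius map z -> (az+b)/(cz+d) as ((a, b), (c, d))
        'G':       ((1, 0), (0, 1)),   # z
        '1-G':     ((-1, 1), (0, 1)),  # 1 - z
        '1/G':     ((0, 1), (1, 0)),   # 1 / z
        '(G-1)/G': ((1, -1), (1, 0)),  # (z - 1) / z
        'G/(G-1)': ((1, 0), (1, -1)),  # z / (z - 1)
        '1/(1-G)': ((0, 1), (-1, 1)),  # 1 / (1 - z)
    }

    def compose(m, n):
        # matrix product = composition of the Moebius maps
        (a, b), (c, d) = m
        (e, f), (g, h) = n
        return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

    def normalize(m):
        # projective matrices: fix the sign so each map has one name
        s = 1 if next(x for row in m for x in row if x != 0) > 0 else -1
        return tuple(tuple(s*x for x in row) for row in m)

    names = {normalize(m): k for k, m in MASKS.items()}

    # closure: composing any two masks gives one of the six
    for a, b in itertools.product(MASKS, repeat=2):
        assert normalize(compose(MASKS[a], MASKS[b])) in names

    print(names[normalize(compose(MASKS['1-G'], MASKS['1/G']))])
    # prints (G-1)/G, because 1 - 1/z = (z - 1)/z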

Then we may multiply two numbers G H, take differences G – H, or apply other algebraic operations to these, as described here.

We can generate integers (which are not quite the Church integers, but similar) and fractions, like 2 and 1/2. And -1.

This is basically what is described in em-convex, except that here we use a graph G instead of a generic “z” or “epsilon”.

Mind that we don’t use the rewrites of Pure See, only the notation!

So this is the description of numbers which I use. It is, relative to Pure See, fractal or recursive, because we start from a pure graph which is a D node, then we mask it and we produce the other 5 nodes, then we produce larger and larger graphs by this number mechanism.

All are nested from… see… as… statements, or mol files with number-like notation for the type of the node.

From weak to strong

The more asymmetric, the better it computes with only local rules:

Mazza IC ~ Lafont IC < dirIC < chemlambda < Schonfinkel SKI < chemski

… with all sorts of variations or rediscoveries.

All IC or chemlambda versions satisfy Pure See which is linear.

Turingchem is a version of Lafont’s proposal to do Turing machines with interaction nets. It is even more asymmetric than chemski, but it does not have enough interesting graph structure and relies instead on a supply of node names (ie on arbitrary states and tape symbols). The same is true for another branch, automata. However, they are local.

The same phenomenon, namely to limit the graph structure but to admit arbitrary node names, is encountered in “enhancements” of interaction nets where one has an unbounded supply of labels for nodes, or in term rewrite systems where for example one uses de Bruijn notation. This technique leads to global rewrites. A path not taken by Nature.

Schonfinkel has the best proposal, apparently, which is, in its class, the most asymmetric.

Why?

I think I know why but

UPDATE (2024-03-07): For the importance of the lack of symmetry, see “Where there is life there is no symmetry”.

The North combinator

The North combinator N is the partner of the South combinator S.

You can see them here, at the end of the article.

S is the usual SKI combinator, which can be described in lambda calculus as

S = \x.\y.\z.((x z) (y z))
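
As a quick sanity check (my own snippet, plain Python rather than graphs), here are S and K as curried functions, with S K K behaving as the identity:

    S = lambda x: lambda y: lambda z: (x(z))(y(z))
    K = lambda x: lambda y: x

    I = S(K)(K)      # the classic S K K = I
    print(I(42))     # 42, since S K K z -> (K z)(K z) -> z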

The N combinator does not have a description in lambda calculus, though at the graphical level it is also constructed from application nodes, abstraction nodes and a fanin, while S is built from application, abstraction and a fanout.

Here are both in the same picture.

In the lower image, the S combinator: the red nodes are abstractions, the green nodes are applications, the orange one is a fanout.

In the upper image, the N combinator: same nodes, but now we use a pink node which is a fanin.

They cancel each other out, with only beta rewrites.

N and S are each what I call in emergent algebras a chora, if we use the Pure See identification of nodes. We can split these chora. There is no reason not to identify S with the fanout and N with the fanin.

We now have all the eight nodes of the permutation cube

where

000 = N 

111 = S

Btw the cube resembles a hemhem crown:

with its 3 nodes at the top of the image, 3 nodes at the bottom, in 3 pairs, and moreover with the 2 different nodes at the extremes (these would be, in our cube, the S and N).

This is only a small part of the new things.

My unsolicited prediction for AI in 2024

Here is my prediction for 2024 in AI, written here to compare in a year.

LLMs will scale to become a new interface, via natural written or spoken language, cloud based, not local.

The effect will be comparable with the passage from cards to keyboards, or with the passage from the PC to the smartphone.

I can’t fathom the social consequences. Maybe:

  • the internet will be really dead this time. What will be the new name of these new content delivery networks? Edge cloud? I don’t know. Everything will rather be created on demand, or hallucinated so to say, because it will be cheaper and faster than caching.
  • maybe not in a year, but soon enough, about 30% of jobs will consist of mediating between humans and AI, in both senses. What will this be called, perhaps reinforcement learning with human feedback (RLHF)? Nobody who is afraid of losing their job because of chatgpt will actually lose that job; they will reconvert to RLHF consultants, or whatever the name will be.
  • in academia, scaling replaces research. It is much easier to scrape, AI aided, for ideas and to scale what gives fast returns than to have new ideas. The consequence is an environment hostile to research: very rare new ideas, yet better than ever according to micromanagement criteria.

In my Gutenberg-net analogy from April-May 2019 I say about 2024:

“the Net takes a new direction, freed from the bad influence of the old generation influencers (2024)”

https://mbuliga.github.io/gutenberg-net.html

I don’t believe LLMs are this; on the contrary, this is a damping mechanism which aims to preserve the influence of the old ideas.

It is scaling, not creation.

Added: or, if I feel bold, I’d predict:

  • AI-rewritten human culture. Nobody goes to the sources. Inconsistency creep makes everything from before the AI era meaningless.
  • General opinion: people are not so smart, because “smart” BS is easy to generate. Identity crisis, no purpose in life for those gifted with small talents, hostility and hate toward talented people.
  • Conformist media, AI written and generated. Nothing interesting happens in the media. Everything in the media is addictive.
  • Everything interesting is not on the net. The damping mechanism will create a huge blow, eventually.
  • A new generation will try a new direction, now that the old direction has become stalled, incomprehensible and hostile. No understanding for the old ways, in comical or tragic ways, depending on your age.

Asemantic computing wins?

As Large Language Models are both random and asemantic, it seems that asemantic computing wins the world.

In this post I group some quotes from famous people, collected during the early days (2013-2014) when “asemantic computing” was called simply “no semantics”.

First please do not forget to see what “asemantic computing” means, by looking either at the newest:

(doi) (figshare) M. Buliga, Argument from AI summary: How does asemantic computing differ from traditional distributed computing? figshare. Journal contribution. (2023)

or at:

M. Buliga, Asemantic computing, in: chemlambda. (2022). chemlambda/molecular: Molecular computers which are based on graph rewriting systems like chemlambda, chemSKI or Interaction Combinators (v1.0.0). Zenodo.

Now, quotes:

Rodney Brooks, Intelligence without representation (1987), (link to pdf) (saved pdf), Section 5.1. [Mentioned in Nothing vague in the non semantic point of view.]

Brooks cites as reference [10] the following:

M.L. Minsky, ed., Semantic Information Processing (MIT Press, Cambridge, MA, 1968)

Brooks quote:

“It is only the observer of the Creature who imputes a central representation or central control. The Creature itself has none; it is a collection of competing behaviors. Out of the local chaos of their interactions there emerges, in the eye of an observer, a coherent pattern of behavior. There is no central purposeful locus of control. Minsky

[10] M.L. Minsky, ed., Semantic Information Processing (MIT Press, Cambridge, MA, 1968)

gives a similar account of how human behavior is generated. […]
… we are not claiming that chaos is a necessary ingredient of intelligent behavior. Indeed, we advocate careful engineering of all the interactions within the system. […]
We do claim however, that there need be no explicit representation of either the world or the intentions of the system to generate intelligent behaviors for a Creature. Without such explicit representations, and when viewed locally, the interactions may indeed seem chaotic and without purpose.
I claim there is more than this, however. Even at a local level we do not have traditional AI representations. We never use tokens which have any semantics that can be attached to them. The best that can be said in our implementation is that one number is passed from a process to another. But it is only by looking at the state of both the first and second processes that that number can be given any interpretation at all. An extremist might say that we really do have representations, but that they are just implicit. With an appropriate mapping of the complete system and its state to another domain, we could define a representation that these numbers and topological connections between processes somehow encode.
However we are not happy with calling such things a representation. They differ from standard representations in too many ways. There are no variables (e.g. see

[1] P.E. Agre and D. Chapman, Unpublished memo, MIT
Artificial Intelligence Laboratory, Cambridge, MA (1986)

[Agre mentioned here at Phil Agre’s orbiculus]

for a more thorough treatment of this) that need instantiation in reasoning processes. There are no rules which need to be selected through pattern matching. There are no choices to be made. To a large extent the state of the world determines the action of the Creature. Simon

[14] H.A. Simon, The Sciences of the Artificial (MIT Press,
Cambridge, MA, 1969)

noted that the complexity of behavior of a system was not necessarily inherent in the complexity of the creature, but perhaps in the complexity of the environment. He made this analysis in his description of an ant wandering the beach, but ignored its implications in the next paragraph when he talked about humans. We hypothesize (following Agre and Chapman) that much of even human level activity is similarly a reflection of the world through very simple mechanisms without detailed representations.”

V. Braitenberg, Vehicles: Experiments in Synthetic Psychology, MIT Press (1986) (archive.org link) (saved pdf). From the end of the Vehicles 3 section:

“But, you will say, this is ridiculous: knowledge implies a flow of information from the environment into a living being or at least into something like a living being. There was no such transmission of information here. We were just playing with sensors, motors and connections: the properties that happened to emerge may look like knowledge but really are not. We should be careful with such words.”

Kappers, A.M.L.; Koenderink, J.J.; Doorn, A.J. van, Local Operations: The Embodiment of Geometry, Basic Research Series (1992), pp. 1 – 23 (link to pdf) (saved pdf)

[Mentioned in The front end visual system performs like a distributed GLC computation]

Quotes from section 1, indexed by me with (a), …, (e):

  • (a) the front end is a “machine” in the sense of a syntactical transformer (or “signal processor”)
  • (b) there is no semantics (reference to the environment of the agent). The front end merely processes structure
  • (c) the front end is precategorical, thus – in a way – the front end does not compute anything
  • (d) the front end operates in a bottom up fashion. Top down commands based upon semantical interpretations are not considered to be part of the front end proper
  • (e) the front end is a deterministic machine […]  all output depends causally on the (total) input from the immediate past.

Louis Kauffman’s amusing answer to such quotes (taken from Nothing vague in the non semantic point of view):

“Dear Marius,
It is interesting that some people (yourself it would seem) get comfort from the thought that there is no central pattern.
I think that we might ask Cookie and Parabel about this.
Cookie and Parabel are sentient text strings, always coming in and out of nothing at all.
Well guys, what do you think about the statement of Minsky?

Cookie. Well this is an interesting text string. It asserts that there is no central locus of control. I can assert the same thing! In fact I have just done so in these strings of mine.
the strings themselves are just adjacencies of little possible distinctions, and only “add up” under the work of an observer.
Parabel. But Cookie, who or what is this observer?
Cookie. Oh you taught me all about that Parabel. The observer is imaginary, just a reference for our text strings so that things work out grammatically. The observer is a fill-in.
We make all these otherwise empty references.
Parabel. I am not satisfied with that. Are you saying that all this texture of strings of text is occurring without any observation? No interpreter, no observer?
Cookie. Just us Parabel and we are not observers, we are text strings. We are just concatenations of little distinctions falling into possible patterns that could be interpreted by an observer if there were such an entity as an observer?
Parabel. Are you saying that we observe ourselves without there being an observer? Are you saying that there is observation without observation?
Cookie. Sure. We are just these strings. Any notion that we can actually read or observe is just a literary fantasy.
Parabel. You mean that while there may be an illusion of a ‘reader of this page’ it can be seen that the ‘reader’ is just more text string, more construction from nothing?
Cookie. Exactly. The reader is an illusion and we are illusory as well.
Parabel. I am not!
Cookie. Precisely, you are not!
Parabel. This goes too far. I think that Minsky is saying that observers can observe, yes. But they do not have control.
Cookie. Observers seem to have a little control. They can look here or here or here …
Parabel. Yes, but no ultimate control. An observer is just a kind of reference that points to its own processes. This sentence observes itself.
Cookie. So you say that observation is just self-reference occurring in the text strings?
Parabel. That is all it amounts to. Of course the illusion is generated by a peculiar distinction that occurs where part of the text string is divided away and named the “observer” and “it” seems to be ‘reading’ the other part of the text. The part that reads often has a complex description that makes it ‘look’ like it is not just another text string.
Cookie. Even text strings is just a way of putting it. We are expressions in imaginary distinctions emanated from nothing at all and returning to nothing at all. We are what distinctions would be if there could be distinctions.
Parabel. Well that says very little.
Cookie. Actually there is very little to say.
Parabel. I don’t get this ‘local chaos’ stuff. Minsky is just talking about the inchoate realm before distinctions are drawn.
Cookie. lakfdjl
Parabel. Are you becoming inchoate?
Cookie. &Y*
Parabel. Y
Cookie.
Parabel.

Best,
Lou”

This is somehow premonitory. How about a discussion today, involving Cookie, Parabel and an LLM?

Bug bug bug

So now I found and fixed two bugs. One is more of a lapse: a rewrite which should not be there in the more recent version. The other is a modification which messed up terms with free variables.

As I don’t have the desire to feed microsoft via github, it is better if you just send me a message asking for the latest news, or perhaps with suggestions if you think you found another bug.

Why would you need that? Yeah, it is better to tell me that, too.

Because either I keep it for myself, or something is done in a collaborative way. Or you go your own ways, which are verified to not work, but you want to do them anyway? Relax and peace.

UPDATE: the correct chemSKI with tokens is available from my page here.

Nulkukel applied to something

UPDATE: see the nulkukel and the SKIworm at the updated chemSKI with tokens page.

John Tromp’s nulkukel is the minimal SK expression for the Y combinator:

S S K (S (K (S S (S (S S K)))) K)

In the official chemSKI, we may then try Y I, which is

((((S S) K) ((S (K ((S S) (S ((S S) K))))) K)) I)

(which you input in the textarea “λSKI> mol”). Everything works as expected.

But then nulkukel applied to something, ie Y x, is:

((((S S) K) ((S (K ((S S) (S ((S S) K))))) K)) x)

Input this and after a while the reduction will stop for no apparent reason.

There is a bug in the official chemSKI.

Now it is solved, but it is not yet shared.

With the bug solved, the nulkukel applied to something works and produces a gun of pairs of application and S (as a fanout) nodes, just like it should.

Although very, very slowly, ie by using lots of reductions.
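
A plain term-level SK reducer (my own sketch, independent of the chemSKI code) confirms the expected behaviour: Y x never reaches a normal form, it keeps growing under S and K rewrites.

    def unapply(t):
        # split nested application pairs into head and argument list
        args = []
        while isinstance(t, tuple):
            t, a = t
            args.append(a)
        return t, args[::-1]

    def apply_all(h, args):
        for a in args:
            h = (h, a)
        return h

    def step(t):
        # one leftmost-outermost S/K reduction step
        h, args = unapply(t)
        if h == 'K' and len(args) >= 2:
            return apply_all(args[0], args[2:]), True
        if h == 'S' and len(args) >= 3:
            x, y, z = args[:3]
            return apply_all(((x, z), (y, z)), args[3:]), True
        for i, a in enumerate(args):
            r, changed = step(a)
            if changed:
                return apply_all(h, args[:i] + [r] + args[i+1:]), True
        return t, False

    def size(t):
        return 1 if not isinstance(t, tuple) else size(t[0]) + size(t[1])

    S, K = 'S', 'K'
    Y = (((S, S), K), ((S, (K, ((S, S), (S, ((S, S), K))))), K))  # nulkukel
    t = (Y, 'x')
    for _ in range(30):
        t, _ = step(t)
    print(size(t))  # grows as the steps accumulate: no normal form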

A far shorter chemSKI graph, of only two nodes, does the same, namely

A input y x^S x output y

just like in chemlambda, mentioned many times, for example the chemlambda collection #259.

Will say more about the Y combinator in chemSKI, and about recursion, later.

Universal Artificial Chemistry

I mentioned before that, in my opinion, the most interesting thing in Lafont’s Interaction Combinators is his universality at the graph rewrite level. Quote from Molecular computers:

“It is very interesting how Lafont proved the Turing universality of his Interaction Combinators. First he introduces interaction systems, which are based on a general form of the patterns involved in the graph rewrites. Interaction systems are therefore at the level of structure to structure. Then he shows that Turing machines can be seen as particular interaction systems. Finally, the bulk of the article is dedicated to the proof that Interaction Combinators are universal in the sense that any interaction system can be translated into Interaction Combinators. Turing universality is therefore a corollary, because in particular the interaction systems which model Turing machines can be done with Interaction Combinators.

I would name this property “Lafont universality”, or “graph rewriting universality”. For confluent graph rewriting systems, like interaction systems, Lafont universality is equivalent with Turing universality, because conversely there is a clear algorithm for graph rewriting for interaction systems. But it is very interesting and inspiring that Lafont universality is a pure graph rewriting property.”

I think that the most interesting feature of graph rewriting artificial chemistries is that you can put them together!

What is better? Turing machines as turingchem? Lambda calculus as the chemlambda parser? Combinators as chemSKI? Interaction Combinators as dirIC? Each of those is a class of chemical reactions. Put them together, mix them as you wish!

Turn them into conservative reactions with tokens. Chemlambda too.

And then, look for a smaller set of reactions which are graph rewrite universal!

Pure See is one of them, but it is not done graphically.

However, there is a Universal Artificial Chemistry…

Soon…

unichem?

other name?

Artificial chemistries: in vivo versus in vitro

Artificial chemistries may be either in vivo or in vitro:

  • in vitro: lab-like operations or analogies. Well-stirred solutions (ie multisets), global operations like heating/cooling, chemical reaction networks (see CRNs are the stderr shadow),
  • in vivo: individual molecules in a random environment, lack of external control, like in a living organism.

While in vitro chemistries are very much in fashion, the in vivo chemistries are very intriguing!

Here is a list of such chemistries, to the best of my knowledge.

Fontana and Buss Algorithmic Chemistry.

Multisets of lambda calculus terms in normal form. (No mechanism to compute the normal form!)

Chemical reactions:

  • A + B -> AB

They later moved to study chemistry as a process calculus: the Kappa language, basically asynchronous graph rewriting.

Berry and Boudol The chemical abstract machine. Based on the Γ-language of Banâtre and Le Métayer.

Computes! Multisets of process calculus terms and operations.

Chemical reactions in vitro:

  • p + q <-> p | q (cool or heat)
  • a.p + a’.q -> p + q (complementary ions reaction)
  • membranes, airlocks…
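
A toy rendering of the two rules just quoted (my own encoding: '|' for parallel composition, 'in'/'out' for the complementary ions a.p and a'.q): heating splits p | q into two molecules, then the ions react and release their continuations.

    from collections import Counter

    def heat(sol):
        # heating: split a parallel composition p | q into two molecules
        for m in list(sol):
            if isinstance(m, tuple) and m[0] == '|':
                _, p, q = m
                sol[m] -= 1
                sol[p] += 1
                sol[q] += 1
                sol += Counter()  # drop used-up molecules
                return True
        return False

    def react(sol):
        # reaction: a.p together with a'.q becomes p + q
        ins = {m[1]: m for m in sol
               if isinstance(m, tuple) and m[0] == 'in'}
        for m in list(sol):
            if isinstance(m, tuple) and m[0] == 'out' and m[1] in ins:
                n = ins[m[1]]
                sol[m] -= 1
                sol[n] -= 1
                sol[m[2]] += 1
                sol[n[2]] += 1
                sol += Counter()
                return True
        return False

    # the solution { a.p | a'.q }
    sol = Counter({('|', ('in', 'a', 'p'), ('out', 'a', 'q')): 1})
    heat(sol)    # -> a.p + a'.q
    react(sol)   # -> p + q
    print(sol)   # only 'p' and 'q' remain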

Buliga Chemical concrete machine. Graph rewriting of individual molecules.

Chemical reactions in vivo:

  • LHS + Enzyme -> RHS + Enzyme

Later came chemlambda and Molecular computers, with graph rewriting systems similar to interaction combinators.

“Define a molecular computer as one molecule which transforms, by random chemical reactions mediated by a collection of enzymes, into a predictable other molecule, such that the output molecule can be conceived as the result of a computation encoded in the initial molecule. “

10.5281/zenodo.16018

See it live here.

Kruszewski and Mikolov Combinatory Chemistry: Towards a Simple Model of Emergent Evolution

Multisets of SKI calculus terms.

Chemical reactions:

  • IA -> A + I
  • KAB -> A + K + B
  • SABC + C -> (AC)(BC) + S
  • A + B -> AB
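
Here is a toy multiset simulation of these four reactions (my own sketch: terms are nested pairs, reactions pick among distinct terms rather than being weighted by multiplicity, and rewrites happen only at the head of a term):

    import random
    from collections import Counter

    def unapply(t):
        # split nested application pairs into head and argument list
        args = []
        while isinstance(t, tuple):
            t, a = t
            args.append(a)
        return t, args[::-1]

    def apply_all(h, args):
        for a in args:
            h = (h, a)
        return h

    def react(pot):
        # apply one randomly chosen reaction to the multiset `pot`
        t = random.choice(list(pot))
        h, args = unapply(t)
        if h == 'I' and len(args) >= 1:
            pot[t] -= 1
            pot[apply_all(args[0], args[1:])] += 1
            pot['I'] += 1
        elif h == 'K' and len(args) >= 2:
            pot[t] -= 1
            pot[apply_all(args[0], args[2:])] += 1
            pot['K'] += 1
            pot[args[1]] += 1
        elif h == 'S' and len(args) >= 3 and pot[args[2]] >= 1:
            # duplication is delegated to the environment: a free copy
            # of the third argument is consumed from the multiset
            a, b, c = args[:3]
            pot[t] -= 1
            pot[c] -= 1
            pot[apply_all(((a, c), (b, c)), args[3:])] += 1
            pot['S'] += 1
        else:
            u = random.choice(list(pot))   # condensation A + B -> AB
            if t == u and pot[t] < 2:
                return
            pot[t] -= 1
            pot[u] -= 1
            pot[(t, u)] += 1
        pot += Counter()                   # drop used-up molecules

    pot = Counter({'S': 20, 'K': 20, 'I': 20})
    for _ in range(200):
        react(pot)
    print(pot.most_common(5))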

Buliga chemSKI

Individual molecules (graphs) with conservative graph rewrites.

Chemical reactions:

  • LHS + Tokens -> RHS + Tokens

How do the zip, slip, and smash rewrites work in ZSS, and what are their applications in computation?

Again I used phind.com to summarize a github article: Zipper logic revisited https://doi.org/10.6084/m9.figshare.14769666.v1

Then phind suggested the question from the title and answered it.

Seems pretty clear. Here is the archived conversation.

In the following we have the summary and the answer to the question.

Answer | Phind V7 Model

The text is a transcript of a talk by Marius Buliga titled “ZSS: Zipper logic revisited” given in September 2020. The talk discusses the problem of computing with tangles and Reidemeister moves, and proposes an improvement to zipper logic, a method of universal computation using tangles and zippers.

The problem arises from the use of a graph rewrite system that includes tangle diagrams as graphs and Reidemeister moves as graph rewrites. The question is whether we can perform universal computation with this system.

Buliga explains that the usual way to perform computation with tangles is different from the problem at hand. In the usual method, a circuit is represented as a tangle, where crossings and maybe caps and cups are operators that satisfy the algebraic equivalent of the R moves. Each circuit represents a computation, and two circuits are considered equivalent if they can be transformed into each other using R moves.

Buliga introduces the concept of emergent algebras, a combination of algebraic and topological information. Emergent algebras can perform the Reidemeister 1 and 2 moves, but not the R3. However, by passing to the limit with the parameter that decorates the operations (or crossings), emergent objects and properties can be produced.

The text then discusses the concept of universal computation, which can be defined in three ways: lambda calculus, Turing machines, and Lafont’s Interaction Combinators. Buliga reformulates the problem of whether we can compute with R moves and presents a theorem stating that if there is a parser from lambda calculus to knot diagrams such that any lambda calculus rewrite is parsed to a pair of knot diagrams equivalent under the Reidemeister moves, then all lambda terms are sent to diagrams of the unknot.

The text concludes with the introduction of Zip-Slip-Smash (ZSS), a graph rewrite system that extends the Reidemeister rewrites with supplementary ones (zip, slip, smash) which allow the reconnection of edges, and tangle graphs with some 3 valent and 1 valent nodes. ZSS is an improved version of Zipper logic.

How do the zip, slip, and smash rewrites work in ZSS, and what are their applications in computation?

Answer | Phind V7 Model

The zip, slip, and smash rewrites in ZSS are graph rewrite rules that allow the reconnection of edges and manipulation of tangle graphs with 3 valent and 1 valent nodes.

The zip rewrites are used to create zippers, which are elements that have two non-identical parts. There are two types of zip rewrites: the first one uses only white half-zippers, and the second one uses only black half-zippers. These rewrites can be used to make diagrams and rewrites that resemble zippers https://github.com/mbuliga/zss/blob/master/zss.md .

The slip rewrites are introduced to resolve graphical ambiguity in the drawing of a half-zipper. They state that ends (of any color) which pass over a crossing are the same as half-zippers, and also that ends which pass under the crossing slip under and the crossing is destroyed https://github.com/mbuliga/zss/blob/master/zss.md .

The smash rewrites create half-zippers by imagining that a crossing is smashed with a hammer, breaking down into two half-zippers. All rewrites in ZSS are bi-directional, meaning that you can also turn a pair of half-zippers into a crossing. The smash rewrites turn a FIN fanin into a FI 3 valent node, in the presence of a L node. Conversely, a pair of half-zippers (black and white) connected in a certain way turns into a crossing https://github.com/mbuliga/zss/blob/master/zss.md .

In terms of their applications in computation, ZSS is universal because it implements directed Interaction Combinators (dirIC). The zip, slip, and smash rewrites, along with the Reidemeister rewrites for crossings, allow for the reconstruction of Lafont’s Interaction combinators by grouping A, L, and FI, FOE nodes. This makes ZSS a powerful tool for computation https://github.com/mbuliga/zss/blob/master/zss.md .
