Teaser: B-type neural networks in graphic lambda calculus (I)

Turing introduced his A-type and B-type neural networks in his 1948 technical report Intelligent Machinery. Read more about this in Turing’s Neural Networks of 1948, by Jack Copeland and Diane Proudfoot.

This post is the first in a series dedicated to a new project which I intend to launch after the summer break (or earlier; let’s see how long I can hold out). One of the goals of the project is to design, as a proof of principle, neural networks in graphic lambda calculus.

As a teaser, I want to describe how to build B-type neural networks, or more precisely a richer version of them.

By a B-type NN I mean a dynamical graph made of neurons and connections. Let’s take them one by one.

1. Neurons. In graphic lambda calculus a neuron is a particular type of graph of the following form (neurons appeared, in a simpler form, from the inception of graphic lambda calculus):

The body of the neuron is any graph $A \in GRAPH$ with several inputs and one output.

The axon is a tree of $\Upsilon$ gates, not the original ones but those from the freedom sector of graphic lambda calculus. In a moment I shall recall the notation of the freedom sector.

Likewise, the dendrites are “half-arrows” from the same sector. This is needed because the body of the neuron does not have to be in the freedom sector.
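To fix intuition, here is a hypothetical bookkeeping sketch of the neuron’s shape described above: a body graph with several inputs and one output, dendrite “half-arrows” feeding the inputs, and a binary tree of $\Upsilon$ gates fanning out the output. This is my own illustrative encoding, not part of graphic lambda calculus itself; all names are assumptions.

```python
# Illustrative sketch only: a neuron as "body + dendrites + Upsilon-tree axon".
# The names Neuron, UpsilonTree, etc. are invented for this sketch.
from dataclasses import dataclass, field


@dataclass
class UpsilonTree:
    """A binary fan-out tree of Upsilon gates: one input, `leaves` outputs."""
    leaves: int  # number of axon terminals

    def gates_needed(self) -> int:
        # a binary tree with n leaves uses n - 1 two-output gates
        return max(self.leaves - 1, 0)


@dataclass
class Neuron:
    body_inputs: int  # number of inputs of the body graph A
    dendrites: list = field(default_factory=list)  # half-arrows on the inputs
    axon: UpsilonTree = field(default_factory=lambda: UpsilonTree(leaves=1))

    def is_well_formed(self) -> bool:
        # each body input must be fed by exactly one dendrite half-arrow
        return len(self.dendrites) == self.body_inputs


# Example: a neuron whose body has two inputs and whose axon fans out
# its single output to four terminals (using three Upsilon gates).
n = Neuron(body_inputs=2, dendrites=["d1", "d2"], axon=UpsilonTree(leaves=4))
```

The point of the sketch is only the arity bookkeeping: one output fanned out through $n-1$ gates to $n$ terminals, and one half-arrow per body input.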

2. Notations from the freedom sector. Here they are:

The “yellow” gates from the freedom sector behave exactly like the original ones, but in this sector we can cut and paste the wires between these gates at will, by means of three graphic beta moves:

HOWEVER, the previous figure does NOT describe a connection. Connections are for next time.

11 thoughts on “Teaser: B-type neural networks in graphic lambda calculus (I)”

1. Jon, I don’t understand the figure at first reading. Do you have a pdf of [7]? It might be useful.

1. I have an old hard copy (350 pages, spiral bound).

Just found a current web page —

My work went off on a different tangent …

2. Excellent, thank you. I follow your blog, there’s a lot to discover there.

1. Thank you! Wait until you see the connections. In Turing’s B-type NNs there are booleans 1 and 0 running through the wires, but here, in graphic lambda calculus, there are no variables, so there has to be another mechanism. I intended to write about this today as well, but there was not enough time to explain it properly in one post.