
Teaser: B-type neural networks in graphic lambda calculus (II)

The connections in a B-type neural network can be trained.  The following  quote and figure are taken from the article  Turing’s Neural Networks of 1948, by Jack Copeland and Diane Proudfoot:

Turing introduced a type of neural network that he called a ‘B-type unorganised machine’, consisting of artificial neurons, depicted below as circles, and connection-modifiers, depicted as boxes. A B-type machine may contain any number of neurons connected together in any pattern, but subject always to the restriction that each neuron-to-neuron connection passes through a connection-modifier.

[Figure: connecmod — a connection-modifier between two neurons, with green and red training fibres]

A connection-modifier has two training fibres (coloured green and red in the diagram). Applying a pulse to the green training fibre sets the box to pass its input–either 0 or 1–straight out again. This is pass mode. In pass mode, the box’s output is identical to its input. The effect of a pulse on the red fibre is to place the modifier in interrupt mode. In this mode, the output of the box is always 1, no matter what its input. While it is in interrupt mode, the modifier destroys all information attempting to pass along the connection to which it is attached. Once set, a connection-modifier will maintain its function unless it receives a pulse on the other training fibre. The presence of these modifiers enables a B-type unorganised machine to be trained, by means of what Turing called ‘appropriate interference, mimicking education’.
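To fix intuition before the graphic lambda calculus construction, here is a minimal Python sketch of the behaviour described in the quote (the names are mine and purely illustrative, not taken from the article):

class ConnectionModifier:
    def __init__(self):
        self.mode = "pass"              # the initial mode is arbitrary in this sketch

    def pulse_green(self):
        self.mode = "pass"              # pass mode: the output equals the input

    def pulse_red(self):
        self.mode = "interrupt"         # interrupt mode: the output is always 1

    def transmit(self, signal):         # signal is 0 or 1
        return signal if self.mode == "pass" else 1

m = ConnectionModifier()
m.pulse_red()
assert m.transmit(0) == 1 and m.transmit(1) == 1    # information is destroyed
m.pulse_green()
assert m.transmit(0) == 0 and m.transmit(1) == 1    # the connection passes signals again

Training a B-type machine, in Turing's sense, amounts to sending such pulses along the green and red fibres of its connection-modifiers.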

Let’s try to construct such a connection in graphic lambda calculus.  I shall use the notations from the previous post  Teaser: B-type neural networks in graphic lambda calculus (I).

3. Connections.  In lambda calculus, the Church booleans are the following terms: TRUE = \lambda x . \lambda y . x and FALSE = \lambda x . \lambda y . y (note that TRUE is the combinator K). By using the algorithm for transforming lambda calculus terms into graphs in GRAPH, we obtain the following graphs:

[Figure: switch_5 — the graphs of TRUE and FALSE in GRAPH]

They act on other graphs (A, B) like this:

[Figure: switch_6 — TRUE and FALSE acting on the graphs A and B]

The graphs are almost identical: both are made of a 2-zipper with an additional termination gate and a wire. See the post Combinators and zippers for more explanations about TRUE, i.e. the combinator K.
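To recall what the previous figure encodes, here is the corresponding computation in lambda calculus (a standard reduction, written out only for reference, assuming as usual that bound variables are chosen fresh):

TRUE A B = (\lambda x . \lambda y . x) A B \rightarrow_{\beta} (\lambda y . A) B \rightarrow_{\beta} A

FALSE A B = (\lambda x . \lambda y . y) A B \rightarrow_{\beta} (\lambda y . y) B \rightarrow_{\beta} B

In the graphic version, each \rightarrow_{\beta} step corresponds, roughly, to a graphic beta move applied to the 2-zipper.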

I am going to exploit this structure in the construction of a connection. We are going to need the following ingredients: a 2-zipper, an INPUT BOX (also called a “switch”, see below) and an OUTPUT BOX, which is almost identical to a switch (it is identical as a graph, but we are going to connect it with other graphs at each labelled edge):

[Figure: switch_2 — the OUTPUT BOX]

I start with the following description of objects and moves from the freedom sector of graphic lambda calculus (the magenta triangles were also used in the previous post). I call the object in the middle of the picture a switch.

[Figure: switch_1 — a switch (middle) and the moves transforming it into either of two graphs]

As you can see, a switch can be transformed into either of the two graphs (the upper and lower parts of the figure). We can exploit the switch in relation to the TRUE and FALSE graphs. Indeed, look at the next figure, which describes graphs almost identical to the TRUE and FALSE graphs (as represented using zippers), with an added switch:

[Figure: switch_4 — TRUE-like and FALSE-like graphs with an added switch]

Now we are ready to describe a connection like the one in B-type neural networks (only better, because it is done in graphic lambda calculus, which is much more expressive than boolean expressions). Instead of training the connection by a boolean TRUE or FALSE input (arriving along one of the green or red wires in the first figure of the post), we replace the connection by an OUTPUT BOX (should I call it a “synapse”? I don’t know yet) which is controlled by a switch. The graph of a connection is the following:

[Figure: switch_3 — the graph of a connection]

The connection between an axon and a dendrite is realized by attaching the axon at “1” and the dendrite at “3”. We may add a termination gate at “2”, but this is not essential. At the top of the figure we have a switch, which can take either of the two positions corresponding, literally, to TRUE or FALSE. This transforms the OUTPUT BOX into one of the two possible graphs which can be obtained from a switch.

You may ask why I did not put a switch directly in place of the OUTPUT BOX. One reason is that, this way, the switch itself may be replaced by the OUTPUT BOX of another connection. The second reason is that by separating the graph of the connection into a switch, a 2-zipper and an OUTPUT BOX, I have shown rigorously that what makes the switch function is the TRUE/FALSE-like input. Finally, I recall that in graphic lambda calculus the green dashed ovals are only visual aids, without intrinsic significance. By separating the OUTPUT BOX from the INPUT BOX (i.e. the switch) with a zipper, the graph now has an unambiguous structure.
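For readers who prefer a behavioural summary outside graphic lambda calculus, here is a small Python sketch (hypothetical names, only an analogue of the intended behaviour, not a faithful rendering of the graphs): the OUTPUT BOX acts as a selector driven by a Church boolean, and the control of one connection can itself be produced by another connection.

TRUE  = lambda x: lambda y: x      # \lambda x . \lambda y . x
FALSE = lambda x: lambda y: y      # \lambda x . \lambda y . y

def output_box(switch, axon_signal, blocked):
    # The switch selects what reaches the dendrite: TRUE lets the axon
    # signal through, FALSE replaces it by the "blocked" value.
    return switch(axon_signal)(blocked)

print(output_box(TRUE, "axon signal", "blocked"))     # axon signal
print(output_box(FALSE, "axon signal", "blocked"))    # blocked

# The control of one connection can itself be the output of another connection:
control = output_box(FALSE, TRUE, FALSE)              # evaluates to FALSE
print(output_box(control, "axon signal", "blocked"))  # blocked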


Teaser: B-type neural networks in graphic lambda calculus (I)

Turing introduced his A-type and B-type neural networks in his 1948 technical report Intelligent Machinery. Read more about them in Turing’s Neural Networks of 1948, by Jack Copeland and Diane Proudfoot.

This post is the first in a series dedicated to a new project which I intend to launch after the summer break (or earlier, let’s see how long I can hold out). One of the goals of the project is to design, as a proof of principle, neural networks in graphic lambda calculus.

As a teaser, I want to describe how to build B-type neural networks, or more precisely how to build a richer version of them.

By a B-type NN I mean a dynamical graph made of neurons and connections. Let’s take them one by one.

1. Neurons.  In graphic lambda calculus a neuron is a particular type of graph, of the following form (in a simpler form, neurons have appeared since the inception of graphic lambda calculus):

[Figure: neuron_1 — a neuron in graphic lambda calculus]

The body of the neuron is any graph A \in GRAPH with several inputs and one output.

The axon is a tree of \Upsilon gates, not the original ones, but those from the freedom sector of graphic lambda calculus. In a moment I shall recall the notations from the freedom sector.

Likewise, the dendrites are “half-arrows” from the same sector. This is needed because the body of the neuron does not have to be in the freedom sector.

2. Notations from the freedom sector. Here they are:

[Figure: dendrite — notations from the freedom sector]

The “yellow” gates from the freedom sector behave exactly like the original ones, but in this sector we can cut and paste the wires between these gates at will, by means of three graphic beta moves:

[Figure: dendrite_2 — the three graphic beta moves for cutting and pasting wires]

HOWEVER, the previous figure does NOT describe a connection. Connections are the subject of the next post.

Topographica, the neural map simulator

The following speaks for itself:

Topographica neural map simulator

“Topographica is a software package for computational modeling of neural maps, developed by the Institute for Adaptive and Neural Computation at the University of Edinburgh and the Neural Networks Research Group at the University of Texas at Austin. The project is funded by the NIMH Human Brain Project under grant 1R01-MH66991. The goal is to help researchers understand brain function at the level of the topographic maps that make up sensory and motor systems.”

From the Introduction to the user manual:

“The cerebral cortex of mammals primarily consists of a set of brain areas organized as topographic maps (Kaas et al. 1997; Van Essen et al. 2001). These maps contain systematic two-dimensional representations of features relevant to sensory, motor, and/or associative processing, such as retinal position, sound frequency, line orientation, or sensory or motor motion direction (Blasdel 1992; Merzenich et al. 1975; Weliky et al. 1996). Understanding the development and function of topographic maps is crucial for understanding brain function, and will require integrating large-scale experimental imaging results with single-unit studies of individual neurons and their connections.”

One of the Tutorials is about the Kohonen model of self-organizing maps, mentioned in the post  Maps in the brain: fact and explanations.

Maps in the brain: fact and explanations

From Wikipedia:

Retinotopy describes the spatial organization of the neuronal responses to visual stimuli. In many locations within the brain, adjacent neurons have receptive fields that include slightly different, but overlapping portions of the visual field. The position of the center of these receptive fields forms an orderly sampling mosaic that covers a portion of the visual field. Because of this orderly arrangement, which emerges from the spatial specificity of connections between neurons in different parts of the visual system, cells in each structure can be seen as forming a map of the visual field (also called a retinotopic map, or a visuotopic map).

See also tonotopy for sounds and the auditory system.

The existence of retinotopic maps is a fact; the problem is to explain how they appear and how they function without falling into the homunculus fallacy (see my previous post).

One of the explanations of the appearance of these maps is given by Teuvo Kohonen.

For precise statements, browse the paper The Self-Organizing Map, or get a blurry impression from this wiki page. The last paragraph of section B, Brain Maps, reads:

It thus seems as if the internal representations of information in the brain are generally organized spatially.

Here are some quotes from the same section, which should raise the attention of a mathematician to the sky:

Especially in higher animals, the various cortices in the cell mass seem to contain many kinds of “map” […] The field of vision is mapped “quasiconformally” onto the primary visual cortex. […] in the visual areas, there are line orientation and color maps. […] in the auditory cortex there are the so-called tonotopic maps, which represent pitches of tones in terms of the cortical distance […] at the higher levels the maps are usually unordered, or at most the order is a kind of ultrametric topological order that is not easy interpretable.

Typical for self-organizing maps is that they use (see wiki page) “a neighborhood function to preserve the topological properties of the input space”.

From the connectionist viewpoint, this neighbourhood function is implemented by lateral connections between neurons.
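As a concrete illustration, here is a minimal Python sketch of one self-organizing map update (the names and parameter values are mine, chosen only for illustration): the Gaussian neighbourhood function on the grid is what makes nearby units move together, hence what preserves the topology of the input space.

import numpy as np

# One weight vector per unit of a 2D grid (illustrative sizes).
grid_h, grid_w, dim = 10, 10, 3
rng = np.random.default_rng(0)
weights = rng.random((grid_h, grid_w, dim))

def som_update(weights, x, lr=0.1, sigma=2.0):
    # Find the best-matching unit (BMU) for the input x.
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # Gaussian neighbourhood on the grid, centred at the BMU: nearby units
    # are pulled strongly towards x, distant units hardly at all.
    rows, cols = np.indices((grid_h, grid_w))
    grid_dist2 = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
    h = np.exp(-grid_dist2 / (2 * sigma ** 2))
    weights += lr * h[:, :, None] * (x - weights)
    return weights

weights = som_update(weights, rng.random(dim))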

For more details see, for example, Maps in the Brain: What Can We Learn from Them? by Dmitri B. Chklovskii and Alexei A. Koulakov, Annual Review of Neuroscience 27: 369-392 (2004).

Also browse Sperry versus Hebb: Topographic mapping in Isl2/EphA3 mutant mice by Dmitri Tsigankov and Alexei A. Koulakov.

Two comments:

1. The use of a neighbourhood function does much more than just preserve topological information. I tentatively propose that such neighbourhood functions appear out of the need to organize spatial information, as explained in the pedagogical paper from the post Maps of metric spaces.

2. Reasoning only on discretizations of the plane (hexagonal or otherwise) is plain wrong, but this is a problem encountered in many, many other places. It is wrong because it introduces the (Euclidean) space through the back door (well, this and happily using an L^2 space).