
Thoughts on the Edge of Chaos

M. M. Taylor and R. A. Pigeau


1. Overview

2. Basic Ideas: Information and structure, Attractors and Repellors

3. Basic Ideas: Catastrophe

4. Structure and Chaos

5. Six Kinds of Replication

6. Categories and Logic

7. Surprise and importing structure

8. Replication


Categories and Logic

We do not believe that logic maps can occur in material systems, because the microstates (states within a category) cannot be segregated into disjoint regions about whose category membership there is no uncertainty. For any precision of measurement, there will always exist boundary states for which the local divergence within the precision of measurement is positive. The designers of computer logic circuits know this well, and design in such a way that the probability of being in such a boundary state is both vanishingly small and highly unstable (the divergence is large, so that movement away from the boundary state is rapid once the state is externally perturbed). In other words, they design the boundary around a cusp catastrophe, using the formal equivalent (and in the original computers, the physical model) of a flip-flop to ensure that the microstate orbits tend toward fixed-point attractors far from the boundaries of the logical category (Fig. 9).
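The flip-flop's escape from the boundary can be sketched with the simplest bistable dynamical system, dx/dt = x - x^3, which has fixed-point attractors at +1 and -1 and a repellor at the category boundary x = 0. This is our illustrative toy, not a model of any actual circuit:

```python
def settle(x0, dt=0.01, steps=5000):
    """Integrate dx/dt = x - x**3: a double-well system with
    fixed-point attractors at +1 and -1 and a repellor at x = 0."""
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3)  # simple Euler step
    return x

# Any state off the boundary is driven rapidly to a category centre;
# only the measure-zero boundary state itself stays put.
print(settle(0.05))   # -> very close to +1.0
print(settle(-0.30))  # -> very close to -1.0
```

The point of the sketch is that the divergence near x = 0 is positive (perturbations grow), while near the attractors it is strongly negative, so the system spends essentially all its time far from the category boundary.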


The "triflop" in Figure 9 (bottom) is a trivial case of what we call a "polyflop", a system in which many flip-flop-like entities feed their outputs recursively to the inputs of others, sometimes positively (creating associative groups) and sometimes negatively (creating mutual exclusion groups in which one and only one member can have a high output at any one moment).
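A mutual-exclusion group can be sketched with a MAXNET-style iteration, in which each unit's output is reduced in proportion to the summed outputs of all the others until only one retains a positive output. The function name and parameters here are our own illustrative choices, not the authors' circuit:

```python
def maxnet(values, eps=0.2, iters=100):
    """Mutual-exclusion iteration: each unit inhibits all the others,
    outputs are clamped at zero, and only the unit with the largest
    initial value survives with a positive output."""
    x = list(values)
    for _ in range(iters):
        total = sum(x)
        x = [max(0.0, xi - eps * (total - xi)) for xi in x]
    return x

# One and only one member of the group ends with a high output:
print(maxnet([1.2, 1.0, 0.9]))  # only the first unit stays positive
```

An associative (positively coupled) group would instead share activation among its members; the negative coupling shown here is what enforces the "one and only one" property described above.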

We believe that all interesting statements about cognitive systems must be made in the context of semi-chaotic maps. In a silicon context, the semi-chaotic map will approximate a logic map, whereas in a carbon context it will approximate a more ordinarily chaotic map.


Although the logic map cannot describe operations that are performed in a material substrate, it can be useful as a basis for discussion; it is, after all, the foundation for much of modern mathematics and philosophy. One can talk as if it were possible to define states and operations on those states whose results are completely predictable, whether or not such operations can ever be executed in reality. We shall argue below that appropriate introduction of catastrophe functions can allow logical operations to be approximated with an arbitrary degree of fidelity, even in the semi-chaotic map of a noisy carbon-based neural substrate.

A state in the logic map includes not only the present values of variables, but also the operations that will be performed on them. The variables naturally include anything that corresponds to a program of operations, so that the entire future of the system is given by its present state. This is, of course, true of all the maps, but it is worth reiterating because one often talks about states on which operations are imposed from outside. We are talking (for now) about informationally isolated systems.

Catastrophe: simulating logic in semi-chaos

A logical system demands that the result of an operation be predictable: that the forward trajectory of a state go at least to a known category. Many states in a semi-chaotic map have that property, but many do not, for any finite uncertainty of measurement. These latter states are near the boundaries between categories, and their uncertainties include parts of the repelling boundary. The uncertainty of a state is determined not only by the precision with which an outside observer can measure the state, but also by internal or intrinsic noise in the substrate for the dynamic system. In a silicon substrate, the noise is very small compared to the distance between the category centres, whereas in a neural substrate the noise may be substantial. No matter what the substrate, there is a physical limit to the precision with which a state can be known, imposed by the Heisenberg uncertainty principle.

How can a semi-chaotic system perform logical operations with a reasonable chance that the result will be predictable? The key is to transform the state space in such a way that the boundary regions of uncertainty become negligible in size as compared to the stable attractor basins of the various categories. In the boundary regions, replace the single-valued state space with a fold catastrophe surface, as shown in Fig. 2. When the underlying (covert) state is in the region covered by the fold, the overt or externally visible state is on one of the branches of the fold. Which branch it is on depends on its history. If the underlying state drifts into the region of the fold, the overt state stays on the contiguous branch, and does not move to the other branch until the underlying state drifts off the other side of the fold region. At that point, the overt state shifts abruptly to the other branch, and will not return to the original branch until the underlying state drifts completely across the fold region.
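The branch-switching rule just described can be sketched as a simple hysteresis tracker: the overt state changes only when the covert variable drifts completely past the far edge of the fold region. This is a toy sketch; the fold edges u_low and u_high are assumed parameters of our own:

```python
def fold_branch(u_path, u_low=-1.0, u_high=1.0, start="low"):
    """Track the overt branch as the covert variable u drifts along
    u_path. The branch switches only when u crosses past the far edge
    of the fold region (u_low, u_high): hysteresis."""
    branch = start
    history = []
    for u in u_path:
        if branch == "low" and u > u_high:
            branch = "high"
        elif branch == "high" and u < u_low:
            branch = "low"
        history.append(branch)
    return history

# At u = 0 (inside the fold region) the overt state depends on history:
print(fold_branch([-2, 0, 2]))                # ['low', 'low', 'high']
print(fold_branch([2, 0, -2], start="high"))  # ['high', 'high', 'low']
```

Inside the fold region the same covert value yields different overt states depending on the direction of approach, which is exactly the history dependence the text describes.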

Inasmuch as the history of the state affects its future in the region of the fold, that history should legitimately be considered an element of the state space. It is very convenient, however, to separate the state space into overt and covert components. Sometimes, for example, we will identify the covert component with the data of perception, while the overt component may be the result of perception. Consider a reversing figure such as the Necker Cube, in which the physical data remain constant while presumably the neural data change. The output is a perception either of the cube seen from above or of the cube seen from below (other states are possible, but rarer). Taylor and Aldridge (1974) were able to model the fluctuation statistics of such a reversing figure as a random walk of an underlying variable across such a fold catastrophe surface, and were able to identify abrupt alterations in the overt statistics with unit changes in the parameters of the fold.
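The Taylor and Aldridge model can be caricatured as a bounded random walk of a covert variable across the fold, with the percept switching branch only at the fold edges; the dwell times between switches then play the role of the reversal statistics. This is a sketch under our own assumed parameters, not their fitted model:

```python
import random

def reversal_dwells(steps=200_000, step=0.05, u_low=-1.0, u_high=1.0,
                    bound=1.5, seed=1):
    """Bounded random walk of the covert variable u across the fold
    region; the overt percept switches branch only at the far fold
    edge. Returns dwell times (in steps) between successive reversals."""
    rng = random.Random(seed)
    u, branch, dwell, dwells = 0.0, "low", 0, []
    for _ in range(steps):
        u = max(-bound, min(bound, u + rng.choice((-step, step))))
        dwell += 1
        if (branch == "low" and u > u_high) or (branch == "high" and u < u_low):
            branch = "high" if branch == "low" else "low"
            dwells.append(dwell)
            dwell = 0
    return dwells

dwells = reversal_dwells()
print(len(dwells))  # number of perceptual reversals in the run
```

Changing the walk's step size or the width of the fold region shifts the dwell-time distribution, which is the kind of parameter-to-statistics mapping their analysis exploited.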

The fold catastrophe should be considered as a cross-section through a cusp, in which the control axis along the cusp can be considered as some kind of stress imposed by context or by external requirements for categorization of the input data. Fig. 4 shows a crude example of the way the cusp catastrophe might affect responses to a pattern of three strokes that could possibly be identified as "A" or "H".
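The effect of the control axis can be illustrated with the standard cusp normal form: the equilibria of the potential V(x) = x^4/4 + a x^2/2 + b x change in number from one to three as the parameter a (our stand-in for the splitting axis of the cusp) changes sign. This assumes the textbook normal form, not any specific neural model:

```python
import numpy as np

def cusp_equilibria(a, b):
    """Real equilibria of the cusp potential V(x) = x**4/4 + a*x**2/2 + b*x,
    i.e. real roots of V'(x) = x**3 + a*x + b = 0."""
    roots = np.roots([1.0, 0.0, a, b])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

# On one side of the cusp there is a single equilibrium (one forced
# reading); on the other side there are two stable percepts separated
# by a repellor, and which one is seen depends on history.
print(len(cusp_equilibria(1.0, 0.0)))   # 1
print(len(cusp_equilibria(-3.0, 0.0)))  # 3
```

The single-valued regime and the folded, history-dependent regime of the previous paragraphs are just the two sides of this sign change.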

Decisions are an important aspect of logic. One cannot analyze what to do about something unless one has decided what the something is. As one accumulates more data, or as the stress increases, the likelihood that a decision will be made increases. The cusp catastrophe illustrates these effects (Fig. 10).

Fig. 10a. As time goes on, the data may change and the pressure for a decision may increase (modulator input). If the data input moves the point beyond the edge of the branch of the fold on which the perception point has been sitting, the perceptual output will change. At some point the relevant action must be taken. Even after this, though, the perceptual data, and thus the possible perceptual category, may change.
Fig. 10b. Re-evaluating a perceptual decision in two ways: by reducing the modulator input, variations in the data (new incoming information) can be given fair weight; or, if the incoming data strongly oppose the current categorical perception, the perception may change abruptly.

The bifurcation apparent in the transition across the cusp from a single-valued function to a fold catastrophe will be a recurrent theme in the different views of cognition that are coordinated in this paper. It is a general property of the approach to chaos induced by some kind of stress, and it will be seen in the branching structures of the cognitive snowflake.

