
Notes on (some of) Braitenberg's 'Vehicles'

A couple of months ago I started reading Valentino Braitenberg's classic Vehicles, where he describes a series of increasingly complicated robotic 'vehicles' that demonstrate progressively more intelligent-looking behaviour. I was doing this with an eye to finding some links to Brian Cantwell Smith's idea of the 'middle distance' needed for representation.

I wrote these notes a month or so ago and was meaning to cover more vehicles but seem to have stalled for now, so I may as well post what I have.

Vehicle 1 - one sensor (e.g. temperature), one motor. The higher the temperature, the faster the motor goes. Slows down in cold regions, speeds up in warm ones.
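As a sketch (the proportionality constant and the function name are mine, not Braitenberg's):

```python
def vehicle1_speed(temperature, k=0.5):
    """Vehicle 1: one sensor, one motor. Motor speed is simply
    proportional to the sensed temperature."""
    return k * temperature

print(vehicle1_speed(30.0))  # 15.0 -- speeds up in a warm region
print(vehicle1_speed(5.0))   #  2.5 -- slows down in a cold one
```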

Does this one hold any interest from a middle distance perspective? Not really; it’s more like a thermostat, completely locked to the current temperature. Does the movement of the vehicle ‘represent’ the temperature, or just ‘respond to’ it?

Vehicle 2 - 2 sensors, 2 motors

2a. Connect each sensor to the motor on the same side. Say there’s a heat source to the right. The right sensor, being nearer the source, drives the right wheel faster than the left, so the vehicle turns away from the source.

2b. Connect each sensor to the opposite motor. Again take the heat source to the right. Now the right sensor drives the left wheel, so the left wheel speeds up more than the right and the vehicle turns towards the source.
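A minimal sketch of the difference between the two wirings (the function name and the gain k are mine, not from the book):

```python
def wheel_speeds(left_sensor, right_sensor, crossed=False, k=1.0):
    """Vehicle 2: two sensors, two motors, excitatory connections.
    crossed=False is 2a (same-side wiring), crossed=True is 2b."""
    if crossed:
        return k * right_sensor, k * left_sensor   # 2b: each sensor drives the opposite wheel
    return k * left_sensor, k * right_sensor       # 2a: each sensor drives its own side

# Heat source to the right, so the right sensor reads higher:
print(wheel_speeds(1.0, 2.0, crossed=False))  # (1.0, 2.0): right wheel faster, turns away
print(wheel_speeds(1.0, 2.0, crossed=True))   # (2.0, 1.0): left wheel faster, turns towards it
```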

These two are still fully causally locked to their environment: they are in constant connection to the temperature via their sensors. They do have more interesting behaviour, though. There is more internal state… both sensor-motor channels are causally locked to the input, but they are also inescapably tied to each other.

It’s pretty similar to a simple bimetallic strip thermostat, I suppose (a common example when philosophers talk about representation)… some internal structure allows it to move in a somewhat complicated way, but it’s all causally locked to the source, all the time.

Same for Vehicle 3, which introduces inverse proportionality (the motor slows down when sensor input is higher, so the vehicle spends longer near the source).
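Something like this, where v_max and k are my own placeholders:

```python
def vehicle3_speed(sensor, v_max=10.0, k=1.0):
    """Vehicle 3: inhibitory connection. Stronger input means a slower
    motor, so the vehicle lingers near (or stops at) the source."""
    return max(0.0, v_max - k * sensor)
```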

Vehicle 4 introduces a more complicated dependence of motor speed on sensor input intensity. E.g. the speed might be maximal at a certain input value and decrease above and below it.

Type 4a is smooth influence. Type 4b is interesting because it introduces discontinuities, including the option of the motor not being activated at all for certain ranges of input. So this is the first vehicle that can be out of causal contact, which is important in BCS’s 'middle distance' understanding of representation. It doesn’t yet have the internal state to do anything with that, though.

(Now I'm wondering whether 'totally out of causal contact' is such a big deal... I mean there isn't so big a difference between a step function that goes to 0 and one that just gets close to it...)

Figure 8 has a nice graph of possible activation functions, including a step function, (what neural net people would call) a ReLU and a more complicated thing.

Artificial neurons generally have some activation function like this, so this is some kind of 'middle distance' behaviour.
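For concreteness, something like the following (the step and ReLU are as named in the figure; the bump is my guess at the 'more complicated thing', based on the Vehicle 4 description):

```python
import math

def step(x, threshold=1.0):
    """Discontinuous: the motor is completely off below the threshold (the 4b case)."""
    return 1.0 if x >= threshold else 0.0

def relu(x, threshold=1.0):
    """Zero up to the threshold, then increasing -- what neural net people call a ReLU."""
    return max(0.0, x - threshold)

def bump(x, best=1.0, width=0.5):
    """Smooth and peaked at a preferred intensity, falling off either side (the 4a case)."""
    return math.exp(-((x - best) / width) ** 2)
```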

Vehicle 5 introduces logic, using the step activation function from Vehicle 4b and connecting multiple threshold devices together. They can either ‘activate’ by sending +1 unit to the next device, or ‘inhibit’ by sending -1.

> … it is important for you to know that in one of these threshold devices the output does not appear immediately upon activation of the input, but only after a short delay, say one tenth of a second. During this time the gadget performs its little calculation, which consists of comparing the quantity of its activation with its threshold.

So this is already a kind of temporal memory: it’s responding to something that happened a little while back (deferring, in Derrida-speak).
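Here’s a minimal sketch of one of these threshold devices. The tick-based update and the names are mine, but the idea is the one in the quote: inputs are summed, compared with the threshold, and the result only appears a moment later.

```python
class ThresholdDevice:
    """Sums the +1 (activating) and -1 (inhibiting) signals it receives;
    on the next tick it outputs 1 if the sum reached its threshold, else 0."""
    def __init__(self, threshold=1):
        self.threshold = threshold
        self.pending = 0   # input gathered during the current tick
        self.output = 0    # what downstream devices see (last tick's result)

    def receive(self, signal):
        self.pending += signal   # +1 to activate, -1 to inhibit

    def tick(self):
        self.output = 1 if self.pending >= self.threshold else 0
        self.pending = 0
        return self.output
```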

OK, there’s a section on how the memory works.

> … there is room for memory in a network of threshold devices, if it is large enough. Imagine a threshold device connected to a sensor for red light. When it is activated by the red light, it activates another threshold device which is in turn connected back to the first device. Once a red light is sighted, the two devices will activate one another forever. Take a wire from the output of one of the two threshold devices and connect it to a bell: the ringing of the bell then signals the fact that at some time in the past this particular vehicle sailed in the vicinity of a source of red light.

So the vehicle doesn’t need to be in current contact with the red light for the bell to keep ringing. We’ve now got enough resources for something like BCS’s super-sunflower.
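Reusing the ThresholdDevice sketch from above, the red-light memory is just two of these units wired into a loop, with the 'bell' read off one of them. In this toy version the light has to be in view for a couple of ticks before both units latch; after that the loop sustains itself with no further input:

```python
a = ThresholdDevice(threshold=1)   # driven by the red-light sensor and by b
b = ThresholdDevice(threshold=1)   # driven by a

def world_tick(red_light_seen):
    a.receive(1 if red_light_seen else 0)
    a.receive(b.output)
    b.receive(a.output)
    a.tick(); b.tick()
    return a.output                # the wire to the bell

print(world_tick(False))           # 0 -- nothing seen yet, bell silent
print(world_tick(True))            # 1 -- red light in view...
print(world_tick(True))            # 1 -- ...long enough to latch both devices
for _ in range(5):
    print(world_tick(False))       # 1 -- light long gone, but the loop keeps the bell ringing
```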

There’s an example here of using the environment as a memory store: leaving marks on a sandy beach and then coming back and reading them off. E.g. this way it could calculate the difference between two large numbers, where the numbers are too big for it to store but the difference isn’t.
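A toy rendering of the beach trick (my own, not a procedure from the book): pair off the two rows of marks until one runs out, and only count the leftovers.

```python
def difference_via_marks(marks_a, marks_b):
    """Walk along two rows of marks, crossing off one from each row at a
    time; only the leftover marks ever need counting, so neither full
    number has to be held internally."""
    leftovers = 0
    a, b = iter(marks_a), iter(marks_b)
    while True:
        got_a, got_b = next(a, None), next(b, None)
        if got_a is None and got_b is None:
            return leftovers
        if got_a is None or got_b is None:
            leftovers += 1   # one row has run out: count what remains in the other

print(difference_via_marks(range(1_000_003), range(1_000_000)))  # 3
```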

(this is making me think of Husserl’s ‘authentic’ and ‘inauthentic’ numbers…)

There are also two examples in Figure 10 which I don’t understand :(

Vehicle 6 is about artificial selection of vehicles that don’t fall off the table.