I was watching a talk by Brian Cantwell Smith a couple of years ago, and for some reason the bit that stuck in my mind was where he briefly quotes John Haugeland on digitality:
... he said digitality is an engineering notion, root and branch. It's a method for coping with the vagaries and vicissitudes, the noise and drift, of earthly existence.
Then I was flicking through On the Origin of Objects again the other day and noticed a citation to a paper by Haugeland, Analog and Analog. Sounded promising... and yep, it's the one with this quote. It's also enjoyable to read and very dense with insight. I'm not done with it yet; there's still a lot more I could get from thinking about this paper, but hopefully I've absorbed enough to explain some of the key points below.
Haugeland starts with three main features of digital systems: copyability, complexity and medium independence. I'll just quote his descriptions of these:
Flawless copying (and preservation) are quite feasible. For instance, no copy of a Rembrandt painting is aesthetically equal to the original, and the paintings themselves are slowly deteriorating; by contrast, there are millions of perfect copies of (most of) Shakespeare’s sonnets, and the sonnets themselves are not deteriorating. The difference is that a sonnet is determined by a sequence of letters, and letters are easy to reproduce–because modest smudges and squiggles don’t matter. The same goes for musical scores, stacks of poker chips, and so on.
Interesting cases tend to be complex: composites formed in standard ways from a kit of standard components–like molecules from atoms. Complexity can also be diachronic, in which case the standard components are actually standard steps or “moves” constituting a sequential pattern. For example, digits of only ten standard kinds suffice, with a sign and decimal point, for writing any Arabic numeral; moreover, a sequence of steps of a few standard sorts will suffice for any multiplication in this notation. Likewise, the most elaborate switching networks can be built with the simplest relays; and all classical symphonies are scored with the same handful of basic symbols.
There can be exactly equivalent structures in different media. Thus, the sonnets could be printed in italics, chiselled in stone, stamped in Braille, or transmitted in Morse code–and nothing would be lost. The same computer program can run on vacuum tube or solid-state hardware; poker chips can be plastic disks, dried beans, or matchsticks.
So, he asks, what's going on with these features? What's the underlying property of digitality that gives rise to these?
I'll get to his answer in a minute, but I found it easier to think about the examples he gives first. Take some 'tokens' that are just line segments of various lengths, and consider the following procedures for deciding whether they are tokens of the same type:
- Line segments are the same type if they are exactly the same length. This very much sounds like it should not be digital: it's a prototypical example of a continuous property.
- Line segments are the same type if they both fall between two consecutive inch marks when measured. This does split the line segments into distinct types but doesn't quite sound digital either, as there's too much opportunity for slop and confusion around the inch markers given that you can only measure to finite accuracy. Digital systems are normally engineered to avoid ambiguous cases.
- Line segments are the same type if they both fall in the upper half of the same inch interval; segments that don't fall in the upper half of an inch are 'ill-formed' (have no type). This sounds like a prototypical example of digitality, with clearly distinguished cases: no measurement is ever ambiguous between two types. A measurement near a half-inch mark could be ambiguous between ill-formed and acceptable, but that doesn't give the bad outcome of getting the type wrong.
- Line segments are the same type if they are exactly the same length. Only integer lengths are acceptable; all others are ill-formed. This has separated cases, but how are you going to tell whether you've got an acceptable one or an ill-formed one with any finite measurement? So this isn't really the sort of thing we're thinking of when we think of a digital measurement.
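To make the contrast concrete, here's a rough Python sketch of my own (not from the paper) of the reading procedures for examples 2 and 3, under the assumption that every measurement is only accurate to within some finite tolerance:

```python
# Hypothetical sketch (not Haugeland's): classify line segments by measured
# length, assuming measurements are only accurate to within +/- EPS inches.
EPS = 0.1  # assumed finite measurement accuracy

def read_example_2(measured):
    """Example 2: same type iff between the same pair of consecutive inch marks."""
    # A reading near an inch mark (say 3.05 +/- 0.1) is ambiguous between
    # type 2 and type 3 -- two *types* can be confused with each other.
    return int(measured)  # floor to the inch interval

def read_example_3(measured):
    """Example 3: type n iff the length falls in the upper half of inch n."""
    frac = measured - int(measured)
    if frac > 0.5:
        return int(measured) + 1   # e.g. a 7.8-inch segment is of type 8
    return None                    # ill-formed: no type at all

# With EPS < 0.25, two readings of one well-formed token can never disagree
# about its *type* under example 3: acceptable bands are separated by
# half-inch gaps, so only "type 8" vs "ill-formed" can ever be ambiguous.
print(read_example_3(7.8))   # 8
print(read_example_3(7.3))   # None (ill-formed)
print(read_example_2(3.05), read_example_2(2.95))  # 3 2 -- same token, two types
```

Examples 1 and 4 don't even get this far: with only finite accuracy there is no feasible reading procedure for "exactly the same length" at all.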
Now this is the definition Haugeland gives:
We can define a digital device as:
1. A set of types;
2. A set of feasible procedures for writing and reading tokens of those types; and
3. A specification of suitable operating conditions; such that
4. Under those conditions, the procedures for the write-read cycle are positive and reliable.
OK, so what do positive and reliable mean here? Despite the formal-definition-like structure of this, and the mathsy line segment examples above, Haugeland is not trying to give a mathematical definition here. Instead he's describing reasonable procedures in practical terms. The original version of the 'vagaries and vicissitudes' part Smith quotes is this:
… But digital, like accurate, economical, heavy-duty, is a mundane engineering notion, root and branch. It makes sense as a practical means to cope with the vagaries and vicissitudes, the noise and drift, of earthly existence. The definition should reflect this character.
So 'positive' and 'reliable' are descriptions of reasonable procedures. 'Positive' means that you can succeed completely at the thing. Here's how Haugeland explains positivity:
A positive procedure is one which can succeed absolutely and without qualification–that is, not merely to a high degree, with astonishing precision, or almost entirely, but perfectly, one hundred percent! Clearly, whether something is a positive procedure depends on what counts as success. Parking the car in the garage (in the normal manner) is a positive procedure, if getting it all the way in is all it takes to succeed; but if complete success requires getting it exactly centered between the walls, then no parking procedure will be positive.
'Reliability' then means that you can succeed every time at the positive procedure.
So, how does the definition of digitality map onto our earlier digital case, line segment example 3? We've already defined the set of types. It's also pretty obvious how to come up with a procedure for reading and writing: we write line segments by drawing lines with a ruler, and read them by measuring them. I guess the operating conditions are 'throw out anything that comes back ill-formed'. Given that, the resulting procedure will be positive, in that any acceptable line segment will be of one type only. And you can do it reliably. So yep, it's digital.
Now let's see what fails with the other examples. With example 1 we fail at being able to definitively assign line segments to a type when measuring with finite accuracy: there are infinitely many types any line segment could belong to. Example 2 fails around the inch marks, this time with just two candidate types a segment could belong to. With example 4, it's impossible to determine whether your token belongs to a type at all.
Finally, we can get to seeing how this definition explains copyability, complexity and medium independence. Copyability follows pretty straightforwardly. If you write a particular type you also have a feasible procedure for reading it, and that in turn can be your specification for writing a copy. For a line segment, you can write a line, read it with the procedure in example 3 to find it has a type of, say, 8, and then write another line that you know will be measured to have type 8.
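A small simulation (my own illustration, under the assumption that each physical act of drawing a line adds a little random error) shows why this read-then-write cycle matters. Copying the raw length drifts over generations, like re-photographing a Rembrandt; copying the type does not:

```python
import random

random.seed(0)
NOISE = 0.05  # assumed physical error added by each act of drawing a line

def write(nominal_length):
    """Drawing a line never hits the nominal length exactly."""
    return nominal_length + random.uniform(-NOISE, NOISE)

def read_type(length):
    """Example 3's reading procedure: type n iff in the upper half of inch n."""
    frac = length - int(length)
    return int(length) + 1 if frac > 0.5 else None

# Analog copying: each generation reproduces the measured length, so the
# errors accumulate into a random walk.
analog = write(7.8)
for _ in range(1000):
    analog = write(analog)

# Digital copying: each generation reads the *type*, then writes a fresh
# token aimed at the middle of that type's band (n - 0.25).
digital = write(7.8)
for _ in range(1000):
    t = read_type(digital)
    digital = write(t - 0.25)

print(round(analog, 2))    # has wandered away from 7.8
print(read_type(digital))  # still a token of type 8
```

Because the write error (0.05) is much smaller than the half-inch guard band, the read step always recovers type 8 exactly, and the thousandth copy is as good as the first.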
I found the explanation for complexity harder to follow, as it's spread out over a few remarks, but I think it's something like this. Tokens can be combined in large numbers and complex ways because they make it possible for the outcome of the process to reliably be the same thing, every time. This makes managing complexity much more tractable. You can print a vast string of 1s and 0s to disk, and someone else can follow the same procedure and get the same vast string.
Note that digitality of the tokens isn't enough on its own to reliably always get the same result. The rules for combining and transforming tokens also need to be digital. For example, with Arabic numerals there are reliable, positive procedures for concatenating them together to represent larger numbers, and reliable, positive procedures for transforming these numbers to other numbers with various arithmetic operations.
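As a toy illustration of that point (mine, not Haugeland's): schoolbook addition on Arabic-numeral strings is built entirely from a handful of standard digit-level steps, and because each step is positive and reliable, the whole complex procedure yields exactly the same well-formed token every time it's run:

```python
def add_numerals(a, b):
    """Schoolbook addition on Arabic-numeral strings, digit by digit with a
    carry. Every step takes well-formed tokens to a well-formed token, so the
    composite procedure is positive too."""
    result, carry = [], 0
    # Pad the shorter numeral with leading zeros, then work right to left.
    for da, db in zip(reversed(a.zfill(len(b))), reversed(b.zfill(len(a)))):
        carry, digit = divmod(int(da) + int(db) + carry, 10)
        result.append(str(digit))
    if carry:
        result.append(str(carry))
    return "".join(reversed(result))

# However many times you repeat it, the outcome is the same exact string.
print(add_numerals("238", "764"))  # "1002"
```

Ten digit types plus a few standard moves are enough for unboundedly many numerals and calculations, which is the diachronic complexity Haugeland describes.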
Medium independence comes from the digital procedures being enough on their own, without caring about any further properties of the tokens. We can implement the line segment definitions with rulers and bits of string, or walls and laser measures. This extends to complex combinations of the tokens, if those combinations are also defined digitally:
Digital devices are precisely those in which complex form is reliably and positively abstractable from matter–hence the medium-independence feature.
There's a whole second section in this paper on analogue systems. I'll be briefer on this because I haven't thought about it much yet, but there are a lot of interesting ideas here too.
It's tempting to think that 'analogue' just means 'not digital', but when we use it practically we seem to mean something more. We use 'analogue' to refer to things like film cameras and old TVs, but not to trees and bowls of ramen. So we need to understand what underpins this sense of 'analogue'.
Haugeland introduces an idea of 'second-order digitality' that analogue systems do satisfy:
Speaking freely for a moment, the essential point about (atomic) digital types is that there tend to be relatively few of them, and they are all clearly distinct and separated. Though the types of analog schemes are themselves not like this (they “blend smoothly” into one another), the dimensions along which they vary are relatively few and clearly distinct. Thus for photographs there are exactly three orthogonal dimensions: horizontal, vertical, and gray scale. A string model of a rail network has exactly one string piece for each rail link and exactly one knot for each junction (none of which blend together).
Ramen noodles are not even second-order digital in this way: they're just a big mess of stuff with no neat organising principles. (Haugeland's example is the metabolic system of the rat.)
Haugeland describes analogue systems using the same definition as digital systems, but swapping out the positive procedures for approximation procedures. Instead of an all-or-nothing positive criterion (e.g. parking the car in the garage) you have some procedure for getting as close as you can to being correct (e.g. parking the car in the exact middle of the garage). This is second-order digital because there are only a few distinct things you are trying to approximate. In comparison, for something like ramen noodles there's no small distinct set of state variables where you can approximate each one and be done with it. These systems are "even messier and touchier than pure analog".
There's so much in this paper and yet I feel like there's still so much more to think about. I see why Smith talks about digitality being undertheorised. A few quick questions and thoughts:
- In the talk I quoted at the start, Smith also touches on another interesting feature of digitality. We often implement it at some level, "to provide insurance against the mess below", and then build stuff on top of it that's continuous. Pictures on a digital screen, for example. I guess this stuff can be 'analogue' (like the pictures, or a simulation of a differential equation), or "messier and touchier than pure analog" (like the billions of parameters in a large neural net). Probably worth rewatching this other talk of Smith's on digitality here.
- How does digitality relate to Smith's idea of the middle distance? Or Derrida on iterability?
- What even is this field? I keep finding scattered things around the internet about how we manage to implement cleaned-up formal systems on top of the mess of the world. But it's so scattered that I turn up extraordinarily relevant things like this Haugeland paper basically by luck. What am I still missing??