
Thoughts on "Better without AI"

I read David Chapman's Better without AI last week. Here are some quick, somewhat half-baked thoughts.

I’m a big fan of Chapman's stuff in general and have read huge chunks of his online writing, but in this case I’m coming in with more of an outside perspective. I don’t know that much about the topic, and hadn’t seen any drafts or joined in any Twitter conversations.

Partly I haven't joined in because in the past I had an aversion to thinking about AI. I generally hate having to think about the Current Thing, even when the Current Thing is actually important and I agree that it should be the Current Thing. I despised the internet during the early months of covid, when everything fun disappeared and was replaced with nonstop discussions of flattening the curve and what mask to wear, and I've had a similar reaction to AI discussions.

I've softened on this in the last year, though. Anthropic's Toy Models of Superposition was one of the first times I had a genuinely excited "wow!" reaction to an AI-related thing. I only read and understood this at a very superficial level, but I started to internalise that there's some fascinating fine structure inside neural nets that can be investigated in detail, rather than just a gigantic endless boring slurry of linear algebra. (Neel Nanda and Tom Lieberum's reverse engineering of how a toy network learned to do modular arithmetic in a crazy way was in the same "wow!" category, though I read that even less carefully.)

On a more practical level, I'm applying to jobs right now, and suddenly I have to think about what I'd be OK with working on. It seems obvious to me that certain kinds of powerful AI could be very bad news, but I haven't thought very carefully about it beyond that. So I'm trying to upgrade my opinions here.

Right, so now for thoughts on the book itself. I would summarise the main thesis of the book as something like this:

  • There's one field, AI ethics, that has a large share of the narrative momentum around AI and is mainly interested in questions of fairness and bias in near-term AI systems
  • There's another field, AI safety, popularised by the LessWrong rationalists, that has another large share of the narrative around AI and is mainly interested in questions of what happens if AI becomes incomprehensibly powerful and destroys literally everything
  • There is a lot of neglected intermediate ground between "AI has systemic biases" and "AI becomes a superintelligence that turns us all into paperclips", which is more severe than the "ethics" scenarios and more tractable than the "safety" scenarios, and we should be spinning up narrative momentum here too
  • A lot of this is in the domain of normal political activism, i.e. we can advocate to ban or regulate or otherwise slow down the progress of new AI technologies, in the same way that we try and slow down other things we don't like. (See Katja Grace's Let's think about slowing down AI for a similar view.)

The section on what the neglected intermediate ground is like is the part that I found most convincing. There's a silly section about wombats that didn't quite land for me, even though I generally like silly stuff, but overall the argument seems pretty plausible. This argument is, more or less, "You know [gestures around], right? Like the general situation in $CURRENT_YEAR? This wouldn't be improved by having more powerful AI around, would it?" We're already living with bizarre, incoherent political controversies spread by social networks that mainly care about collecting data and selling ads, on an internet clogged up with slowly degrading nonsense produced by scammers and bots. More powerful AI is bad in the mundane sense that it can produce more and worse versions of these existing kinds of bad stuff, making the world even more incoherent and stupid and concentrating even more power in the hands of bad actors. In these "intermediate ground" scenarios the bad outcomes are mundane apocalypses involving normal bad things like nuclear war and bioweapons, rather than nanobots and infinite paperclips. Still pretty bad.

The next section is about what to do about this. It takes the form of a long list of pragmatic, mundane-in-a-good-way ideas (advocate for better privacy; improve cybersecurity; replace incomprehensible linear algebra slurry with something more understandable; protest loudly about uses of AI we don't like).

One thing I'm still somewhat unclear on is exactly what scope of "AI" Chapman thinks we would be Better Without. Is it literally every bit of software with a neural net in it? Or neural nets being used in situations where the exact behaviour the system produces is important, and unexpected outcomes are unacceptable? Or models trained on great big piles of text from the public internet? Or AI research that looks like it will push forward capabilities in some way? Or is it specifically AI trained by big creepy tech companies to sell more ads and stuff? Can we still make nice pictures with Midjourney, or should we be chucking the whole lot away until we understand what it's doing?

Maybe this is supposed to be kind of vague, because the book is aimed at a wide audience and the idea is to hammer in the point "AI could be very bad" without leaning too much on specifics. It looks like there are going to be two companion essays, Gradient Dissent and Are language models scary? (these are currently stub pages), which may have more details. These are the sort of practical questions I'm struggling to think through myself, and reading other people's thought-out opinions is helpful.

The last section is about how we can accelerate technological progress in ways that don't depend on AI, by making science and engineering work better. This covers similar material to In the Cells of the Eggplant and I'm already completely on board.

I don't have any resounding conclusions to finish on here. I was confused going in, and slightly grumpy about having to think about it at all, and I'm still confused and slightly grumpy. But I've got a little further in thinking about this mess.


This blog now has comments! I updated Ghost and they turned out to be a built-in feature in the latest version. You need to make an account to post comments, and I don't have much of a feel for what the process is like yet, but feel free to try it out :)