Kill Math, or maybe Don't Kill Math
Bret Victor has this old series called Kill Math, which argues for, well, killing math, and replacing it with more concrete ways of working with quantitative reasoning:
This mechanism of math evolved for a reason: it was the most efficient means of modeling quantitative systems given the constraints of pencil and paper. Unfortunately, most people are not comfortable with bundling up meaning into abstract symbols and making them dance. Thus, the power of math beyond arithmetic is generally reserved for a clergy of scientists and engineers (many of whom struggle with symbolic abstractions more than they'll actually admit).
The earliest part of this, Simulation as a Practical Tool, is particularly uncompromising in arguing for replacing symbolic manipulation with simulations of concrete problems. It's an old essay, and also has a few caveats at the end, so I'm not sure it's really representative of his current thinking. But it's an interesting position to think about and argue against anyway.
There's an excellent blog post by Evan Miller in response to it, called, inevitably, Don't Kill Math. He engages with Victor's post thoughtfully and with a lot of respect for his work, but ultimately thinks the argument against using analytic methods is pretty bad. I'm going to try and summarise his arguments here (though I also recommend going and reading the whole thing). First I'll go over Victor's original post a bit.
The main example is a word problem from a book on maths education, about a skateboarder on a playground roundabout. Basically it involves doing some trig to find the distance the skateboarder travels when they let go, and then converting that to a time taken using an assumption of constant speed. He points out that this problem is contrived and uninteresting (agreed!) and uses some dodgy assumptions (yep!), and instead explores the problem more generally using computer simulation. So, for example, the back wall can be curved. Unfortunately the Flash simulation has died in the years since 2009, but you can sort of make out what was going on from the remaining static images.
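Since the interactive version is gone, here's a very rough sketch in Python of the kind of thing the demo was doing (every number here is invented, and none of it is Victor's actual setup): step the skateboarder forward along the tangent line from the point where they let go, and stop when they cross a back wall that's allowed to be any shape.

```python
# A rough stand-in for the dead Flash demo, with made-up numbers throughout:
# a skateboarder lets go of a roundabout of radius R and travels in a
# straight line, tangent to the rim, until they hit the back wall. The
# wall is just a function x = wall(y), so it can be curved.
import math

R = 1.5                        # roundabout radius, metres (assumed)
speed = 4.0                    # skateboarder's speed, m/s (assumed constant)
release = -math.pi / 2 + 0.4   # angle on the rim at the moment of letting go

def wall(y):
    """Back wall as x = f(y): a gentle curve rather than a straight line."""
    return 6.0 + 0.2 * y**2

# Position on the rim at release, and the tangent direction of travel
# (assuming the roundabout spins anticlockwise).
x, y = R * math.cos(release), R * math.sin(release)
dx, dy = -math.sin(release), math.cos(release)

dt, t = 0.001, 0.0
while x < wall(y) and t < 60.0:    # step forward until the wall is crossed
    x += speed * dx * dt
    y += speed * dy * dt
    t += dt

print(f"hit the wall after {t:.2f} s, about {speed * t:.1f} m from the release point")
```

The brute-force loop is the whole point: you can swap wall() for any shape you like and nothing else has to change, which is exactly the flexibility Victor is after.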
Here's his summary of why the simulation is better:
- Concrete. Modeling lets us set up and solve problems in a concrete domain, without translating to and from abstractions. Keeping the solution grounded in the problem domain makes it palatable to many more people, makes it easier to reality-check, and encourages exploration of related problems.
- In context. Simulation and visualization give us a broader understanding of the situation. Simulation shows us not just the final answer, but the path taken to get there. Visualization encourages us to see the solution as a point on a plot, not a number in isolation.
- Unrestricted. Simulation handles non-analytic situations with ease. The conditions of a problem do not need to be contrived or compromised for a convenient symbolic representation.
- Generative. A simulation tool can be used for design, not merely analysis. Guided by human creativity and insight, the tool can construct designs and help explore their implications.
OK, so what are the arguments against using simulation alone? Miller's main one is that analytic methods are clearer:
For the practicing scientist, the chief virtue of analytic methods can be summed up in a single word: clarity. An equation describing a quantity of interest conveys what is important in determining that quantity and what is not at all important.
So, for the skateboarder example, the angle θ is given by sin(θ) = r/h, where r is the direct distance to the wall and h is the length of the skateboarder's path.
What’s not important: The speed of the skateboard, the size of r in absolute terms, the size of h in absolute terms, the color of the grass, and everything else we might think of.
So analytic methods are good at taking the varied mess of ordinary experience, and pulling out the few relevant factors for a specific problem: in this case, just the ratio between two quantities. In comparison, when playing with a computer simulation, it's very hard to understand exactly what's relevant. Everything is mushed together in a way that makes finding structural relationships very difficult:
One might make “discoveries”, but one is never certain:
- Does this “discovery” apply to all parameter choices?
- What is the actual quantitative content of this discovery? If some relationship appears to hold — is this relationship approximate or exact? Numbers might go “up together” or “down together”, but is the underlying relationship linear, exponential, periodic, or what? If no relationship appears to hold — is that because I can’t see it, or because it’s not there?
- Am I missing a more fundamental relationship in this simulation? Is there a product, ratio, or other function of parameters that is more important than the parameters considered individually?
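To see what he means, take the formula from earlier: θ = arcsin(r/h) depends only on the ratio of the two lengths, so scaling both of them together changes nothing. That falls straight out of the symbols; discovering it by dragging sliders in a simulation is much harder. A tiny sketch in Python (with made-up numbers):

```python
import math

# Same ratio r/h each time, very different absolute sizes (made-up numbers).
for r, h in [(1.0, 2.0), (3.0, 6.0), (50.0, 100.0)]:
    theta = math.degrees(math.asin(r / h))
    print(f"r = {r:5.1f}, h = {h:6.1f}  ->  theta = {theta:.1f} degrees")
# All three lines print theta = 30.0: only the ratio matters.
```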
Miller then talks about analytic methods that could do some of the work of generalising the original problem to more interesting cases: using a polynomial approximation for the curved wall, for instance, or modelling it as a parametric curve (there's a rough sketch of what the polynomial version might look like after the list below). Now, it's true that these methods are not very accessible to most people. But there are two ways that they differ from the computer simulations that Victor likes:
- they isolate structural components of the problem, with precise relationships between them
- they use the arcane 'legacy UI' of mathematics: inscribing lots of complicated symbols on a piece of paper and then transforming them by hand
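Here's the promised sketch of the polynomial route, reusing the same invented geometry as the simulation sketch above (this is my illustration, not anything from Miller's post): fit a polynomial to the curved wall, substitute the straight-line path into it, and "where do they meet?" becomes a small polynomial equation whose coefficients show exactly how each part of the problem enters.

```python
import numpy as np

# A few surveyed points along the curved back wall, as (y, x) pairs (made up).
wall_y = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
wall_x = np.array([6.8, 6.2, 6.0, 6.2, 6.8])

# Fit x = a*y^2 + b*y + c to the wall points.
a, b, c = np.polyfit(wall_y, wall_x, 2)

# The skateboarder's straight-line path after letting go:
# (x, y) = (x0, y0) + s * (dx, dy), with s the distance travelled.
x0, y0 = 0.58, -1.38    # release point (same made-up values as before)
dx, dy = 0.92, 0.39     # direction of travel (roughly a unit vector)

# Substituting the path into the fitted wall gives a quadratic in s:
#   (a*dy^2) s^2 + (2*a*y0*dy + b*dy - dx) s + (a*y0^2 + b*y0 + c - x0) = 0
A = a * dy**2
B = 2 * a * y0 * dy + b * dy - dx
C = a * y0**2 + b * y0 + c - x0

s = min(root.real for root in np.roots([A, B, C]) if root.imag == 0 and root.real > 0)
print(f"distance to the wall along the path: {s:.2f} m")
```

The numeric root-finding at the end is incidental; the interesting part is that the structure, a quadratic in the distance s whose coefficients are built from the release point, the direction of travel and the wall shape, is sitting out in the open. That's the first difference above doing its work.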
Miller's argument is, roughly, that the first difference is Good, Actually: this is what underpins most of the power of mathematical reasoning. To make maths more accessible, visual tools should instead focus on finding better replacements for the second:
Minds like Victor’s should be focused not on eliminating analysis from the process of inquiry, but on making analysis easier, more intuitive, and more concrete — ideally, allowing mathematical novices to ply even the dark arts of Lagrange polynomials and the calculus of variations. Simulation facilities should serve only as a sort of back-up method for those lamentable situations where the fog is too thick to see anything clearly, that is to say, where analytic methods fail to enlighten.
This sounds exactly right to me: it's the argument I wanted to make, but someone had already done it for me. Useful when that happens!