Always Already Programming
I was clearing out my browser tabs earlier and I found a neat mini-essay by Melanie Hoff, Always Already Programming, that aims to deconstruct the difference between programming and using a computer. She blurs the distinction from both sides:
- a user pressing buttons in a browser is still calling some underlying JavaScript - why does this final call not count as 'programming'?
- a programmer writing code is still pasting together bits from prewritten libraries and inbuilt language features - isn't this the same sort of thing that an end user does by selecting from the palette of existing options provided by a UI?
I think there's a fair bit of truth to this, and I really like the 'always already' terminology, which highlights the way that when programming we're always immersed in some pre-existing world of affordances, and that all our actions take place on top of this ground.
Nevertheless, I think I want to say that the distinction between 'programming' and 'using' is roughly in the right place: clicking a button really is different enough from pasting someone else's premade code into the click event handler for that same button that it makes sense to call the second one 'programming' and not the first. (I could change my mind on this.)
Ok, so why do I have this intuition? I've got this vague memory of a relevant chunk from Bret Victor's Learnable Programming, hmm where is it, oh yeah it's this bit. He's taking an example library for drawing shapes and making it increasingly 'visual', to the point that the UI is starting to look more like Paint than a traditional coding environment:
> An objection might arise at this point. With this interface, is this even "programming"?
>
> No, not really. But none of the examples in this section are "programming". Typing in the code to draw a static shape --
>
> `triangle(80,60, 80,20, 140,60)`
>
> -- is not programming! It's merely a very cumbersome form of illustration. It becomes genuine programming when the code is abstracted -- when arguments are variable, when a block of code can do different things at different times.
>
> Learning programming is learning abstraction.
>
> A computer program that is just a list of fixed instructions -- draw a rectangle here, then a triangle there -- is easy enough to write. Easy to follow, easy to understand.
>
> It also makes no sense at all. It would be much easier to simply draw that house by hand. What is the point of learning to "code", if it's just a way of getting the computer to do things that are easier to do directly?
>
> Because code can be generalized beyond that specific case. We can change the program so it draws the house anywhere we ask. We can change the program to draw many houses, and change it again so that houses can have different heights. Critically, we can draw all these different houses from a single description.
This sounds basically right to me: the transition to 'programming' starts at roughly the point where you pull away from the raw experiential level of hitting buttons and start abstracting parts of that experience out for reuse. The abstraction could be done visually, and sometimes is, and it would still count as 'programming'.
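To make the distinction concrete, here's a minimal sketch of what Victor is pointing at, in Python rather than his Processing-style examples. The `polygon` function is just a made-up stand-in for whatever drawing primitive a real library would give you, and the coordinates are arbitrary:

```python
def polygon(*points):
    # Stand-in for a real drawing call; here we just record what we'd draw.
    print("draw polygon through", points)

# The 'cumbersome illustration': a fixed list of instructions that draws
# one particular house at one particular place.
polygon((80, 60), (80, 20), (140, 60))                # roof
polygon((80, 60), (140, 60), (140, 100), (80, 100))   # walls

# The 'programming' version: the same drawing abstracted over position and
# height, so one description can produce many different houses.
def draw_house(x, y, height):
    roof = height // 2
    polygon((x, y), (x + 30, y - roof), (x + 60, y))                     # roof
    polygon((x, y), (x + 60, y), (x + 60, y + height), (x, y + height))  # walls

for i in range(3):
    draw_house(x=80 * i, y=60, height=40 + 20 * i)
```

The first half is the fixed, hand-drawn-by-other-means case; the loop at the end is where, on this account, programming starts: a single description, many houses.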
So, um, is creating a recurring Google Calendar reminder 'programming'? I think I'll bite that bullet today and say yes, it is programming. And then think about this some more. (Is making a cron job 'programming'? I'm pretty sure that if one is, the other should be. This feels like roughly the right boundary zone, anyway.)
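For comparison, a cron job is just as declarative as the calendar reminder: a fixed schedule plus a fixed action. A standard five-field crontab entry looks like this (the script path is made up for the example):

```
# Run at minute 0, hour 9, any day of month, any month, on day-of-week 1 (Monday)
0 9 * * 1 /usr/local/bin/standup-reminder.sh
```

Whether you type those five fields into a crontab or pick 'Weekly on Monday' from a dropdown, you're specifying the same kind of abstraction over time, which is why the two seem to stand or fall together.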