Nearly every computer program created in the last 68 years was made by typing linear characters into a text editor. ASCII symbols arranged from left to right, top to bottom, are the universal medium of code.
if number % 3 == 0 and number % 5 == 0: print("fizzbuzz")
Every major IDE and code editor is a text-first environment: VS Code, Vim, PyCharm, WebStorm, Xcode, Atom, Sublime Text, the list goes on.
This might seem so obvious it's not worth pointing out. Textual programming is the water we swim in. We express computation logic in written text because that is naturally and obviously the best medium for it. At least, this is what we tell ourselves from the vantage point of people who have only ever lived in a textual programming world.
What, exactly, might be the alternative to programming in pure text? Am I suggesting we interpretive dance our way through a function that prints out the Fibonacci sequence?
While that sounds absurd, using our bodies to reason about the world around us through spatial understanding and physical movement is something we've been doing far longer than we've been typing text into screens. Embodied knowledge has about 200,000 years behind it vs. 68 for programming. I'm taking the emergence of modern Homo sapiens as a very conservative marker. Our earlier hominid ancestors almost certainly reasoned about the world using spatial embodiment, but I can't necessarily point to hard material evidence for it. We could also get into a debate about the meaning of reasoning, but that is well beyond the scope of this post.
What I want to explore here is the history of a field known as "visual programming." The name itself is a bit of a misnomer, since anything we see with our eyes technically counts as "visual" — which would certainly include linear text. When people use the term, they are usually referring to some kind of spatial or graphical programming environment. Almost anything that moves beyond the current textual approach falls under the "visual" umbrella.
The realisation that text-only programming environments don't take advantage of our wildly impressive visual capacities or spatial intelligence is not new. In fact it's roughly as old as programming itself.
Bret Victor coined the term Learnable Programming in his 2012 essay of the same name, a critique of the environments we use to teach people to code.
These environments tend to look something like this:
They're your standard column-based "live coding" interfaces popular across the industry. You write code into one section, and hopefully get the output you expected in another. Syntax is abstracted away from the elements it affects.
It's easy to see why these linear, text-based interfaces seem like the best approach. They look identical to the standard interfaces the whole development industry uses to program.
We write code into an isolated text editor. If you type the correct sequence of words and symbols into the editor, the correct series of events happens somewhere out of view.
We start this process in a context that looks something like this...
...and get the output in a separate browser window. God only knows what happened in the middle.
We're only shown the end result. To see anything happening in the middle, we have to console.log data at each step of the way, or dig into dense developer panels and debuggers. The industry has accepted flying blind as standard operating procedure.
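This log-at-every-step workflow is the same in any language. A minimal sketch in Python of what it looks like in practice (the function and its steps are purely hypothetical, chosen to illustrate the pattern, not taken from any real codebase):

```python
def slugify(title):
    """Turn a post title into a URL slug, printing each intermediate step."""
    lowered = title.lower()
    print("after lower():", lowered)    # peek at step 1
    # Drop everything except letters, digits, and spaces.
    stripped = "".join(c for c in lowered if c.isalnum() or c == " ")
    print("after strip:", stripped)     # peek at step 2
    slug = "-".join(stripped.split())
    print("final slug:", slug)          # peek at step 3
    return slug

slugify("Hello, Visual World!")  # returns "hello-visual-world"
```

The print calls exist only because the environment shows us nothing on its own; in a more visual environment, each intermediate value would simply be visible.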
We're training people in the same kind of environment they'll be working in professionally. In the just-get-a-job mindset, that's an excellent approach. But Bret isn't talking about the ideal way to learn programming in the short-term, bootcampy worldview. When he talks about these environments as inadequate, he's referencing a much larger paradigm shift in how we should design human-computer interfaces.
He's pointing out that the standard text-based, disembodied, non-graphical interfaces we all put up with are unintuitive to humans who live in a highly visual, spatial, embodied world. While most of our modern user interfaces have graduated to a graphical, 3D space-based system, programming is staunchly attached to the linear text paradigm.
There's good reason for this. While many people have tried to develop visual programming environments over the decades, none have managed to displace text as the medium of professional software work.
While visual programming isn't great for the scale of complexity professional programmers deal with, it's ideal for people who are learning to code. When we simply need to explain what's happening under the hood, graphical representations are the best way to help people build clear mental models.
Computer history legends like Seymour Papert pioneered this territory with Logo, one of the first programming languages designed for children. With a few simple commands, like looping go(20) and turn(30) a dozen times, you could draw elaborate graphic patterns. Logo became widely used in education and a whole generation began their programming lives in it.
A classic example of a Logo-based interface where you direct a small turtle around a canvas. Source: pythonturtle.org
My own earliest memory of programming involved directing one of these small green reptiles around in circles. If you want a little throwback, you can play with a live Turtle environment at pythonturtle.org.
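The turtle model itself is simple enough to sketch in a few lines of Python. Below is a toy, display-free version (the `Turtle` class is my own illustration, not pythonturtle's actual API; the `go`/`turn` names just echo the commands mentioned above):

```python
import math

class Turtle:
    """A tiny text-mode turtle: tracks position and heading, no canvas."""
    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 0.0  # degrees; 0 means facing right
        self.path = [(self.x, self.y)]

    def go(self, distance):
        # Move forward along the current heading, recording the new point.
        rad = math.radians(self.heading)
        self.x += distance * math.cos(rad)
        self.y += distance * math.sin(rad)
        self.path.append((round(self.x, 6), round(self.y, 6)))

    def turn(self, degrees):
        # Rotate in place; drawing only happens on go().
        self.heading = (self.heading + degrees) % 360

t = Turtle()
for _ in range(12):  # twelve sides of 30 degrees each sum to 360...
    t.go(20)
    t.turn(30)
# ...so the turtle traces a closed 12-sided polygon back to the origin.
print(len(t.path))  # 13 points: the start plus one per side
```

Part of Logo's genius is that this tiny command vocabulary maps directly onto the learner's own body: "go" and "turn" are things you can act out by walking around the room.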
The modern manifestation of Logo is MIT's Scratch.
The Scratch learning platform lets you construct clear visual chains of programming commands
If we look past the campy, child-friendly aesthetics of Scratch, it's hard to argue this kind of visual interface isn't helpful. There is no need to memorise the syntax, it's easy to browse through the available commands, and the physical shape and colour of each command makes clear where you can and can't put it. Hovering over operations shows you whether they evaluate to true or false, and variables reveal their current value.
I could wax lyrical about the genius of this approach for paragraphs but I'll spare you. The
Let's circle back to Bret Victor and his concept of learnable programming. Bret outlined a set of principles he believed all programming environments should follow if they want their learners to make any headway. He argues a good environment should allow learners to:

- read the vocabulary
- follow the flow
- see the state
- create by reacting
- create by abstracting
Most of these are explicitly visual. We need to make what's happening in the program readable through visual representations of each syntax element, variable, and change of state over time. As Bret puts it:
"People understand what they can see. If a programmer cannot see what a program is doing, she can't understand it."
I won't expand on these too much, as Bret elaborates on them in the essay itself.
Since Bret wrote his piece in 2012, it's received plenty of buzz and cultish admiration (deservedly, IMHO). But I haven't seen it fully applied in any live learning platforms.
The world of programming education has certainly stepped up its visual game over the last decade. It's no longer just two-column, text-based execution contexts. We're now swimming in interactive visual environments and gamified educational platforms.
I began researching the field to see how many of them were putting Bret's principles into practice. The examples I looked at ranged from full-on illustrated games to lightly animated sequences of text. As I explored, I started to notice design patterns beyond the principles Bret outlined.
While Bret defined a set of ideals for a hypothetical learning platform, I became more interested in finding patterns in what already exists. While we're a long way from achieving the 'ideal' system, there's plenty of good design happening here and now.