I’m working on the proposal for my next book. As I do, I end up churning out little thought pieces. I figure I may as well share them now instead of later, and even let people participate in the formation of these ideas and arguments.
Jay Babcock at Arthur has maintained a place for me to publish these and reach a wider audience, so I’ll excerpt them here and link to the full pieces over there.
The first time I worked with a computer, way back in high school in the late ’70s, there was no such thing as software. To use the terminal, I had to write my own code and then input it into the computer. Only then would the computer be a typewriter, a calculator, a psychiatrist, or an elevator controller. A computer was an “anything” machine. Moreover, everything I wrote and saved—my “content”—was accessible and changeable by anyone else on the system, unless I specifically ordered otherwise. Media was no longer fixed; it was changeable. Not only ownership, but the very notion of finality had become arbitrary—even artificial.
Today, most of us think of computers—and all of our digital devices—in terms of the applications they offer: “What does it already do?” instead of “What can I make it do?” Likewise, instead of teaching computer programming in school, we teach kids how to use Microsoft Windows. This difference is profound. It exemplifies the core difference between a society capable of thinking its way beyond its current limitations, and one destined to repeat the same mistakes until it drives itself to extinction.
Computers and networking technology present humanity with the greatest opportunity for renaissance since the invention of the 22-letter alphabet in about the second millennium BCE. But, just like then, we are squandering the opportunity.