The Joy of Nand2Tetris
Eureka
For the last month, I have been working through Nand2Tetris (The Elements of Computing Systems: Building a Modern Computer from First Principles). I recently completed part 1. This first half of the text illuminates the process of building a computer purely from elementary logic gates. Each chapter comprises an overview, a set of implementation details, and a project specification. After 6 chapters and approximately 50 hours of this format, I found myself amazed by the fact that I had a working, 16-bit Von Neumann computer running Pong from the binary output of an assembler that I wrote myself. To put it in perspective, 5 weeks ago I couldn't have told you what a Nand gate was, let alone the structure of a Von Neumann machine. Now I can construct the latter from scratch purely by combination and incremental abstraction of the former. I feel as though this experience has completely altered my perspective.
Abstraction
Many software engineer job postings today are titled Fullstack Engineer. What does this mean, exactly? Depending on who you ask, it could mean a number of different things. Your average developer may assume the full stack consists of some combination of frontend and backend code. A systems engineer will probably split that backend bit into further parts. The electrical engineer might tell you that, in fact, the full stack has relatively little to do with code, for the code itself is just an abstraction. The point is that a term like "full stack" is loaded, and there is likely no end to the abstraction layers: turtles all the way down. After building a computer out of Nand gates, I feel more acquainted than ever with the notion of abstraction. When I work with one of my tools, be it the command line or the standard library of the language I am using, I can now see the criss-crossed etching of implementation details across its surface more clearly.
Nand2Tetris
The first chapter of Nand2Tetris specifies a project in which the reader builds a series of elementary logic gates: Nand, Not, And, Or, Xor, Mux, and DMux (both single-bit and n-bit implementations of each). Funnily enough, I found this chapter to be far more difficult than any of the subsequent ones. I started this project around Christmas, during which I was sick with a cold for a few days. I specifically remember lying awake and feverish in the middle of the night trying to visualize the implementation of a multiplexer. At some point it clicked, and I recall a sort of burst of imagination. All of a sudden, for the first time in my life, I felt like I had gained a fundamental understanding of a part of the process by which we imbue inert systems with programmability (branching logic).
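The gate hierarchy that chapter builds can be sketched in ordinary code. The book uses its own HDL, not Python, so this is just my illustration of the idea: assume a single nand primitive, and define every later gate only in terms of gates already built.

```python
# The sole primitive: Nand. Everything else is composed from it.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a):        # Not(a) = Nand(a, a)
    return nand(a, a)

def and_(a, b):     # And(a, b) = Not(Nand(a, b))
    return not_(nand(a, b))

def or_(a, b):      # Or(a, b) = Nand(Not(a), Not(b))
    return nand(not_(a), not_(b))

def mux(a, b, sel): # Mux: outputs a when sel == 0, b when sel == 1
    return or_(and_(a, not_(sel)), and_(b, sel))
```

The multiplexer at the bottom is the branching-logic moment described above: selecting between two signals is itself just a fixed wiring of Nand gates.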
Whether it was Boolean arithmetic, memory, machine language, computer architecture, or assemblers, each subsequent chapter offered a similar peek behind the curtain of hardware that many software engineers never deign to draw. This experience has certainly made me a better programmer, even at the much higher level of abstraction in which I typically work. Having a basic understanding of what is really going on when I type print("Hello, world.") allows me to fit together pieces of high-level code from an unshakable foundation. I may still have questions about the electrical underpinnings of logic gates, but I no longer have to ponder questions like "How does a computer turn binary code into programs of arbitrary complexity?"
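The assembler project at the end of part 1 is where that last question dissolved for me. As a rough illustration (not the book's code, and with lookup tables covering only a handful of the Hack mnemonics), translating a symbol-free Hack instruction into its 16-bit binary form is little more than table lookups and string concatenation:

```python
# Toy slice of a Hack assembler (Nand2Tetris, chapter 6).
# These tables are deliberately partial; the real spec defines many more entries.
COMP = {"0": "0101010", "1": "0111111", "A": "0110000",
        "D": "0001100", "D+A": "0000010"}
DEST = {None: "000", "M": "001", "D": "010"}
JUMP = {None: "000", "JGT": "001", "JMP": "111"}

def assemble(line: str) -> str:
    """Translate one symbol-free Hack instruction into a 16-bit string."""
    if line.startswith("@"):                      # A-instruction: @value
        return format(int(line[1:]), "016b")
    dest, comp, jump = None, line, None           # C-instruction: dest=comp;jump
    if "=" in comp:
        dest, comp = comp.split("=")
    if ";" in comp:
        comp, jump = comp.split(";")
    return "111" + COMP[comp] + DEST[dest] + JUMP[jump]
```

For example, assemble("@2") yields "0000000000000010", the same sixteen bits the CPU's instruction decoding was built to consume a few chapters earlier.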