Dec 18, 2025 · 7 min read

From instructions to binaries to programs

A walkthrough from source code to a running program.


I keep noticing the same pattern: I can build features quickly, but when something goes weird in production, my explanations get fuzzy right at the bottom of the stack.

So I’m doing something unglamorous on purpose: I’m walking myself from “the words I type” to “the thing that runs”.

The words I’m trying to pin down are simple, but I keep mixing them up: source code, compilation, linking, binary, executable, program, runtime, and runtime environment. When those boundaries get blurry, everything above them starts to feel like the Force must feel to a regular stormtrooper: powerful, invisible, and impossible to explain.

Here’s the story in plain terms, the way I want to be able to tell it during a deploy or an incident.

Start with source code. It’s just text: expressive, readable for a programmer (most of the time), and full of intent. But it’s not the thing a machine executes. The machine doesn’t “run my Ruby” or “run my TypeScript” directly. At some point, my ideas have to be translated into something closer to what the computer understands.
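
To make that concrete, here is about the smallest piece of source code I can write. I’m using C for the sketches in this walkthrough because it keeps every step of the pipeline visible; the filename is just for illustration.

```c
/* hello.c - just text with intent; nothing here runs until it gets translated. */
#include <stdio.h>

int main(void) {
    printf("hello from source code\n");
    return 0;
}
```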

That translation is compilation. I used to treat compilation as a magic “make it faster” step, but the part that matters for me right now is simpler - compilation turns source code into a lower-level form. Depending on the language, this might be machine code, or it might be an intermediate representation that still needs more work later.
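
Here’s a minimal sketch of that step, assuming a C toolchain invoked as `cc`; the flags stop the pipeline early so you can see the lower-level forms it produces.

```c
/* add.c - "compilation turns text into a lower-level form", made visible.
 *
 *   cc -S add.c    # stop after compiling: add.s is human-readable assembly
 *   cc -c add.c    # compile and assemble: add.o is machine code, but not yet runnable
 */
int add(int a, int b) {
    return a + b;
}
```

For a language like Ruby, the lower-level form is bytecode for a virtual machine rather than native machine code, but the shape of the step is the same: text goes in, something closer to the machine comes out.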

Often, the next step is linking. This is where separate pieces become a single deliverable. If my code references something defined somewhere else, linking is where those references get resolved into “this exact implementation will be used when the program runs”. This is also where a lot of “but it works on my machine” pain can sneak in, because assumptions about libraries and dependencies become real.
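
A tiny sketch of that, again in C with made-up filenames: `main.c` only promises that `greet` exists somewhere, and the link step is what pins that promise to one concrete implementation.

```c
/* greet.c - defines the implementation */
#include <stdio.h>

void greet(const char *name) {
    printf("hello, %s\n", name);
}

/* main.c - only declares it: "someone, somewhere, provides greet" */
void greet(const char *name);

int main(void) {
    greet("world");   /* resolved to greet.c's implementation at link time */
    return 0;
}

/* Build:
 *   cc -c main.c greet.c       # two object files; main.o still has an unresolved reference
 *   cc main.o greet.o -o app   # linking resolves greet() to the exact code that will run
 */
```

Swap `greet.o` for a different object file or a different shared library and you’ve changed what runs without touching `main.c` - which is exactly where those “works on my machine” surprises come from.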

After that, we usually have a file we can ship. Most of the time it’s a binary, and it helps me to stop calling it “the code” and call it what it actually is - an artifact. Something I can build, store, copy, ship, and start. That word matters because it forces me to be concrete about what’s moving through the pipeline.

Then there’s a distinction I’ve personally paid for, multiple times - not every binary is an executable. A binary is just bytes on disk. An executable is a binary that the operating system can load and start. And a program is what exists after that start happens: a running instance with memory, state, and time passing.
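
The same distinction, spelled out as a small C sketch (the `./app` path is made up): the file existing, the OS agreeing to start it, and the running instance are three separate facts.

```c
/* run_it.c - binary vs. executable vs. program, as three separate checks. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    const char *path = "./app";   /* hypothetical artifact from the build above */

    /* A binary: bytes on disk. The file merely exists. */
    if (access(path, F_OK) != 0) { printf("no such file\n"); return 1; }

    /* An executable: the OS is willing to load and start it. */
    if (access(path, X_OK) != 0) { printf("file exists, but is not executable\n"); return 1; }

    /* A program: a running instance with a pid, memory, and time passing. */
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }
    if (pid == 0) {
        execl(path, path, (char *)NULL);
        _exit(127);               /* only reached if exec itself failed */
    }
    int status = 0;
    waitpid(pid, &status, 0);
    printf("pid %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    return 0;
}
```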

Once I say it that way, a lot of confusion collapses. “We shipped the binary” doesn’t mean the program ran. “The file exists” doesn’t mean it’s executable. “It started” doesn’t mean it behaved.

Now, when the program is running, I tend to blame “the code” when the reality is “the context”. Two layers matter here. The runtime is the language/platform support machinery that exists while the program runs. The runtime environment is the bigger reality around it: the OS, configuration, filesystem, network, permissions, containers/VMs, resource limits, and clocks.
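
Here’s a small sketch of why that context matters, assuming a POSIX system and a made-up `CONFIG_PATH` variable: the exact same binary prints different answers depending on what surrounds it.

```c
/* env_probe.c - same code, different runtime environment, different behavior. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void) {
    /* Configuration comes from outside the binary. */
    const char *config = getenv("CONFIG_PATH");
    printf("config: %s\n", config ? config : "(not set, falling back to defaults)");

    /* So do resource limits, permissions, clocks, and the rest of the environment. */
    struct rlimit files;
    if (getrlimit(RLIMIT_NOFILE, &files) == 0) {
        printf("open file limit: %llu\n", (unsigned long long)files.rlim_cur);
    }
    return 0;
}
```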

This is the habit I’m trying to build: when something behaves differently in production, my first instinct should be “is the runtime environment different?” before “the code must be haunted”.

Finally, hardware. I don’t need to be a hardware person to write software, but I do need to know the direction of travel. Eventually, work becomes machine instructions for a CPU. That’s the default answer to “where does execution happen”. A GPU shows up when parts of the workload are offloaded to a specialized parallel processor - but the story is still anchored by a program on a CPU coordinating the whole thing.
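
And if I ever want to see that end of the story, I don’t have to imagine it: disassembling a compiled object shows the instructions the CPU will actually execute (the filename is made up; `objdump` is assumed to be installed).

```c
/* sum.c - eventually this loop is just loads, adds, and branches.
 *
 *   cc -O2 -c sum.c
 *   objdump -d sum.o   # print the machine instructions the compiler produced
 */
long sum(const long *xs, long n) {
    long total = 0;
    for (long i = 0; i < n; i++) {
        total += xs[i];
    }
    return total;
}
```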

That’s it, may the force be with you!