Prompting Is the New Programming: A Beginner’s Take on LLMs
by Frédéric Brunner - April 2024
Large Language Models (LLMs) are revolutionizing how we interact with computers. For the first time in computing history, we can communicate with machines using natural language rather than rigid programming syntax. In this beginner's guide, we'll explore what makes LLMs fundamentally different from traditional programs and why that matters—even if you've never written a line of code.
What Are LLMs?
At their core, LLMs are computer programs that take text as input and generate text as output. This might sound simple, but there's something profound happening beneath the surface.
Every computer program takes an input and produces an output. Some programs take joystick movements as input and generate blinking pixels as output. Others capture your face and voice as input and display them on another computer in real-time. So what makes LLMs so revolutionary?
Unlike traditional programs that follow explicit instructions, LLMs learn patterns from massive amounts of text data—billions of documents, books, websites, and conversations. This training enables them to recognize relationships between words, concepts, and contexts. When you prompt an LLM, it's not executing a pre-defined function but generating a response based on the patterns it's learned, similar to how humans recognize language patterns.
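To make that text-in, text-out contract concrete, here is a minimal sketch in Python using the OpenAI client library as one example; the model name is an assumption, and any LLM provider exposes an equivalent call:

from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# Text goes in...
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whatever model you have access to
    messages=[{"role": "user", "content": "Explain what an LLM is in one sentence."}],
)

# ...and text comes out.
print(response.choices[0].message.content)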
Of course, LLMs aren't perfect. They can "hallucinate" facts, struggle with complex reasoning, and require careful prompting to produce reliable results. They're tools in the toolbox with tremendous potential but also meaningful limitations that engineers and users need to understand.
Traditional Programming vs. LLMs
Let's examine the first program every programmer learns to write:
print("Hello World")
You tell the computer to execute this instruction, and like a good listener, it prints:
Hello World
In this example, we've given it text, and it has given us text back—seemingly similar to what an LLM does. But there's a critical difference.
In traditional programming, the computer knows exactly what it must do: when it sees print(...), it has pre-built instructions to break the task down to the pixels that must change on the screen to display Hello World (or whatever parameter you pass). If you had written
printBig("Hello World")
unless printBig was explicitly defined by you elsewhere in the code, the computer would have complained because it doesn't recognize it:
FATAL: function printBig undefined
If you want to print something in a larger font, you need precise syntax with explicit parameters, like:
print("Hello World", font_size=36)
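To see how literal this is, here is a toy Python version; print_big is a made-up name, and since a terminal has no real font sizes, this sketch just shouts in spaced-out capitals:

def print_big(text):
    # No real font control in a terminal, so we fake "big"
    # with uppercase, spaced-out letters.
    print(" ".join(text.upper()))

print_big("Hello World")  # H E L L O   W O R L D

The computer runs print_big happily, but only because we spelled out exactly what it means first.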
Traditional computers are like babies with strict preferences—give them pasta with cheese, they're happy; give them broccoli and apple, they protest. You must provide instructions that follow detailed rules so the computer can break tasks down to basic operations like "light this pixel, darken that one."
This approach is powerful enough to send rockets to the moon, but AI offers something fundamentally different.
The "Aha Moment": Pattern Recognition vs. Processing
Consider what happens when you ask an LLM to "print an elephant." Somehow, the machine parses the concept of "elephant" and produces an output that captures the meaning of that thing. But the AI has never been explicitly programmed to know what an elephant looks like. This is the key difference in modern AI tools: we no longer have to define
printElephant()
or
print(animal="elephant")
because now the AI can interpret instructions differently.
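In code, the contrast looks something like this (reusing the client from the earlier sketch; no printElephant function is defined anywhere):

# Nobody ever wrote printElephant(). We simply describe what we want:
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name, as before
    messages=[{"role": "user", "content": "Print an elephant as ASCII art."}],
)
print(response.choices[0].message.content)

A traditional program would reject this outright; the LLM produces a plausible elephant because it has learned the patterns associated with the word.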
This was my personal "aha moment": I realized this technology would change the world. I have programmed for 35 years, since age 12, studied computer science at university, and built my career in the software industry, yet it still took me weeks to digest this shift and understand its implications.
The core difference is that instead of communicating detailed instructions, we communicate meaning, and the machine parses it into instructions based on some representation of that meaning. We can even say: "Print an owl sitting on an elephant under the moon."
The LLM recognizes the patterns associated with the concepts of owl, moon, and elephant, and develops a plan to print something that exhibits similar patterns.
How does it do this magic? I will discuss this in detail in another post. Let’s focus on the consequences first.
Revolutionizing Human-Computer Communication
This revolutionizes how we provide instructions (or "programming") to machines. As I like to say, "Prompting is the new programming language." It means we must entirely rethink how we design software applications.
We can build systems that recognize the patterns associated with the criteria of our desired outcome and break it down into instructions that deliver results, rather than us (the programmers) having to do the detailed breakdown ourselves. We no longer need to write hundreds of lines of code with perfect syntax, ensure each instruction is perfectly sequenced, or comb through log file after log file to understand issues, because the AI will do all of that for us if we can prompt it correctly.
If this works for programmers, we can make it work for everybody.
Real-World Implications
Pattern recognition solves a very significant problem: the difficulty of communicating with computers. You've experienced this when trying to update Excel spreadsheets or presentations with last-minute changes. It's challenging because we must provide detailed, specific instructions to achieve our desired outcome.
Imagine instead saying: "Apply a 15% discount on all digital services, update the sheet and presentation, and send it to me for review before forwarding it to David." This is now possible, which explains the billions of dollars being invested globally in generative AI development and infrastructure.
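Under the hood, one common way to wire this up is to have the LLM translate the request into structured actions that ordinary code then executes, an approach often called function or tool calling. Here is a hedged sketch; every action name and field below is invented for illustration:

import json

# A hypothetical plan an LLM might produce for the request above.
plan = json.loads("""
[
  {"action": "apply_discount", "scope": "digital_services", "percent": 15},
  {"action": "update_document", "target": "sheet"},
  {"action": "update_document", "target": "presentation"},
  {"action": "send_for_review", "reviewer": "me", "then_forward_to": "David"}
]
""")

for step in plan:
    # Each step would be dispatched to ordinary, hand-written code.
    print("Would execute:", step)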
Now that machines can interpret the meaning of our words rather than merely execute literal statements, we don't need to be so strict about syntax. Our interactions become more fluent and natural as a result.
The Future of Human-Computer Interaction
Not everybody wants to drive their car—I'd much rather say, "Take me there, but avoid the highway." Or "Update the margins and staffing data with the new discount applied." Or even "Gloria downloaded a client document yesterday with their address. Display it on the map and send an SMS to Gloria who's onsite and can't find the entrance."
Modern AI can handle these requests, but what's missing is the orchestration between services and seamless integration between LLMs and existing systems. This creates tremendous opportunities for software engineers armed with this new superpower—building bridges between natural language understanding and practical application systems.
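As a sketch of what such a bridge might look like, here is a toy orchestrator in Python. Everything in it is invented for illustration; a real system would replace the print statements with calls to actual mapping, messaging, and document services:

# Hypothetical service stubs; a real system would call actual APIs here.
def navigate(destination, avoid=None):
    print(f"Routing to {destination}, avoiding {avoid}")

def send_sms(recipient, message):
    print(f"Texting {recipient}: {message}")

# The orchestrator maps an intent (which an LLM would extract from the
# user's natural-language request) onto the service that can fulfil it.
HANDLERS = {"navigate": navigate, "send_sms": send_sms}

def orchestrate(intent, **params):
    HANDLERS[intent](**params)

orchestrate("navigate", destination="client site", avoid="the highway")
orchestrate("send_sms", recipient="Gloria", message="The entrance is on the north side.")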
The future belongs to those who can harness this technology to create more intuitive, responsive computing experiences for everyone.