Imagine sitting in front of an old Intel 8086 microprocessor, trying to make it multiply two numbers, say 5 and 3. You couldn't simply say "multiply 5 and 3". You had to guide it step by step, like teaching a child every motion needed to tie a shoelace. You'd write something like this:

MOV AL, 05h ; Load first number into register AL
MOV BL, 03h ; Load second number into register BL
MUL BL ; Multiply AL by BL (result stored in AX)

And if you wanted to print or use that result, more instructions followed — moving data between registers, handling carry flags, and managing memory addresses manually.

Every tiny operation had to be explicitly defined:

Load → Multiply → Store → Display.

Computers didn’t understand what you wanted — only how to do it.

The Rise of High-Level Languages — From Machine Steps to Logical Thinking

Then came languages like C, Pascal, and Fortran – our first real escape from the tyranny of registers and memory addresses. Instead of moving bytes around manually, we began to describe the logic behind what we wanted the machine to do.

Take the simple task of generating the first 10 numbers of the Fibonacci series.
In C, it might look like this:

#include <stdio.h>

int main() {
    int n1 = 0, n2 = 1, n3, i;
    printf("%d %d ", n1, n2);      /* print the first two terms */
    for (i = 2; i < 10; ++i) {
        n3 = n1 + n2;              /* each term is the sum of the previous two */
        printf("%d ", n3);
        n1 = n2;
        n2 = n3;
    }
    return 0;
}

Here we are no longer commanding the processor how to move values between registers or how to perform addition.

Instead, we’re outlining an algorithm: a step-by-step logical plan for generating a sequence. We still had to describe the how in a structured way, through loops, variables, and arithmetic operations, but we had ascended a level of abstraction.

We were now talking to the compiler, not the processor.

For the first time, humans began to express intent in logic, rather than in pure hardware instructions.

Yet, even with this abstraction, early high-level programming still required us to manage compute resources manually — allocating and freeing memory, handling files, managing sockets, and optimizing CPU cycles.
We had gained expressive power, but the burden of how things worked under the hood still largely remained on our shoulders.

From Algorithms to Intent — The Era of Scripting and Declarative Programming

As computing evolved, so did the languages and tools we used. Scripting languages like Python, Ruby, and JavaScript allowed developers to write code faster, with less boilerplate, and with powerful built-in data structures.
At the same time, declarative programming paradigms emerged — think SQL, HTML/CSS, or configuration-as-code frameworks — where we describe what we want done rather than the step-by-step procedure to do it.
Libraries and frameworks added another layer of abstraction. Instead of writing your own sorting, searching, or networking routines, you could now import pre-coded algorithms and utilities. Want to sort a list? Call a function. Want to query a database? Write a declarative statement. The underlying how — memory management, loops, network protocols — was hidden from you.
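To make this concrete, here is a minimal Python sketch. It assumes a local SQLite file named sales.db with a hypothetical orders table, and relies on a built-in sort and a declarative SQL query instead of hand-written routines:

import sqlite3

# Sorting: one call to a built-in routine; no loops, swaps, or memory management.
revenues = [1200, 450, 980, 300]
print(sorted(revenues, reverse=True))

# Querying: a declarative statement describes what result we want;
# the database engine decides how to fetch it.
conn = sqlite3.connect("sales.db")  # hypothetical database file
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region"
).fetchall()  # assumes an orders(region, amount) table exists
print(rows)
conn.close()

In both cases the underlying mechanics (comparison sorts, query planning, disk I/O) are delegated to the runtime and the database engine.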
In effect, these innovations freed humans from the minutiae of compute mechanics and let us focus on intent and logic at a higher level.
From assembly to C to scripting and declarative languages, every evolution has shifted us further away from how the machine works and closer to what we want the machine to do.

Early Attempts at “Natural Language” Programming — The COBOL Experiment

In the late 1950s and 1960s, COBOL (Common Business-Oriented Language) was developed with a bold vision: what if humans could program computers in plain English?
Instead of writing cryptic machine instructions or rigid algorithms, early computer scientists imagined a world where you could tell a computer:
“Compute the total sales for each department this month.”
COBOL’s syntax was deliberately verbose and English-like:
ADD SALES-DECEMBER TO TOTAL-SALES GIVING GRAND-TOTAL.
DISPLAY GRAND-TOTAL.
At first glance, it looks almost like an English sentence.
The hope was to let business experts and non-programmers express intent directly, bridging the gap between human thinking and machine execution.
However, COBOL and similar “English-like” languages had limitations:
  • Computers still needed exact instructions, and plain English is inherently ambiguous.
  • Compilers and interpreters couldn’t truly understand intent; they could only parse very rigidly structured sentences.
  • Any deviation from the expected phrasing would result in syntax errors, negating the promise of true natural-language programming.
The experiment highlighted an important insight: intent and human language are far more flexible than the compilers of that era could handle.
It was an early glimpse into the dream of “what-based” computing — a dream that modern AI is finally making feasible.

The Human Translator — From “What” to “How”

Even as programming languages evolved, there remained a critical role for human intermediaries: the IT or software engineer. Their job was — and still is — to take the intent of domain experts and translate it into executable instructions that computers can understand.
Take a real-world example: a doctor wants to see the trend of a patient’s blood pressure over the last six months in a chart.
The doctor’s “what” is clear:
“I want a line chart showing systolic and diastolic blood pressure over time, highlighting abnormal readings.”
The IT engineer’s job is to translate this into the “how”:
  • Data extraction: Identify where the patient’s blood pressure readings are stored — databases, spreadsheets, or EMR systems.
  • Data transformation: Aggregate the readings by date, handle missing values, and normalize units if necessary.
  • Algorithmic representation: Choose how to plot trends, calculate averages, and highlight anomalies.
  • Execution: Write code using libraries like Python’s Matplotlib, D3.js, or dashboard frameworks to generate the chart.
  • Presentation: Ensure the chart is interactive, readable, and accessible to the doctor.
From the doctor’s perspective, they only care about the “what” — the insight from the chart.
The IT engineer converts this into the “how”, bridging the gap between human intent and machine execution.
This process illustrates that even as programming got higher-level, humans were still needed to map intent to algorithms, memory, and compute logic — a task that AI is now starting to assist with, dramatically reducing this translation burden.
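As a rough illustration, here is a minimal Python sketch of the engineer's pipeline above, assuming the readings have already been exported to a hypothetical bp_readings.csv with date, systolic, and diastolic columns:

import pandas as pd
import matplotlib.pyplot as plt

# Data extraction: load exported readings (file name and columns are assumptions).
df = pd.read_csv("bp_readings.csv", parse_dates=["date"])

# Data transformation: keep the last six months and average same-day duplicates.
df = df[df["date"] >= df["date"].max() - pd.DateOffset(months=6)]
daily = df.groupby("date", as_index=False)[["systolic", "diastolic"]].mean()

# Algorithmic representation: flag readings outside commonly cited thresholds.
abnormal = daily[(daily["systolic"] >= 140) | (daily["diastolic"] >= 90)]

# Execution and presentation: plot the trends and mark abnormal days on the systolic series.
plt.plot(daily["date"], daily["systolic"], label="Systolic")
plt.plot(daily["date"], daily["diastolic"], label="Diastolic")
plt.scatter(abnormal["date"], abnormal["systolic"], color="red", label="Abnormal")
plt.xlabel("Date")
plt.ylabel("Blood pressure (mmHg)")
plt.legend()
plt.title("Blood pressure trend, last six months")
plt.show()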

AI — Eliminating the “How”

Artificial Intelligence is now starting to take over the translator role that IT engineers historically performed. By understanding natural language and generating executable solutions, AI allows domain experts to directly express intent without worrying about algorithms, memory, or code.
Example 1: Natural Language to Code
A doctor could now type:
“Show me a line chart of systolic and diastolic blood pressure trends for patient X over the last six months, highlighting abnormal readings.”
AI platforms like Copilot, ChatGPT with code generation, or AI-powered analytics tools can interpret this instruction and produce ready-to-run code, data queries, or even complete dashboards. The doctor gets the chart they need without an IT engineer manually translating their request into code.
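One possible wiring of such a tool, sketched here with the OpenAI Python SDK (the model name is a placeholder and the prompt is deliberately simplified), hands the doctor's request to a model that returns the plotting code:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

request = (
    "Show me a line chart of systolic and diastolic blood pressure trends "
    "for patient X over the last six months, highlighting abnormal readings."
)

# Ask the model to act as the translator from intent ("what") to code ("how").
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name; substitute whatever is available
    messages=[
        {"role": "system", "content": "Translate the user's request into runnable Python code."},
        {"role": "user", "content": request},
    ],
)

generated_code = response.choices[0].message.content
print(generated_code)  # in a real tool this would be reviewed before being executed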
Example 2: Reducing the IT Engineer’s Role in Software
Similarly, a financial analyst could type:
“Generate a report showing quarterly revenue growth per region, with bar charts and trendlines.”
AI tools can automatically fetch data, transform it, select appropriate visualizations, and generate the report. The analyst can focus entirely on insights and decision-making, while AI handles the “how” — writing queries, generating plots, and automating workflows.
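A simplified sketch of the kind of code such a tool might generate behind the scenes, assuming a hypothetical revenue.csv with region, quarter, and revenue columns:

import pandas as pd
import matplotlib.pyplot as plt

# Load quarterly figures (the file name and column names are assumptions).
df = pd.read_csv("revenue.csv")  # columns: region, quarter, revenue

# Reshape to one column per region and compute quarter-over-quarter growth.
pivot = df.pivot(index="quarter", columns="region", values="revenue").sort_index()
growth = pivot.pct_change() * 100  # growth in percent

# Bar chart of revenue per region per quarter.
pivot.plot(kind="bar")
plt.ylabel("Revenue")
plt.title("Quarterly revenue by region")
plt.show()

# Quarter-over-quarter growth (%), the trend the analyst asked for.
print(growth.round(1))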
AI doesn’t just make existing tasks faster — it shifts the paradigm: humans now focus on what outcomes they want, while AI interprets, plans, and executes the how. This is the ultimate realization of the long journey from assembly-level “how” programming to AI-enabled “what” computing.
In essence, AI is transforming computers from obedient machines that only follow instructions into intelligent collaborators that understand intent.

From “How” to “What” — Solving Bigger Problems

Throughout computing history, the bigger the focus on “what” rather than “how,” the larger the problems humans could tackle.
  • Assembly & early programming: Humans controlled every CPU instruction, solving tiny, narrowly scoped problems like simple calculations.
  • High-level languages & libraries: Focus shifted to algorithms and workflows, enabling spreadsheets, dashboards, and business automation.
  • AI today: Increasingly handles the “how,” letting humans concentrate on grand-scale challenges.
Now we can think BIG, tackling problems like these:
  • Population health: Predict outbreaks, optimize public health strategies.
  • Disaster response: Monitor, predict, and coordinate relief in real-time.
  • Crime prevention: Spot patterns, optimize interventions, and improve urban safety.
By removing the burden of translating intent into instructions, AI lets humans tackle problems once thought impossible — turning imagination into actionable impact.
The evolution from assembly to AI shows one truth: the less we worry about how, the more we can achieve. From bytes and registers to algorithms and AI, our journey has been a climb up the abstraction ladder. AI empowers us to think bigger, act faster, and solve problems that span industries, societies, and the world itself.
The question is no longer how — it’s what will we create next?