When code no longer needs to be written by human hands, what is the value of a programmer? This is not just a technical question; it is a deep reflection on identity and the future of our profession. I invite you to follow along and ponder it with me.
Introduction: A Revolution of Identity Is Quietly Underway
What would you think if I told you that in ten years, the profession of programmer might completely disappear? Don't rush to object; I'm not saying programmers will be replaced by AI, but rather that the very concept of "programmer" will itself be redefined.
Imagine this scenario:
You're sitting in the office, no longer facing a dense code editor but an intelligent collaborative partner that understands your intentions. You describe the desired functionality in natural language, and it not only generates high-quality code but also proactively suggests architectural improvements, identifies potential risks, and even anticipates changes in user needs.
In this scenario, are you still a programmer?
This isn't science fiction; it's a reality unfolding right now. The rapid development of AI programming tools such as Claude Code, Cursor, Gemini CLI, Zed, and GitHub Copilot is profoundly changing the way software is developed. But an even more significant change is occurring at a deeper level: our understanding of "programming" itself is being subverted.
However, this is not the first time such a subversion has happened.
Looking back at the history of computer science, we find a consistent pattern: whenever tool capabilities significantly improve, the role and identity of programmers undergo fundamental transformations. Understanding this evolutionary pattern is crucial for grasping the direction of future development.
From Scientist to Worker—The Historical Evolution of the Programmer's Identity
To understand today's changes, we need to first review how the programmer profession has evolved. This story begins in the 1970s, which was the golden age for programmers.
Golden Age: The Glory of Computer Scientists
In that distant era, programmers had a resounding title: computer scientists. Donald Knuth, in his classic work "The Art of Computer Programming," defined programming as an art, a scientific practice of understanding the essence of computation, thinking about algorithmic structures, and controlling machine operations.
Programmers back then were not just "code writers" but creators of the computing world. They manually wrote assembly code, designed compilers, invented data structures, and optimized memory usage. Every line of code carried profound mathematical thought, and every algorithm could potentially change the trajectory of computer science.
Why was the status of programmers so high in that era? The answer lies in the scarcity of skills. Programming required deep mathematical foundations, thorough knowledge of computer hardware architecture, strong logical thinking abilities, and precise control over resources like memory, CPU, and storage. Acquiring these skills took years of specialized training, forming a natural technical barrier.
Imagine the work scene at that time:
Programmers had to manually design algorithms on paper, then convert them into assembly instructions, precisely calculating every memory address and meticulously optimizing every CPU cycle. This way of working required them not only to understand business needs but also to deeply comprehend the working principles of computers. They were the bridge connecting human thought with machine logic, the architects of the digital world.
First Transformation: Identity Crisis Brought by the Tool Revolution
Entering the 1990s, a technological revolution quietly emerged. Java, Python, Ruby, PHP, Visual Basic, and other high-level programming languages sprang up, and they shared several significant characteristics: stronger abstraction capabilities, lower learning barriers, and more complete ecosystems.
The impact of this revolution was far-reaching. Tasks that once required deep professional knowledge could now be easily accomplished through high-level languages and existing frameworks. A recent graduate, armed with Java or Python, could quickly start developing web applications or desktop software.
We can understand this process with a simple formula: increased tool capability leads to lower technical barriers, lower technical barriers lead to blurred skill boundaries, and blurred skill boundaries ultimately lead to a devaluation of identity.
The programmer's identity began to undergo subtle but profound changes. They transformed from "scientists who understood the essence of computation" into "engineers who used tools," from "artists who created algorithms" into "workers who assembled code," from "irreplaceable experts" into "trainable, outsourced, replicable skilled workers."
This change didn't happen suddenly; it was a gradual process. During this process, clear skill stratification emerged: system architects were responsible for overall design and technical decisions, senior developers handled core logic and complex problems, ordinary programmers completed specific feature implementations, and junior developers handled simple tasks and maintenance work.
Although the programmer community expanded, the true technical core remained in the hands of a few. Most programmers were actually "calling APIs, assembling frameworks, copying and pasting code." This phenomenon sparked intense debate at the time: Was programming losing its technical content? Were programmers being reduced to "code monkeys"?
Today's Turning Point: Fundamental Subversion in the AI Era
If high-level programming languages were an improvement to traditional programming, then AI programming tools represent a fundamental subversion. The essence of this change is not an upgrade of tools, but a redefinition of programming skills themselves.
Starting this year (2025), AI tools like Claude Code, Cursor, Gemini CLI, Zed, and GitHub Copilot are no longer just assisting with programming; they are actively taking on programming tasks. They can understand natural-language descriptions of requirements; automatically generate complete functions, classes, and modules; handle error debugging and code optimization; and even perform cross-language code translation and refactoring.
This enhancement of capabilities brings a qualitative change: AI is seizing the core skill of "writing code" from programmers. The traditional programming skill chain is "Requirement Understanding → Algorithm Design → Code Implementation → Testing and Debugging → Deployment and Maintenance," while the AI era's skill chain becomes "Requirement Definition → Intent Expression → AI Collaboration → Result Verification → System Integration."
Note this fundamental shift: "Implementation capability" is no longer the core skill; "the ability to understand and control the generation process" is key. This is like shifting from handcrafted manufacturing to industrial production; what's important is no longer the exquisite skill of manual craftsmanship, but the ability to design and control the entire production process.
Theoretical Foundation—Six Fundamental Principles of Computer Science
Facing such changes, we need a theoretical framework to guide our thinking and practice. Fortunately, computer science provides us with a set of time-tested fundamental computing principles (refer to the book "The Great Principles of Computing").
These six principles not only explain the essence of computation but also provide a solid theoretical foundation for redefining the programmer's role in the AI era.
Principle One: Communication—Reliable Information Transmission
In traditional computer science, the communication principle focuses on the reliable transmission of information between different locations, including minimal length codes, error correction, file compression, encryption, and decryption techniques. But in the AI programming era, this principle gains new meaning.
Imagine you are collaborating with a genius programmer who cannot speak and cannot see. This programmer is incredibly skilled, capable of quickly writing high-quality code, but only if you convey your intentions precisely, in a way they can understand. If your expression is vague or omits critical information, they might write code that is technically correct yet fails to meet your true needs.
This is the core challenge of AI programming today. AI models are very powerful at code generation, but their understanding of natural language still has limits. They work strictly from your literal description and will not automatically fill in common sense or infer implicit intentions the way human programmers do.
Therefore, establishing reliable communication protocols is like setting up a set of standard "translation rules" to ensure that complex requirements are transmitted losslessly to AI, and that AI's output can be correctly understood and verified. This requires us to learn a new "bilingual ability": thinking about problems in a human way while expressing them in a way machines can understand.
In practice, this means we need to transform vague business requirements into structured technical specifications, including clear intent declarations, detailed contextual information, specific constraints, and clear validation rules. This structured expression not only improves the accuracy of AI's understanding but also lays the foundation for subsequent quality control.
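To make this concrete, here is a minimal sketch of such a structured specification in Python. The field names (intent, context, constraints, validation) mirror the four elements above; they are illustrative, not the input format of any particular AI tool.

```python
# A minimal sketch of an "intent spec" for prompting an AI coding tool.
# All field names are illustrative assumptions, not any tool's real API.
from dataclasses import dataclass, field

@dataclass
class IntentSpec:
    intent: str                                            # the outcome we want, in one sentence
    context: list[str] = field(default_factory=list)       # background the AI cannot infer
    constraints: list[str] = field(default_factory=list)   # hard requirements
    validation: list[str] = field(default_factory=list)    # how we will check the result

    def to_prompt(self) -> str:
        """Render the spec as a structured prompt block."""
        sections = [
            ("Intent", [self.intent]),
            ("Context", self.context),
            ("Constraints", self.constraints),
            ("Validation", self.validation),
        ]
        lines = []
        for title, items in sections:
            lines.append(f"## {title}")
            lines.extend(f"- {item}" for item in items)
        return "\n".join(lines)

spec = IntentSpec(
    intent="Add pagination to the /orders endpoint",
    context=["The API uses FastAPI; responses are JSON"],
    constraints=["Page size capped at 100", "Must stay backward compatible"],
    validation=["Existing integration tests still pass", "Returns an error on page_size > 100"],
)
print(spec.to_prompt())
```

The point is not the class itself but the discipline it enforces: nothing the AI cannot infer is left implicit.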
Principle Two: Computation—The Boundaries of Computability
The computation principle reminds us to deeply understand the boundaries of computability. This is especially important in AI programming, because AI tools, however powerful, remain fundamentally constrained by computational complexity.
AI cannot efficiently solve NP-complete problems, cannot decide the halting problem, and cannot fully understand program semantics. When using AI tools, you need to know clearly which tasks AI can complete efficiently and which tasks require human creative thinking.
This principle teaches us to become "computational complexity evaluators." When assigning tasks to AI, you need to judge the inherent complexity of the task. For simple pattern matching and code generation, AI performs excellently; but for tasks involving deep semantic understanding and innovative design, human leadership is still required.
We can establish a problem complexity evaluation framework: linear and polynomial time problems can usually be handed over to AI for primary processing, exponential complexity problems require human-machine collaboration, and problems involving innovation and uncertainty should be led by humans. This classification not only improves efficiency but also avoids quality issues arising from using AI for inappropriate tasks.
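As an illustration, the triage rule below sketches this framework in Python. The task categories and the three delegation modes are assumptions made for the example, not a formal classifier.

```python
# A hedged sketch of a task-triage rule: mapping rough complexity classes
# to a delegation mode. Categories and modes are illustrative assumptions.
from enum import Enum

class Delegation(Enum):
    AI_FIRST = "AI drafts, human reviews"
    PAIRED = "human decomposes, AI implements pieces"
    HUMAN_LED = "human designs, AI assists at the edges"

def triage(task_kind: str) -> Delegation:
    """Map an informal task label to a delegation mode."""
    ai_friendly = {"boilerplate", "crud_endpoint", "unit_tests", "refactor_rename"}
    collaborative = {"query_optimization", "concurrency", "api_design"}
    if task_kind in ai_friendly:       # pattern-like, polynomial-time work
        return Delegation.AI_FIRST
    if task_kind in collaborative:     # combinatorial or cross-cutting work
        return Delegation.PAIRED
    return Delegation.HUMAN_LED        # novel or underspecified work

print(triage("unit_tests"))                 # Delegation.AI_FIRST
print(triage("new_product_architecture"))   # Delegation.HUMAN_LED
```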
More importantly, learn to break down complex problems into sub-problems that AI can handle. This decomposition ability itself is one of the core skills of programmers in the AI era. Just as an excellent conductor can break down a complex symphony into parts for each instrument, an excellent AI-era programmer can break down complex software requirements into modular tasks that AI can efficiently process.
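A toy example of such a decomposition: one feature request broken into subtasks small enough, and specified tightly enough, for an AI tool to implement independently. The feature and its subtasks are invented purely for illustration.

```python
# Illustrative only: decomposing one feature request into modular,
# AI-sized subtasks, each with a clear boundary and a way to verify it.
feature = "Users can export their order history as CSV"

subtasks = [
    "Define an OrderExport schema (fields, date formats)",    # data contract first
    "Write a pure function orders_to_csv(orders) -> str",     # easy to unit-test
    "Add GET /orders/export endpoint that streams the CSV",   # thin I/O layer
    "Add tests: empty history, 10k orders, unicode fields",   # validation rules
]

for i, task in enumerate(subtasks, 1):
    print(f"{i}. {task}")
```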
Principle Three: Memory—Hierarchical Information Storage
The memory principle in traditional computer science focuses on the hierarchical structure of storage systems and the principle of locality. In AI programming, this principle guides us in building efficient knowledge management systems.
AI does not have innate long-term memory the way humans do; by default, each interaction starts from a blank context. Yet high-quality programming depends on a large amount of background information: past project decisions, architectural evolution, changes in business rules, team coding standards, and so on.
We need to build a layered knowledge management system similar to a computer storage system. L1 cache stores the immediate context of the current session, L2 cache maintains project-level knowledge and decisions, and L3 storage holds long-term accumulated domain knowledge and best practices.
The value of this layered design lies in optimizing information access efficiency. Frequently used information is promoted to higher-level caches, while detailed historical information is stored in long-term storage and retrieved only when needed. At the same time, the principle of locality should be applied to ensure that related information can be accessed and processed together.
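The sketch below shows one way such a three-tier store might look in Python. The tier names echo the cache analogy, and the promotion-on-access rule implements the locality principle; the API itself is hypothetical.

```python
# A minimal sketch of the three-tier knowledge store described above.
# The L1/L2/L3 names mirror the cache analogy; the API is hypothetical.
from collections import OrderedDict
from typing import Optional

class TieredMemory:
    def __init__(self, l1_capacity: int = 8):
        self.l1_capacity = l1_capacity
        self.l1: OrderedDict = OrderedDict()   # immediate session context
        self.l2: dict = {}                     # project-level knowledge and decisions
        self.l3: dict = {}                     # long-term domain knowledge

    def remember(self, key: str, value: str, tier: str = "l3") -> None:
        getattr(self, tier)[key] = value

    def recall(self, key: str) -> Optional[str]:
        """Check the fast tier first; promote hits so hot facts stay close (locality)."""
        if key in self.l1:
            self.l1.move_to_end(key)           # mark as recently used
            return self.l1[key]
        for tier in (self.l2, self.l3):
            if key in tier:
                self._promote(key, tier[key])
                return tier[key]
        return None

    def _promote(self, key: str, value: str) -> None:
        self.l1[key] = value
        if len(self.l1) > self.l1_capacity:
            self.l1.popitem(last=False)        # evict least-recently-used context
```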
In practical applications, this means establishing a systematic documentation system for each project, recording not just what the code does, but why it does it, under what constraints, and what alternative solutions were considered at the time. This "intent-driven documentation" provides AI with critical information for understanding the project background.
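For example, a single intent-driven record might capture four things. The field names here are my assumptions about what an AI needs to see, not an established documentation standard.

```python
# A sketch of one "intent-driven" record: capturing why, not just what.
# Field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    what: str          # the decision itself
    why: str           # the reasoning and trade-offs behind it
    constraints: str   # the conditions under which it holds
    alternatives: str  # options considered and rejected at the time

record = DecisionRecord(
    what="Store sessions in Redis, not in the SQL database",
    why="Sessions are hot, short-lived, and tolerate loss on restart",
    constraints="Assumes a single Redis cluster per region",
    alternatives="SQL table (too slow); JWT-only (hard to revoke)",
)
```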
Principle Four: Collaboration—Coordination of Multiple Entities
The collaboration principle in AI programming manifests as how to establish an effective human-machine collaboration relationship. Traditional software development primarily focuses on human-to-human collaboration, but now we need to design collaboration models among humans, AI, and other tools.
This requires us to rethink task division, communication mechanisms, conflict resolution, and quality control from the ground up. In the new collaboration model, each participant is treated as an agent with specific capabilities, and their work is coordinated through explicit protocols.
The key is to establish clear interface definitions and responsibility boundaries. Each agent has clear input/output specifications, defined capability scope, and specific quality responsibilities. When problems arise, the responsible party can be quickly identified, avoiding the dilemma of "who is responsible when AI-written code has issues."
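A minimal sketch of such contracts in Python, assuming hypothetical names like CodeTask and ReviewVerdict: every participant, human or AI, exposes the same typed interface, and every task names an owner, so accountability is explicit.

```python
# A hedged sketch of explicit agent interfaces. Every participant exposes
# the same typed contract, so responsibility boundaries are checkable.
# CodeTask, ReviewVerdict, and the pipeline are illustrative, not a real framework.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class CodeTask:
    spec: str    # structured requirement (see Principle One)
    owner: str   # who answers for the result

@dataclass
class ReviewVerdict:
    approved: bool
    notes: str

class Coder(Protocol):
    """Contract: defined input, defined output, named responsibility."""
    def execute(self, task: CodeTask) -> str: ...

class Reviewer(Protocol):
    def review(self, task: CodeTask, output: str) -> ReviewVerdict: ...

def run_pipeline(task: CodeTask, coder: Coder, reviewer: Reviewer) -> str:
    output = coder.execute(task)
    verdict = reviewer.review(task, output)
    if not verdict.approved:
        # The task names an owner, so "who is responsible" is never ambiguous.
        raise RuntimeError(f"Rejected ({task.owner} to resolve): {verdict.notes}")
    return output
```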