Martin Fowler, a titan of software engineering, recently shared his reflections on Large Language Models (LLMs) and software development. His article (available at https://martinfowler.com/articles/202508-ai-thoughts.html) stands out for its humility and its clear-eyed view of reality.
Below are some of Fowler's core insights, particularly those where he frankly admits "I don't know." These might be more insightful than any definitive answers.
1. Existing AI Productivity Surveys Might Be Wrong
Fowler points out that most surveys on AI efficiency gains conflate different ways of using LLMs. The vast majority of people are still stuck at "advanced autocomplete," while true experts are already using agentic workflows that can directly read and write files. Survey data that doesn't distinguish between workflows might mislead us.
2. The Future of Programming? "I Have No Idea"
Regarding questions like whether junior engineers will be made obsolete or if senior engineers should change careers, Fowler frankly answers: "I have no idea." He believes that anyone claiming to know what the future holds is talking nonsense. His advice: Ignore predictions, experiment firsthand, pay attention to the details of others' workflows, and share your experiences.
3. AI Bubble? "Of Course It's a Bubble, But So What?"
Fowler is 100% certain that AI is a huge economic bubble, and it will inevitably burst. But that's not what's important. What matters is that we don't know when it will burst, how big it will inflate before it does, and how much real value it will create in the process.
4. Hallucinations Are Not Bugs, But Features
He quotes Rebecca Parsons' view: The essence of LLMs is to hallucinate, and we just happen to find some of these hallucinations useful. Practical advice: Never ask just once. Ask multiple times, or even let the LLM compare the differences between its own answers. For numerical calculations, ask at least three times.
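The "never ask just once" advice can be sketched as a simple consensus loop. This is only an illustration, not Fowler's code: `ask_llm` is a hypothetical stand-in for whatever LLM client you actually use, and the majority-vote logic is the point.

```python
from collections import Counter

def ask_llm(prompt):
    """Hypothetical stand-in for a real LLM client call.

    In practice this would hit an API; it exists here only to show
    the expected shape (prompt in, answer string out).
    """
    raise NotImplementedError

def ask_with_consensus(prompt, ask=ask_llm, n=3):
    """Ask the same question n times and return the most common answer.

    Fowler's advice suggests at least three attempts for numerical
    questions; disagreement among the answers is itself a warning sign,
    so the agreement ratio is returned alongside the winning answer.
    """
    answers = [ask(prompt) for _ in range(n)]
    most_common, count = Counter(answers).most_common(1)[0]
    agreement = count / n  # 1.0 means every answer matched
    return most_common, agreement
```

Swapping in a real client is a one-line change to the `ask` argument; a low agreement ratio is the signal to ask again or compare the divergent answers, as Fowler suggests.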
5. Software Engineering Is Moving Away From "Certainty"
The unique aspect of traditional software engineering is our interaction with deterministic machines. The emergence of LLMs may signal that we are finally, like other engineering disciplines (e.g., structural engineering, process engineering), beginning to embrace and manage a world full of "uncertainty."
6. LLMs Are "Unreliable Junior Engineers"
We can liken an LLM to a junior colleague, but one who confidently tells you "all tests passed," only for everything to fall apart in actual execution. A human colleague who behaved this way would likely have been reprimanded by HR long ago.
Discussion Point: Among Fowler's insights, which one resonates with you the most, or made you feel "clear-headed"?