No AI Silver Bullet
As early as 1986, Fred Brooks considered AI a potential silver bullet that could increase software development productivity by an order of magnitude. At that time, however, AI did not make it onto his shortlist of recommendations. Fast forward to 2025: Can our current LLM technology provide such a powerful tool? In this article, we will explore this question based on the seminal paper "No Silver Bullet" (NSB) and offer surprising, timeless insights.
Before we begin, a few questions arise. What does the term "silver bullet" mean? What was that AI thing back then, and why was it disregarded as a cure? How does all this relate to today's discussion about LLMs?
Get ready; it's going to be interesting!
NSB and its environment
According to the author, a silver bullet would be the long-hoped-for means of stopping out-of-control software projects. By "out-of-control," he means over budget, behind schedule, and producing a subpar product that is buggy or frustrating for users. Such software projects are prevalent everywhere, especially in large business settings where terms like "enterprise," "corporate," and "industrial" appear in conjunction with "software."
The central argument of NSB is that
there is no single development, in either technology or management technique, which by itself promises even one order of magnitude improvement in productivity, in reliability, in simplicity.
And further,
"NSB" argues, indisputably, that if the accidental part of the work is less then 9/10 of the total, shrinking it to zero (which would take magic) will not give an order of magnitude productivity improvement. One must attack the essence."
Before we delve into the essential-accidental dichotomy, let's examine Brooks's work environment and hypothesis.
He managed the development of IBM's System/360 family of mainframe computers, so he had a strong background in computer architecture and software engineering. Consequently, when he writes about a "software task," it is probably more of a project than a task. It certainly has a nontrivial size and complexity, of the kind usually found in larger business settings. Such projects are typically subject to sponsored budgets, strict schedules, and close management oversight. In other words, the findings of NSB, as well as this analysis of today's LLM setup, are less relevant to small, one-person hobbyist or fun projects. However, they should be of even more interest to anyone investing money on a larger scale, especially since many venture capitalists now believe that AI/LLM developments will easily achieve the tenfold improvements investigated in NSB, and even surpass them, across all industries.
Essential and accidental
Whatever analogy one prefers, understanding the difference between the essential and the accidental parts of a software task is key. It's no secret that conceiving and designing abstract programs is something different from implementing them.
Returning to the central argument of NSB: since the essence does not lie in the accidental parts, to achieve an order-of-magnitude improvement, the programmer must tackle the essential parts, which Fred Brooks describes as "those concerned with fashioning abstract conceptual structures of great complexity."
Fred Brooks details this nicely in his 1995 follow-up essay, "'No Silver Bullet' Refired":
The part of software building I called essence is the mental crafting of the conceptual construct; the part I called accident is its implementation process.
So, "accidental" has nothing to do with "occurring by chance," but rather finds its roots in the ancient usage of the Greek term by Aristotle. It refers to a property that a thing has; without it, the thing would not lose its essence.
Now, let's see what Brooks found out.
Dismissed silver bullets
In short, he suggests four measures to address the essence of software engineering and dismisses nine developments that offered hope for a silver bullet.
Regarding the nine ideas, he questions whether they truly address the essence or whether they remain limited to attacking accidental difficulties.
- Ada and other high-level language advances
- Object-oriented programming
- Artificial intelligence
- Expert systems
- "Automatic" programming
- Graphical programming
- Program verification
- Environments and tools
- Workstations
After almost 40 years, each of them deserves its own reflection, but regarding the promises of AI/LLM and the central question of this article, let's take a deeper look at artificial intelligence, expert systems, and "automatic" programming.
The combination of these three terms resembles the notion of current systems, such as LLM chatbots or agents: any AI/LLM-driven system that enables magic programming by machines.
Clearly, the technology of 1986 has nothing to do with our current AI/LLM technology. However, this discussion is not about technology but about engineering craftsmanship. Brooks wisely predicted the same effect that we now see with our current technology.
Expert systems are part of artificial intelligence, which had its heyday in the '80s and '90s.
It is astonishing how much Brooks anticipated about their potential. Even though he was writing about a completely different and, from our perspective, inferior technology, current LLM chatbots, or so-called "agents," can be viewed as their own breed of "expert system."
Here is what Brooks wrote; note how closely it echoes what is currently discussed as "vibe coding":
Edward Feigenbaum says that the power of such systems does not come from ever-fancier inference mechanisms, but rather from ever-richer knowledge bases that reflect the real world more accurately. I believe the most important advance offered by the technology is the separation of the application complexity from the program itself. How can this be applied to the software task? In many ways: suggesting interface rules, advising on testing strategies, remembering bug-type frequencies, offering optimization hints, etc.
Brooks acknowledges that such an expert tool to help inexperienced programmers would be important, but (and this is my interpretation) it would likely lead to the same solution as the anticipated "automatic programming."
Overall,
[Parnas] argues, in essence, that in most cases it is the solution method, not the problem, whose specification has to be given.
In other words, any of the AI tools considered in NSB would target the accidental rather than the essential parts of the software task. He mentions some exceptions ("some systems for integrating differential equations have also permitted direct specification of the problem"), but concludes that generalization is very difficult.
NSB Candidates in 1986 and AI in 2025
What has changed?
AI/LLMs are probably much better than the expert systems of the '80s in many, but not all, aspects and could help with many accidental tasks.
However, they still struggle with the conceptual essence in a similar way. They cannot build complex conceptual structures unless we pre-chew the solution template or provide tight-leash guidance, and both of those fall on the accidental side.
A major reason this is so hard to achieve is hidden in the analysis of the four approaches Brooks suggests as the most promising candidates to address the essential software components.
These are:
- Reuse off-the-shelf libraries
- Rapid prototyping for iterative requirements engineering
- Organic, incremental growth of software
- Rely on great conceptual designers
When we take a high-level, broad view of those suggestions, they reveal where the real problems with solving the software essentials lie:
- The best techniques and designs originate from a few exceptional minds. Given their value, they have been packaged into libraries for reuse.
- Identifying the essentials is not done once upfront, but rather in many smaller, incremental steps following a timeline. It may also involve social exchanges with input from others.
This is interesting, and in retrospect, it also mirrors my own experience. Is this true for you too?
The development of successful software is also a continuous social challenge. You need to communicate with your users, clients, or requesters repeatedly. You have to look beyond human idiosyncrasies and be able to see things from your customers' perspective and understand their domain.
Without a strict, complete, and possibly formalized requirements specification laid out in advance, it simply becomes ongoing work. We now use the Agile methodology because the Waterfall method does not work well for most real-world software development projects. For domains that rely on upfront specifications and the Waterfall method, such as life-dependent or critical infrastructure technologies, I suspect that current LLMs are even further from the solution due to other issues. This is a topic for a different article.
Having condensed the preconditions for successfully handling the essential parts (ingenuity and incremental improvement), we can compare them with the properties of AI/LLMs.
Is AI/LLM an exceptional designer?
No. It has been trained on vast amounts of knowledge and examples of varying quality. It can be compared to swarm intelligence, maneuvering through the spaghetti code and valid idioms present in its training data. It can also inter- and extrapolate, blur information, and distill a solution that is mostly good enough and plausible. This is not how experts or great designers work. They have a clear model in their minds, shaped by deep knowledge, discipline, experience, and human intelligence, accounting for and distinguishing all aspects of a problem. The creator's model may first be a malleable prototype, but it is shaped along distinct dimensions of consideration that only great designers know how to follow. Copycats will lose track of one dimension or another, and their products will always be subpar. Greatness is knowing when to relax on one track and not the other without compromising the overall goal. Probabilistic machines cannot reproduce this by merely examining the results of other works and assigning weights to tokens, even if this procedure is broken down into smaller pieces.
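As a minimal sketch of what "assigning weights to tokens" means mechanically (a toy illustration of my own, not how any production LLM is implemented), generation boils down to repeatedly sampling the next token from a weighted distribution:

```python
import random

# Toy illustration: an LLM's core generation step reduced to its bare bones.
# The "model" here is just a hypothetical, hand-written table of token weights;
# a real model computes such a distribution from billions of parameters.
def sample_next_token(weights: dict) -> str:
    """Pick one token at random, proportionally to its weight."""
    tokens = list(weights)
    return random.choices(tokens, weights=[weights[t] for t in tokens], k=1)[0]

# Hypothetical distribution for the continuation of "The design should be ..."
next_token_weights = {"simple": 0.42, "scalable": 0.31, "done": 0.15, "beautiful": 0.12}
print(sample_next_token(next_token_weights))
```

Plausible continuations are rewarded, but nothing in this loop holds, checks, or refines a conceptual model of the whole design.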
Can AI/LLM resolve all essentials upfront? Or can it guide us through the process?
Clearly not for the first part. The whole picture is not known to humans, who are faced with both known and unknown unknowns. It would be pure guesswork for probabilistic machines to hit the right track; it's no more plausible than playing the lottery.
The second part is where the great craftsman's skill comes into play. In other words, it is the interface with the knowledgeable designer above. You need a worldview and a skill set, as well as an initial master model that will be refined and adapted step by step. Any software of a reasonable size from the domains we identified above (the NSB environment) has too many degrees of freedom for any machine to hold, sort, prune, and remix with full context on any single axis, let alone on the superposition of all of them. The models and their components, spun off from a designer's mind, are not all text-based. They can be of any dimension and sometimes cannot be explained to others unless they are conceived with a certain quality. It's not only about measurable properties. Going back to ancient philosophy, a good designer works towards the good, the true, and the beautiful. No machine understands any of these goals.
Conclusion
The hard part of building software, the essence of a software entity, is still what leads software projects astray, blows schedules and budgets, and results in frustrating products.
There was no silver bullet to solve these problems in 1986, nor in 1995 when Brooks published his follow-up review. However, the measures Brooks provided are still sound and reasonable enough to compare today's AI against.
AI/LLMs still cannot handle the essence of the "software task" as well as, or better than, mediocre programmers can with their tools of the trade today. AI/LLMs may help with the accidental aspects of software development, but trusting Brooks's analysis and today's observations, this help is not enough to improve a serious, big project by one order of magnitude. I expect that AI/LLM will remain too specific, settling into one more technology that is useful for some things but is no silver bullet for the rest.
We have improved significantly since 1986, likely by an order of magnitude. However, AI does not deserve credit for these improvements. Besides the obvious gains, there are drawbacks and regressions in software development, most evident in today's user interfaces and bloated products. These are mostly experienced as poor performance, which is sometimes compensated for by advances in hardware.
Let's meet again in a few years and see if this has changed. My current prediction is that it won't have, but that Brooks's proposals (reuse, incremental progress, and human ingenuity) will still be among the top drivers of better technology. Not only for software.
And let's be honest: I think this is a good outlook for us, both as individuals and for humanity as a whole.
References
- Brooks, Frederick P., Jr., The Mythical Man-Month: Essays on Software Engineering (1995)
- https://en.wikipedia.org/wiki/Fred_Brooks