10 Software Engineering Laws (Gone Wrong)
 
I love this list by Signs of Triviality, so I thought I would ask Perplexity for a few examples of each.
1. Conway’s Law
“You will ship your org chart.”
Spotify Example: Squads, Tribes, and Microservices
Spotify intentionally designed its organization into Squads, Tribes, Chapters, and Guilds – small, cross-functional, semi-autonomous teams responsible for specific features or functions of the product. This team structure mirrors the company’s modular, microservice-based architecture, where each service is independently developed, deployed, and maintained by the squad that owns it.
2. Brooks’s Law
“Adding manpower to a late software project makes it later.”
The Mythical Man-Month
One of the best-documented examples comes directly from Fred Brooks’s own experience managing IBM’s OS/360 project in the 1960s. As the project began to run behind schedule, management added more programmers to speed up progress. Instead, the project suffered further delays due to the ramp-up time required to train new developers and the exponential growth in communication overhead. This experience led Brooks to formulate his law in The Mythical Man-Month.
3. Zawinski’s Law
“Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.”
A clear modern demonstration of Zawinski’s Law—the tendency of programs to accrete features far beyond their original purpose—comes from Slack.
Example: Slack’s Expansion
Slack originally launched as a simple team chat app designed for fast internal communication. Over time, it evolved into a complex work hub by integrating file sharing, video calls, automation workflows, and a marketplace of third-party apps. This expansion gained users but also introduced feature bloat, with critics noting that Slack became less focused and more resource-intensive—an embodiment of Zawinski’s Law at work. The company expanded communication beyond chat, effectively trying to “read mail” in the metaphorical sense.
4. Parkinson’s Law
“Work expands so as to fill the time available for its completion.”
Example: API Feature Expansion in a Development Sprint
A developer recounted a firsthand experience at a software company where a simple API integration, initially estimated at two to three weeks, extended to nearly four months. Instead of ending when the functionality was complete, engineers spent weeks debating naming conventions, refactoring for “future-proofing,” and adding unnecessary edge-case logic. The task itself didn’t become more complex—extra time simply allowed extra work to emerge. This expansion perfectly reflects Parkinson’s Law in practice.
5. Pareto’s Fallacy
“When you’re 80% done, you think you only have 20% left.”
A strong real-world demonstration of Pareto’s Fallacy—the misapplication of the Pareto Principle (80/20 rule) that leads to oversimplified decisions and technical debt—comes from common patterns in software feature prioritization and performance optimization.
Example: Performance Optimization Fallacy
Developers often cite the Pareto Principle—“80% of performance issues come from 20% of the code”—as justification to delay performance tuning until late in the development cycle. This logic can backfire because performance bottlenecks are typically distributed across multiple small inefficiencies rather than concentrated in one area. The fallacy lies in believing optimization will be easy and surgical later on. In practice, teams find that the “vital 20%” of slow code is scattered throughout the system, as shown by TechnoSophos’ case study on premature optimization.
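If the “vital 20%” really is scattered, the only way to know is to measure rather than assume. A minimal profiling sketch using Python’s built-in cProfile, with a toy pipeline and hypothetical function names standing in for the many small inefficiencies described above:

```python
import cProfile
import io
import pstats

# Three small, hypothetical inefficiencies scattered across a toy pipeline --
# none is a single dramatic hot spot, which is exactly the situation that
# defeats "we'll optimize the vital 20% later" reasoning.
def parse_row(row):
    return [int(x) for x in row.split(",")]

def validate(values):
    return all(v >= 0 for v in values)

def summarize(values):
    return sum(values) / len(values)

def pipeline(rows):
    out = []
    for row in rows:
        values = parse_row(row)
        if validate(values):
            out.append(summarize(values))
    return out

rows = ["1,2,3,4"] * 20_000

# Profile the whole pipeline and rank functions by their own (tottime) cost.
profiler = cProfile.Profile()
profiler.runcall(pipeline, rows)

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("tottime").print_stats(10)
print(stream.getvalue())
```

The ranked output typically shows the cost spread across several helpers rather than concentrated in one, which is the profiler-backed version of the point above.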
6. Sturgeon’s Revelation
“90% of everything is crud.”
Sturgeon’s Revelation—the assertion that “ninety percent of everything is crud”—applies widely in software engineering, especially to open source libraries, application stores, and enterprise software solutions.
Example: Enterprise Software Selection
Enterprise IT professionals often encounter the same scenario when evaluating payroll, HR, or CRM solutions—a glut of low-quality (often overpromised) products, with just a few standout solutions meriting actual investment and adoption. As Paul Saunders notes in his review of aviation IT systems, while most payroll software is “rubbish,” the few successful platforms become industry standards precisely because the bar is so low elsewhere. Sturgeon’s Revelation, warped by some as an excuse for mediocrity, is better seen as a call to focus effort on finding or building software that rises into the valuable 10%.
7. The Peter Principle
“In a hierarchy, every employee tends to rise to their level of incompetence. Thus, in time, every post tends to be occupied by an employee who is incompetent to carry out its duties.”
A strong example of The Peter Principle—the idea that people in hierarchical organizations are promoted until they reach their level of incompetence—appears frequently in software engineering and tech leadership.
Example: Developer Promoted to Manager
A well-documented case involves a software developer, “Jane,” who excelled technically and consistently delivered high-quality software. Because of her success, she was promoted to a management position, where her new responsibilities included people management, strategic planning, and performance reviews. However, Jane’s technical expertise didn’t translate into leadership ability: she struggled with delegation, team motivation, and stakeholder communication. The result was reduced productivity and morale across the team—a textbook illustration of the Peter Principle in action.
8. Eagleson’s Law
“Any code of your own that you haven’t looked at for six or more months might as well have been written by someone else.”
Eagleson’s Law is commonly experienced when returning to legacy code in long-running projects.
Example: Production Bug Fix
A firsthand story comes from developers handling a critical bug six months after a feature shipped. When they tried to debug the issue, they found that the variables, structures, and logic paths made little immediate sense—even though they had implemented the feature themselves. This led to time-intensive rediscovery of the rationale behind design choices, resembling working with someone else’s code entirely.
9. Greenspun’s 10th Rule of Programming
“Any custom developed system contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of the industry standard you refused to adopt.”
A clear, modern demonstration of Greenspun’s 10th Rule of Programming—“Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp”—can be seen in the evolution of major C-based software systems like GCC and Sendmail.
Example: Sendmail Configuration Language
Sendmail, written in C, is an archetypal example. In the 1990s, engineers extended it with an internal domain-specific configuration syntax to manage mail routing rules. Over time, this configuration layer transformed into an interpreter-like subsystem—with expressions, variables, and control flow—essentially becoming a miniature programming language embedded within the server. It mirrors Lisp functionality (symbolic evaluation and dynamic behavior) but remains bug-prone and hard to read, illustrating the “ad hoc, slow Lisp” Greenspun described.
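The pattern is easy to reproduce. Below is a deliberately crude sketch, in Python rather than C, of how a “config file” quietly grows variables, conditionals, and a default rule until it is a half-specified interpreter. The syntax is invented for illustration; it is not actual sendmail.cf syntax:

```python
# A hypothetical routing "config" that has accreted variables ("set"),
# conditionals ("if ... then"), and a fall-through default -- i.e., an
# ad hoc, informally specified mini-language of the kind Greenspun's
# rule warns about. (Invented syntax, not real sendmail.cf.)
CONFIG = """
set relay = smtp.example.com
if domain == example.org then route local
if domain == example.com then route $relay
route reject
"""

def interpret(config, domain):
    variables = {}
    for line in config.strip().splitlines():
        parts = line.split()
        if parts[0] == "set":          # variable assignment: set NAME = VALUE
            variables[parts[1]] = parts[3]
        elif parts[0] == "if":         # conditional: if domain == X then route Y
            if domain == parts[3]:
                target = parts[6]
                if target.startswith("$"):          # $NAME substitution
                    return variables.get(target[1:], target)
                return target
        elif parts[0] == "route":      # default rule
            return parts[1]

print(interpret(CONFIG, "example.com"))  # -> smtp.example.com
```

Every branch here is undocumented behavior waiting to happen: there is no grammar, no error handling, and no spec beyond the code itself—which is precisely the point of the rule.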
10. The Iceberg Fallacy
“The cost of developing a new software product is only ~25% of the total cost of ownership, and it is the only part management sees and budgets for.”
A strong, practical example of the Iceberg Fallacy in software engineering is the common underestimation of hidden complexity and maintenance costs beneath a project’s “surface-level” features.
Example: Bug Fixing and Root Causes
When faced with recurring bugs or crashes, the temptation is to patch the visible symptoms (events at the tip of the iceberg) rather than investigating contributing patterns, underlying system structures, or flawed mental models (e.g., rushed deadlines, lack of automated testing) that generate those bugs. For instance, teams may repeatedly fix front-end errors caused by data inconsistencies, instead of addressing the root cause—an ill-defined API contract below the surface.
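One below-the-surface fix is to make the contract explicit at the boundary, so inconsistent data is rejected once instead of crashing the front end repeatedly. A minimal sketch with hypothetical field names, hand-rolled rather than using a schema library:

```python
# A hypothetical API contract made explicit: required fields and their types.
# Validating at the boundary addresses the root cause (an ill-defined
# contract) instead of patching each front-end symptom it produces.
REQUIRED = {"user_id": int, "email": str, "credits": int}

def validate_payload(payload):
    """Return a list of contract violations; an empty list means valid."""
    errors = []
    for field, expected in REQUIRED.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected):
            errors.append(f"{field}: expected {expected.__name__}, "
                          f"got {type(payload[field]).__name__}")
    return errors

# Flags both the type mismatch and the missing field in one pass.
print(validate_payload({"user_id": "42", "email": "a@b.c"}))
```

In a real system this role is usually played by a schema (e.g., JSON Schema or an OpenAPI spec), but the principle is the same: the contract lives below the waterline, where the recurring bugs originate.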
Bonus: The LGTM Dilemma
“If you want to quickly ship a 10 line code change, hide it in a 1500 line pull request.”
A clear example of the LGTM Dilemma—the tension between speed and thoroughness in code review—emerges in modern software development workflows where reviewers approve pull requests with a quick “LGTM” (“Looks Good To Me”) instead of performing a detailed inspection.
Example: Shallow Code Reviews on GitHub
Many open-source GitHub projects experience this issue. When teams face pressure to merge quickly, reviewers often respond with “LGTM” after only briefly scanning a diff, skipping detailed analysis of potential bugs, design regressions, or security flaws. In one notable case documented by Unosquare, poorly reviewed code labeled “LGTM” later introduced production issues that took several days to debug—costing both time and customer trust.
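One pragmatic countermeasure is to make review depth an explicit function of diff size rather than reviewer patience. A toy policy sketch, where the thresholds are illustrative assumptions and not any industry standard:

```python
# Toy review-gate: a large diff is exactly where a risky 10-line change can
# hide, so the bigger the pull request, the heavier the required review.
# Thresholds are illustrative assumptions, not a standard.
def review_tier(lines_changed: int, files_changed: int) -> str:
    if lines_changed > 1000 or files_changed > 30:
        return "split the PR; require two reviewers"
    if lines_changed > 200:
        return "full line-by-line review"
    return "standard review"

print(review_tier(1500, 12))   # the "10 lines hidden in 1500" case
print(review_tier(10, 1))      # the change that genuinely merits a quick LGTM
```

A rule like this can be enforced in CI, which removes the social pressure to rubber-stamp: the 1500-line request gets split or scrutinized, and the honest 10-line fix still merges fast.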