In the previous entry I introduced the first three Project Scope Axioms. They are as follows:
It is impossible to fully estimate feature complexity in objective terms.
Because it is impossible to objectively estimate complexity fully and thus anticipate all contingencies, an estimate of project scope will, in practice, always be an underestimate.
Project scope will expand to consume available resources, time and budget.
The practical implication of these axioms (that all projects will expand to fill an underestimated scope) is as follows:
Unanticipated contingencies cannot be adequately addressed in the originally estimated time and budget.
This is Project Scope Axiom 4, and it simply tells us what we already know: that delivering a feature-complete, high-quality project on time and on budget is not possible. But all is not lost! With a shift in expectations, we can still reach our win conditions and promote healthier relationships between clients and agencies. Once we recognize the tradeoffs involved in building any software system, we can set our metrics for success accordingly. Owing to these tradeoffs, every project has three potential outcomes:
Feature complete; exceeds available resources
Feature incomplete; meets available resources
Feature incomplete; exceeds available resources
The unlisted fourth outcome, feature-complete and meets available resources, is the one I’ve just spent the last 5,000 words or so explaining isn’t possible. Of these outcomes, the first two can be considered successes, while the third is a failure. As it stands, the vast majority of projects, at least in my experience, end up in the third category: failure. This is completely avoidable, but again requires a different and in some ways scarier approach.
Here’s another way of looking at it:

                      Meets resources     Exceeds resources
Feature complete      (not possible)      success
Feature incomplete    success             failure
I believe we should focus our energies on achieving the second listed outcome: feature-incomplete; meets available resources. I want to focus on this outcome because “feature-completeness” is a nebulous concept that is extremely difficult to measure in practice, whereas we can readily measure when a project has met or exceeded available resources (usually time and/or money). This might sound crazy, since not being feature-complete surely means the project has failed, right? Nope! What we’re trying to accomplish is not to reach feature-completeness, but rather to properly match project scope to available resources. This is in contrast to what we usually do, which is to attempt to cram a predefined scope into available resources. In order to match project scope to available resources, we must adequately plan for the appropriate feature set. To make this easier, at the outset of every project we should allocate extra time toward extensive planning and scoping exercises and cut as many features as possible right off the bat. Which features should and shouldn’t be cut is a delicate conversation between the leadership teams on both sides, but the aim should be to execute at a higher level on the smallest number of features possible (the set of features that meets the minimum win conditions set forth by the stakeholders).
Now we’ve reached a critical point. When we talk about extensive scoping exercises, cutting features, etc., we’re making assumptions about the nature of the project, assumptions that may not necessarily hold. If you’ve been involved in software delivery for any appreciable length of time, you’re probably aware of the term “agile,” or more commonly, “Agile,” with a capital ‘A’. The Agile movement started when a group of pre-eminent figures in the software industry got together and wrote the Agile Manifesto in 2001. The Agile Manifesto is a set of principles by which software developers and product stakeholders might finally come together to develop software systems of the highest quality and sing Kumbaya. At this point I will not ascribe any further intentions to the creators of the Agile Manifesto and instead will focus on how agile project planning and development is practiced in the real world.
Agile is commonly seen as a response to earlier SDLC (software or systems development life cycle) models, in particular to what we now often derisively call “waterfall”: an SDLC model in which a software project is extensively planned in advance and each discipline completes its work in distinct, separate phases (requirements gathering, design, development, testing, etc.), delivering working software only toward the end and leaving little or no room for revision of features, or for collaboration with other teams, disciplines, or the client. With the advent of agile, waterfall (and, to a lesser extent, the incremental and spiral models) became a catch-all explanation for why we were all so bad at developing quality software systems with limited resources. We used to do waterfall, and that’s why we never got it right. Now we’re agile, so that means we’ll get it right (except, of course, when we don’t).
Most, if not all, software development agencies call themselves “agile development shops.” The term is ubiquitous, and as a result of its pervasiveness it has become increasingly misused. When an agency brands itself “agile,” what it’s saying is not that it believes in and attempts to hold to a set of principles that will help it deliver better products, but rather that it will fulfill whatever expectations its clients attach to the word “agile.” I know this is true because I could walk into any agency in the world right now and within a few minutes find a waterfall process model being used for an allegedly “agile” project on someone’s laptop or whiteboard.
When an agency says “we do agile development,” often what it means is that it eschews extensive up-front implementation planning in favor of drafting a loose set of “requirements” against a set of designs or documentation; fluid phases of development then take place against those requirements, with the results continually offered up to the client for feedback and changes, until enough features are developed or the money runs out. Here’s the thing, though: this is not agile. Rushing headlong into developing software before understanding what you were aiming for, and continually soliciting and incorporating client feedback throughout the design and development process, does not by itself constitute agile development.
You might be thinking this is the part where I declare that everyone is bad at agile and my way is the right way to do agile, but it’s not. In fact, I’d prefer it if we stopped using the term altogether unless it’s in reference to specific principles that we’re following. The scenario described above is not agile. At best, it is iterative. We take a stab at a design, we get feedback, we revise, we get feedback, etc. We take a stab at requirements, we get feedback, we revise, we get feedback, etc. We take a stab at implementation, we get feedback, we revise, you see the point.
Iterative development, in a strict sense, is not necessarily tied to an agile development process, and literally everyone does iterative development to some degree. A waterfall development process can accommodate iterative development just as an agile one can. The problem is that dysfunction sets in when we apply the wrong tools to our problems. Even so, a true agile process, one that closely adheres to the principles set forth in the Agile Manifesto, is simply not the right tool for every project. An agile process is more appropriate for projects with qualities such as the following:
unknown, poorly-defined, flexible, or conflicting requirements
dedicated, small, autonomous, co-located teams
close collaboration between cross-functional teams (e.g., design and development)
close collaboration between client and agency
flexible timeline
flexible budget
If even one of these attributes does not apply to a given project, a true agile process may not be (and probably isn’t!) the right choice. Does that mean we must revert to a pure waterfall process? Certainly not. A colleague of mine once characterized the dichotomy between waterfall and agile like this: with waterfall you are buying features; with agile you are buying resources. Additionally, there is an entire spectrum of options between the two that is often ignored, so we have a range of choices for how we conduct any given project, and we should be willing to adapt our process to the nature of the project. Is it appropriate to impose a strict set of rules and processes on a project? Sometimes, yes! The key is identifying which processes are appropriate in a given situation. There is no One True Process, and clients and agencies should work together to determine the best way to run a project based on its specific needs and the tools and resources available.
I’d like to conclude this series by reiterating the main points of what I’ve covered. We established that the software industry as a whole (and specifically web software in the client-agency sphere) is highly dysfunctional as a result of distorted expectations and a fundamental misconception of software development as a mechanical and predictable endeavor rather than the creative and chaotic one it actually is.
As a result of flawed client expectations with regard to how we can (or rather, can’t) adequately assess ahead of time the effort and cost involved in any software initiative, and agency complicity in fulfilling those flawed expectations, incentives on both sides become twisted: clients want to know how much time and money will be required to implement some system without paying the necessary costs to acquire this knowledge, and agencies give out estimates for building that system that have no bearing on reality, owing to a consequent lack of adequate research and discovery and/or the presence of a budget or time constraint. Clients then, unsurprisingly, expect the perfunctory estimate received from the agency to be adhered to despite the utter impossibility of anticipating all contingencies, and agencies are forced to either lose money, cut vital features, or push the client for money and time they may not have, all of which produce unsatisfactory results and, worse, a loss of trust between the parties.
To address these issues, I have proposed the following: an extensive requirements-gathering effort should be undertaken before any estimates are given out regarding time or money, even in the case of ostensibly “agile” projects. If the project has an unknown or poorly-defined end state, we must insist on a flexible timeline or budget, ideally both. If we are working under a fixed budget or timeline, then we must recognize that feature-completeness is not possible, that the goal of the project is instead to do as much work as possible until the budget is exhausted or the deadline reached, and that it is necessary to prioritize and cut features. It is possible for this to result in a product that is incomplete and unusable when resources run out (one of the things “Agile” was meant to address but in practice frequently does not), which means a certain amount of trust is required of the client toward the agency, and the agency must strive to earn and keep that trust by being honest and forthright in all its efforts.
Our goal should be transparency and trust between agency and client, and the only way to reach this goal is to be realistic and well-informed about the facts of software development, and to be willing to pay the price for that knowledge.
Additionally, we must be willing and able to recognize the characteristics of each project as a unique set of needs for which a unique set of processes will be required. This is the topic that will be the focus of a future series. Thanks for reading!