Every feature added to a product carries a cost. The up-front costs are the most salient: easy to understand and compute. It will take Team A two weeks to do this, Team B one month to do that, and then Team C will spend two weeks on another thing.
Once these initial best-case scenarios have been estimated, some form of cost-benefit analysis is done in a hopeful manner to determine whether the feature is worthwhile to work on. It's often hard enough to propose a new body of work that will tie up limited resources (or incur costs), and maximizing the result of that cost-benefit analysis is one way of helping your argument during the proposal.
Future costs are often passed over to flatter this analysis. Difficulties and problems that commonly arise in developing certain types of features are hand-waved away in the proposer's effort to get the green light on their project. Once the ball is rolling, inertia will typically carry it through to the end. Welcome home, sunk-cost fallacy!
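To make the distortion concrete, here is a minimal sketch of that arithmetic. All the numbers, and the `roi` helper itself, are invented for illustration; the point is only that a best-case estimate and a full-cost estimate can land on opposite sides of the "worthwhile" line.

```python
# Hypothetical illustration: a naive up-front cost-benefit estimate
# versus one that includes the hidden "iceberg" costs described above.
# Every figure here is made up for the example.

def roi(benefit_weeks_of_value, cost_weeks):
    """Return a simple benefit/cost ratio; above 1.0 looks worthwhile."""
    return benefit_weeks_of_value / cost_weeks

# Up-front, best-case estimates (Team A + Team B + Team C).
upfront_cost = 2 + 4 + 2           # person-weeks
estimated_benefit = 12             # person-weeks of value, say

print(roi(estimated_benefit, upfront_cost))  # 1.5: green-light territory

# The same feature once hidden costs surface: tech-debt cleanup,
# edge-case delays, and post-release follow-up work.
hidden_cost = 6 + 3 + 4            # person-weeks, discovered later
print(roi(estimated_benefit, upfront_cost + hidden_cost))  # ~0.57
```

A one-number ratio like this is crude, but it mirrors how proposals are often judged: whoever omits the hidden terms wins the comparison.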
And yes, I know you're thinking, "I don't do that! My team does xyz to mitigate those issues! We always predict what is needed perfectly!"
But... those issues still show up, don't they? And if they still show up and you have "processes" to "manage" them, where do they come from?
Lurking under the surface
Consider the following scenarios:
- Technical debt from a hasty development window ties up engineering resources sporadically for months (or support resources for years). Nothing huge or with any feeling of immediacy behind it, as nothing is broken. Cumulatively, however, it takes up just as much personnel time as implementing the feature—potentially much more—in revisiting those problem areas and "fixing" them with proper design and architecture.
- Edge cases are discovered during development that delay release for another sprint/cycle/quarter/whatever as a team troubleshoots, debugs, modifies, and tests the feature or product under newly discovered constraints. Once those constraints are dealt with, new ones are discovered, and the cycle repeats.
- Additional features need to be added to a product once real-world users have spent time with it in real-world situations and given feedback.
Dealing with each of these scenarios could fill a book, but they are often tied to missing processes in creating a product. These adverse outcomes are caused by iceberg problems: problems whose bulk lies beneath the obvious surface and is only exposed in the process of implementing a new feature or creating a product.
Product development is not product creation
It's very likely that "product development" in an organization has a defined process. But "product development" is only one portion of "creating a product." And it's that more extensive "creating a product" process that often has steps missing due to a poorly defined structure.
There is only one universal solution to the missing steps that create iceberg problems: a more complete understanding of the situation among those who propose product features. Whether this deeper understanding needs to be of the process of developing products and features (issue 1 above), of the potential "issue space" that the type of feature produces (issue 2 above), or of the userbase's needs (issue 3 above) is a matter to be determined by those running into these evergreen issues.
Depending on the organization, this may fall on various people, personas, or job titles. In a small startup, it might be within the domain of the CEO, the CTO, an engineer with some authority, or a VP or the head of marketing or sales. In a company with an established product, it might be within the product owner's wheelhouse. Businesses that rely on an ITIL/ITSM model may have major incident managers who propose features or large projects to replace problematic systems. In a place with many stakeholders who have the ear of the product team, it could be several individuals in various departments, all with competing objectives, vying for limited development resources.
Breaking up the ice
The solution boils down to better information flow. Let's get back to those three scenarios above:
- Technical debt might come from an inexperienced development team, but it's more likely to stem from a lack of communication between those who develop the features and those who propose them. This can be organizational: when one group has significantly more influence than the other, there is a power imbalance. When the person submitting the feature for development might be responsible for the performance review of the person who decides how to implement it, and that person knows this, a perverse incentive is created. Now the production of the feature is plagued with hasty development windows, as those managing product development try to squeeze in as many features as possible within a timeframe to appease those who hold influence over their careers.
- Edge cases discovered during development might be inevitable, but they might also be the result of sloppy or hasty discovery or feature-planning processes. Imagine a situation where there is only one round of review between the development team and whoever proposes features. An initial assessment might estimate the development time at a month. Once the manager, product owner, or equivalent speaks to the team at length, however, the team realizes there are a variety of issues that could arise when situation A meets situation B meets situation C. These weren't discussed in the initial review, as it was a very high-level discussion. Since there is no follow-up review planned, this information gets lost in the busy days of those involved. The edge cases show up during testing, delaying the release as the team reexamines issues that only appear under extensive testing (or worse, after the feature has been released and the support team is inundated with real-world incidents).
- Missing features being added after user feedback is often a more straightforward issue. It commonly stems from product or feature proposals with no user research backing them. These issues aren't on the development team; they are typically the domain of the product team, an incident manager, or a stakeholder who thinks they know the userbase but is missing some critical context. This can be mitigated in a few ways depending on where the information is lost. In the case of not knowing how users will interact with a product or feature, the organization could create a group of "testing users," selected from a pool of high-use individuals who are very familiar with the business processes and with the product's use in real-world situations. Prototypes can be sent to these users and feedback collected to refine the product or feature, or more extensive interviews can be done with them to get at the heart of user pain points. There are even more possible solutions in this space, but that lies at the heart of user experience research, and that's an entire subspecialty of the larger design world.
These iceberg problems are almost always the result of poor information flow. Optimizing that flow between individuals, within teams, and across organizations can go a long way toward preventing them from sinking projects unexpectedly. Achieving this requires a mixture of high-level thinking about how systems interrelate within an organization and lower-level thinking about how products and features are created. Unfortunately, there is no one-size-fits-all solution: each team will need to assess its situation and develop processes that mitigate this information loss.