Using MVP to manage your project
A short time ago I was considering using the concept of Minimum Viable Product (MVP) for a complex software implementation when an ex-colleague and friend told me that MVP was losing favour. Sure enough, a quick Google for the “death of MVP” brings up lots of results. But those results don’t present a consistent alternative, so I thought it was worth exploring the idea some more.
What does MVP mean?
Typically, the MVP is described as the most pared-down version of a product that can still be released. Its purpose is to limit the software’s initial scope until feedback can be gathered from real end users, thereby reducing costly investment in work based on incorrect assumptions. As such, the MVP must:
Provide sufficient value that people are willing to use it initially;
Demonstrate enough future benefit to retain early adopters; and
Enable a feedback loop to guide future development.
When considering MVP in a business software context, the first two points change slightly. The decision is no longer made by an individual user-consumer; rather, the business is making an investment decision, incurring cost and disruption in return for long-term value from the software. The third point remains critical, though: the MVP is a way to understand how the software supports the business in real life, so that plans for subsequent phases of complex projects can be adapted.
So why is it falling out of favour?
My Google results gave a variety of answers, undoubtedly influenced by whatever alternative method each author was advocating. But it can all be summed up as differences of opinion about what MVP means in practice.
Some authors saw the MVP as a test or experiment. That makes sense, especially for a completely new idea. Spend the least effort you can to find out if there is even a market for your new thing; and only if there is do you go on to spend time and money developing it properly. But if it’s only an experiment, that might suggest that quality isn’t so important...
Other authors suggest that the MVP is the first “proper” release of a product. In that instance, the scope, and thus resource and cost, is likely to be a lot more than that required for a quick test or experiment.
These weren’t the only interpretations, but we can quickly see that there are different contexts for M, V and P:
Minimum will differ considerably depending on whether you are testing a completely new concept, replacing a legacy system, or evolving an idea in an existing market – in which case your minimum may well need feature parity with competitors.
Viable depends on the system’s purpose – for instance, viability for a business-critical system may require a much more mature product than a consumer-facing social media app.
If you are validating demand for a new concept, you might not even have a Product; again, quality aspirations will differ for something that might be throwaway.
I’m not going to opine on which of these definitions is right – evidently they each have a place. But if there are many interpretations of what MVP means, confusion and trouble are highly likely at some point in your systems project. So if you’re going to adopt an MVP approach, take time to make sure everyone shares the same understanding of it.
What are the alternatives to MVP?
A number of alternatives to MVP have been proposed, reflecting these different scenarios. I’m going to highlight only a select few.
Minimum Loveable Product
MLP describes creating software that users love. Within Financial Services I can see this being a really useful focus for Digital projects developing websites and apps for end customers, where both features and user experience are differentiators that drive satisfaction and retention.
Minimum Viable Experiment
MVE deals with prototype scenarios, where the key objective of the project is to validate the market for a new product – so for instance this might be useful when developing the software for a new type of financial product.
Riskiest Assumption Test
Instead of building a product, RAT focuses on understanding and validating your assumptions – do the smallest amount of work you can to validate your riskiest assumption, and once you’ve done so move on to the next one.
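The RAT process described above is essentially a loop: order your assumptions by risk, test the riskiest one cheaply, and stop as soon as one fails. A minimal sketch in Python illustrates the idea – the assumption names, risk scores and pass/fail tests here are entirely invented for the example:

```python
# Hypothetical sketch of a Riskiest Assumption Test (RAT) loop.
# Each assumption is (name, risk score 1-10, cheap test returning True/False).

def run_rat(assumptions):
    """Test assumptions riskiest-first; stop at the first failure."""
    for name, risk, test in sorted(assumptions, key=lambda a: -a[1]):
        if test():
            print(f"Validated: {name} (risk {risk})")
        else:
            print(f"Invalidated: {name} (risk {risk}) - rethink before building more")
            return False
    return True

# Invented assumptions for an imaginary product idea
assumptions = [
    ("Users will pay a monthly fee", 9, lambda: True),
    ("Data can be migrated from the legacy system", 7, lambda: True),
    ("Staff will adopt the new workflow", 5, lambda: True),
]

run_rat(assumptions)
```

The point of the ordering is that a failed high-risk assumption invalidates the most work you would otherwise have done, so you want that answer as early and as cheaply as possible.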
The uses of these approaches are still not clear-cut, however. Following an MLP model, you could spend a lot of time and money making a system with a beautiful user interface and lots of clever features, but if you’ve failed to validate your assumptions about what users want, you may well have gone down a dead end.
Is it the end of MVP?
In some ways I think it should be – any terminology so open to interpretation may do more harm than good. Yet on the other hand its objectives, which I listed in the opening section, remain valid, and more than one of the alternatives could apply to your project. So, as long as everyone on your project understands what you mean by MVP, for that specific project, MVP may still have a place. And the MVP approach will vary for different projects.
Consider a project to replace your core business systems with a commercial off-the-shelf (COTS) software package. It will already have a massive amount of functionality, hopefully at a high quality. In this instance MVP will focus on production-quality development and configuration of the software that is absolutely essential for the whole business to operate. A Digital project, by contrast, is far more likely to take an experimental or MLP approach.
Should you use MVP (or an alternative) for a COTS project?
There are certainly some advantages to taking an MVP approach for implementing a COTS package. If you have a hard deadline (such as a regulatory requirement or needing to get off obsolete hardware) then MVP gives you a way to make sure you focus on your critical business requirements (for instance your “Must Haves” using a MoSCoW prioritisation – see Understanding your system requirements). Also, by deferring potential usability and automation enhancements to a later phase, you will have a much better understanding of how the software works for your business, and it’s quite likely that your requirements, or their priorities, will change.
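Scoping an MVP from a MoSCoW-prioritised backlog, as described above, can be pictured as a simple filter: Must Haves go into the MVP, Should/Could Haves are deferred to later phases. A small illustrative sketch in Python – the requirement records and field names are invented for the example:

```python
# Hypothetical sketch: selecting MVP scope from a MoSCoW-prioritised backlog.
# "M" = Must Have, "S" = Should Have, "C" = Could Have, "W" = Won't Have (this time).

requirements = [
    {"id": "R1", "name": "Process policy renewals",  "moscow": "M"},
    {"id": "R2", "name": "Automated document upload", "moscow": "S"},
    {"id": "R3", "name": "Customisable dashboards",   "moscow": "C"},
    {"id": "R4", "name": "Mobile companion app",      "moscow": "W"},
]

def mvp_scope(reqs):
    """The MVP takes only the Must Haves; everything else waits."""
    return [r for r in reqs if r["moscow"] == "M"]

def deferred(reqs):
    """Should and Could Haves are revisited after real-world feedback."""
    return [r for r in reqs if r["moscow"] in ("S", "C")]

print("MVP:", [r["id"] for r in mvp_scope(requirements)])
print("Later phases:", [r["id"] for r in deferred(requirements)])
```

The value of deferring the "S" and "C" items is exactly the feedback loop discussed earlier: by the time you revisit them, production use will often have changed their priority or removed the need entirely.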
On the other hand, MVP carries some risk. If the MVP is defined incorrectly you may defer some requirements to a subsequent phase, only to discover that the vendor is unable to meet those requirements – by which time you are heavily committed, having invested a lot of money in integrating the software with the rest of your IT landscape, in user training, and so on. This risk may be mitigated by the RAT approach.
A general principle we like to apply to COTS implementation projects is one we describe as Bend-To-Fit, at least in the first iteration of an implementation. This concept is analogous to the MVP and aims to take the COTS package as-is, with minimal, if any, functional enhancements. After all, you ought to have selected the COTS package on the basis it serves the majority of your requirements out-of-the-box.
Often the enhancements you believe you require whilst configuring and deploying the software are very different from those you really require when using it in production. Rather than spending lots of time, effort and money enhancing a COTS package, bend your business to fit the package – unless that compromises something that gives your business real competitive advantage. And if you find yourself staring at an ever-increasing number of change requests, find out why and ask some tough questions of both yourself and the vendor.