One product, many ideas – sound familiar? Fortunately, there are many prioritization models; you can read about five basic ones in the nngroup article. One of them is the RICE model, which scares many people off with its math and fractions. Completely unnecessarily.
Classic RICE Model
There are plenty of articles that explain this in detail, but for the record: you evaluate each of your features in the following categories:
- Reach – the assumed number of users affected in a given period, e.g. 10k users per month. If all features concern the same cohort (e.g. new users), you can skip Reach.
- Impact – the effect this feature will have on our product (metric). There are many scoring methodologies; I suggest a 1–5 scale.
- Confidence – how certain you are that the feature will deliver that impact, based on data, research, reports and experience. E.g. 80%.
- Effort – how much time we need to implement the feature, including business, design and development. Depending on the level of detail, you can estimate in person-days, person-weeks, person-months, or on a scale of 1–5.
(Reach x Impact x Confidence) / Effort = RICE Score
You multiply the first three numbers together and divide the result by Effort. The number you get is the value of the feature. Do this for each feature and you can prioritize them.
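To show how little math is actually involved, here is a minimal sketch in Python (the function name and the example numbers are my own illustration, not part of the model):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Classic RICE: (Reach x Impact x Confidence) / Effort."""
    return (reach * impact * confidence) / effort

# Example feature: 10k users/month, impact 3 (scale 1-5),
# 80% confidence, 4 person-weeks of effort.
print(rice_score(10_000, 3, 0.8, 4))  # 6000.0
```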
Everything is fine, but what if it fails?
A hypothesis can fail
Not every hypothesis is correct. Being wrong is part of product work. It is important to document your metrics well and draw conclusions for the future. The key is to understand that rolling back a hypothesis-driven change is not always a simple process.
The cost of a hypothesis rollback has several components: user experience, business (operational), and development. Depending on these components, the cost can be high, and it should also be included when prioritizing hypotheses.
The rollback cost of a hypothesis lowers its value
A well-implemented hypothesis should have its user cohort and experiment chosen so that the rollback cost is minimal. That's the perfect world; in reality, we know how it goes ;)
Where does the hypothesis rollback cost fit into the RICE model? Like Effort, it lands on the negative side (in the denominator). However, it is not the same as Effort. Getting to the point:
(Reach x Impact x Confidence) / (Effort x Hypothesis Rollback) = RICEH Score
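As a sketch, the only change versus classic RICE is one extra factor in the denominator (the rollback parameter and its 1–5 scale below are my assumptions, mirroring the Effort scale):

```python
def riceh_score(reach: float, impact: float, confidence: float,
                effort: float, rollback: float) -> float:
    """RICEH: (Reach x Impact x Confidence) / (Effort x Hypothesis Rollback)."""
    return (reach * impact * confidence) / (effort * rollback)

# The same feature as before, but assume undoing the change would be
# painful (rollback cost 3 on a 1-5 scale): the score drops accordingly.
print(riceh_score(10_000, 3, 0.8, 4, 3))  # 2000.0
```

A rollback cost of 1 leaves the classic RICE score untouched; anything above 1 penalizes hypotheses that are hard to undo.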
Visually, it looks like the picture below. See also the FigJam template.
How to conduct a prioritization workshop?
Remotely and efficiently 🙂 Just like the classic RICE. I don’t want to reinvent the wheel here. In brief:
Participants: the workshop should be attended by people who know the topic and have the right competences to estimate all the items. Working through the features asynchronously before the workshop helps.
The workshop can be split into a business part (the first columns) and the rest, but that's optional. It's better to do it holistically, in three steps.
1. Knowledge alignment
Align everyone's knowledge of the features and determine Reach. If there is room for it, you can discuss here, but it's also worth saving the discussion for the estimation step. See below.
2. Voting with the Scrum planning-poker method
Everyone estimates in secret, then everyone reveals their estimate on a given signal. The authors of the extreme estimates should share their rationale.
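If you collect the votes in a script or spreadsheet export, a tiny helper can tell you who should speak first (a sketch; the names and data shape are invented):

```python
def extremes(estimates: dict[str, int]) -> tuple[str, str]:
    """Return the authors of the lowest and highest estimates,
    who should explain their rationale first."""
    low = min(estimates, key=estimates.get)
    high = max(estimates, key=estimates.get)
    return low, high

votes = {"Ann": 2, "Bart": 5, "Celine": 3}
print(extremes(votes))  # ('Ann', 'Bart')
```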
3. Adding up the points
Here’s where math comes in, but luckily we have calculators.
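In fact, the whole "calculator" can be this small, assuming a 1–5 rollback scale as above (feature names and numbers are invented for illustration):

```python
# Each row: (feature, reach, impact, confidence, effort, rollback)
features = [
    ("Onboarding checklist", 10_000, 3, 0.8, 4, 1),
    ("New pricing page", 5_000, 4, 0.5, 3, 3),
    ("Dark mode", 8_000, 1, 0.9, 2, 1),
]

def riceh(reach, impact, confidence, effort, rollback):
    return (reach * impact * confidence) / (effort * rollback)

# Sort features from highest to lowest RICEH score.
for name, *params in sorted(features, key=lambda f: riceh(*f[1:]), reverse=True):
    print(f"{name}: {riceh(*params):.0f}")
# Onboarding checklist: 6000
# Dark mode: 3600
# New pricing page: 1111
```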
RICEH is not a perfect method for prioritizing hypotheses
This RICE variant has its drawbacks, but in my opinion it is much more tangible than the classic impact/effort matrix. What matters most is that you use a prioritization method at all, and that you discuss. Product discussions should be structured and focused on the outcome and on moving forward.
If you use the RICEH method – please give me honest feedback. Preferably on Twitter.