Variations of ICE prioritisation
There is only one tool in the world that works well for prioritisation: a list. The thing at the top of the list is more important than the thing at the bottom.
Multiple lists lead to competing priorities.
A multi-dimensional presentation of priorities moves accountability away from leadership and typically overloads people with work and context switching.
We can use many techniques to produce the artifact of a prioritised list.
One of them is ICE.
ICE Scoring
ICE stands for:
I - Impact
C - Confidence
E - Ease
You score each dimension on a scale from 0 to 10, and then you calculate the ICE score as:
ICE = Impact * Confidence * Ease
After getting all the scores, you sort by ICE score and have your prioritised list.
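To make the mechanics concrete, here is a minimal sketch in Python (the backlog items, numbers and field names are made up for illustration; this is not a standard library or tool):

from dataclasses import dataclass

@dataclass
class BacklogItem:
    name: str
    impact: int      # 0-10: how much of a difference this will make
    confidence: int  # 0-10: how sure we are about our guesses
    ease: int        # 0-10: how easy it is to develop, test and launch

    @property
    def ice(self) -> int:
        # ICE = Impact * Confidence * Ease
        return self.impact * self.confidence * self.ease

backlog = [
    BacklogItem("New onboarding flow", impact=8, confidence=5, ease=3),
    BacklogItem("Fix checkout copy", impact=4, confidence=8, ease=9),
    BacklogItem("Dark mode", impact=3, confidence=6, ease=4),
]

# Sorting by ICE score, highest first, gives the prioritised list.
for item in sorted(backlog, key=lambda i: i.ice, reverse=True):
    print(f"{item.ice:4d}  {item.name}")

In this made-up backlog, the easy and well-understood "Fix checkout copy" (288) lands above the bigger but riskier "New onboarding flow" (120), which is exactly the kind of trade-off ICE is designed to surface.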
The Impact dimension is about assessing how much of a difference the item will make once delivered. Ease is about how easy the item will be to develop, test and launch.
Confidence is trickier, and its meaning depends on the source. One definition is how confident you are in your guesses for Impact and Ease. Another is how confident you are that completing this backlog item will move you closer to your North Star metric.
I will leave the definitions at that. You can read about them in multiple sources on the Internet; just Google “ICE Prioritisation”.
ICE certainly helps prioritise product experiments and bets.
In this article, I want to cover two useful ICE variations.
Variation A: Bugs and Support
When dealing with bugs or support requests, we do not need Confidence.
Instead, we can replace it with U - Urgency: a factor that lets, for example, our Customer Success Team nudge the prioritisation a little.
Maybe one big customer calls them daily - that makes the issue urgent and important for our company.
In that case, we may end up with the following:
I - Impact
U - Urgency
E - Ease
I have found this useful for prioritising bugs and support requests.
You end up with:
IUE = Impact * Urgency * Ease
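As a small sketch (again with made-up numbers and names), the only change from the ICE calculation above is swapping Confidence for Urgency:

def iue_score(impact: int, urgency: int, ease: int) -> int:
    # IUE = Impact * Urgency * Ease
    # Urgency is the lever the Customer Success Team can pull
    # to nudge a bug or support request up the list.
    return impact * urgency * ease

# A big customer calls daily about a broken export: high urgency, moderate ease.
print(iue_score(impact=6, urgency=9, ease=7))  # 378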
Variation B: Split the Impact
In some contexts, describing the Impact dimension can be challenging.
For some teams, I found it helpful to split the Impact dimension into:
Severity - Is a critical user journey broken, or only a supportive feature?
Scope - Are 5 million customers affected, or just one customer?
In that case, you want to build a single number from Severity and Scope, so you end up with the following math:
ICE = ((Severity + Scope)/2) * Confidence * Ease
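A sketch of this variant, assuming Severity and Scope are scored on the same 0-10 scale and averaged back into a single Impact-like number (the example values are illustrative):

def split_impact_ice(severity: int, scope: int, confidence: int, ease: int) -> float:
    # Impact is rebuilt from two simpler questions:
    #   Severity - is a critical user journey broken, or only a supportive feature?
    #   Scope    - are 5 million customers affected, or just one?
    impact = (severity + scope) / 2
    return impact * confidence * ease

# A crash in checkout (very severe) that hits a single customer (narrow scope).
print(split_impact_ice(severity=9, scope=2, confidence=7, ease=5))  # 192.5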
You can read more about this way of splitting the impact by Googling “RICE Prioritisation”. I deliberately avoided the terms “Reach” and “Impact” to show that you can build your own definition of the original Impact dimension - one that works best for you.
Alternative approaches to the math
In the standard ICE definition, you can find this kind of math:
ICE = Impact * Confidence * Ease
But you may also experiment with estimating Effort instead of Ease and dividing by it, the way RICE does:
ICE = (Impact * Confidence) / Effort
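A sketch of the division variant, assuming Effort is an estimate on a 1-10 scale (story points, person-weeks, whatever your team uses); dividing punishes expensive items instead of rewarding easy ones:

def ice_by_effort(impact: int, confidence: int, effort: int) -> float:
    # ICE = (Impact * Confidence) / Effort
    return (impact * confidence) / effort

print(ice_by_effort(impact=8, confidence=6, effort=4))  # 12.0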
Tool for building a shared understanding
The obvious benefit of using this kind of prioritisation technique is the final list of priorities, which we can use to decide what we want to work on first.
However, there is also another one.
Most discussions about whether something is more or less important are opinionated, subjective and not driven by data. Often, the extroverts win.
By converting our opinions into numbers, we can start discussing why person A believes an item has an impact of 4 while person B believes it is an 8.
This leads to collaborative learning about our product, the business we are building for, and everything that determines feasibility.