I understand the need to pivot quickly and to stop working on the wrong things, given user feedback. But in some industries there is a real risk in putting an incomplete MVP in front of people when you are trying to disrupt an incumbent software system.
Karthik Hariharan put it this way: "A simple MVP won’t cut it. Your competition is no longer non-software solutions. It’s probably existing, but suboptimal software. Which means if you’re going to compete with it, your software needs to be significantly better."
It may still work in B2C markets, but B2B will have a lot of boxes to check, and that can lead to an awkward conversation if your product is missing too many key features.
I agree, that's a good point. Sometimes the MVP should be much bigger.
Still - you shouldn't plan a 3-year roadmap of features before you've released that MVP. Even if it takes 6 months or a year, be open to what happens after you release it.
Jumping aboard the enterprise software development team after years of freelancing, I can see what you're talking about. 😃
Yes, we had a long planning week, but the POC feature I'm going to build in the next two months will be released for preview. We haven't made any plans for perfecting it; we just want to get it out to the users, see how they'll use it, and then move forward or kill the feature.
I think anyone who can reduce a feature to a shippable product that can be done in a reasonable amount of time can follow this pattern. Of course, that requires being critical about "must have" and "nice to have" features and lowering your expectations for the initial round!
2 months is quite a while :)
But it’s a good sign that there's no phase 2 planned until you release.
Very interesting article (I swear I thought that before I noticed you shared my article :P).
I have a few questions about it.
1. How would you implement this in a team that's already working in the poop-paper pattern?
Which, by the way, is an anti-pattern by itself ;P.
2. Does this way of working fit Taranis? If so, which steps are you taking to make this shift, if needed?
3. About point 3:
You said that if it comes out 0.8% you do nothing.
Maybe the right way to go is to set something more general, like: "Improve retention rate".
Because a very specific metric is worth nothing if you miss it and never look into why, and your team will learn that these specific targets don't really mean anything.
What do you think?
Great questions!
I’ll start with the last one - I’m not sure an exact goal is needed, but I’m sure some indication of when to NOT continue with an effort is a must. Otherwise, you might continue to improve a useless feature.
The other 2 questions are connected - our roadmap looks like the example I shared :)
Only recently, since becoming a director, have I had more influence on it, and I’m working to change the way we work. It’s a slow process, and I’m still not sure what the correct steps are :)
Short, crisp and insightful!!
I write about QA practices in agile teams; you would love to read my articles published here: https://qaexpertise.substack.com.
Thanks Priyanshu!
I'm curious about point 3: what happens if the criteria aren't met? How do you think about this when setting your goals/targets? Do you talk about what actually happens if the target isn't met? Is it always just, "this is what we want, it's ambitious, let's try our best!"?
I think the important part is to ask that question, to make people think about it. In my experience, nothing happens... But that leads to question 5 - maybe we can set a 'failure metric', in addition to the success one, which will indicate we should stop. Above the success metric it's a huge success and we double down on it; between the failure and success metrics we just continue; and below the failure one we stop.
More of a range than a single metric.
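To make the idea concrete, here is a minimal sketch of that range as a decision rule. The function name and the thresholds are hypothetical; the 0.8% figure is just the example from the questions above.

```python
from enum import Enum


class Decision(Enum):
    DOUBLE_DOWN = "double down"  # above the success metric: huge success
    CONTINUE = "continue"        # between the failure and success metrics
    STOP = "stop"                # below the failure metric: cut your losses


def evaluate_feature(observed: float,
                     failure_threshold: float,
                     success_threshold: float) -> Decision:
    """Map an observed metric (e.g. retention lift in %) onto the range."""
    if observed >= success_threshold:
        return Decision.DOUBLE_DOWN
    if observed >= failure_threshold:
        return Decision.CONTINUE
    return Decision.STOP


# Hypothetical numbers: we aimed for a 1.0% lift, agreed up front to stop below
# 0.5%, and the result came back at 0.8% -- so we continue without doubling down.
print(evaluate_feature(observed=0.8, failure_threshold=0.5, success_threshold=1.0))
# Decision.CONTINUE
```

The point is only that both thresholds are agreed on before the results come in, so "do nothing" is a deliberate decision rather than a shrug.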
That sounds very useful. Know what to strive for, know when to cut your losses. Decide before the sunk cost fallacy starts to destroy decision-making.