Brent

Difference Between Velocity & Capacity: A Product Manager's Perspective

Velocity

The number of story points delivered in a sprint is called Velocity.

For example, if a development team plans a sprint with stories totalling 30 points but delivers only 27 points by the end of the sprint, the team’s velocity is 27.

From a product manager or product owner’s perspective, this metric can be a useful tool to plan future work. From a development team’s perspective, it can be used as a KPI to monitor the team’s health.

Velocity For Future Projects

Velocity can be a helpful KPI for planning and forecasting future projects. It can also give insight into when current projects may be completed.

To do this, track the average velocity over the last 4 sprints. Using an individual sprint’s velocity won’t work as well, since velocity varies from sprint to sprint due to vacation/leave, sick days, etc. Averaging the past 4 sprints provides a better gauge of the velocity for future sprints.

For example, if the velocity for the last 4 sprints was 23, 29, 35, and 24, the prediction for future sprint velocity would be 27.75 ((23 + 29 + 35 + 24) / 4).
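
If you track velocity in a script or a spreadsheet export, the forecast is a one-line average. A minimal sketch, using the hypothetical numbers from the example above:

```python
# A minimal sketch of the forecast above: average the last 4 sprint
# velocities. The numbers are the hypothetical values from the example.
recent_velocities = [23, 29, 35, 24]

forecast = sum(recent_velocities) / len(recent_velocities)
print(f"Forecast velocity for the next sprint: {forecast:.2f}")  # 27.75
```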

In order for this to be a meaningful tool to help your team plan future work, it’s also important that the development team can estimate both user stories and project work relatively accurately.

Remember, when using a team's velocity to plan future work, the number should be used as a guide but should not be used as a contract.

Velocity For Development Team’s ‘Health’

A ‘healthy’ development team will typically have consistent velocity sprint over sprint. This is achieved because of the team’s ability to estimate (point) work and plan sprints well (know what is realistic to complete in the sprint timeframe). Teams usually get better the longer they work together and the more consistent the type of work is.

Does having a high velocity mean the team is a good one?

I’ve been asked this a few times before, and the answer is no. Having a high velocity every sprint, or even a low velocity every sprint, doesn’t mean the team is a good or bad one.

I’d even argue that the actual number doesn’t matter. Every development team points stories differently. A 3-point user story for one team might be a 5-point user story for another. It’s more important that development teams point and complete stories consistently every sprint.

There are circumstances where consistency is really hard, but we’ll get into that a bit later.


As a product manager, if you’re finding that a team’s velocity is inconsistent, it’s a good idea to dig deeper with the team and understand the story behind the numbers. Jump into the conversation with a collaborative mindset to discover more. Treat it like a user interview.

The better the development team and you get at this process of measuring output, the easier it is to collaboratively build a roadmap.
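
If you want a rough, quantitative signal for “inconsistent” before starting those conversations, one option is to look at how much velocity moves relative to its average. A minimal sketch, with hypothetical velocities and a hypothetical 20% threshold:

```python
# A minimal sketch of an "inconsistency" check: how much does velocity vary
# relative to its average? The velocities and the 20% threshold are
# hypothetical assumptions, not a standard.
from statistics import mean, pstdev

velocities = [23, 29, 35, 24, 12, 31]

avg = mean(velocities)
variation = pstdev(velocities) / avg  # coefficient of variation

if variation > 0.20:
    print(f"Velocity varies {variation:.0%} around a mean of {avg:.1f}; worth a conversation.")
else:
    print(f"Velocity looks fairly consistent ({variation:.0%} variation).")
```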

Things to understand about using velocity to track a development team’s health:

  • Too much focus is bad: While consistency is a sign of a “healthy/mature” development team, focusing on it isn’t always the right thing. Too much focus on consistent velocity may hinder the developers’ ability to solve problems creatively. For example, if a better idea appears mid-sprint that is out of scope of the existing story, you want to encourage conversations about it. If developers feel too tied to velocity tracking, they may choose to just do the existing story because that’s what is being measured.
  • Non-pointed work: In my experience, research and development (R&D, spikes, etc.) isn’t pointed on purpose. I’ve also worked in environments where some companies pointed bugs and others didn’t. Non-pointed work will lower velocity, and that’s ok. Knowing this, you’ll understand that there’ll be a drop in velocity if projects need a lot of R&D. This is common in new technology projects.
  • Team changes: As development teams change, velocity will be affected. Adding new team members likely won’t increase velocity at first; it can even decrease it while the team adjusts. After new team members get up to speed, the velocity should be higher than before the addition. If it doesn’t increase, it’s a good time to dive in and do some discovery.
  • Unstable sprints: I define ‘unstable’ sprints as ones where stories are being added or removed mid-sprint. When this happens, velocity will be inconsistent.

Capacity

The total number of available hours for a sprint is called the development team’s Capacity. Available hours are calculated from the number of available team members and their working hours, minus things like planned vacation/leave, company events, country holidays, etc.

Capacity is used to plan the sprint. The team commits to completing a set number of user stories/tickets within the sprint time frame. Points are used in the process to help gauge the difficulty of the story and to help gauge the feasibility of completing the sprint compared to past sprints.

For example, let’s say the team commits to 23 stories in a sprint and the point total for that sprint is 39. If the team’s average velocity over the last 4 sprints has been 27 points, the team should have a conversation about why they’ve committed to more points.

If this happens, here are some things to ask/think about:

  • Is our available capacity higher in this sprint? (Fewer developers on vacation, no holidays in this sprint, etc.)
  • Can we outperform our average velocity? (New hires are contributing more, limited/no R&D in this sprint, last sprints had a lot of R&D, etc.)
  • Are there stories that have high points which may be a risk of not completing? Can we break those down into smaller stories to reduce the risk of rollover?
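
If you keep these numbers somewhere scriptable, a quick check like the sketch below (all figures hypothetical) can prompt that conversation before the sprint starts:

```python
# A minimal sketch of the sanity check described above: compare the sprint
# commitment to available hours and recent average velocity. All numbers
# and the 10% tolerance are hypothetical.
from statistics import mean

team_size = 5
hours_per_person = 60        # working hours per person this sprint
unavailable_hours = 24       # vacation, holidays, company events, etc.
capacity_hours = team_size * hours_per_person - unavailable_hours

average_velocity = mean([23, 29, 35, 24])
committed_points = 39

if committed_points > average_velocity * 1.10:
    print(f"Committed {committed_points} points against an average velocity of "
          f"{average_velocity:.1f} ({capacity_hours} hours available); "
          "worth discussing before the sprint starts.")
```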

Agile Games: Paper Airplanes

Paper airplanes aren’t just for kids. They can also help cement agile concepts for small or large groups.

The Overview

Teams will compete against each other by creating paper airplanes. Each team scores points when their airplane makes it past a set distance.

All of the teams will have three rounds made up of:

  1. Planning (3 minutes)
  2. Building (3 minutes)
  3. Retrospective (3 minutes)

The team with the most cumulative points through the three rounds wins!

Why?

Even something simple like paper airplanes can help teams understand some core concepts of agile processes. I’ve found this game helpful to explain:

  1. Definition of Done (DOD)
  2. Estimation
  3. Incremental Improvement
  4. Usefulness of Retrospective
  5. Timeboxing (Sprints)

Materials

  1. Paper… a LOT of paper.
  2. Whiteboard, flip chart, or large TV to keep score.
  3. Tape or something to mark the throwing line and the “finish/point” line. The distance should be hard to achieve, ideally 10+ meters (30+ feet) apart.

Rules

  1. Only planes that cross the finish/point line count as a point. I typically count planes that either fly or slide across. It’s up to you to define what “crossing” means. Is it any part of the plane, or does the entire plane need to be fully across?
  2. Each team has the same amount of paper.
  3. A plane can only be thrown once. If it passes the throwing line in one round, the team cannot use it again in a different round even if it didn’t make it to the finish/point line.
  4. Planes cannot be crumpled into a paper ball.
  5. Planes can only be made of one sheet of paper (no paper clips, tape, etc.).
  6. Each team provides an estimate of the number of points they will get before each round.
  7. Each team should add a logo/number on the plane (helps keep track of points).
  8. Optional: Planes can’t have a pointy end (safety first 👷‍♀️).
  9. Optional: Add something to the Definition of Done (DOD), like adding a logo or name to each paper airplane.
  10. Optional: Teams can either have an unlimited supply of paper, or they have a set amount of paper, which they can use however they want throughout the three rounds.
  11. I prefer a set amount of paper so there isn’t excess waste, and it gets the same agile concepts across.

Game Time

Round 1

Explain the rules and allow for questions. Have people move into teams of 4 to 10, then go through the 5 steps below:

Step 1: 3 minutes for the teams to plan for their paper airplane building, throwing, etc.
Step 2: 3 minutes for building planes.
Step 3: Get each team’s point estimate. Record it on the point board.
Step 4: 3 minutes of throwing (all teams can throw at the same time). Record the points for each team.
Step 5: 3 minutes for each team to do their retrospective.
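
If you prefer to keep score digitally instead of on a whiteboard, a small script like the hypothetical sketch below also makes the estimate-versus-actual comparison easy to pull up during the debrief:

```python
# A minimal sketch of a digital scoreboard for the game: per-round estimates
# vs. actual points, plus cumulative totals for the debrief. Team names and
# scores are made up.
from collections import defaultdict

rounds = [
    {"Team Red": (5, 2), "Team Blue": (3, 4)},  # round 1: (estimate, actual)
    {"Team Red": (4, 3), "Team Blue": (5, 5)},  # round 2
    {"Team Red": (4, 4), "Team Blue": (6, 5)},  # round 3
]

totals = defaultdict(int)
for round_number, scores in enumerate(rounds, start=1):
    for team, (estimate, actual) in scores.items():
        totals[team] += actual
        print(f"Round {round_number}: {team} estimated {estimate}, scored {actual}")

winner = max(totals, key=totals.get)
print(f"Winner: {winner} with {totals[winner]} points")
```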

Round 2

Repeat the 5 steps from round 1.

Round 3

Repeat the 5 steps from round 1.

Debrief

After all three rounds are done, tally the points up and announce the winning team (who had the most points). Then, bring up some questions from the list below based on how much time you have for the debrief.

Estimation

  • How did your estimates change over the three rounds? Better/worse/why?
  • How did your group decide? Single person, team effort? Why?

Incremental Improvement / Retrospectives

  • What were one or two things that came up during a retro that had a positive effect?
  • How did you incorporate other teams’ successes and failures into your own team’s approach?

Timeboxing (Sprints)

  • If, instead of three rounds, you had 9 minutes to plan, then 9 minutes to build, then 9 minutes to throw (a waterfall approach), would your team have scored more or fewer points?
  • Instead of planning, building, and throwing in timeboxes, what would happen if there were simply three 9-minute rounds with planning, building, and throwing completely combined?

Definition of Done (DOD)

  • Was the DOD clear? If not, why?
  • Were there any planes that were disqualified? Why?


Product Prioritization: Speed Boat

Identify what's slowing your product down

Welcome to 'Product Prioritization' - our series of tools, tips, and best practices for the skilled Product Manager to determine priorities and get results. Each month, we will highlight one of the dozens of popular methodologies and explain how to use it.

For our sixth installment, we take a look at a group activity called 'Speed Boat'.

Venting can be therapeutic, and it can also be incredibly insightful for your product team. There's valuable information you can take away from understanding what users or your teams hate in your product. The problem is that complaints can come in fast and furious and might not seem actionable.

It's really easy to focus on the trees instead of the forest and fix a lot of 'one-off' things for each complaint.

Early in my career, and in the early stages of a SaaS startup, if an influential user would complain, we'd rally the troops to delight them with a quick fix. Being a young company, we valued individual client satisfaction at the cost of scalability and sustainability.

It wasn't until later that I came to realize that the opportunity cost of delighting one client could sacrifice the happiness of many clients, especially when the customizations for clients meant that the product became bloated and slowed future development.

As I matured as a product manager, I was able to put complaints in perspective within the ecosystem of our product and industry. One great way to achieve this is with a prioritization technique called Speed Boat (I wish I'd used this back in 2008/2009).

What is Speed Boat and how does it work?

Get a group together in a room with a whiteboard. You can use a video call, but make sure you have a digital whiteboard that the group can interact with.

  • Sketch a speed boat, one that looks like it should go really fast. Feel free to do this before the meeting.
  • With the group, draw an anchor. Let the group know that the boat has the potential to set world speed records, but the anchor is slowing it down.
  • Explain that the anchor is a representation of a feature that is keeping your product/platform from moving faster and being better (it could be a process or service depending on your company).

Now it's time for the group's participation. Have the group draw anchors and label them with the features they feel are slowing the product down and keeping it from being great.

*Bonus points if you have them use the size of the anchor to visualize how big a problem the feature is to them: the bigger the anchor, the more the feature is slowing the product down.

If people don't like to draw, you can have post-it notes ready for them.

---

Why I like this exercise.

I've found this activity is relaxing and therapeutic, helps with team bonding, and gives a visual representation of the product. Doing an activity like this also seems to take the aggressiveness/anger out of complaints.

What you'll find is that most users, no matter how many complaints they have, still want to see the product improve and your role is to tap into that.

A few tips to ensure the meeting works well and the group stays focused:

  1. Don't let one user command all the attention. If this starts to happen, call out other group members to participate.
  2. Set ground rules so the group knows it's a brainstorming session and all anchors are welcome. Details can be sorted out later.
  3. You can change what the boat represents depending on what you'd like to get out of your session. It may represent a product line, a website, a project, etc.

---

Thanks to Folding Burritos for creating the Periodic Table of Product Prioritization Techniques.


Product Prioritization: Planning Poker

Stop gambling on what to do next… by playing poker

Welcome to ‘Product Prioritization’ — our series of tools, tips, and best practices for the skilled Product Manager to determine priorities and get results. Each month, we will highlight one of the dozens of popular methodologies and explain how to use it.

For our fourth installment, we take a look at ‘Planning Poker’ also known as ‘Scrum Poker’.

At Left Travel, we enjoy using ‘Planning Poker’ when it’s important that the team comes to a consensus. This technique is perfect for:

  • Aligning different stakeholders
  • Extracting siloed information from stakeholders
  • Keeping meetings interactive and fun

What is Planning Poker and how does it work?

At a high level, ‘Planning Poker’ is a prioritization technique where multiple stakeholders get together and establish the value of a project, feature, or idea. For the purpose of this blog post, we’ll discuss ideas.

The technique gamifies estimating value. Stakeholders are presented with an idea, and each of them votes on how valuable they think the idea is using a set range of cards or poker chips with varying values. Votes remain hidden until all members have voted, to avoid influence from other members. Once everyone has decided, the votes are revealed at the same time.

After everyone has presented their votes, the stakeholders who voted with the highest and lowest values explain their reasoning. The voting process repeats until the team agrees on a value for the idea.

How to Play

Step 1: Deal Cards or Poker Chips

Each person is given a set of cards or poker chips. The value of the cards or poker chips should be set as 0, 1, 2, 3, 5, 8, 13, 20, 40, 100. While ‘Planning Poker’ can be played with different values (like a Fibonacci sequence), what matters most is that the higher the bets get, the larger the gap is between the next lowest and next highest values.

Step 2: Rules & Establish Values

The moderator or scrum master explains the rules of the game to the group (explained in the following steps).

Next, the moderator establishes what the number value of each card or chip is worth. Since value is subjective, it is crucial to complete this step before starting the exercise. Take the time to go over a few past ideas that are complete and assign them a value. It is best to pick ideas that vary strongly in value to allow the stakeholders to be able to easily compare low, medium, and high-value past ideas to new ideas. Use the phrasing ‘X idea is a 40 because…’

Step 3: Present the Idea

Next, get the product manager or owner to present the ideas to the group and ensure that there is full clarity on every aspect of each of them. The moderator can also act as the product owner for some or all of the ideas that are being discussed. Allow time for Q&A from the stakeholders.

Tip: Standardize the way the ideas are presented to avoid a stakeholder over- or under-emphasizing specific ideas based on their personal opinions. Set timing and structure requirements.

Step 4: Voting

Once everyone has had a chance to ask questions about the idea, it is time to vote. Each stakeholder selects a card or chip and places it face down on each idea. The higher the value of the card or chip the more important it is to the stakeholder. Once everyone has cast their vote, all of the votes are revealed at the same time. It is important to keep the votes secret until everyone is ready in order to make sure that the stakeholders involved aren’t influenced by others in the company — no matter what their role is.

Step 5: Discussion

Start the discussion by having the stakeholders that cast the highest and lowest votes explain why they gave the idea that value. Through this discussion new data can be discovered, as the high and low-value voting members will often have additional information about the idea that others didn’t have prior to voting. For example, a stakeholder might know that an idea could have a massive impact on another feature, or that the idea would be a big waste of time because it doesn’t impact any key KPIs.

The moderator will typically only need to call on those who had the highest or lowest value, unless a stakeholder who voted in the middle is very passionate about an idea. At some point in the game, most stakeholders will end up on the high or low end so they’ll get the opportunity to participate. If there is someone who constantly votes in the middle, call on them at some point to make them feel included in the discussions.

Step 6: Assigning Value/Voting Again

Assuming that not everyone assigned the same value to an idea, after hosting a discussion, have the group vote again. Repeat the process until the group comes to a value consensus (they all vote the same). Once agreed upon, assign the decided value to the idea and move on to the next idea.

Tip: If the stakeholders aren’t coming to a consensus and a revote has been cast, it is helpful to ask the stakeholders that are not aligned if they are comfortable adjusting their vote up (or down) to meet with the group. This usually works.

If it doesn’t work, note down what the scores from the group were and the members who wouldn’t adjust their vote. This is done not to single them out, but to make a reminder to approach them later so that you can dive deeper into their reasoning.
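
For teams that want to log rounds (or run the session remotely without dedicated software), the reveal-and-revote loop is easy to script. A minimal sketch, assuming each stakeholder’s hidden vote has already been collected; the names, votes, and idea are hypothetical:

```python
# A minimal sketch of one voting round, assuming the hidden votes have
# already been collected: reveal them, check for consensus, and identify
# who explains their reasoning. Names, votes, and the idea are hypothetical.
CARD_VALUES = {0, 1, 2, 3, 5, 8, 13, 20, 40, 100}

def reveal_round(votes):
    assert all(v in CARD_VALUES for v in votes.values()), "use the agreed card values"
    print("Votes revealed:", votes)
    if len(set(votes.values())) == 1:
        return ("consensus", next(iter(votes.values())))
    low = min(votes, key=votes.get)
    high = max(votes, key=votes.get)
    return ("revote", f"{high} (highest) and {low} (lowest) explain, then everyone revotes")

# Example round for the idea "in-app referral program":
print(reveal_round({"PM": 13, "Design": 8, "Data": 40, "Engineering": 13}))
```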

Step 7: Finishing Up

Once all of the ideas have a documented assigned value, sync up with the team that estimates the size (level of effort) of ideas.

Once the size has been determined, create a value-to-effort ratio for each idea/feature/project. Bonus points if the team can take the information and break it down into story points, sprints, days, etc. Once completed, there will be a list of prioritized ideas.

Tip: A simple 4-quadrant list with value and level of effort will help identify ideas that stand out.

The Benefits of Physical vs. Software

Last week, the Left Travel team did a ‘Planning Poker’ session to value some of our upcoming data projects. When prepping for it, I looked into the benefits of using physical cards/chips compared to using a software program.

I ended up deciding to use physical cards. I found that many of the paid or free ‘Planning Poker’ software options were either too cumbersome or tied to a roadmapping system. For our team, the effort to go through the onboarding process was too much of a pain. That said, if you’ll be using the ‘Planning Poker’ technique often, it might be a good idea to use software.

You can purchase ‘Planning Poker’ cards on Amazon or Mountain Goat Software.

Other Uses of Planning Poker: Backlog Grooming & Remote Teams

Outside of assigning collective value, the ‘Planning Poker’ technique can also be used to groom your backlog and to size development work (estimation). If you’re doing either of those, it’s recommended to use the Fibonacci sequence instead.

‘Planning Poker’ also works really well with remote team members. The moderator will have some extra prep to ensure that the stakeholders have the cards or chips before you start (software may be a better option for remote teams), but the voting and discussions work well if everyone is on a video call.


Product Prioritization: MoSCoW

Must, Should, Could, Won’t

Welcome to ‘Product Prioritization’ — our series of tools, tips, and best practices for the skilled Product Manager to determine priorities and get results. Each month, we will highlight one of the dozens of popular methodologies and explain how to use it.

For our fifth installment, we take a look at ‘MoSCoW’, a quick way to identify things that will surface to the top and sink to the bottom. The MoSCoW prioritization technique isn’t as refreshing as a Moscow Mule, but it’s still a good one.

It’s similar to the Stacked Ranking technique, but sometimes it’s either too hard or takes too long to get a full ranking of the features you want to prioritize. If you find that features are too similar, and your team is ‘arguing’ over whether a feature should be in the #3 or #4 spot, MoSCoW should be a good fit.

Besides a yummy drink… what is MoSCoW?

MoSCoW is an acronym to help you remember four different categories when you’re running a prioritization session.

  • M = Must Have. Critical features that must be included in the product. If it’s not included, the product release will be a failure.
  • S = Should Have. Important features, but not critical for the product. These could be features released in phase 2 or added into phase 1 if your team has extra development time.
  • C = Could Have. Commonly called ‘Nice to haves’ aka ‘NTH.’ These features aren’t necessary for the release. As new information comes from users, these features may move to a ‘Must’ or ‘Should’, or to a ‘Won’t’ in future planning sessions.
  • W = Won’t Have. Kill these ones. These features will be things that aren’t aligned with the goal of the product, or maybe the risk/value is in the wrong quadrant.

Wait. What about the two Os?! They don’t stand for anything but are just there to create a name that’s more memorable.

This is a good method when you need a quick ranking to start to paint the picture of what should be in the next release, in the MVP, or even in the next sprint.

I’ve found that MoSCoW works better in smaller groups. In larger groups, the nuance of a feature being in the ‘Should’ or ‘Could’ group may take away from the intention of getting a quick prioritized list.
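
If you capture the result of a session in a sheet or script, the output is simply each feature mapped to one of the four letters. A minimal sketch with hypothetical features:

```python
# A minimal sketch of a session's output: a mapping of features to the four
# MoSCoW letters. The features and their categories are hypothetical.
from collections import defaultdict

features = [
    ("Password reset", "M"),
    ("CSV export", "S"),
    ("Dark mode", "C"),
    ("Blockchain integration", "W"),
]

buckets = defaultdict(list)
for name, category in features:
    buckets[category].append(name)

for category in ("M", "S", "C", "W"):
    print(f"{category}: {', '.join(buckets[category]) or '(none)'}")
```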

Moderator Tips

This method is also enhanced when combining it with real Moscow Mules.

When coaching a team on this method, it’s a great idea to bring in a real Moscow Mule cup as a visual aid. Having this visual helps your team remember the technique.


Product Prioritization: Buy-a-Feature

Using cash to identify key ideas

Welcome to ‘Product Prioritization’ — our series of tools, tips, and best practices for the skilled Product Manager to determine priorities and get results. Each month, we will highlight one of the dozens of popular methodologies and explain how to use it.

For our third installment, we take a look at ‘Buy-a-Feature.’

At Left Travel, we enjoy using the ‘Buy-a-Feature’ technique when working with internal teams or external users who ‘want it all.’ It’s always challenging working with stakeholders who want all of the features, all at the same time — this prioritization technique helps enable them to describe the value they see in the features in a new way.

What is ‘Buy-a-Feature’?

‘Buy-a-feature’ is a product prioritization technique used when a product is under development to quantifiably estimate how valuable a feature or an idea is. To do so, a product team will work directly with customers and key stakeholders to solicit feedback and prioritize enhancements or features which the participants want or value most.

How to use ‘Buy-a-Feature.’

Our team loves this prioritization technique, and as such, we highly recommend it under the right circumstances, such as during an in-person focus group. To use it, we’ve developed a game that breaks it into 5 simple steps:

Step 1: Make a feature list.
As a team, make a list of the features that need to be prioritized.

Step 1.5: *Optional* Assign each feature a price.
Give each feature on the list a value or price. The value or price should be based on the relative size, LOE, and scope of the project to represent the effort required to build it.

At Left, we’ve run this technique with and without prices. While both options work well, we’ve found that having prices helps focus groups outside of software development understand the actual ‘cost’ of a project.

Step 2: Get customers and stakeholders together.
Get your company’s stakeholders and/or customers into a room (or on a video call) to start the game. Explain the features on your list to the group to ensure everyone has full clarity on their benefits.

Step 3: Give out the cash.
Give everyone in the focus group the same amount of money to use during the game. If you’ve assigned prices to the features as in Step 1.5, give them between 50–60% of the total cost of all of the listed features. This is to make sure they are being selective in their buying decisions.

Step 4: Have them buy.
Ask your stakeholders to “buy” the features they like. They can spend all their money on one or two, or spread it out evenly — it’s their “money,” they can spend it how they want to!

Observe the buying process and have the stakeholders explain why they spent money on the features that they picked. This is the product manager’s opportunity to listen to customers and/or stakeholders and understand both their individual and group ‘buying’ decisions.

Step 5: Collect observations for action.
Arrange the list of features by order of how much was spent on each feature (top=most money; bottom=least money). Now you have a list of features ranked and a value assigned to them.

Once the game is completed, use the ranked list and collected observations to make informed decisions on future product development based on your customers’ and stakeholders’ needs.
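
The mechanics reduce to a little arithmetic: size each participant’s budget from the total feature cost, then sort by where the money lands. A minimal sketch with hypothetical features, prices, and spending:

```python
# A minimal sketch of the Buy-a-Feature math: size each budget from the
# total feature cost (Step 1.5 and Step 3), then rank features by total
# spend (Step 5). Features, prices, and spending are hypothetical.
features = {"Saved searches": 40, "Price alerts": 60, "Trip sharing": 30, "Dark mode": 20}

total_cost = sum(features.values())            # 150
budget_per_person = round(total_cost * 0.55)   # roughly 50-60% of the total

spend = {  # how the group allocated its money across the features
    "Price alerts": 110,
    "Saved searches": 70,
    "Trip sharing": 25,
    "Dark mode": 15,
}

print(f"Each participant gets ${budget_per_person}")
for feature, amount in sorted(spend.items(), key=lambda kv: kv[1], reverse=True):
    print(f"${amount:>4}  {feature}")
```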

Tips when using Buy-a-Feature:

  • This technique carries more weight when done with end-users as it shows the value they see in the features they would use in the product.
  • This game can be run either individually with a stakeholder or in a group of stakeholders.
  • If there are a few features that are bought with a similar amount of money, group them together. For example, a $5 difference between two features may be insignificant or subjective to the particular stakeholder, depending on how much money you gave the group.
  • Allow ideas to flow from your participants. If new features or ideas come up, use the structure of the game to ask what the estimated value of the feature would be and where it would fit within the ranked list.
  • For a fun twist, use real money. There’s something about handling real money that changes people’s buying behaviour.

Product Prioritization: Feature Buckets

Using buckets to plan for future work

Welcome to ‘Product Prioritization’ — our series of tools, tips, and best practices for the skilled Product Manager to determine priorities and get results. Each month, we will highlight one of the dozens of popular methodologies and explain how to use it.

For our first installment, we take a look at Feature Buckets, originally proposed by Adam Nash.

At Left Travel, we use Feature Buckets to ensure our roadmap is balanced between:

  • generating revenue
  • ensuring our users are delighted
  • fitting in longer-term strategic projects.

What are Feature Buckets?

Feature buckets are a classification framework that sorts product features or ideas into different groups, or ‘buckets’. It is beneficial for roadmapping: having several buckets allows for a well-rounded, balanced product that satisfies more stakeholders.

The four categories of feature buckets

There are four commonly used categories that help provide balanced software. They are:

  1. Metric Movers
  2. Customer Requests
  3. Customer Delight
  4. Strategic

Metric Movers

This bucket includes the features needed to move the needle on key metrics that matter to your business around growth, engagement, and revenue. These can be anything from ARR, churn, ARPU, MAU, LTV, ATV, etc. For example, at Left Travel, we use metrics that focus on the traffic we send over to our partners and the quality of that traffic. For this, we use Qualified Referral Rate (QRR), Revenue Per Qualified Referral (RPQR), and our partners’ conversion rates.

If there is alignment on what the key metrics are that your business follows, it helps narrow the scope of this feature bucket.

Customer Requests

The Customer Requests bucket is filled with requests your organization receives from users and is important when carving out your roadmap. While having this bucket doesn’t necessarily mean you’ll address all, or even a large portion, of the requests that come into it, it does help ground the company, identify the current pain points users are having, and decide when, how, or even if, you will address them.

Customer Delight

Remember the time you showed a user something, and they LOVED it? Features in this bucket may not be coming from users directly, but they spark joy in the customer when they see it. Here’s the best recipe to craft these features into delicious user treats:

  1. Listen to users and understand their pain points.
  2. Leverage technology to test and try.
  3. Innovate on UX to deliver and delight.

Strategic

Data projects and new markets or opportunities are types of projects that can be hard to fit into the three previous buckets but are still important. That is why there is the ‘Strategic’ bucket for features that help keep the software looking forward and past some minutiae. Use this bucket to think big and be aligned with the business’s values and goals.

Balancing Buckets

Having Too Few Buckets

Having too few, or too many, buckets can cause problems.

If you have too few buckets, you may be putting all your eggs in one or two baskets. For example, if you only worked on features that fit into the Metric Movers and Customer Requests buckets, it is easy for your roadmap to lose sight of the bigger picture. If this happens, your software may become bloated with customer requests. This often leads to making segments of your customers happy for the short term while making the software more complex for the rest of your users. If you don’t have work filling up each of the four buckets, you’re missing important feedback opportunities from either internal or external stakeholders; or simply put, there’s a blind spot in your software.
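
A quick way to spot that blind spot is to count what’s already on the roadmap per bucket, as in the minimal sketch below (the roadmap items are hypothetical):

```python
# A minimal sketch of a blind-spot check: count roadmap items per bucket and
# flag any bucket that is empty. The roadmap items are hypothetical.
from collections import Counter

BUCKETS = ["Metric Movers", "Customer Requests", "Customer Delight", "Strategic"]

roadmap = [
    ("Improve referral conversion", "Metric Movers"),
    ("Faster checkout flow", "Metric Movers"),
    ("Bulk CSV export", "Customer Requests"),
    ("New market data pipeline", "Strategic"),
]

counts = Counter(bucket for _, bucket in roadmap)
for bucket in BUCKETS:
    count = counts.get(bucket, 0)
    flag = "  <- possible blind spot" if count == 0 else ""
    print(f"{bucket}: {count} item(s){flag}")
```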

How to find your Feature Bucket blind spots

  1. Brainstorm what features fit into the empty bucket(s).
  2. Imagine a competitor. How would their product stack up against yours? Focus on that.
  3. Take the list of features you’re not building and run them by your stakeholders (users, developers, DevOps, support, executives, sales, marketing, etc.). What is their reaction?

Having Too Many Buckets

Simplicity is important when you need to be constantly communicating the roadmap to stakeholders. With too many buckets, it can get confusing. If you have lots of feature buckets, it’s time to think long and hard about why the extra buckets exist. Ask yourself the following questions:

  • Was it created to get a stakeholder’s work on the roadmap?
  • Could the buckets be rolled up into fewer ones?
  • Are the buckets too granular?

An important part of a roadmap is being able to effectively communicate what’s happening now and what will be happening soon. If you have 8 buckets, it’s hard for your team to understand and support all of them. Best practice suggests that people can hold between 3 and 5 buckets effectively.

Ok… where does prioritization come in?

The feature bucket technique is aimed at exposing and categorizing ideas or product features into groupings. When I started at Left Travel, I found that the customer request and customer delight categories were underserviced. By keeping our focus on those areas, we’ve been able to further close the gap with our competition’s user experience. Using feature buckets should help to:

  1. expose which buckets have too many or too few projects — helps to identify blind spots
  2. identify which features or ideas don’t fit into your roadmap and can be removed
  3. enable a meaningful conversation about the capacity assignment for each bucket and your team.

NOTE: This technique is not helpful to determine which feature is more valuable to do first.

Roadmap Example

Below is an example roadmap that visually represents buckets (rows) and their status (columns). This can easily be changed to show dates in the rows if that’s the type of roadmap your team prefers.


Product Prioritization: Stacked Ranking

Using rankings to facilitate discussions

Welcome to ‘Product Prioritization’ — our series of tools, tips, and best practices for the skilled Product Manager to determine priorities and get results. Each month, we will highlight one of the dozens of popular methodologies and explain how to use it.

For our second installment, we take a look at stacked ranking, first popularized by Jack Welch at GE in the 1980s.

At Left Travel, we use stacked ranking when our team is looking for a quick and dirty list of priorities. Whether it’s a list of high-level sprint goals or which beer to buy for beer-o-clock, we’ve found this works best if the items in the list aren’t too complex.

What is stacked ranking?

A widely used prioritization technique, stacked ranking is used across multiple industries. At its most basic level, stacked ranking is the act of taking your list of items (ideas, stories, epics, etc.) that needs prioritization and ranking them from the most important (top of the stack) to the least important (bottom of the stack). That’s it — easy right?

The answer is yes and no. While the prioritization technique is simple in practice, it relies on qualitative data and opinions, which may not align with user value.

Tips and Tricks

1. Question the order:

Whether you created the list, or you’re reviewing it, it is important to ask questions about the reasoning behind the order of items to avoid bias.

Questions to consider:

  1. Why is the top idea the most important?
  2. Why is the bottom idea the least important?
  3. How much more/less important is the idea in the middle than the top/bottom idea?

2. Rank individually, discuss together:

To avoid opinions being swayed during your team’s initial stacked ranking process, have each team member rank the list on their own and then compare the results. When there are differences between the lists, encourage a discussion to discover why.

At Left Travel this has led to great collaboration and knowledge sharing, particularly when someone on our team specializes in a certain data set.

By using stacked ranking, team members feel empowered to give their opinions on the ordering. When the team comes together, it makes for an insightful conversation about why there are differences between everyone’s ranks.
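
If team members rank in a shared sheet, it’s easy to surface the items with the biggest disagreement and start the discussion there. A minimal sketch with two hypothetical rankers:

```python
# A minimal sketch of "rank individually, discuss together": compare two
# people's stacked ranks and surface the biggest disagreements first.
# The items, names, and orderings are hypothetical.
alice = ["Checkout redesign", "Price alerts", "Dark mode", "CSV export"]
bob = ["Price alerts", "CSV export", "Checkout redesign", "Dark mode"]

def positions(ranking):
    return {item: rank for rank, item in enumerate(ranking, start=1)}

a, b = positions(alice), positions(bob)
by_disagreement = sorted(a, key=lambda item: abs(a[item] - b[item]), reverse=True)

for item in by_disagreement:
    print(f"{item}: Alice #{a[item]} vs. Bob #{b[item]} (gap {abs(a[item] - b[item])})")
```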

3. Get feedback:

Due to the opinion-based nature of stacked ranking, it is important to solicit feedback from a wider group than your immediate team. Try circulating the list to other internal peers and stakeholders and ask if they feel differently about the ranking. Driving discussion is a quick way to get feedback and helps mitigate opinion bias.

4. Individual use:

Stacked ranking is also great for prioritizing individual daily tasks that feed into your larger company objectives. Online product management tools like Trello and Asana are helpful platforms for sharing your individual task list with your team.