Friday, 11 November 2011

Quick backlog estimation

One thing that's been really painful in my previous experience is overall backlog estimation. The business owner often asks for it as an impact assessment, or simply to get a rough idea of how long a chunk of work is going to take.

The issue is that the chunk of work is usually split into hundreds of smaller chunks, each requiring a fair amount of analysis, design and perhaps UX work for its estimate to have any reasonable meaning. Teams often aren't given this time, yet the business still needs the overall chunk sized, and that sizing can matter a great deal.

Over the past few months we have used an approach to fast backlog estimation which, combined with a follow-up prioritization session (I will post about this in another blog post), has produced an acceptable outcome for both sides. For this to work it helps if your team has reached a good level of shared understanding about estimating requirements, and your backlog should contain a list of user stories (and, almost unavoidably, a list of epics).

Start with User Stories

Prepare by printing all your user stories (I'd print fresh cards rather than reuse any existing index cards) and have a big table available for the session. Ideally ask 2 dev pairs (or maybe 3 devs and a QA) to join you. Separate the user stories from the epics and work with the user stories first. Split the story cards into 2 piles and give each pair of team members one pile, asking each pair to work in one half of the table so they don't mix cards (for now).

Then ask them to group the user stories by size:

- Group A (middle of the table): stories that seem to be about the average size of a typical story.
- Group B (top): stories that seem much bigger than average (e.g. 2x or 3x).
- Group C (bottom): stories that seem very simple (e.g. 1/2 or 1/3 of the average).
- Group D (optional fourth group): stories with so many unknowns that it seems impossible to put them into any of the other groups.

After this phase is done, ask the pairs to switch sides of the table, review what the other pair has done, and mark any cards whose suggested sizing they disagree with. Have those discussions and move the cards as the team members agree. Once this is done, make sure to collect your cards carefully, keeping the groups intact, so you can update them later on.

Now the epics

Following a similar approach, hand each pair about half of your epic cards and ask them to sort them into the following groups:

- Group E (middle of the table): epics where it is relatively clear what needs doing, but it just seems like a lot of work.
- Group F (bottom): epics that seem relatively small.
- Group G (top): epics that seem fairly unclear or have lots of unknowns.

Again, ask the two pairs to work in separate halves of the table and then swap to review each other's groupings. Once this is done, thank the team members for their participation and, with all the collected cards, update your actual backlog in Excel or whatever electronic tool you're using.

This should take you between 25 and 40 minutes depending on your backlog size. We have consistently finished in under 30 minutes with backlogs of around 80-150 items.

Getting to numbers

Now this will depend on your specific team's estimation scale. I'll take an example where your average story size is 5. Note: leave any stories already estimated by the whole team in your usual estimation workshops as they are; you can use them to calculate your average story size. For any stories without estimates you can do the following:

- Group A: assign the average size, i.e. 5.
- Group B: assign 3 times your average size, which would be 15, or perhaps 13 if you're using Fibonacci numbers.
- Group C: assign an estimate of 2.
- Group D: assign 20 and give these to your BA/PO for further analysis; flag this somehow in your electronic tool.

There are two approaches you may want to use for the epics. The first is to look at them as gaps of missing stories: ask your BA/PO to estimate, as a range, roughly how many stories each epic may contain, and multiply by the average story size. For example, Gap A is likely to produce 5-8 stories, so I'd take the higher estimate of 8 and multiply it by my average of 5, which means Gap A gets a size of 40 for the time being.

Alternatively, if your BA/PO is not comfortable doing this for now, use a similar approach to the stories:

- Group E: assign 3 times the Group B estimate, e.g. 40.
- Group F: assign the Group B estimate, e.g. 13.
- Group G: assign something like 2x your Group E estimate, e.g. 80 or 100.
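The sizing arithmetic above can be sketched in a few lines of code. This is a minimal illustration, not real tooling: the card data, field names and point values are hypothetical examples using the numbers from this post.

```python
# Sketch of the group-to-points sizing described above.
# All card data and field names here are hypothetical examples.

AVERAGE = 5  # your team's average story size

# Story groups from the table exercise.
STORY_POINTS = {"A": AVERAGE, "B": 13, "C": 2, "D": 20}

# Epic groups, used only when the BA/PO can't yet estimate story counts.
EPIC_POINTS = {"E": 40, "F": 13, "G": 80}

def size_story(card):
    """Keep an existing team estimate; otherwise use the group's value."""
    return card.get("estimate") or STORY_POINTS[card["group"]]

def size_epic(card):
    """Prefer the BA/PO's story-count range (take the high end) times the
    average story size; fall back to the group's value."""
    if "max_stories" in card:
        return card["max_stories"] * AVERAGE
    return EPIC_POINTS[card["group"]]

stories = [
    {"name": "login page", "estimate": 8},    # already estimated by the team
    {"name": "password reset", "group": "A"},
    {"name": "reporting screen", "group": "B"},
]
epics = [
    {"name": "payments", "max_stories": 8},   # BA/PO thinks 5-8 stories
    {"name": "audit trail", "group": "G"},
]

total = sum(size_story(s) for s in stories) + sum(size_epic(e) for e in epics)
print(total)  # 8 + 5 + 13 + 40 + 80 = 146
```

The point of the sketch is simply that existing team estimates are never overwritten; the group values only fill in the blanks.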

Backlog size

Once you've done all the updating, your total backlog size will probably look enormous; obviously, the more epics you have, the bigger the number you'll get. Explain to your BA/PO that the epics need clarification, and that you need those questions answered and the sizes refined before you can have a meaningful backlog size. Your BA is likely to start analyzing all over the place, so suggest starting with these big items, alongside any risky items s/he might already be looking at. You could also review the remaining unanswered questions and gaps on a weekly basis, reiterating that the sooner those estimates are refined, the sooner you can get to a realistic size.

Interestingly enough, having epics in the backlog has not stopped us from running prioritization sessions with business owners - I will explain how in another blog post.

Once you agree that your backlog contains mostly items of known size, you can follow standard burn down/up practices to produce that much desired date or range. I'd also track the historic growth of your backlog size and apply it as a buffer, because new stories will keep being produced. Ideally you'd base this growth figure on a full release cycle.

So this one has already become bigger than I was hoping it would be. I hope it is useful and makes sense. I will try to take a picture next time we do this and attach it here. All feedback is welcome ;)
