Pawel Brodzinski on Software Project Management

Estimation Quality


OK, so I'm at yet another agile event. I'm sitting there in the last row and a guy on the stage starts covering estimation. That's interesting, I think, maybe I'll learn something. After all, estimation is something that bothers me these days.

Planning poker, here it comes. Yawn. People would pull a card with their estimate, yadda, yadda, yadda, they'd discuss and agree on a story point value. Yawn. The distribution of estimates would be normal.

Wait, now the guy has my attention.

So apparently he knows the distribution of the estimates. Good. How about checking what the distribution of lead times for historical data is? Because, you know, there are people who have done that, like Troy Magennis, and it seems that this distribution isn't normal (Troy proposes a Weibull distribution as the closest approximation).
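To get a feel for what Troy's observation looks like in practice, here's a minimal sketch in Python (using NumPy and SciPy, with made-up lead times) that fits a Weibull distribution to historical data:

```python
import numpy as np
from scipy import stats

# Hypothetical lead times (in days) pulled from whatever tracking tool you use.
lead_times = np.array([1, 2, 2, 3, 3, 4, 5, 5, 6, 8, 9, 12, 15, 21, 34])

# Fit a Weibull distribution, with the location fixed at 0 since lead times start at zero.
shape, loc, scale = stats.weibull_min.fit(lead_times, floc=0)
print(f"Weibull shape={shape:.2f}, scale={scale:.2f}")

# A shape parameter well below ~3.6 means the distribution is strongly right-skewed,
# i.e. nothing like the symmetric bell curve the estimates supposedly follow.
```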

In short: if your estimates have a different distribution than historical data for the tasks that the team has completed, the estimates are crap.

Back to the presentation. The guy points out how estimates for big tasks are useless in his context. Well, yes, they are. All the estimates in his context are crap, from what I can tell. Then he points out that we should be splitting tasks into smaller ones so planning poker makes sense and produces more reasonable estimates.

Um, sorry, sir, but you got it wrong. If you are using estimates that are distributed differently than historical lead times, you are overly optimistic, ignorant, or both.

How about this: I will not poke you in the eye with my pen, but you will check the distribution of the estimates and the past lead times. Then you will recall some basic stuff from a statistics course and stop selling the crap that we can't possibly answer the question: "when will it be done?"

OK, rant aside. I'm not saying that coming up with an idea of how long it will take to build something is easy. Far from it. I'm just saying that there are pretty simple things that can be done to verify and improve the quality of the estimates.

Once you've got the estimates, look at their distribution. If it is different than what you know of the historical data (because you've checked the historical data, haven't you?), the estimates don't bear much value.
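If you want something a bit more formal than eyeballing two histograms, a two-sample Kolmogorov–Smirnov test is one simple way to check whether the estimates and the historical lead times could plausibly come from the same distribution. A minimal sketch, again with made-up numbers:

```python
import numpy as np
from scipy import stats

# Hypothetical historical lead times (days) and fresh estimates (days) to compare.
lead_times = np.array([1, 2, 2, 3, 3, 4, 5, 5, 6, 8, 9, 12, 15, 21, 34])
estimates = np.array([3, 4, 4, 5, 5, 5, 5, 6, 6, 7, 5, 4, 6, 5, 5])

# Two-sample Kolmogorov-Smirnov test: a small p-value means the two samples
# are unlikely to come from the same distribution.
statistic, p_value = stats.ks_2samp(estimates, lead_times)
print(f"KS statistic={statistic:.2f}, p-value={p_value:.3f}")
```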

In fact, you can do better. Once you know the distribution of historical lead times, you may use that to model estimates for a new project. You don't really need much more than a basic work breakdown. You can take the worst-case scenario from the past data, assume the best-case scenario is 0 days, and let the math do its magic. No guesswork whatsoever needed.
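One way to read that paragraph is as a tiny Monte Carlo simulation: for each task in the work breakdown, draw a duration somewhere between 0 and the historical worst case, sum the draws, repeat many times, and look at the percentiles. A minimal sketch of that idea, assuming the crudest possible model (a uniform draw per task and tasks done one after another, all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

worst_case = 34          # worst historical lead time, in days (hypothetical)
number_of_tasks = 40     # tasks in the work breakdown (hypothetical)
runs = 10_000            # Monte Carlo iterations

# For every run, draw each task's duration between 0 and the worst case,
# then add them up to get one possible project duration.
totals = rng.uniform(0, worst_case, size=(runs, number_of_tasks)).sum(axis=1)

# Read the forecast off the percentiles instead of a single-point guess.
for p in (50, 85, 95):
    print(f"{p}th percentile: {np.percentile(totals, p):.0f} days")
```

Summing the draws assumes the tasks are done serially; with more historical data you'd swap the uniform draw for samples from the actual lead time distribution, but even this crude version answers "when will it be done?" as a range rather than a single guess.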

The credit for this post should go to Troy Magennis, who opened my eyes to how much use we can make out of the limited data we have at hand.
