Tag: software estimation

  • The Kanban Story: Coarse-Grained Estimation

    Recently I told you about the screwed-up way we chose to measure lead time in our team. Unfortunately, I also promised to share some insight into how we use lead times to get some (hopefully reliable) estimates. So here it goes.

    Simplifying things a bit (but only a bit), we measure development and deployment time and call it lead time (and we don’t feel bad about that at all). So how do I answer our customers when they ask when something will be ready?

    That’s pretty easy. If we’re talking about a single feature and there’s no other absolutely-top-priority task, I take a look at the board to track down the developer who will be first to complete his current task and discuss with him when he expects to finish. Then I know when we can start working on the new, super-important feature ordered by the client. From there it’s enough to add two weeks (our average lead time) and make some effort to look oh-so-tired, as if I’d just completed the very difficult task of estimation. Oh, and I need to tell the customer when they’re going to get the darn feature too, of course.
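    For what it’s worth, the whole trick fits in a few lines. Here’s a minimal sketch – the developer names, the dates and the 14-day lead time are made-up examples, not our real data:

        from datetime import date, timedelta

        AVG_LEAD_TIME = timedelta(days=14)  # our historical average lead time

        # When each developer expects to finish their current task
        expected_finish = {
            "dev_a": date(2010, 3, 8),
            "dev_b": date(2010, 3, 3),
            "dev_c": date(2010, 3, 5),
        }

        start = min(expected_finish.values())  # the first developer to free up
        promise = start + AVG_LEAD_TIME        # start date plus average lead time
        print(f"We can start on {start} and deliver around {promise}")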

    This, however, happens pretty rarely. We try to keep our MMFs (Minimal Marketable Features) true to their name, which means they’re usually small. This also means the situation described above, when the client wants just one feature from us, is pretty much non-existent. That’s why you might have noticed I didn’t take the size of the feature into consideration in that scenario. In real life we usually talk about bigger pieces of functionality. What we do then is split the scope into small MMFs and use two magic parameters to get our result.

    One of the parameters you already know – it’s the average lead time. However, one more is needed. It takes 13 days for a feature to get from the left side of the board to the right, but how many features are on the board at the same time? I took a crystal orb and it told me that on average we have 4.5 MMFs on the board. OK, I actually did a simple analysis of our completed MMFs over a longer period of time and got this result, but doesn’t a crystal orb sound so much better?

    Now, the trick I call math. On average we can do 4.5 MMFs in 13 days. Since I have, let’s say, 10 features on my plate, I have to queue them. If I worked with iterations, it would take 2.2 iterations (10/4.5), which basically means 3 iterations. But since we don’t use time-boxing, I can take 2.2, multiply it by my 13-day lead time and get roughly 30 days. Now, I don’t start with an empty board, so I have to add some time to let my team finish their current tasks. On average that’s half a lead time, so we should be ready in, say, 37 days.

    And yes, this is a rough estimate, so I’d probably go with 40 days to avoid being blamed for delivering an estimate which looked precise (37 looks very precise) but was just coarse-grained.
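    Put together, the whole recipe fits in a few lines of Python. A sketch only – the inputs are the averages from this post, and rounding up to the nearest ten days is merely my way of making the number look as coarse as it really is:

        import math

        avg_lead_time = 13   # days for an MMF to cross the board
        avg_wip = 4.5        # average number of MMFs on the board at once
        backlog = 10         # MMFs queued up for the new functionality

        queue_time = backlog / avg_wip * avg_lead_time  # ~29 days of queued work
        warmup = avg_lead_time / 2                      # board isn't empty at the start
        raw_estimate = queue_time + warmup              # ~35-37 days

        # Round up to a deliberately imprecise-looking number
        quoted = math.ceil(raw_estimate / 10) * 10      # -> 40 days
        print(f"raw: {raw_estimate:.1f} days, quoted: {quoted} days")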

    That’s basically what I went through before telling my CEO we’re going to complete the management site for the product we work on in 3 months. Well, actually I haven’t told him yet, but 3 months it is. No less, no more.

    Although this post may look as if it were loosely coupled with the Kanban Story, it definitely belongs there. So go, read the whole story.

  • The Kanban Story: Initial Methodology

    You already know what our team looked like and what concerns I had about implementing plain old Scrum. You also know we chose a “take this from here and take that from there” kind of approach to running our projects. Our base method was Scrum, but we were far, far away from adopting it the orthodox way.

    What we finally agreed on within the team can be described in several rules:

    1. Planning
    For each application we decided at the beginning what we wanted to see in the next version. We tried to limit the number of features but didn’t set any limits on how long development should take. The longest cycle was somewhere between three and four weeks.

    2. Requirements
    Once we knew what we wanted to do, we created user stories and non-functional requirements. Each session involved the whole team, even those who weren’t going to develop that specific application. As usually happens, the scope kept changing as we discussed different aspects of the planned work. We added (more often) or cut (less often) features. We built a coarse-grained architecture as we discussed non-functional requirements. At the end of the day we had a whiteboard full of stories, which were the starting point for development.

    3. Tasks
    Since user stories were rather big, we split them into smaller pieces – development tasks. This changed the view from feature-wise to code-wise. Development tasks were intended as small pieces of work (ideally no bigger than 16 hours) which were complete wholes from a developer’s point of view. Ideally, a single user story should have several tasks connected to it, and a single development task should be connected to only one user story or non-functional requirement. It wasn’t a strict rule, however, and sometimes we had a many-to-many relation, since specific changes in architecture (a single development task) affected many usage scenarios (many user stories). Development tasks should make as much sense as possible to developers, so they were defined by the developers themselves – by those who were going to do the job. Tasks were stored in a system which was used company-wide (there’s a rough sketch of this structure after these rules).

    4. Estimation
    At the very beginning we had no strict deadlines, but we estimated anyway. There were two goals: to learn to estimate better while there weren’t huge consequences for being wrong, and to get into the habit of estimating all planned work. Our estimates were done against development tasks, not user stories. The main reason we went this way was that tasks were smaller, so estimates were naturally better than they would have been against user stories. A specific, pretty non-agile thing we did with estimation was the rule that each task was estimated by the developer who would be doing it. More on this in a future post.

    5. Measuring progress and work done
    Current progress could be verified by checking the actual state of the tasks connected with a specific app. Nothing fancy here. What mattered was writing down how much time it took to complete each task. When completed, each task had two values – the time estimated to complete it and the time it really took. At the end of the day we knew how flawed our estimates were. Equally important was the official announcement that no one would be punished in any way for poor estimates.
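    For the curious, here’s a rough sketch of how rules 3–5 could hang together in code. The class names, fields and numbers are mine for illustration – they’re not taken from the company-wide system we actually used:

        from dataclasses import dataclass, field
        from typing import List, Optional

        @dataclass
        class Task:
            name: str
            estimated_hours: float                # set by the developer doing the job
            actual_hours: Optional[float] = None  # filled in once the task is done

        @dataclass
        class UserStory:
            title: str
            tasks: List[Task] = field(default_factory=list)  # usually one-to-many

        story = UserStory("User can reset password", [
            Task("Reset-token generation", estimated_hours=8),
            Task("Reset e-mail and form", estimated_hours=12),
        ])

        # Once the work is done, compare estimates with reality
        story.tasks[0].actual_hours = 11
        story.tasks[1].actual_hours = 12
        for t in story.tasks:
            if t.actual_hours is not None:
                ratio = t.actual_hours / t.estimated_hours
                print(f"{t.name}: {t.estimated_hours}h estimated, "
                      f"{t.actual_hours}h actual ({ratio:.0%})")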

    That was pretty much all. No stand-ups, since we were all sitting in one room and discussed issues and work progress whenever someone felt like it. No Scrum Master since, well, we weren’t doing sprints, and within a specific project things were rather self-organized. A kind of Product Owner emerged, impersonated by me, as we needed a connector between the constantly changing business environment and the team working on different projects. You could say we needed one to bring some chaos to our work, which would be equally true.

    We were going to build a few small projects that way. At the same time we were prepared to improve the way we create our software as soon as we noticed flaws and found a way to fix them.

    Keep an eye on the whole Kanban Story.

  • Sure Shot Method to Improve Quality of Your Estimates

    That’s the old story: we suck at estimating. Our schedules tend to be wrong not only because of unexpected issues but mostly because we underestimate the effort needed to complete tasks. There’s one simple trick which improves the quality of estimates. It’s simple. It’s reliable. It’s good. On the minus side, you need some time to execute it.

    Split your tasks into small chunks.

    If a task takes more than a couple of days, it’s too big – split it. If it’s still too big, do it again. Repeat. By the way, that’s what agilists have arrived at over the years – the initial size of a user story was intended to be a few weeks of work; now it’s usually a few days at most.

    Why does it work?

    • Smaller tasks are easier to process in our minds. As a result, we understand better what we’re going to do. As a result, we judge better what it will take. As a result, our estimates are better.

    • Smaller tasks limit uncertainty. In a smaller room there are fewer places to hide. The fewer places to hide, the fewer unpredicted issues. The fewer unpredicted issues, the more exact the estimates.

    • Small tasks mean small estimates. A month-long task should be read as a “pretty long, but it’s hard to say exactly how long” task. Bigger numbers are just less precise. Splitting tasks yields smaller chunks of work, smaller chunks of work yield small estimates, and small estimates mean more precise estimates – the quick simulation below illustrates the effect.
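    To see why the errors behave this way, here’s a back-of-the-envelope simulation. The error model is entirely made up for illustration – it only shows that independent errors on small chunks partially cancel, so even with the same per-task error the split-up work is more predictable:

        import random

        random.seed(1)

        def noisy(estimate, sigma=0.5):
            """Actual time = estimate times a random lognormal error factor."""
            return estimate * random.lognormvariate(0, sigma)

        TRIALS = 10_000
        one_big = [noisy(100) for _ in range(TRIALS)]
        ten_small = [sum(noisy(10) for _ in range(10)) for _ in range(TRIALS)]

        def relative_spread(xs):
            """Standard deviation of outcomes relative to their mean."""
            mean = sum(xs) / len(xs)
            var = sum((x - mean) ** 2 for x in xs) / len(xs)
            return (var ** 0.5) / mean

        print(f"one 100h task: +/-{relative_spread(one_big):.0%}")    # roughly +/-53%
        print(f"ten 10h tasks: +/-{relative_spread(ten_small):.0%}")  # roughly +/-17%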

    As a bonus, during the task-splitting exercise you gain a better understanding of what should be done, and you catch early some problems which would otherwise appear much later and be more costly to fix.

    And yes, the only thing you need is some time.

  • What’s the Use of Wild Guesses Instead of Proper Estimation?

    OK, this one is controversial. Software estimation. How to do it well, or well enough? First of all, when talking about the subject we can all learn a lot from Glen Alleman, who brings quite an uncommon perspective to the area, as he’s based in industries which tolerate missed deadlines with much less patience than the average.

    Now, what’s the point? If you asked me how software estimation should be done, I’d direct you to Steve McConnell’s book on the subject. Believe it or not, he didn’t write anything good (if anything at all) about the role of wild guessing in software estimation. Which doesn’t make me avoid wild guesses at all times.

    Actually sometimes I make good use of them.

    Now, go flame me.

    What’s the use of wild guesses instead of proper estimation? Wild guesses don’t have to be so wild if you ask the right person. If I ask my development manager how long something would take, and it’s a piece of software he’s fairly familiar with, in the vast majority of cases he’ll come up with some guess. It should be reasonable enough to judge whether the job would take 9 months or just a few weeks.

    I don’t consider that a full-blown estimate. It’s hard to use it as a base for a schedule, but it’s still valuable feedback for judging whether and when your team is capable of taking on the project, or for deciding if the whole thing is worth further presales effort. If you need to develop a bunch of tools which your competitors already have in their product portfolios, chances are you won’t match their schedules and/or prices. If you add that your team is backed up for the next half a year, it’s probably not worth investing effort in more than a first coarse-grained approximation.

    Yes, every wild guess should be treated very carefully, since after all it’s, well, just a wild guess, not an estimate done by the rules. However, proper estimation is quite an effort, and sometimes you need something rough but quick instead of exact but demanding.

    And remember: don’t ever, ever allow your wild guesses to be sent to the sales department.