Tag: feature size

  • Limit Work in Progress Early

    We’ve just started another project. One of the things we set up at the very beginning was a Kanban board. It wouldn’t be a real Kanban board if it didn’t have work in progress limits. There are two common approaches I see out there for setting WIP limits.

    One is to work for some time without WIP limits just to see how things go, gather some metrics and get a decent idea of what work in progress limits are suitable in the context.

    The other approach is to start with pretty much anything that makes sense, like one task per person plus one extra to allow some flexibility.

    Personally, I like the latter approach. First, no matter which approach you choose, your WIP limits are bound to change. There’s no freaking chance you’ll get all the limits right with the first approach. And even if you did, the situation would change and so should the WIP limits.

    Second, you get a lot of value from limiting work in progress early, even if the limits are set in a quick and dirty way. In fact, this is exactly why I’m biased toward this approach.
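    To make that rule of thumb concrete, here is a minimal sketch in Python of a board column enforcing a “one task per person plus one” limit. It’s purely illustrative; the class and names are mine, not taken from any Kanban tool.

      # Minimal sketch of a WIP-limited Kanban column (hypothetical names).
      # Rule of thumb: limit = one task per person plus one for flexibility.

      class WipLimitExceeded(Exception):
          pass

      class Column:
          def __init__(self, name, team_size):
              self.name = name
              self.limit = team_size + 1  # one per person, plus one spare
              self.tasks = []

          def pull(self, task):
              """Pull a task into this column; refuse once the limit is hit."""
              if len(self.tasks) >= self.limit:
                  raise WipLimitExceeded(
                      f"'{self.name}' is at its WIP limit ({self.limit}); "
                      "finish or unblock something before pulling more work"
                  )
              self.tasks.append(task)

      # With a team of three the limit is four, so the fifth task forces
      # exactly the kind of conversation described below.
      testing = Column("testing", team_size=3)
      for task in ["T1", "T2", "T3", "T4"]:
          testing.pull(task)
      testing.pull("T5")  # raises WipLimitExceeded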

    In our case the last stage before “done” was approval. We had an internal client (me), so we didn’t expect any issues with this part. We weren’t far into the project when the first tasks started stacking up as ready for approval.

    When I initiated the discussion about us hitting the limit in testing (the stage just before approval), I was quickly rebutted: “maybe you, dear client, do your goddamn job and approve the work that is already done, so we can continue with our stuff.” Ouch. That hurt. Yet I couldn’t have asked for a better reaction.

    Luckily, I asked about a staging environment where I could verify all the stuff we’d built. And you know what? There was none. Somehow we had just forgotten about it. So I eagerly attached blockers to all the tasks that were waiting for me, and the team could focus on setting up the staging environment instead of building more stuff.

    An interesting twist in this story is that setting up staging proved way trickier than we thought, and we found a couple of flaws in the way we managed our demo servers. The improvements we’ve made to our infrastructure go way beyond the scope of the project.

    There are two lessons here. One is that implementing WIP limits is a great knowledge discovery tool that works even if the limits aren’t yet right. Well-tuned WIP limits are awesome, as they generate slack time, but even if you aren’t there yet, any reasonable limits should help you discover major problems with your process.

    There’s another thing too, and it’s about task sizing. In general, the smaller the tasks, the more liquid the flow, and higher liquidity means you discover problems faster. If it had taken a couple of weeks to complete the first tasks, we’d have found out about the problem only after that time. With small tasks it took a couple of days.

    So if you’re considering whether to start limiting work in progress on day one of a new project, consider this post an encouragement to do so. You may also want to size the first few tasks small and make sure they go through the whole process so you quickly get a test run. Treat it as a way of testing the completeness of the process.

  • The Kanban Story: Standard-Sized Features

    It’s been some time since the previous chapter of the Kanban Story, but that doesn’t mean we’ve stalled. There’s an interesting concept I keep referring to, but I only recently realized I’d never really written about it.

    The concept is called standard-sized features.

    If you read about Kanban you’re bound to be familiar with the term Minimal Marketable Feature: a piece of functionality that is as small as possible yet still sellable to a potential customer. It’s a very nice concept, and it was exactly what we started with when we were adopting Kanban.

    However, after some time we realized we had sacrificed the marketability of our features in order to get them similar in size. This change wasn’t really planned; on occasion it just seemed more convenient to split the work that way.

    It might have been easier for us since we were never orthodox with MMFs. After all, how could you possibly market architectural changes or rewriting one of the integration interfaces from scratch? Anyway, that wasn’t the reason we came up with standard-sized features (should I call them SSFs?).

    SSFs were born because we needed to prepare estimates for fixed-price projects, even though for most of these projects preparing estimates was the only activity we performed. It’s the kind of situation where you don’t really want to invest a lot of time, yet you still want to come up with reasonable numbers in case you actually get the project and are expected to deliver it.

    In such an environment MMFs can be tricky. I can perfectly well market a feature that consists of a couple dozen lines of code if it’s the right code in the right place. On the other hand, for some features you may need tons of code before there’s finally something that could possibly be sold to anyone, assuming they’re partially crazy and would buy half-baked functionality.

    So yes, MMFs can differ vastly in size. And this renders your average measures, like average cycle time, pretty much useless. If the variation in cycle time is big, so that you may deliver in 2 days but also in 2 months depending on what kind of MMF it is, you can’t rely on the average or the median. And if you’re out of luck and get a bunch of huge MMFs, you’re totally screwed.

    The answer can be SSFs. With standard-sized features you don’t really care what you can market. Actually, marketability probably isn’t the concern for the majority of teams anyway, since we rarely sell our crap… I mean, products, directly. SSFs, however, let you deal neatly with estimation. You can use simple methods to come up with a number that will go into a formal agreement almost instantly, no matter how hard you try to explain to a salesman what a coarse-grained estimate really means.
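    To put numbers on the argument, here’s a small Python sketch with invented cycle times. It shows why an average is nearly meaningless for wildly varied MMFs yet usable for standard-sized features; all figures are illustrative, not measurements from our team.

      import statistics

      # Invented cycle times (in days) -- purely illustrative numbers.
      mmf_cycle_times = [2, 3, 4, 5, 60, 3, 45, 2, 4, 52]   # wildly varied MMFs
      ssf_cycle_times = [7, 9, 11, 8, 12, 6, 10, 9, 8, 10]  # standard-sized features

      for label, times in [("MMF", mmf_cycle_times), ("SSF", ssf_cycle_times)]:
          print(f"{label}: mean {statistics.mean(times):.1f} days, "
                f"stdev {statistics.stdev(times):.1f} days")

      # MMF: mean 18.0 days, stdev 24.0 days -- the average tells you almost
      #      nothing; a batch of huge MMFs wrecks any plan based on it.
      # SSF: mean 9.0 days, stdev 1.8 days -- a coarse estimate such as
      #      "20 features x ~9 days each" is defensible in a formal agreement.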

    Another nice trait of SSFs is that they make the flow smoother. If feature sizes differ a lot, you tend to have big tasks hanging around for a long time. That makes the team less flexible, as part of the team is dedicated to working on that specific task, and it consumes part of the limit for a longer time, which may yield blockers.

    One of the arguments against SSFs I’ve heard is that it’s pretty hard to learn how to get features standard-sized. If you wanted them all delivered in exactly 9 days, it would indeed be hard. But you don’t aim for such accuracy; variation between 6 and 12 days is completely enough. Anyway, the surprising thing, at least for me, was that we quickly became pretty good at splitting the work into standard-sized chunks. It looked like we got it right without even trying, so we just kept at it.

    Standard-sized features proved valuable for us, so you might consider them as one of the options when deciding what exactly you want on the sticky notes you put on the Kanban board.

    If you liked this story, make sure you check out the whole Kanban Story series.

  • Internal Audit: Lessons Learned

    Last week we had an internal audit in our team. If you automatically think “ISO 9001” when you hear “internal audit,” let me state up front that it had nothing to do with ISO. We have a few so-called quality requirements in the company that every team should follow, but they’re nothing like ISO procedures.

    The reason I was interested in how the audit would go is that we work pretty differently from the rest of the organization, so the quality requirements aren’t really adjusted to our specifics. I was also curious how our non-standard process would look in comparison to other teams’.

    I caught a few things which are worth sharing.

    Small Features versus Big Features

    Our currency for functionality is the MMF (Minimal Marketable Feature). “MMF” itself probably doesn’t tell you much, and I know different teams use different sizes for MMFs. In our case an average MMF can be coded by a single developer in about a week, so they are pretty small.

    On the other hand, in other projects the typical currency for functionality is the requirement, which is significantly bigger and, as such, less detailed. I’d say that on average it is a few times bigger than an MMF.

    Where’s the difference? I mean, besides the size itself. One very visible difference is that we do significantly less paperwork. With features small enough to be easily analyzed and decomposed by a product owner, a developer and a tester, we don’t need to write low-level design documents, test cases or user stories. We can spend more time doing the real work instead of creating a pile of documents that tend to become outdated as soon as you hit the save button.

    Frequent Deployment versus Not-So-Frequent Deployment

    I guess there aren’t many teams that still treat deployment as a one-time effort done at the end of the project. But still, frequent deployment is a painful thing, especially when developers work in a different team than QA engineers, and QA engineers work in a different team than release managers. The average project isn’t deployed biweekly. Not even close.

    By contrast, we deploy on average once every 9 days, and I mean calendar days here. Of course it didn’t look like that from the very beginning, and it cost us a bit to get there. I’d say it took us about half a year to reach this pace.

    Where’s the difference? I could tell you about time-to-market, but whether you need time-to-market counted in days or in months really depends on the project. The cool thing is elsewhere. With such a short cycle time we aren’t tempted to incur technical debt. We don’t leave bugs unfixed. We just don’t. After all, we’d be discussing a single low-priority bug, not a few dozen of them, so the cost of fixing it is so low that we don’t even start the discussion in the first place. Even if we hit a dead end, redoing the whole darn feature doesn’t take a couple of months – it’s just a few days, so it’s much easier to accept.

    Everyone on the Same Page versus Formalized Process

    We’re a small, co-located team, so we have fairly few communication issues. The process we follow is very simple and, even more importantly, was crafted by our own team. We’re all on the same page.

    Most teams in the company follow a more formalized process. There are different functional teams that are meant to cooperate, there are specific documents people working on projects expect from each other, and so on. On top of that there’s some office politics too, and the formalized process is used as a weapon.

    Where’s the difference? This one was pretty much a surprise for me, but it looks like our process comes across as mature and effective even though it isn’t recorded in any form. We just know what everyone should do. Whenever an obstacle appears along the way, people naturally try to help each other, as they see and understand what’s happening. And yes, I’m thrilled that no one forced me to write the process description down. A formalized process, on the other hand, often feels artificial to the teams that have to follow it, and they sometimes focus on complying with the process (or fighting it) instead of building software.

    What Does It Mean For You?

    I’m not trying to say you should now automatically cut every feature you work on in half (at least), deploy three times more frequently than you do now and throw every procedure out. That would probably end in chaos, which your managers might not appreciate.

    I’m not trying to say it was easy to get where we are now. It took great people, some experience, a lot of experimenting and enough time to get better at what we started doing. A year ago I would have had a hard time pointing out how our approach might be better than what you could find in the rest of the company.

    And I’m not trying to say you should read this advice in the “this over that” manner known from the Agile Manifesto. Not only is our team pretty different from the average team in the company, but the product is also far from the standard projects we build in the organization, so I’m comparing pretty different environments. I wouldn’t even say that every single team in the company would benefit from adopting our approach, although a few of them definitely would.

    These are just things I learned while talking with Ola, who audited our team (and many other teams in the company). For me these lessons weren’t obvious. I mean, the connection between frequent deployment and short time-to-market is pretty clear to anyone who understands what “deployment” and “time-to-market” mean, but I’d never really considered that a very short production cycle may help avoid technical debt too.

    And if you’re interested in what the audit result was, I’d say it was pretty good. I mean, everyone likes to hear some compliments from time to time, so we may want to ask for another audit soon.

  • Why We Don’t Write Test Cases Anymore

    Almost a year ago I shared advice to use test cases. Not because they are crucial during testing or dramatically improve the quality of the product (they are not), but because of the value you get when you create them.

    A confession (and yes, you’d guess it anyway if you read the title): we don’t write test cases anymore.

    We stopped using them, and the reason isn’t on the list of shortcomings of test cases. Actually, I was aware of those shortcomings a year ago and was all “test cases are great” anyway. What has changed, then?

    We dropped test cases as a side effect of implementing Kanban, though you can perfectly well use both if you like. In our case, one effect of switching to Kanban was making the pieces of functionality pushed (pulled, actually) into development smaller. Before the switch we had pretty big features, each split into several (8-15) detailed user stories. After the switch we have much smaller features, which would make 2 or 3 detailed user stories had we not dropped writing user stories altogether.

    And the reason for making features smaller was simple – smaller features, smoother and more flexible workflow.

    Initially we attached test cases to features, not user stories, because pretty often a single testing flow went through a few different user stories. I told you they were detailed. When the standard feature size went down, we realized there was much less value in preparing test cases.

    Once again: the main value of creating test cases is thinking through specific usage scenarios and looking for places forgotten during design. The more complex the feature, the bigger the chance something is screwed up. With small features, test cases lost much of their value for us, since we could locate and fix most problems instantly without any additional technique. And yet the effort needed to create and maintain test cases was still significant. So we dropped the practice.

    It looks like my advice is: make your features smaller, and then you’ll be able to drop user stories and test cases. And I must say it does make sense, at least for me. What do you think?