Tag: cycle time

  • Don’t Limit Work in Progress

    I’m a long-time fan of visual management. Visualizing work helps to pick the low-hanging fruit: it makes the biggest obstacles instantly visible and thus helps to facilitate improvements. By the way, these early improvements are typically fairly easy to apply and have a big impact. That’s why we almost universally propose visualization as a practice to start with.

    At the same time a real game-changer in the long run is when we start limiting work in progress (WIP). That’s where we influence the change of behaviors. That’s where we introduce slack time. That’s where we see emergent behaviors. That’s where we enable continuous improvements.

    What’s frequently reported, though, is that introducing WIP limits is hard for many teams. There’s resistance against the mechanism; some perceive it as a coercive practice. Many managers find it really hard to go beyond the paradigm of optimizing utilization. And people naturally tend to do what they’ve always done: just pull more work.

    How do we address that challenge? For quite some time the best idea I had was to try it as an experiment with a team. Ultimately there needs to be team buy-in to make WIP limits work. If there is resistance against a specific practice, e.g. WIP limits, there’s not much point in enforcing it. It won’t work. However, why not give it a try for some time? There doesn’t have to be any commitment to continue after the trial.

    The thing is that it usually feels better to have less work in progress. There’s less multitasking and context switching. Cycle times go down, so there’s a greater sense of accomplishment. The pressure on the team often goes down as well.

    There’s also one thing that can show that the team is actually doing much better. It’s enough to measure the start and end dates of work items. That allows us to figure out both cycle times (how much time it takes to finish a work item) and throughput (how many work items are finished in a given time window).
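    As a minimal sketch (Python, with made-up dates purely for illustration), both metrics fall straight out of the recorded start and end dates:

```python
from datetime import date

# Hypothetical start/end dates for three finished work items.
items = [
    (date(2024, 3, 1), date(2024, 3, 5)),
    (date(2024, 3, 2), date(2024, 3, 4)),
    (date(2024, 3, 3), date(2024, 3, 10)),
]

# Cycle time: how long each item took from start to finish, in days.
cycle_times = [(end - start).days for start, end in items]
avg_cycle_time = sum(cycle_times) / len(cycle_times)

# Throughput: items finished per day over the observed window.
window = (max(end for _, end in items) - min(start for start, _ in items)).days
throughput = len(items) / window

print(avg_cycle_time)  # average cycle time in days
print(throughput)      # items finished per day
```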

    If we look at Little’s Law, or rather its adaptation to the Kanban context, we’ll see that:

    Average Cycle Time = Work in Progress / Throughput

    It means that if we want a shorter cycle time, a.k.a. quicker delivery and shorter feedback loops, we need either to improve throughput (which is often difficult) or to cut work in progress (which is much easier).
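    To make the arithmetic concrete, here is a tiny sketch in Python; the 12 items and 2 finished per day are assumed numbers, not from any real team:

```python
# Little's Law: average cycle time = work in progress / throughput.
def avg_cycle_time(wip, throughput):
    return wip / throughput

# Illustrative numbers (assumed): 12 items in progress, 2 finished per day.
print(avg_cycle_time(12, 2))  # 6.0 days on average

# Halving WIP halves the average cycle time at the same throughput.
print(avg_cycle_time(6, 2))   # 3.0 days on average
```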

    We are then in a situation where we understand that limiting WIP is a good idea and yet realize that introducing such a practice is not easy. Is there another way?

    One strategy that worked very well for me over the years was to change the discussion around WIP limits altogether. What do we expect when WIP limits are in place? Well, we want people to pull less work and instead to focus on finishing items rather than starting them.

    So how about focusing on these outcomes? It’s fairly simple; a short piece of guidance is enough. Whenever you finish work on an item, first check whether there are any blockers. If there are, attempt to resolve them. If there aren’t, look at any unassigned items, starting from the part of the board that’s closest to the done column (typically the rightmost part of the board), and gradually move toward the beginning of the value stream. Start a new item only when there’s literally nothing you can do with the ongoing items.
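    The guidance can be written down as a simple pull policy. A hypothetical sketch in Python, assuming an example process of development, code review, testing and acceptance, with items flagged as blocked or unassigned:

```python
# Columns ordered from closest-to-done back to the start of the value stream.
# The column names and item flags are assumptions for illustration.
COLUMNS = ["acceptance", "testing", "code review", "development"]

def next_action(board):
    """board maps column name -> list of items; each item is a dict
    with a 'name' plus optional 'blocked'/'unassigned' flags."""
    # 1. Blockers first: try to resolve (or chase) any blocked item.
    for column in COLUMNS:
        for item in board.get(column, []):
            if item.get("blocked"):
                return ("resolve blocker", column, item["name"])
    # 2. Then pick up unassigned work, starting nearest to done.
    for column in COLUMNS:
        for item in board.get(column, []):
            if item.get("unassigned"):
                return ("help finish", column, item["name"])
    # 3. Only when there is nothing else to do, pull new work.
    return ("pull new item", "backlog", None)
```

Note that an explicit WIP limit never appears in the policy; low WIP is simply what falls out of always preferring to finish over starting.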

    Kanban Board

    If we take a developer as an example, the process might look like this. Once they finish coding a work item, they look at the board. There is one blocked item, and it just so happens that the blocker is waiting for a response from a client. A brief chat within the team may reveal that there’s no point in pestering the client for feedback on the ticket for now. At the same time, it may be a good moment to remind the client about the blocker.

    Then the developer would go through the board, starting at the point closest to done. If the process in place was development, code review, testing, and acceptance by the client, the first place to look would be acceptance. Are there any work items where the client has shared feedback, i.e. where we need to implement some changes? If so, that’s the next task to focus on. It doesn’t matter whether the developer was the author of a ticket, although tickets that used to be theirs may be useful input for prioritizing.

    If there’s nothing to act upon in acceptance, then we have testing. Are there any bugs from internal testing that need to be fixed? Are there any tickets waiting for testing that the developer can take care of? Of course, we have the age-old discussion about developers not being willing to do actual testing. I would point out, however, that fixing bugs for fellow developers is an equally valuable activity.

    If there’s nothing in testing that can be taken care of, we move to code review and take care of anything that’s waiting for review, or implement feedback from reviews that have already been done.

    Then we move to development and try to figure out whether the developer can help with any ongoing work item. Can they pair with other team members? Or maybe there are obstacles where another pair of eyes would be useful?

    Only after going through all these steps does the developer move to the backlog and pull a new ticket.

    The interesting observation is that in the vast majority of cases there will be something to take care of. Just try to imagine a situation where there’s literally nothing that’s blocked, nothing that requires fixes or improvements, and nothing sitting in a waiting queue. If we ever face such a situation, we likely don’t need to limit work in progress any further.

    And that’s the whole trick. Instead of introducing an artificial mechanism that yields specific outcomes, we can focus on the outcomes themselves. If we can guide the team to adopt simple guidance for choosing tasks, we have effectively made them limit work in progress, and likely with much less resistance.

    Now does it matter that we don’t have explicit WIP limits? No, not really. Does it matter that the actual limits may fluctuate a bit more than in a case when the process has hard limits? Not much. Do we see actual improvements? Huge.

    So here’s an idea. Don’t focus on practices. Focus on understanding and outcomes. It may yield similarly good or better results, and with a fraction of the resistance.

  • (Sub)Optimizing Cycle Time

    There is one thing we take almost for granted whenever analyzing how the work is done. It is Little’s Law. It says that:

    Average Cycle Time = Work in Progress / Throughput

    This simple formula tells us a lot about ways of optimizing work. And yes, there are a few approaches to achieve this; there is more to it than the standard, most common approach of attacking throughput.

    A funny thing is that, even though improving throughput is a perfectly viable strategy to optimize work, the approach is often very, very naive and boils down to just throwing more people at a project. Most of the time that is plain stupid, as we know from Brooks’ Law that:

    Adding manpower to a late software project makes it later.

    By the way, reading The Mythical Man-Month (the title essay) should be a prerequisite for getting any project management-related job. Seriously.

    Anyway, these days, when we aim to optimize work, we often focus either on limiting WIP or on reducing average cycle time. Both have a positive impact on a team’s results. Cycle time in particular often looks appealing. After all, the faster we deliver the better, right?

    Um, not always.

    It all depends on how the work is done. One realization I had when I was cooking for the whole company was that I was consciously hurting my cycle time to deliver pizzas faster. Let me explain. The interesting part of the baking process looked like this:

    Assuming I had enough ready-to-bake pizzas, the first step was putting a pizza into the oven; then it was baked; then I pulled it out of the oven and served it. Since it was an almost standardized process, we can assume standard times for each stage: half a minute for loading the oven with a pizza, 10 minutes of baking, and a minute to serve the pizza.

    I was the only cook, but I wasn’t actively involved in the baking step, which is exactly what makes this case interesting. At the same time the oven was a bottleneck. What I ended up doing was protecting the bottleneck, meaning that I was trying to keep a pizza in the oven at all times.

    My flow looked like this: put a pizza into the oven, wait till it was ready, take it out, put another pizza into the oven, and only then serve the one that was baked. Basically, the decision-making point was the moment a pizza was baked.

    One interesting thing is that the decision not to serve a pizza instantly after it was taken out of the oven also meant increasing work in progress: I pulled another pizza in before making the first one done. One could say that I was another bottleneck, as my activities were split between protecting the original bottleneck (the oven) and improving cycle time (serving a pizza). Anyway, that’s another story to share.

    Now, let’s look at cycle times, shown as minutes elapsed since the whole thing started:

        Pizza   Work starts   Out of oven   Served   Cycle time
          1         0.0          10.5        12.0        12
          2        10.5          21.0        22.5        12
          3        21.0          31.5        33.0        12
          4        31.5          42.0        43.5        12

    You can see that each pizza was served a minute and a half after it was pulled out from the oven, even though the serving part was only a minute long. That was because I was dealing with another pizza in the meantime. The average cycle time was 12 minutes.

    Now, what would happen if I tried to optimize cycle time and WIP? Obviously, I would serve pizza first and only then deal with another one.

    Again, the decision-making point is the same, only this time the decision is different. One thing we see already is that I can keep a lower WIP, as I get rid of the first pizza before pulling another one in. Would it be better? In fact, cycle times improve.

        Pizza   Work starts   Out of oven   Served   Cycle time
          1         0.0          10.5        11.5       11.5
          2        11.5          22.0        23.0       11.5
          3        23.0          33.5        34.5       11.5
          4        34.5          45.0        46.0       11.5

    This time, the average cycle time is 11.5 minutes. Not a surprise, since I got rid of the delay connected to dealing with the other pizza. So basically I improved both WIP and average cycle time. Would it be better this way?

    No, not at all.

    In this very situation I had a queue of people waiting to be fed. In other words, the metric that was more interesting for me was lead time, not cycle time. I wanted to optimize people’s waiting time, i.e. the time from order to delivery (lead time), and not simply the processing time (cycle time). Let’s have one more look at the numbers, this time with lead time added.

    This is the scenario with protecting the bottleneck and worse cycle times:

        Pizza    Served   Cycle time   Lead time
          1       12.0        12          12.0
          2       22.5        12          22.5
          3       33.0        12          33.0
          4       43.5        12          43.5
        Average              12          27.75

    And this is the one with optimized cycle times and lower WIP:

        Pizza    Served   Cycle time   Lead time
          1       11.5       11.5         11.5
          2       23.0       11.5         23.0
          3       34.5       11.5         34.5
          4       46.0       11.5         46.0
        Average             11.5         28.75

    In both cases lead time is counted as time elapsed from the very first second, so naturally lead times grow with each consecutive pizza. Anyway, in the first case, after four pizzas, we have a better average lead time (27.75 versus 28.75 minutes). This also means that I was able to deliver all these pizzas 2.5 minutes faster, so the throughput of the system was better as well. All that with worse cycle times and bigger WIP.
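    These numbers can be reproduced in a few lines. A minimal sketch in Python, assuming the stated timings (0.5 minutes to load the oven, 10 to bake, 1 to serve) and that every pizza, including the last, waits half a minute behind an oven reload before being served:

```python
# Stage times in minutes and number of pizzas (as stated in the text).
LOAD, BAKE, SERVE, PIZZAS = 0.5, 10.0, 1.0, 4

def protect_bottleneck():
    """Reload the oven before serving, so serving starts LOAD minutes late.
    Returns (cycle time, lead time) per pizza; lead time runs from the
    first second of the whole run."""
    results, start, into_oven = [], 0.0, LOAD
    for _ in range(PIZZAS):
        out = into_oven + BAKE
        served = out + LOAD + SERVE  # the next pizza is loaded first
        results.append((served - start, served))
        start = out                  # work on the next pizza begins at pull-out
        into_oven = out + LOAD
    return results

def serve_first():
    """Serve each pizza before loading the next one (lower WIP)."""
    results, t = [], 0.0
    for _ in range(PIZZAS):
        served = t + LOAD + BAKE + SERVE
        results.append((served - t, served))
        t = served
    return results

for name, flow in [("protect bottleneck", protect_bottleneck),
                   ("serve first", serve_first)]:
    cycles, leads = zip(*flow())
    print(name, "avg cycle:", sum(cycles) / PIZZAS,
          "avg lead:", sum(leads) / PIZZAS)
```

Running it yields an average cycle time of 12 versus 11.5 minutes, but an average lead time of 27.75 versus 28.75 minutes, matching the tables above.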

    An interesting observation is that average lead time wasn’t better from the very beginning. It became so only after the third pizza was delivered.

    When you think about it, it is obvious: protecting a bottleneck makes sense when you operate in a continuous manner.

    Anyway, am I trying to convince you that the whole thing with optimizing cycle times and reducing WIP is complete bollocks and you shouldn’t give a damn? No, I couldn’t be further from that. My point is simply that understanding how the work is done is crucial before you start messing with the process.

    As a rule of thumb, you can say that lower WIP and shorter cycle times are better, but only because so many companies have such ridiculous amounts of WIP and such insanely long cycle times that it’s safe advice in the vast majority of cases.

    If you are, however, in the business of making your team work efficiently, you had better start by understanding how the work is being done, as a single bottleneck can change the whole picture.

    One thought I had when writing this post was whether it translates to software projects at all. But then I recalled a number of teams that should think about exactly the same scenario. There are teams where the very same people deal with analysis (prior to development) and testing (after development), or some similar split. There are teams with a jack-of-all-trades on board who always ask what the best thing to put their hands on is. There are also teams that use external people part-time to cover areas they don’t specialize in, both upstream and downstream. Finally, there are functional teams juggling many endeavors, trying to figure out which task is the most important at any given moment.

    So, as much as I stand by Kanban principles, I urge you not to take any advice as a universal truth. Understand why it works, where it works, and why it is (or is not) applicable in your case.

    Because, after all, shorter cycle times and lower WIP are better. Except when they’re not.