Category: kanban

  • Cumulative Flow Diagram

    One of the charts that gives you a quick overview of what’s happening in a project or product is the Cumulative Flow Diagram (CFD). On one hand, a CFD shows you typical information about the status of work: how much work is done, ongoing, and in the backlog; what the pace of progress is; and so on. This is the basic stuff. On the other hand, once you understand the chart, it will help you spot all sorts of issues a team may be facing. This is where the Cumulative Flow Diagram shows its real value.

    Before we move to the specific cases, let me start with the basic stuff (feel free to scroll down if you’re familiar with this part).

    Cumulative Flow Diagram

    The mechanism of a Cumulative Flow Diagram is very simple. On the vertical axis we have the number of tasks. On the horizontal one we have a timeline. The curves are basically the number of items in each possible state shown over time. The whole trick is that they are shown cumulatively.

    If the green curve shows stuff that is done, it will naturally grow over time – that’s simple. If the blue line shows tasks that are in progress, and we have a stable amount of work in progress, it will still go up as it adds to the green line. In other words, work in progress is represented by the gap between the blue and the green lines… We’ll come back to that in a while.

    Any line on a CFD represents a specific stage. In the simplest example we’d have items that are to be done, stuff that is ongoing, and things that are done.
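    The mechanism can be sketched in a few lines of Python. The snapshot data and state names below are made up for illustration; the point is only that each line’s height is a running total over the states stacked below it:

```python
from collections import Counter

# Hypothetical daily snapshots of a tiny board: task id -> state.
# States are ordered bottom-up, the way the bands stack on a CFD.
STATES = ["done", "ongoing", "backlog"]

snapshots = [
    {"A": "ongoing", "B": "backlog", "C": "backlog"},  # day 1
    {"A": "done",    "B": "ongoing", "C": "backlog"},  # day 2
    {"A": "done",    "B": "done",    "C": "ongoing"},  # day 3
]

def cfd_series(snapshots, states=STATES):
    """For each day, compute each line's height: the count of items in
    that state plus everything in the states stacked below it."""
    series = {s: [] for s in states}
    for day in snapshots:
        counts = Counter(day.values())
        running = 0
        for s in states:
            running += counts.get(s, 0)
            series[s].append(running)
    return series

series = cfd_series(snapshots)
print(series["done"])     # [0, 1, 2] - the "done" line only ever grows
print(series["backlog"])  # [3, 3, 3] - the top line is the total scope
```

    Plotted as stacked curves, these series give exactly the bands described above: the gap between two adjacent lines on any day is the number of items sitting in that stage.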

    For the sake of the rest of this article I’m going to use a simple process as a reference.

    Workflow

    We have a backlog, items that are in development or testing, and stuff that is done. For the purposes of these Cumulative Flow Diagram examples it doesn’t matter whether tasks in development are ongoing or done and waiting for testing. However, as we will see later, there may be some indicators that would make tracking these two stages separately valuable.

    With such a workflow our Cumulative Flow Diagram may look like this.

    Cumulative Flow Diagram

    First, the meaning of the lines. The green one shows how many items have been delivered over time. Everything between the blue and the green curves is stuff that is in testing. The area between the red and the blue lines shows how much stuff is in development (either ongoing or done). Finally, the top part, below the orange line, is the backlog – how many items haven’t been started yet.

    At a glance we can find a few important bits of information about this project. First, after a slow start the pace of delivery is rather stable. Pretty much the same can be said about work that is in progress – the pace is stable and things go rather smoothly. We know that the scope has increased a couple of times, which we can tell by looking at the jumps of the orange line. Finally, comparing where the green line (done) and the orange line (scope) are on the vertical axis right now, we can say that we’re not yet halfway through the project.

    Quite a lot of information for a few seconds, isn’t it? Well, there is more.

    Cumulative Flow Diagram

    On this CFD a few things have been shown explicitly. One is a scope change. We’ve discussed it on the previous chart too. Another is the space between the red and the green lines. It represents work in progress (WIP). Note that based on a Cumulative Flow Diagram alone you can’t learn precisely how much work in progress you have; it is some sort of approximation. A pretty good one, but only an approximation. It is a very good indicator of how WIP is changing over time, though. There is also an arrow labeled “prod. lead time,” where “prod.” stands for production. It roughly shows how much time we need to complete an item. Again, it shouldn’t be used as the ultimate lead time indicator, but it shows pretty well what lead time we’ve had and how it changes over time. Finally, we can extrapolate the slope of the done curve to roughly estimate the delivery time. Of course, if the scope changes the delivery time will change as well, thus the scope line (the orange one) is also approximated.
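    As a rough sketch of how these readings come off the chart, here is some Python working on made-up cumulative series (the numbers and function names are illustrative, not from any real tool): WIP is a vertical gap, lead time a horizontal one, and the delivery estimate an extrapolation of the done line’s slope.

```python
# Made-up cumulative counts, one value per day, mirroring two CFD lines:
# "started" is the red line (entered development), "done" is the green one.
started = [3, 5, 7, 8, 9, 10]
done    = [0, 1, 3, 5, 7, 9]

def wip(day):
    """WIP is the vertical gap between the two lines on a given day."""
    return started[day] - done[day]

def approx_lead_time(day):
    """Lead time is read horizontally: how many days until the done
    line catches up with where the started line is today."""
    target = started[day]
    for later in range(day, len(done)):
        if done[later] >= target:
            return later - day
    return None  # the done line hasn't caught up within the chart

def projected_finish(day, scope):
    """Extrapolate the done line's latest slope to guess the day it
    reaches the given scope."""
    slope = done[day] - done[day - 1]  # items delivered on the last day
    if slope <= 0:
        return None
    return day + (scope - done[day]) / slope

print(wip(2))                   # 4 items in progress on day 2
print(approx_lead_time(1))      # ~2 days from starting to delivering
print(projected_finish(5, 15))  # ~day 8, if the current pace holds
```

    Both readings are approximations for exactly the reason given above: a CFD tracks counts, not individual items, so the horizontal gap assumes items leave in roughly the order they arrive.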

    Now, we have even more information. Yay!

    You will rarely see such nice Cumulative Flow Diagrams though. And that’s actually good news. I mean, if a CFD looks plain and nice all the time, you can only learn so much from it. The real CFD magic is revealed when things don’t go so well.

    Let’s go through several typical cases.

    Cumulative Flow Diagram

    In this situation the spread between the red and the green lines is growing over time. It indicates a really bad thing – we have more and more work in progress. That sucks. Increased WIP means increased lead time as well. Not only is time to market longer, but it is also more and more difficult to deliver anything fast when we need it.
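    The link between WIP and lead time that this relies on is, in essence, Little’s Law from queueing theory: average lead time equals average WIP divided by average throughput. A trivial illustration with made-up numbers:

```python
# Little's Law: average lead time = average WIP / average throughput.
def avg_lead_time(avg_wip, throughput_per_day):
    return avg_wip / throughput_per_day

# With the delivery pace unchanged, doubling WIP doubles lead time.
print(avg_lead_time(8, 2.0))   # 8 items at 2 per day -> 4.0 days
print(avg_lead_time(16, 2.0))  # 16 items at 2 per day -> 8.0 days
```

    So a growing red-to-green gap with a flat delivery slope translates directly into longer and longer waits.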

    That’s not the worst thing. The worst thing is that with an increased amount of work in progress we also increase multitasking, thus we incur all the costs of context switching, making the team less efficient.

    Do we know that for sure? Um… no, not really. I’m making an assumption here that the team setup hasn’t changed, meaning that we have the same people spending a similar amount of time on the project, etc. If it were a Cumulative Flow Diagram for a team that is constantly growing, it would be just OK. The chart may also represent an increasing number of blocked tickets, which would definitely be a problem, but a different one than the one described above.

    In either case such a situation is a call for more analysis before jumping to conclusions. The potential reasons I offer with this and the following charts are simply the likely ones, not the only ones available.

    By the way, please read all the following remarks keeping that in mind.

    One more interesting observation about this Cumulative Flow Diagram is that we have no clues where the root cause of the increasing WIP lies. Neither the development nor the testing part seems to be steadily attached to any other line over time. Further investigation is a must.

    There are charts where we get some clues about which stage of the process is problematic.

    Cumulative Flow Diagram

    Whoa, this time the development part is really heavy compared to the testing part. What can we learn from it? We don’t have problems with testing. Also, if the definition of testing is “testing and bug fixing,” which is a typical approach, it doesn’t seem that quality of work is much of an issue either. If we were to point fingers, we’d point them at the development part, wouldn’t we?

    And we might be wrong. Of course, one thing that may be happening here is a lot of items in development with few of them ready to test. Another possibility, though, is that there is a lot of stuff waiting for testing but the availability of testers is very limited, and when they’re available they focus on finishing what they started.

    How can we tell? We can’t, unless we have more data. In fact, another line on the chart – one that distinguishes items in “development ongoing” from those in “development done” – would help. Without that, the CFD is only an indicator of a problem and a call for deeper analysis. After all, that’s what Cumulative Flow Diagrams are for.

    Another flavor of a similar issue is on the next CFD.

    Cumulative Flow Diagram

    We can find two things here. Let’s start with the more obvious one – the shape of the green line. It looks like stairs, doesn’t it? Stairs are typical when the last stage, which commonly is some sort of deployment, is done in cadences, e.g. weekly, biweekly, etc. Building on that, a stairs-shaped delivery line means that work in progress and lead time will vary depending on the moment of the release cadence you’re in. Maybe it’s time to make a step toward continuous deployment.

    There is one more thing here though. There is a pretty significant, and increasing, number of items that are in testing but don’t get released. The gap between the blue and the green lines is growing with each consecutive release.

    This one is a real issue. It may mean that we have a problem with quality and can hardly reach a state where an item has all its bugs fixed. It may mean that developers simply don’t pay much attention to fixing bugs but tend to start new stuff; at the same time testers follow up on new stories as they wait for bug fixes for the old ones anyway. It may mean that the code base is organized in a way that doesn’t allow releasing everything that is ready. Once again, the root cause is yet to be nailed down, but at least we know where to start.

    It seems we have more questions than answers. If you think that I’m not helping, it will be no different with the next example.

    Cumulative Flow Diagram

    This happens occasionally in almost every team. All the lines flatten out. What the heck? The first thing I do when I see that is check for public holidays or a company-wide event happening during that time. It may simply be a time when no one was actually working on the project, and there is a perfect explanation for that.

    Sometimes that is not the case though. This is when things get interesting. If everyone was at work but the chart still indicates that no one got anything done, it most likely tells a story about serious problems. A staging environment could have gone down, so everyone has been focusing on bringing it back alive. Another project could have needed help, and virtually everyone was sucked into it. There could have been a painful blocker that forced everyone on the team to refocus for a while.

    In either case, whatever it was, it seems to be solved already, as the team is back on track with their pace.

    Another flavor of such a scenario looks a bit different. It gives more hints too.

    Cumulative Flow Diagram

    There are two important differences between this and the previous Cumulative Flow Diagram. One is that, in this case, only two lines flatten out; the development line keeps up its healthy progress. The other is that the ends of both the green and the blue lines are as flat as a table top.

    The latter suggests that whatever the problem is, it isn’t solved yet. What might the problem be, though? It seems that the team has no problem starting development of new items. They can’t, however, start testing, and thus they clearly can’t deliver anything either. One probable hypothesis is that there is something seriously wrong with either the testing environment or the testers.

    In the first case it just isn’t technically possible to verify that anything works as intended. In the second, it seems something bad happened to our only tester (if there were more than one, there would likely be some progress). There is another hint too. Developers don’t seem to care. They just start, and possibly complete, their stuff as if nothing happened.

    I’d say that these guys have to deal with the issue first and then discuss how they collaborate. I sense a deeper problem here.

    In the same way the previous example indicates an issue in how people cooperate, the next one suggests a quality problem.

    Cumulative Flow Diagram

    The development line goes up in a stable and predictable manner. The testing curve? Not so much. And we’d better not mention the done line. Obviously we have more and more work in progress over time – we’ve covered this one before.

    But wait, then suddenly the magic happens and everything goes back on track. At the very end we have a decently small amount of work in progress and much stuff delivered. The smell here is how the done curve (and the testing curve to some extent as well) skyrockets at the end.

    How come such a pace was impossible earlier? I’d challenge the idea that the team suddenly became so fast. Of course, they might not have kept the board up to date and then, out of the blue, realized that they had way more finished items than they thought they had.

    A more likely scenario is that under pressure they just deployed whatever seemed at least remotely close to working. If that’s true, the problem isn’t solved at all and it’s going to come back to bite them in the butt. A curious reader may try to draw a picture of how the further part of the Cumulative Flow Diagram would look in this case.

    The next one is one of my favorites. I wonder why it is so far down the list. Oh well…

    Cumulative Flow Diagram

    This Cumulative Flow Diagram is surprisingly common. Let’s try to list a few things that we can find here. The development curve goes up aggressively. Halfway through, more than 80% of the items are started. Testing doesn’t go nearly that well. And delivery? Well, the start was crappy, I admit, but then it simply went through the roof. And it isn’t only a single day’s spike, which would suggest delivery of uncompleted stuff. Odds are that these items are properly done. I wouldn’t bet real money on that, but I wouldn’t be surprised if it were so either.

    Of course we have very high WIP in the middle of this CFD, but at both ends the gap seems to be significantly smaller.

    Ah, one more thing. It seems that at the end of the day we’ve delivered everything that was in the backlog. Yay!

    Now, what would the diagnosis be in this case? Time boxing! This is one of the classic visualizations of what typically happens over the course of an iteration. If a team is comfortable with planning and has a rather stable velocity, it’s likely that they’d fill the backlog with a reasonable amount of new features.

    Then, given no WIP limits within the time box, everyone does their own thing: developers quickly start many features, having no pressure other than the end of the iteration to finish stuff. Eventually, the backlog is cleared, so the team refocuses on finishing stuff, thus the acceleration in the latter stages of the process.

    If you pictured a series of such Cumulative Flow Diagrams attached one to another, you’d see a nice chain going north-east. You’d find many of these in Scrum teams.

    Another chart, despite some similarities to the previous two, suggests a different issue.

    Cumulative Flow Diagram

    In this case almost everything looks fine. Almost, as the done line barely moves above the horizontal axis. However, when it finally moves, it goes really high. What does it mean?

    My guess would be that the team might have been ready with the stuff but, for whatever reason, they wouldn’t deliver. In fact, this is one of the typical patterns in fixed-price, fixed-date projects, especially bigger ones. Sometimes the basic measure that is tracked is how many items are done by the production team. No one pays attention to whether it can actually be deployed to a production or even staging environment.

    Eventually, it all gets deployed. Somehow. The deployment part is long, painful, and frustrating though. The Cumulative Flow Diagram representation of that pain and those tears is that huge, narrow step of the done curve.

    Talking about huge and narrow steps…

    Cumulative Flow Diagram

    This chart has such a step too. We’ve already covered its meaning at the very beginning – it is a change of scope. In this case it is not the fact that such a change has happened that matters, but its scale and timing.

    First, the change is huge. It seems to be more than half of the initial scope added on top of it. Second, it happens all of a sudden and pretty late in the project. We might have been planning the end date and now, surprise, surprise, we are barely halfway through again.

    Now, this doesn’t have to be a dysfunction. If you were talking with the client about the change, or it is simply a representation of expected backlog replenishment, that’s perfectly fine. In either case it shouldn’t come as a surprise.

    If it does, well, that’s a different story. First, if you happen to work on a fixed-price contract… man, you’re screwed big time. It isn’t even scope creep. Your scope has just gotten on steroids and beaten the world record in the sprint. That hurts. Second, no matter the case, you likely planned something for these people. The problem is it’s not going to happen, as they have a hell of a lot of work to do in the old project, sorry.

    So far the lines on the Cumulative Flow Diagrams were going only up or, at worst, were flat. After all, you’d expect that given the mechanism of creating the chart. That’s the theory. In reality the following chart shouldn’t be that much of a surprise to you.

    Cumulative Flow Diagram

    Whoa! What happened here? The number of stories in testing went down. The red line representing stuff in development followed, but don’t be fooled. Since the gap between the red and the blue lines is stable, nothing really happened to the items in development; it’s only stuff in testing that was affected.

    Now, where did it go? Definitely not to the done bucket – the green line didn’t move. It didn’t disappear either, as the total number of items (the orange line) seems to be stable. A few items had to go from testing back to the backlog then.

    What could it mean? Without an investigation it’s hard to say. I have good news though. The investigation shouldn’t take long – such things don’t happen every other day. For whatever reason, stuff that was supposed to be past the code-complete milestone was marked as not started.

    I sense a major architectural or functional change. What’s more, it’s quite probable that the change was triggered by the tests of the aforementioned items. Unfortunately, it also means that we’ve wasted quite some time building the wrong stuff.

    Another flavor of that problem looks a bit scarier.

    Cumulative Flow Diagram

    Again, the total scope didn’t change. On the other hand, every other line took a nosedive. Once again, the amount of stuff in development doesn’t seem to be affected. This time the same can be said about items in testing. It’s the delivered stuff that went back to square one.

    It means that something we thought was done wasn’t. One possibility is that we were building the wrong stuff, exactly as in the previous example, only we discovered it later. We likely pay an order of magnitude bigger price for the late discovery.

    There’s more to it though. This Cumulative Flow Diagram shows that we likely have problems with acceptance criteria and/or collaboration with the client. I mean, how come something that was good is not so anymore? Either someone accepted it without checking or we simply don’t talk to each other. No matter the case, it sucks big time.

    Would the orange line never move down then? Oh yes, it would.

    Cumulative Flow Diagram

    I mean, besides the obvious case where a few items are removed from the backlog and the only line that moves down is the orange one, we may find this case. Using the technique perfected in the previous examples, we will quickly find that a few items that were in testing are… um, where are they actually?

    Nowhere. They’ve disappeared. They haven’t been completed, they haven’t been moved back. These items are no more.

    What does it mean? First, one more time we’ve been working on the wrong stuff (fools we are). Second, we’ve figured it out pretty late (but it could have been later). Third, the stuff doesn’t seem to be useful at all anymore.

    It’s likely that we realized we didn’t know exactly how to build this or that, and we asked the client only to learn that they don’t need either of those anymore. It’s also likely that we encountered a major technical issue and rethought how we tackle the scope, possibly simplifying the whole approach. Whatever it was, if we had figured it out earlier, it wouldn’t have been so costly.

    Finally, one more Cumulative Flow Diagram I want to share with you.

    Cumulative Flow Diagram

    Think for a while what’s wrong with this one.

    When compared to the previous charts it seems pretty fine. However, by now you should be able to say something about this one too.

    OK, I won’t keep you in suspense. In the first part of this CFD work in progress was slowly but steadily growing. However, it seems that someone noticed that, and the team stopped starting new stuff. You can tell by seeing how relatively flat the red line becomes somewhere in the middle of the chart.

    Given some time, testing and delivery, even though their pace hasn’t changed, caught up. Work in progress is kept at bay again, and the team’s efficiency has likely improved.

    As you can see, despite the past several examples, you can see the effects of improvements on Cumulative Flow Diagrams too. It’s just that a CFD is more interesting for learning that you have a problem than for finding confirmation that you’ve solved it. The latter will likely be pretty obvious anyway.

    Congratulations! You made it through what is probably the longest article in the history of this blog. Hopefully you now understand how to read a Cumulative Flow Diagram and what it may indicate.

    I have bad news for you though. You will rarely, if ever, see such nice CFDs as those shown in the examples. Most likely you will see an overlapping combination of at least a few patterns. This will likely make all the lines look like they were tracing a rollercoaster wagon.

    Fear not. Once you get what may be happening under the hood of the chart, you will quickly come up with good ideas and the right places to start your investigation. After all, a Cumulative Flow Diagram will only suggest a problem. Tracking it down and finding an antidote is a completely different story.

    However, if you’re looking for a nice health-o-meter for your team, a Cumulative Flow Diagram is a natural choice.

  • Kanban Landscape and Portfolio Kanban

    One of the reasons why the Kanban Leadership Retreat (KLRAT) is such an awesome event is that it pushes our understanding of Kanban to a new level. No surprise that after the retreat there’s going to be a lot of content related to our work in Mayrhofen published here.

    One of the sessions at KLRAT was dedicated to sorting out the Kanban landscape – how we position different Kanban implementations in terms of both depth and scale.

    Here’s the outcome of the session.

    Kanban Landscape

    To roughly guide you through what’s there: the axes are maturity and scale. Maturity differentiates implementations that are shallow and use only parts of Kanban from those characterized by a deep understanding of the principles and practices. Scale, on the other hand, represents a spectrum that starts with a single person and ends with all the operations performed by an organization.

    If we use scale as a starting point, we would start with Personal Kanban. If you ask me, I believe that the range of depths of Personal Kanban applications should be wider (thus a bit taller area highlighted in the following picture), but I guess it’s not me who should take the stance here.

    Kanban Landscape Personal Kanban

    Then we have a whole lot of different Kanban applications at the team and cross-team level. For most of the attendees this was probably the most interesting part, and I guess there will be much discussion about it across the community.

    Kanban Landscape Team Level Kanban

    For me though, a more thought-provoking bit was the last part (which, by the way, got pretty little coverage in the discussion): Portfolio Kanban. After all, this is my recent area of interest.

    Kanban Landscape Portfolio Kanban

    Since we didn’t have enough time to sort out all the details during the session, the final landscape is a sort of follow-up. Anyway, my first thought about the whole picture was that the range of depth of Portfolio Kanban implementations should be broader.

    Given the simple fact that limiting work in progress at the portfolio level is tricky at best, many teams start simply with visualization. Of course, I don’t have anything against visualization. In fact, I consider visual management, which is an implementation of the first Kanban practice, a tool that allows harvesting low-hanging fruit easily. In terms of improvements at the portfolio level, low-hanging fruit is rarely, if ever, a scarce resource.

    Having said that, Portfolio Kanban or not, I don’t consider visual management a very mature or deep Kanban implementation. That’s why my instant reaction was that we should cover less mature implementations at the portfolio level too.

    Kanban Landscape Portfolio Kanban

    That’s not all though. When we think about shallow Portfolio Kanban implementations, e.g. visualization and not much more, we should also think about the scale. It’s a quite frequent scenario that we start visualizing only a part of the work that is done across the organization, e.g. one division or one product line. From this perspective, such implementations are closer to multi-service scale than to a full-blown portfolio context.

    That’s why I believe Portfolio Kanban implementations should cover an even broader area, especially when we talk about low-maturity cases.

    Kanban Landscape Portfolio Kanban

    Finally, the picture now covers the different Portfolio Kanban implementations I know. I guess this might mean that, at some point in the future, we will go into more detail when talking about maturity and scale of Portfolio Kanban implementations. However, as for now, I think it is enough.

    Interestingly enough, the area I propose for portfolio-level implementations covers much of the whitespace we had in the picture. It is aligned with my general perception of Kanban as a method that can be scaled very flexibly throughout a broad spectrum of applications.

  • All Sorts of Kanban

    Some of you who pay more attention to what is happening in the Lean Kanban community may have noticed that there’s an ongoing discussion about what Kanban is or should be. It’s not about what we use Kanban for, but how exactly we define Kanban.

    Another incarnation of this discussion was started by Al Shalloway with his post on how he typically approaches Kanban implementations and his explicit statement that he doesn’t support the Kanban method anymore.

    Al Shalloway Kanban

    Obviously it sparked a heated discussion on Twitter, which is probably the worst possible medium for such a conversation. I mean, how precisely can you explain yourself in 140 characters? That’s why I didn’t take part in it. However, since I do care about the subject, here is my take on the whole thing – not just Al’s comments but the whole dispute altogether.

    Let me start with the very old discussion about Scrum. I’ve always been a fanboy of ScrumBut. In fact, of all sorts of ScrumButs. Not that I don’t see the risks attached to using only part of a method (but that’s another story). I just think that any method is some sort of generalization, and its application is contextual. It means that a specific implementation has to take into account a lot of details that were simply not available or known when the method was defined.

    Fast forward to the discussion around Kanban. My general attitude hasn’t changed. When I see more and more different approaches to Kanban adoption, I have the same feelings I had when I saw people experimenting with different Scrum implementations. The more the merrier.

    I totally respected, and learned from, the discussion about a different order of adoption of Kanban practices that happened last year at the Kanban Leadership Retreat. I totally respect, and learn from, Al’s approach as well. There’s no single flavor of Kanban.

    The part I don’t respect is dissing other approaches to Kanban adoption. If we want to discuss why something works or doesn’t work, let’s do that in a specific context. I guess we will quickly find out that different approaches are valid in different contexts.

    So what is the definition of Kanban? Personally, I base mine on the definition of the Kanban method as described in David Anderson’s book (including all the later work that will hopefully make its way into the second edition of the book). From my experience it is the most commonly used and most widely known. It is also covered extensively by experience reports and derivative work. So if we look for a benchmark, something we universally refer to when talking about Kanban, I think we already have one.

    At the same time, I’m perfectly OK when I see other flavors of Kanban. As long as we understand the method and why specific practices are there, we can start tweaking it so it fits our specific context better. This is exactly what is happening with Kanban these days. And my take is that it is still the same Kanban, no matter whether one prefers the interpretation of David Anderson, Mary Poppendieck, Henrik Kniberg, Hakan Forss, Al Shalloway, or someone else (we’ll see more of those, I’m sure). I’m OK with that, even though I have my personal preferences across the list.

    There are different sorts of Kanban, and that’s actually the best part. There is no one-size-fits-all approach, and there never will be. It is always contextual. And this is why we need diversity.

    Personally, I’d love to see these discussions run in such a spirit and not following the “my way is better than yours” line.

  • Kanban Leadership Retreat: Portfolio Kanban

    This year’s Kanban Leadership Retreat (KLRAT), as always, was awesome. In fact, despite sharing some critical feedback during the retro session at the very end of the event, I still consider it the best event of the year, hands down. This year I’ve come back home with the biggest homework ever: experiments to try out, ideas to play with, concepts to write down, etc. It means that coming back next year is a no-brainer for me.

    One area that you’ll hear a lot about here is Portfolio Kanban. And this was also the subject of my session at the retreat.

    The Goal

    One of my goals for KLRAT this year was pushing forward the discussion on Portfolio Kanban – answering questions like: what are the boundaries of the method? What gaps do we still need to cover to make Portfolio Kanban thorough? How are implementations at the portfolio level aligned with the method as we know it?

    During the session I wanted to talk about all these things. My expectation wasn’t to rule out all the doubts. I assumed that the outcome would include some answers as well as some new questions but overall would bring us to better understanding what Portfolio Kanban is.

    The Hypothesis

    I hypothesized that the Kanban method, with its principles and practices, defines well the approach we can use at the portfolio level. In other words, that we don’t need any other definition for Portfolio Kanban than the one we already have for Kanban. This is where we started.

    The Process

    I didn’t want to start with the Kanban definition and look for its possible applications (we would find those, wouldn’t we?). Instead, I asked participants for a brain dump of practices, actions, and techniques that we use to manage a portfolio. Then we sorted them into six buckets of Kanban practices. For example, a portfolio overview would be covered by visualization, so it went to the visualize bucket; limiting the number of ongoing projects would obviously go to the limit WIP bucket; etc.

    Of course there were oddballs too. These went to the side as another group. Actually, this group was the most interesting for me, as it pointed us to possible gaps.

    Everyone had a chance to briefly explain what they put on the wall and why it went to a specific bucket. Then we used the remaining time to discuss the challenges we see – which questions weren’t addressed and need further work.

    The Outcomes

    There are a few lessons learned from the exercise. I think the most important bit is that the hypothesis was confirmed. I am convinced that using the constraints of the Kanban method we can neatly define Portfolio Kanban.

    Of course the specific techniques will be different. Interpretation of practices will vary too. But the same is true with team level Kanban applications in different contexts.

    Things get more interesting once we go deeper into the details. Let’s look at the wall, which is the documentation of the session (click for a larger version).

    KLRAT Portfolio Kanban

    At a glimpse you can see that one bucket is almost empty. Surprisingly enough, it is the improvement / evolution bucket. Does it mean that we don’t see a match between the Kanban method and portfolio management in this aspect? Personally, I think it would be too quick to draw such conclusions.

    An observation made by Klaus Leopold was that quite a bunch of the stickies on the wall could be placed not only in their original spots but also in the improvement / evolution bucket. That’s obviously true. But then I can’t help thinking that if we were doing the very same exercise with Kanban on a team or service level, the end result would look different.

    I think that the answer is that evolution on a portfolio level involves different behaviors and different tools than on a team level. How exactly? Well, this is one of the loose ends we have after the session, so I don’t have an answer for this. Yet.

    Finally, a pretty obvious outcome of the session is the list of challenges we will have to address (or explicitly leave unaddressed) to finalize the definition of Portfolio Kanban.

    KLRAT Portfolio Kanban Challenges

    Although we aren’t there yet in terms of defining what Portfolio Kanban is going to be, we made a big step forward. And this was exactly what I wanted to achieve. I didn’t want more divergence as I believe we’ve had enough of that so far. I didn’t expect more convergence either. Not just yet. Also I think that the time at KLRAT is just too scarce to spend it discussing the exact definition.

    And this is how the end result of our work looked.

    KLRAT Portfolio Kanban

    All in all there are going to be follow-up steps – those that will bring convergence. If you are interested in further work on the subject stay tuned.

  • Maturity of Kanban Implementation and Kanban Kata

    One interesting bit of work that is happening in the Lean Kanban community is Hakan Forss’ idea of Kanban Kata. Kanban Kata is an attempt to translate the ideas of Toyota Kata to Kanban land.

    A simplified teaser of Kanban Kata is that we set a general goal, a kind of perfect situation we will unlikely ever reach. Then we set a short-term, well-defined, achievable step that brings us closer toward the goal. Finally, we deliberately work to make the step, verify how it went and decide on another step. Learn more about Kanban Kata from Hakan’s blog.

    Honestly, I was a bit skeptical about the approach. One thing that seemed very artificial to me was the advice on how we should define the short-term steps that lead us toward the ultimate goal. “Improve lead time by 10% in a month.” What kind of goal is that? Why 10%? Why in a month? How should we feel if we manage to improve it only by 8%? Should we cease further improvements when reaching the goal after a week?

    I know that these questions assume treating the goal literally and without much common sense, but you get what you measure. If you set such measurements, expect that people will behave in a specific way.

    I think the missing bit for me was applying some sort of relativity to Kanban Kata. Something that would address my aversion to orthodoxy. Something that would make the application context broader. I found the missing link in David Anderson’s keynote at London Lean Kanban Day.

    Interestingly enough, the missing link is my own work on the maturity of Kanban implementations. Yes, it seems I need David to point me to the usefulness of the stuff that I did.

    The context of my work on the depth of Kanban implementation is that instead of trying to use a sort of general benchmark, I simply used “where we would like to be” as a reference point to judge where we are right now. In short: I’m not going to try to compare any of my teams to, e.g., David Anderson’s team at Corbis. Instead I want any team to understand where their own gaps are and work toward closing them.

    Such an approach perfectly suits setting the goal of Kanban Kata, doesn’t it?

    I mean, instead of having this artificial measure of improvements, we have an internally set end state which is the resultant of the opinions of all the team members. On one hand this approach lets us avoid absolute assessments, which rarely, if ever, help, as they ignore the context. On the other it helps to set meaningful goals for Kanban Kata-like improvements.
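    To make the idea concrete, here is a minimal sketch of how a team might turn a relative self-assessment into a Kanban Kata-style goal. All the practice names, the scoring scale and the numbers below are hypothetical, just for illustration; the point is that the target comes from the team’s own “where we are vs. where we’d like to be,” not from an external benchmark.

    ```python
    # Hypothetical self-assessment: for each Kanban practice the team rates
    # where they are now and where they would like to be (scale 0-5).
    # The practice with the biggest gap becomes the next improvement target -
    # a relative goal, not a comparison against someone else's team.

    def next_improvement_target(assessment):
        """Pick the practice with the largest gap between desired and current state."""
        gaps = {
            practice: desired - current
            for practice, (current, desired) in assessment.items()
        }
        return max(gaps, key=gaps.get)

    team_assessment = {
        "visualize": (4, 5),
        "limit WIP": (1, 4),
        "manage flow": (2, 4),
        "explicit policies": (3, 4),
    }

    print(next_improvement_target(team_assessment))  # prints "limit WIP"
    ```

    The same exercise repeated after each improvement step gives the “decide on another step” loop that Kanban Kata describes.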

    Relativity requires a team to understand the method they are trying to apply, but I would argue that if the team doesn’t understand their tools they’re doomed anyway.

  • WIP Limits by Conversation

    The biggest challenge when applying Kanban on portfolio level is how to introduce WIP limits. Kanban without limiting work in progress will always be shallow. In fact, many would argue (me included) that it is not Kanban at all.

    The problem is that the typical methods we use to limit work in progress on a portfolio level simply don’t work. Well, of course you can try to limit the number of concurrent projects, but if you’re like the vast majority of companies, your projects will vary very, very much in size. I find a 1:200 difference in size between the smallest and the biggest project run by an organization pretty common.

    If we wanted to translate this to work we do on a team level we would be talking about having tasks that we finish in anything between half a day and half a year.

    It means that you could substitute one of your big ongoing projects with a dozen concurrent small ones and that’s still fine. Except that using the number of projects as a WIP limit doesn’t seem like a good idea anymore.

    Limiting work in progress in such an environment has to be contextual. One has to take into consideration the size and length of projects, dependencies between them, etc. Different WIP limits will be applicable when a portfolio is dominated by medium-to-big endeavors, and different ones will make sense when you’re coping mainly with small projects. In short, to say anything more about sensible WIP limits we have to know the context.

    If we discuss the current context, everything that is happening right now, the estimated effort needed to complete the new project, available and required capabilities, and any other potential projects, we can likely say whether we should or should not start the project. This is basically the core of the idea called WIP limits by conversation (I credit Klaus Leopold, from whom I learned the term). With each new project in the backlog we discuss whether it fits the implicit WIP limits or not.
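    The kind of check such a conversation informally applies can be sketched in a few lines. This is not a formal rule from any method; the capacity model and the numbers are made up. It only illustrates why the limit is contextual: one big project can consume the room that a dozen small ones would.

    ```python
    # A sketch of the implicit check behind "WIP limits by conversation".
    # Effort and capacity are hypothetical, measured here in person-weeks.
    # In reality the conversation also weighs dependencies, capabilities,
    # cost of delay, etc. - this only captures the size dimension.

    def room_to_start(ongoing_efforts, new_effort, capacity):
        """Return True if the new project's estimated effort fits the remaining capacity."""
        committed = sum(ongoing_efforts)
        return committed + new_effort <= capacity

    # One big ongoing project plus a few small ones.
    ongoing = [40, 2, 3]
    print(room_to_start(ongoing, new_effort=10, capacity=60))  # True - fits
    print(room_to_start(ongoing, new_effort=30, capacity=60))  # False - time to talk about trade-offs
    ```

    Note that a fixed count of concurrent projects (here: four) would have given the same answer in both cases, which is exactly why counting projects fails as a portfolio WIP limit.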

    It may sound like it’s a lot of work but it isn’t. The most difficult discussions will be around relatively big projects but then, you don’t start such projects every other week, do you? Discussions about small projects may be more frequent but they will be way easier to decide on too. And they won’t be happening so very often either.

    A tool that is very handy to support WIP limits by conversation is good visualization. Unless everyone involved has a general idea of which teams work on what, what capabilities free teams can offer, what the other commitments are, etc., the discussion will be based on gut feelings. And the gut feeling of most CEOs is that the company will cope with every single project… somehow. This is how you end up having 30 people involved in 100+ projects. Not the most effective way of working, right?

    I am perfectly aware that the approach seems vague, but the general rule is that the more variable the work is, the less explicit WIP limits can be.

    WIP limits by conversation may also seem fragile. If there is a person who pushes more and more projects into the system, the lack of explicit rules for limiting work in progress may seem like a weakness. Not necessarily so. Usually visualization is enough to show the risks attached to a project that doesn’t fit the available capabilities. After all, no one wants to start a project that is doomed to failure and will hurt the organization’s reputation.

    Of course the conclusion of the discussion may be that not starting the project is not an option because of, e.g., the relationship with the customer, but then you simply start talking about costs: what other work won’t be done or will be delayed, etc. The conversation format proves to be very useful on such occasions.

    One nice side effect of introducing WIP limits by conversation is that you are encouraged to talk about things like expected value, cost of delay, estimated effort and the probabilities of all of these numbers for all the projects that you start. It usually helps to refrain from starting projects that don’t make much sense, but unless you’d started asking such questions no one would have been aware of the fact.

    Another gain is slack time generated on a team level. If you care about not overloading the teams, occasionally they won’t have an ongoing project. This is a perfect moment for all sorts of improvement work, as well as for learning or helping other teams.

    My experience is that, despite its vague nature, limiting WIP by conversation works surprisingly well. After all, I don’t know many people that want to make their teams miserable and hurt their organization on purpose.

  • Portfolio Kanban: Why Should I Care?

    It’s an interesting observation for me: people keep asking me to speak about Portfolio Kanban. London, Krakow, Chicago… it seems that for me Portfolio Kanban is going to be the topic to speak about this year.

    When I started with Portfolio Kanban it was an experiment – a tool I wanted to play with to see whether it is useful at all. When you start speaking publicly about such things though, there is one important question you have to answer: why should anyone care?

    After all, unless the question is answered Portfolio Kanban is just a toy.

    So… why?

    When I look at the work that is happening in lean and agile communities I see a lot happening on a team level. Scrum is a framework designed for a team level. When you look at Kanban implementations, the vast majority of them are on the same level too. Now, should we be worried? Is it wrong? No! It’s perfectly OK. Well, sort of.

    Let me start with this:

    A system of local optima is not an optimal system at all; it is a very suboptimal system

    Eli Goldratt

    Focusing on a team level, and a team level only, we are optimizing parts, as rarely is a single team the whole. I’m far from the orthodox view that we should focus on optimizing the whole and the whole only, as most of us don’t have enough influence to work on such a level.

    It doesn’t mean, though, that we should abandon all responsibility for optimizing the whole system, no matter what sphere of influence we have.

    This is basically why improvements on a portfolio level are so crucial. They don’t have to be done instead of improvements on a team level or prior to them. In fact, a holistic approach is probably the best option.

    If you aim your efforts at a team level only, you’ll likely become more efficient, but the question is: efficient at building what?

    Processing the waste more effectively is cheaper, neater, faster waste.

    Stephen Parry

    If the wrong decisions are made on a portfolio level, efficiency on a team level doesn’t really help. What’s more, it can even be harmful, because we just produce waste more efficiently. I can think of a number of wasteful activities imposed on teams at a portfolio level, but two are the biggest pains in the neck.

    One is starting projects or products that shouldn’t be started at all in the first place. There is a dumb notion that people shouldn’t be idle, so whenever individuals or teams have some spare time it is promptly filled with all sorts of crazy ideas that simply aim at keeping people 100% utilized. That’s just plain stupid.

    Usually, even when people are busy, they get this kind of work anyway, as “they will cope with that somehow.” After some time not only do we run lots of projects or initiatives of questionable value but we have to spend additional effort to finish and maintain them.

    Another pain point is multitasking. All these filler projects are obviously of lower priority than the regular work. So what people end up doing is switching between projects whenever a higher-priority task calls. The problem is that a context switch between two projects is even more painful than a context switch between two similar tasks within the same general context. Oh, and have I already mentioned that once you’ve finished those filler projects you keep switching back to them to do maintenance?

    So basically what you get is very low-value work at the cost of a huge context-switching tax. Congratulations!

    Oh, is it that bad? It’s even worse.

    If you are doing the wrong thing you can’t learn, you will only be trying to do the wrong thing righter.

    John Seddon

    If the organization starts all these fires on a portfolio level, teams end up trying to cope with the mess. If they care, they will make the wrong thing a bit better. Does it help? Not at all. The sad thing is realizing what could have been happening instead, which is basically learning.

    The organization could have been learning what work really adds value. The teams could have been learning how to work better. On all levels there would have been opportunities to improve thanks to occasional slack time.

    And, by the way, the organization would have been operating more efficiently too, thanks to less context switching.

    This is basically why you should focus more on organizing your project / product portfolio.

    Why Kanban in this application then?

    I guess I already gave the answer between the lines. Visualization, as always, enables harvesting low-hanging fruit. I mean, unless we see how screwed up we are, we often don’t even realize the fact. Visualization also helps to substitute everyday project-related wild-ass guesses with everyday project-related informed decisions. Sounds better, doesn’t it?

    Then there are WIP limits that enable conversations about what projects get started, how we staff the teams and how we react in all sorts of special case situations. In fact, without that bit changes introduced by Portfolio Kanban will be rather shallow.

    Finally, if you are aiming for improvements, Portfolio Kanban gives you a change mechanism that is very similar to what you know from team level Kanban implementations.

    The best part, though, is how easily you can start your journey with Portfolio Kanban. Even though it tackles the part of the organization that is usually highly formalized and full of politics, Portfolio Kanban doesn’t require, at least at the beginning, that everyone sign up for the idea. A single person can use Portfolio Kanban as a disruptive weapon and see what it brings.

    Seriously, it’s enough to have only one person willing to work consistently on Portfolio Kanban board to see the first yield of the improvements. And one doesn’t have to wait very long until the first meaningful discussions around projects start. Then you know something has already changed.

    Even if no one really realizes you used Kanban to achieve that.

    If you liked the article you may like my Portfolio Kanban Story too.

  • Brickell Key Award Nomination

    I’m on cloud nine. I was nominated for this year’s Brickell Key Award. For those of you who don’t know what that is, the Brickell Key Award is a way of honoring people who have shown leadership and made contributions to the Lean Kanban community. I wouldn’t fancy the award that much if not for the list of people who won it in previous years: Jim Benson, David Joyce, Arne Roock, Russel Healy, Richard Hensley and Alisson Vale.

    There’s another part of the story. I’m simply honored to be in the company of Jabe Bloom, Yuval Yeret, Hakan Forss, Chris Shinkle and Troy Magennis as the nominees. In fact, my humility makes me question whether I even belong in this splendid pack.

    I’ve never been a part of the Lean Kanban community for material gain, as I still (happily) sit on the practitioner’s side of the fence and use occasional consulting gigs mainly as a way of sharpening the saw. It gives me the comfort of straight talking, as I’m not trying to sell anything to anyone. If you’ve read my writings, heard me speak at events or simply talked with me, you probably know what attitude I’m talking about. After all, when learning something new, the last thing you want is a rosy picture.

    My hope would be that my efforts helped you and your teams understand what Kanban is and how to implement it successfully. If that was so and you want to share some of the love, this is the right time to let the selection committee know (or simply leave a comment under the post, which is awesome too).

    By the way, if you want to share some hate because I misled you or something, that is fine too. I love critical feedback, and it’s definitely helpful for the selection committee as well.

  • Why I Don’t Limit WIP (On Occasions)

    As much as I love visualization as a technique that gives pretty much any team a handful of quick wins, I consider limiting work in progress the bit that makes or breaks a team’s long-term ability to improve. Introducing, fine-tuning and maintaining WIP limits is arguably the most difficult part of a Kanban implementation, yet the one that pays off big time in the longer run.

    Shouldn’t limiting WIP be a no-brainer then?

    No, not really.

    Introducing work in progress limits is an investment. A long-term one. If a team isn’t ready to make a long-term commitment to limiting WIP, it may not be their time yet. I mean, would you expect a pretty much chaotic team to understand their process, let alone shape the process using WIP limits? They have quite a few prerequisite steps to make, so let them start with those.

    OK, but this is the case of teams I often dub immature. But then there are teams that I’m working with and that most definitely are mature enough and we still don’t introduce WIP limits.

    Why?

    Introducing work in progress limits is an investment. A long-term one. Have I already said that? Oh… Anyway, if we are talking about a sort of temporary team working on a short-term arrangement, the investment in making WIP limits work may not be worthwhile.

    Let me give you an example. One of my recent projects lasted 7 weeks, including the first couple of weeks to get things running. Over the course of the project the team setup changed twice. Our environment was far from stable.

    Instead of setting up explicit WIP limits we just paid attention to what was happening on the board and reacted whenever needed, using a couple of rules of thumb:

    • Whatever is closer to completion (further down the flow) has priority over stuff that is in earlier stages.
    • We finish what we’ve started before moving on to another task, unless we encounter a blocker.
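    The two rules of thumb above can be sketched as a simple task-picking policy. The stage names, the board representation and the task shapes below are made up for illustration; on the real project this was a judgment call at the board, not code.

    ```python
    # Sketch of the two rules of thumb, with hypothetical stage names.
    # Stages are ordered from the start of the flow to completion;
    # a higher index means further down the flow.

    STAGES = ["backlog", "development", "testing"]  # left to right on the board

    def pick_next_task(board, my_started_tasks):
        """Pick what to work on next: finish started work first (unless blocked),
        otherwise pull whatever is furthest down the flow."""
        # Rule 2: finish what we've started, unless it's blocked.
        for task in my_started_tasks:
            if not task.get("blocked"):
                return task
        # Rule 1: otherwise help with the task closest to completion.
        candidates = [t for t in board if not t.get("blocked")]
        return max(candidates, key=lambda t: STAGES.index(t["stage"]))

    board = [
        {"name": "feature A", "stage": "backlog"},
        {"name": "feature B", "stage": "testing"},
        {"name": "feature C", "stage": "development"},
    ]
    print(pick_next_task(board, my_started_tasks=[])["name"])  # prints "feature B"
    ```

    This is the same policy as “read the board from right to left,” mentioned later in the post, expressed as an ordering over stages.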

    Thanks to that we naturally limited work in progress and context switching despite the lack of work in progress limits. It probably wasn’t as strict and aggressive as it would have been with explicit WIP limits, but I still think we did a decent job.

    The interesting thing is that I doubt we’d have been able to fine-tune the WIP limits before the end of the project, even knowing everything we know now that we are done. The situation was evolving very rapidly; the bottleneck was in 4 different places throughout these few weeks. We made a couple of gut calls deciding who should do what, like developers helping with testing or design. In fact, we didn’t need explicit WIP limits to make these calls, although understanding how the work was done was definitely a crucial bit.

    If the project had been supposed to last a few more weeks, we would already have had WIP limits on our board. But now the situation has changed; people are working in a different setup, so limiting work in progress has to start from scratch.

    There are two lessons in the story. One is about WIP limits: they are a long-term investment, and every team adopting WIP limits should understand that before they start. The other is that even if you don’t have explicit WIP limits, understanding how the work is done and reducing how much work is started helps. In some ways it is limiting work in progress too.

    You don’t have to use fancy techniques to limit WIP. As I often repeat: read the board from right to left and start with the stuff that is more to the right. To be precise, this should be: read the board from where the flow ends to where the flow starts, as there are many non-standard board designs, but the idea is basically the same.

    You don’t have to work hard to start more stuff – it happens almost without any conscious effort. You have to work hard to finish more stuff. And this is what WIP limits help you with.

  • Emergent Explicit Policies

    One of the Kanban practices is making policies explicit. It is the practice that probably gets the least publicity. I mean, I could talk for hours about visualization, and don’t even get me started on the WIP limits thing. Managing flow gives me a great starting point for the whole debate on measuring work and using the data to learn how the work is done. Finally, continuous improvement is the axis that the whole thing spins around and a link to all sorts of beyond-Kanban discussions.

    Note: I put introducing feedback loops aside for the sake of this discussion, as it is still the new kid on the block and thus isn’t covered that well in different sources.

    Against this background, explicit policies look like a poor relative of the other Kanban practices. Seriously, I sometimes wonder why David Anderson put it on the original list back when he was defining what the Kanban method is. Not that explicit policies are unimportant, but their power is somewhat obscure.

    After all, what does it mean that we have explicit policies? What does it take to have such a thing? When I’m training or coaching I like to use this example: if I take any members of the team and ask them what random things on the Kanban board mean, they should all answer the same. I ask about things like what exactly is represented by a sticky in a specific place on the board, or what the meaning of a specific visual signal is, e.g. pins, magnets, different marks on stickies, etc.

    I don’t subscribe to the common advice that you have to write policies down and stick them to the board to make them explicit. I mean, this usually helps, but it is hardly enough to start with. Explicit policies are all about a common understanding of how the work is done.

    And this is where the real fun starts. If we are talking about common understanding, we should rather talk about a discovery process and not compliance enforcement. If it is a discovery process we may safely assume two things:

    1. It has to be a common effort of the whole team. One person, a leader or not, just won’t know everything, as it is about how everyone works.
    2. It’s not a one-time effort. As the team approaches new situations they are essentially introducing new behaviors and new rules.

    This is the real challenge with explicit policies. Unless you get the whole team involved and make it a continuous process, you’re doing a suboptimal job with policies.

    What you aim for is to have emergent explicit policies. Any time that a team encounters a new situation that calls for a new rule you can add it to the list of policies you follow.

    By the way, this is where having policies written down proves useful. I would, however, argue that a printed sheet rather discourages people from adding something, while a set of handwritten sticky notes or handwriting on a whiteboard does the opposite. This is why you may want to use a more sketchy method of storing the list of explicit policies.

    Another thing is what should make it to the list. As a rule of thumb: the fewer, the better. I mean, who would read, and remember, a wall of text? Personally, I would put there things which either prove to be repeatedly problematic or are especially important for the team.

    After all, your policies are emergent, so if you missed something the team will add it soon, right? In fact, this is another thing to remember. The last thing a leader might want is to be considered the only person allowed to change the list of policies. Personally, I couldn’t have been happier when I saw a new policy on the board that was scribbled there by someone else. It is a signal that people understand the whole thing. Not only do they understand, but they give a damn too.

    Without this your policies are going to be like all those corporate rules, like a mission statement or a company vision or a quality policy. You know, all that meaningless crap introduced by company leaders, that has no impact whatsoever on how people really work.

    You wouldn’t like this to happen in your team, would you?