Tag: project management

  • Why Burn-up Chart Is Better Than Burn-down Chart

    The other day I was in the middle of a discussion about the visuals a team was going to use in a new project. When we came to the point of tracking project completion, I advised a burn-up chart and intended to move on. What stopped me was the question I was asked: why burn-up and not burn-down?

    Burn-down Chart

    First, some basics. The burn-down chart is an old idea I learned from Scrum. It is a simple graph showing the amount of work on the vertical axis and the timeline on the horizontal one. As time progresses, we keep track of how much work is still not done. The goal is to hit the ground. The steepness of the curve can help us approximate when that is going to happen or, in other words, when we're going to be done with the whole work.

    When it comes to quantifying work, use whatever measure you use anyway – story points, weighted T-shirt sizes, a simple count of tasks, or what have you.

    Burn-up Chart

    A burn-up chart's mechanics are basically the same. The only difference is that instead of tracking how much work is left to be done, we track how much work we've completed, so the curve goes up, not down.

    The Difference

    OK, considering these two work almost identically, what’s the difference? Personally, I don’t buy all the crap like “associations of the word burn-down are bad.” We learned not to be afraid of failure and we can’t deal with a simple word? Give me a break.

    The real difference is visible when the scope changes. If we suddenly realize we have more work to do burn-down may look like this.

    Unfortunately, it can also look differently if we happen to be (un)lucky enough to complete some work at the same time when we learn about additional work.

    It becomes even trickier when the scope decreases.

    Have we just completed something or has a client cancelled that feature which we talked about yesterday? Not to mention that approximating the finish of work becomes more difficult.

    At the same time, a burn-up chart makes it all perfectly visible, as progress is tracked independently of scope changes.

    You can see scope changes in both directions, as well as real progress. And this is exactly why choosing burn-up over burn-down should be a no-brainer.
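
    If it helps to see the mechanics in numbers, here's a minimal sketch (the weekly figures are made up for illustration) of how a burn-up keeps scope and progress as two separate series, so a scope change never masks completed work:

```python
def burn_up_projection(completed, scope):
    """Estimate how many periods the whole work will take, assuming
    the average pace of completion so far holds."""
    periods = len(completed)
    velocity = completed[-1] / periods       # slope of the "done" curve
    remaining = scope[-1] - completed[-1]    # current scope minus progress
    return periods + remaining / velocity

# Scope grows from 30 to 36 units in week 3; completed work keeps climbing.
completed = [5, 11, 16, 21]   # cumulative work done, week by week
scope     = [30, 30, 36, 36]  # total scope, week by week

print(round(burn_up_projection(completed, scope), 1))  # → 6.9 weeks
```

    On a burn-down you'd plot only `scope[i] - completed[i]`, and the week-3 scope jump would be indistinguishable from a week of zero progress.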

  • Refactoring: Value or Waste?

    Almost every time I'm talking about measuring how much time we spend on value-adding tasks, a.k.a. value, and non-value-adding stuff, a.k.a. waste, someone brings up the example of refactoring. Should it be considered value, as while we refactor we basically improve code, or rather waste, as it's just cleaning up the mess we introduced into the code in the first place, and the activity itself doesn't add new value for the customer?

    It seems the question bothers others as well, as this thread comes back in Twitter discussions repeatedly. Some time ago it was launched by Al Shalloway with his quick classification of refactoring:

    The three types of refactoring are: to simplify, to fix, and to extend design.

    By the way, if you want to read a longer version, here’s the full post.

    Obviously, such an invitation to discuss value and waste couldn’t have been ignored. Stephen Parry shared an opinion:

    One is value, and two are waste. Maybe all three are waste? Not sure.

    Not a very strong one, is it? Actually, this is where I'd like to pick it up. Stephen's conclusion defines the whole problem: "not sure." For me, deciding whether refactoring is or is not value-adding is very contextual. Let me give you a few examples:

    1. You build your code according to TDD and the old pattern: red, green, refactor. Basically refactoring is an inherent part of your code building effort. Can it be waste then?
    2. You change an old part of a bigger system and have little idea what is happening in the code there, as it's not state-of-the-art software. You start by refactoring the whole thing so you actually know what you're doing while changing it. Does it add value for the client?
    3. You make a quick fix to the code and, as you go, you refactor all the parts you touch to improve them; maybe you even fix something along the way. At the same time you know you could have applied just a quick and dirty fix and the task would be done too. How should we account for such work?
    4. Your client orders refactoring of a part of the system you work on. Functionality isn't supposed to change at all. It's just that the client supposes the system will be better afterwards, whatever that means exactly. They pay for it, so it must have some value, doesn't it?

    As you see, there are many layers you may consider. One is when refactoring is done – whether or not it's an integral part of development. Another is whether it improves anything that can be perceived by the client, e.g. fixing something. Then, we can ask whether the client considers it valuable for themselves. And of course the same question can be asked of the people maintaining the software – a lower cost of maintenance or fewer future bugs can also be considered valuable, even when the client isn't really aware of it.

    To make it even more interesting, there's another piece of advice on how to account for refactoring. David Anderson points us to Donald Reinertsen:

    Donald Reinertsen would define valuable activity as discovery of new (useful) information.

    From this perspective, if I learn new, useful information during refactoring, e.g. how this darn code works, it adds value. The question is: for whom? I mean, I'll definitely know more about this very system, but does the client get anything of any value thanks to this?

    If you're still with me at this point, you already know there's no clear answer that helps decide whether refactoring should be considered value or waste. Does it mean you shouldn't try sorting this out in your team? Well, not exactly.

    Something you definitely need if you want to measure value and waste in your team (because you do refactor, don't you?) is clear guidance for the team: which kind of refactoring is treated in which way. In other words, it doesn't matter whether you think all refactoring is waste, all of it is value, or anything in between; you want the whole team to understand value and waste in the same way. Otherwise, don't even bother measuring it, as your data will be incoherent and useless.

    This guidance is even more important because at the end of the day, as Tobias Mayer advises:

    The person responsible for doing the actual work should decide

    The problem is that sometimes the person responsible for doing the actual work can look at things quite differently than their colleagues or the rest of the team. I know people who'd see a lot of value in refactoring the whole system, a.k.a. rewriting it from scratch, only because they allegedly know better how to write the whole thing.

    The guidance that often helps me to decide is answering the question:

    Could we get it right in the first place? If so then fixing it now is likely waste.

    Actually, a better question might start with "should we…" although the way of thinking is similar. Yes, I know it is very subjective and prone to individual interpretation, yet surprisingly often it helps to sort out different edge cases.

    An example: oh, our system has performance problems. Is fixing it value or waste? Well, if we knew the expected workload and failed to deliver software handling it, we screwed this one up. We could have done better and we should have done better, thus fixing it will be waste. On the other hand, the workload may exceed the initial plans or whatever we agreed with the client, so knowing what we knew back then, performance was good. In this case improving it will be value.

    By the way: using such an approach means accounting for most refactoring as waste, because most of the time we could have, and should have, done better. And this is aligned with my thinking about refactoring, value and waste.

    Anyway, as the problem is pretty open-ended, feel invited to join the discussion.

  • Splitting Huge Tasks

    On occasion I deal with an issue small enough that it barely deserves a full-blown blog post, yet it is hard to pack into the 140 characters of a tweet. However, when I'm advising on such an issue yet again, it is a clear signal that sharing an idea of how to deal with it might be useful. So, following an experimentation mindset, I'm going to try short posts addressing these kinds of issues and see how they are received.

    One pretty common problem is splitting tasks. For example, a typical task for a team can take somewhere between 4 hours and a couple of days. And then, there is this gargantuan task that takes 3 months. Actually, 3 months, 5 days and 3 hours. It is, however, quite a coherent work item. On its merits, it does make sense to treat it as a single task.

    On a side note: for whatever reason, this happens more often in non-software development teams.

    The problem is, it heavily affects any metrics you may gather. Sometimes it affects metrics to the point where analyzing them doesn't make much sense anymore. If you include this huge task in your metrics, they all go mad. If you don't, you basically hide the fact that a part of the team was working on something that isn't accounted for at all. So the question is: should you accept it and move on, or do something with the task?

    I'm not orthodox, but I'd rather split the task into smaller ones. Usually this is the point when new issues arise – for example, the task can be split, but only into pieces so small that measuring them separately adds way too much hassle. An alternative can be grouping these tiny pieces into batches of a size that does make sense for the team.

    Anyway, I'd still go with splitting the task, even if the division is somewhat artificial. The knowledge you gain from metrics is worth the effort.

    In short: when in doubt – split.

  • Cadences and Iterations

    Often, when I'm working with teams that are familiar with Scrum, they find the concept of cadence new. This is surprising, as they are already using cadences; they just do it in a specific, fixed way.

    Let's start with what most Scrum teams do, or should do. They build their products in sprints or iterations. At the beginning of each sprint they have a planning session: they groom the backlog, choose the stories that will be built in the iteration, estimate them, etc. In short, they replenish their to-do queue.

    When the sprint ends, the team deploys and demos their product to the client or a stakeholder acting as the client. Whoever the target for the team's product is knows that they can expect a new version after each timebox. This way there is a regular frequency of releases.

    Finally, at the very end of the iteration the team runs a retrospective to discuss issues and improve. They summarize what happened during the sprint and set goals for the next one. Again, there is a rhythm of retrospectives.

    Then, the next sprint starts with a planning session and the whole cycle starts again.

    It looks like this.

    All the practices – planning, releases and retros – have exactly the same rhythm, set by the length of the timebox. A cadence is exactly this rhythm.

    However, you can think of each of these practices separately. Some of us got used to the fact that the frequency of planning, releases and retrospectives is exactly the same, but when you think about it, this is just an artificial constraint introduced by Scrum.

    Would it be possible to plan every second iteration? Well, yes, why not? If someone can tell in advance what they want to get, it shouldn’t be a problem.

    Would it be a problem if we planned more often then? For many Scrum teams it would. However, what would happen if we planned too few stories for the iteration and were done halfway through the sprint? We'd probably pull more stories from the backlog. Isn't that planning? Or in other words, as long as we respect the boundaries set by the team, wouldn't it be possible to plan more frequently?

    You can ask the same questions about the other practices. One thing I hear repeatedly is that more mature teams change the frequency of retrospectives. They just don't need them at the end of every single sprint. Another strategy is the ad-hoc retro, which usually makes retros more frequent than timeboxes. Same with continuous delivery, which has you deploying virtually all the time.

    And this is where the concept of cadence comes in handy. Instead of talking about a timebox, which fixes the time for planning, releases and retrospectives, you start talking about a cadence of planning, a cadence of releasing and a cadence of retrospectives separately.

    At the beginning you will likely start with what you have at the moment, meaning the frequencies are identical and synchronized. Bearing in mind that these are different things, you are free to tweak them in a way that makes sense in your context.

    If you have the comfort of a product owner or product manager on-site, why should you replenish your to-do queue only once per sprint? Wouldn't it be better if the team worked on smaller batches of work, delivering value faster and shortening their feedback loops?

    On the other hand, if the team seems mature, the frequency of retros can be loosened a bit, especially if you see little value coming out of such frequent retros.

    At the same time, releases can be decided ad hoc, based on the value of the stories the team has built, or the client's readiness to verify what has been built, or the weather in California yesterday.

    Depending on the policies you choose for setting the cadences of your practices, it may look like this.

    Or completely different. Because it's going to be adjusted to your team's specific way of working.

    Anyway, it is likely that the ideal cycles of planning, releases and retrospectives aren't exactly the same, so keeping the cadences of all of these identical (and calling them an iteration or timebox) is probably suboptimal.

    What's more, when thinking about cadences, you don't necessarily need them to be fixed. As long as they are somewhat predictable, they totally can be ad hoc. Actually, in some cases it is way better to have a specific practice triggered on an event basis rather than a time basis. For example, a good moment to replenish the to-do queue is when it gets empty; a good moment to release is when we have the product ready, which may even be a few times a day; etc.
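
    As a toy illustration of event-based cadences (all the names and numbers here are invented), replenishment can fire whenever the to-do queue runs dry, while releases happen whenever an item is done:

```python
from collections import deque

backlog = deque("story-%d" % i for i in range(1, 7))
todo, released = deque(), []
REPLENISH_BATCH = 2  # how many stories to pull when the queue runs dry

def replenish():
    """Event-based planning: pull a small batch when 'todo' is empty."""
    while backlog and len(todo) < REPLENISH_BATCH:
        todo.append(backlog.popleft())

for _ in range(6):                   # simulate six work "ticks"
    if not todo:
        replenish()                  # planning on demand, not every sprint
    released.append(todo.popleft())  # continuous delivery: release when done

print(released)  # stories 1 through 6, released one per tick
```

    The planning cadence here is driven by queue state and the release cadence by finished work; neither needs to line up with a timebox.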

    Note: don’t treat it as a rant against iterations. There are good reasons to use them, especially when a team lacks discipline in terms of specific practices, be it running retros or regular deployments. If sprints work for you, that’s great. Although even then running a little experiment wouldn’t hurt, would it?

  • Trap of Estimation

    So we had this project which was supposed to end by the end of July. Unfortunately, a simple burn-up chart, which we used to track progress, looked rather grim – it was consistently showing the very beginning of September as the completion date. A month late.

    Suddenly, one day it started looking almost perfectly – end of July! Yay!

    Wait, wait, wait. What? I mean, what the hell happened on a single day that we suddenly recovered from a month-long slip? Something stinks here.

    After a while of digging we came up with the answer. The team had actually invested some time in re-estimating all the work, including the work that was already done. Now, how the heck does that affect a burn-up chart?

    It turns out that the chart's y-axis, where the work items were shown, carried the sum of estimates and not just the number of tasks. That means changing estimates in retrospect affects the scope and the percent complete of the project. It also means such changes can affect the predicted completion date.
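
    A tiny sketch (with invented numbers) shows why an estimate-based y-axis is so easy to game. Re-estimating in retrospect changes the projection without a single extra feature being built:

```python
def projected_weeks(done_hours, total_hours, weeks_elapsed):
    """Predict total duration from an estimate-based burn-up:
    estimated hours completed vs. estimated total, at the current pace."""
    velocity = done_hours / weeks_elapsed
    return total_hours / velocity

# Honest numbers: 400 of 1,000 estimated hours done after 8 weeks.
print(projected_weeks(400, 1000, 8))  # → 20.0 weeks

# Re-estimate finished work up and remaining work down: the same 8 weeks
# suddenly look like more progress against a smaller total.
print(projected_weeks(500, 850, 8))   # → 13.6 weeks
```

    The work itself hasn't changed at all; only the units on the y-axis did.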

    This is called creative accounting and some people went to jail because of that, you know.

    My first question is whether such re-estimation changes the real status of the project in terms of how much functionality is ready, what is done, how many bugs are fixed or lines of code written, or any other creative, crazy or dumb measure you can come up with to say how much work has been done. Or does it change how much more work there is to be done?

    No! Double no, actually. It's just a trick to tell us we aren't screwed up that much. Actually, I accept the fact that we might have been OK in the first place and the chart was wrong. That would be awesome. But fixing the chart in such a way, one, doesn't change the status of the work in any way and, two, just covers up the real issue, so it is harder to address.

    What is the real issue then? Well, there are a couple of them. First, using time-based estimates to show how much work is to be done is asking for trouble. Unless you are a freaking magician and you can get your estimates right 5 months before you even start working on the task, that is. If you're just a plain human, like me, and you assume your estimates are wrong, using them as the basis for tracking project progress seems sort of dumb to me.

    It would be much better to count features or, if they vary much in size, count the weight of features. Say an S is 3 times smaller than an M, which is 3 times smaller than an L, or something like that. By the way, as you gather historical data you can pretty much refine these factors, learning from past facts.
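
    As a sketch of such weighting (the 1:3:9 factors and the historical hours below are purely illustrative), you can both sum weighted features and re-derive the factors from measured effort:

```python
WEIGHTS = {"S": 1, "M": 3, "L": 9}  # each size roughly 3x the previous one

def total_weight(features):
    """Sum the weights of a list of T-shirt-sized features."""
    return sum(WEIGHTS[size] for size in features)

scope = ["S", "M", "M", "L", "S", "M"]
done = ["S", "M", "M"]
print(total_weight(scope))                                 # → 20
print(round(total_weight(done) / total_weight(scope), 2))  # → 0.35 complete

def calibrate(history):
    """Re-derive weight factors from actual effort per size, normalized
    to S. 'history' maps size -> list of measured hours per feature."""
    avg = {s: sum(h) / len(h) for s, h in history.items()}
    return {s: round(a / avg["S"], 1) for s, a in avg.items()}

# Made-up historical data: measured hours for completed features.
print(calibrate({"S": [4, 6], "M": [14, 16], "L": [40, 50]}))
```

    Note that sizing happens before the work starts, so unlike hour estimates, it never needs to be "fixed" in retrospect.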

    Second, even if you decided to go with estimates to judge how much work is to be done, what makes you think that fixing estimates in retrospect pushes you forward an inch in terms of the next project you're going to run? Do you expect to know exactly, in advance, how much time it will take you to build features in future projects? Because that is the kind of knowledge you're applying now to "fix" your estimates in the current project.

    I would much rather have a discussion on how to judge the scope better at the beginning of a project, because this is going to be your benchmark. In that case precise estimates are almost useless. I will likely be pretty close in terms of telling how many features we have to build. It's going to be trickier to say which of them will be small, medium or large. But I refuse to guess how many freaking hours each and every feature will take to build, because such an effort is utterly futile. It just so happens that I've forgotten to take my damn crystal ball with me, so sorry, that's not going to work.

    This is how estimation leads us into a trap. Knowing exactly how much time each work item has taken, it is easy to track the progress in retrospect. An average 8-year-old child would be able to connect the dots. However, unless you're a bloody superhero and you're going to have such data at the beginning of your next project, don't treat it as a viable method of tracking progress.

    Use any data that will be available in high quality at the beginning of a project: the number of features, maybe sized in some way if your team has some experience in sizing and you understand the variability of the work.

    Anyway, whatever you do, just don't change the benchmark in retrospect, as it's going to mess up your data and cover up the real problem, which is that you should improve the way you set the benchmark in the first place.

    By the way: if you happen to work on a time and materials basis you can safely ignore the whole post, you lucky bastard. Actually, I doubt you even reached this point of the post anyway.

  • Slack Time

    It often comes to me as a surprise that people misunderstand different concepts or definitions that we use in our writings or presentations. Actually, it shouldn’t. I have to remind myself over and over again that I do exactly the same thing – I start playing with different ideas and only eventually learn that I misused or misunderstood some definitions.

    Anyway, the context that brought me to these thoughts is slack time. When talking about WIP (Work In Progress) limits we often mention slack time and usually even (vaguely) point at what slack time is for. However, based on different conversations I've been having, people rarely get what slack time is when they first hear about it.

    How Slack Time Works

    Let's start with the basics. Imagine a very simple development process: we have a backlog from which we draw features, then development and testing, and after that we are done. For my convenience I split the development stage into two sub-columns: ongoing and done, so we know which task is still under development and which can be pulled into testing.

    As you might notice we also have WIP limits that will trigger slack time in our process.

    In an ideal situation, a quality engineer would pull tasks from the development done sub-column as soon as anything pops up there, and even if that doesn't happen immediately, we have a buffer for one feature in development (two developers and a WIP limit of 3 in the column). I have a surprise for you: we don't live in an ideal world, and ideal situations happen pretty rarely.

    There are two possible scenarios here. One is that the developers won't be able to feed the quality engineer with features at a rate high enough to keep him busy.

    In this situation the quality engineer is idle. He faces slack time. He can't pull another task from the queue because it is empty, so he needs to find a different task. He may try to help the developers with their work (if possible), but he may also learn something new, using slack time to sharpen his saw. He can also use this time to automate some of his work (and believe me: the vast majority of apps I've seen would benefit heavily from automating some tests) so that the whole process is more effective in the future.

    Another situation, and the one that is more interesting, happens when the quality engineer is a bottleneck. It means that he can't deal with the features built by the developers at the same rate they are flowing through development.

    In this case, one of the developers who has just finished working on a feature has slack time. It is possible that another will soon finish his feature as well and both will be in the same situation. And again, they can use slack time, e.g. to do some refactoring, learn something new, or help the quality engineer with his work. However, probably the most valuable, and at the same time most desirable, thing to do would be finding a way to improve the part of the process that is bottlenecked; here: testing.

    The effect might be some kind of automated test suite that reduces effort needed to test any single feature or maybe improvements in code quality policies that result in fewer bugs, thus less rework for each feature.

    What Slack Time Is

    By this point you should already understand when slack time happens and roughly what it is. Let's try to pack it into some kind of definition then. Slack time happens when a team member, for whatever reason, refrains from doing the task they would typically do, e.g. a developer doesn't develop new features. It is used to do other tasks, which usually result in improvements of all sorts.

    That's a bit of a simplified definition, of course. You may ask: what if a developer has a learning task put into their queue every once in a while? Well, I would dare to say that this is just planning for learning, not introducing slack time, but we aren't dealing with strict definitions here, so I could totally accept others interpreting it differently.

    Note: we went through two different root causes for the emergence of slack time. One is when there aren't enough tasks to feed one of the roles. This is something that happens pretty naturally in many teams. If one of the roles further downstream is able to process more tasks than the roles upstream (in other words: the bottleneck is somewhere upstream), they will naturally have moments of slack time.

    Some people may argue that this is not slack time, and again, I can perfectly accept such a point of view. However, for me it suits the definition.

    The other root cause is when we intentionally limit throughput upstream to protect a bottleneck that is somewhere downstream. This case is way more interesting for a couple of reasons. First, it doesn't happen naturally, so conscious action is required to introduce such slack time. Second, for many people introducing slack time there is counterintuitive, so they resist doing it.

    The result of not limiting throughput before a bottleneck is a huge queue of work waiting for the bottleneck role to become available, which only makes the whole thing worse. The team has to juggle more tasks at the same time, introducing a lot of context switching and crippling its own productivity.
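
    A deterministic toy simulation (the rates and day counts are made up) shows both effects. Without a limit the queue in front of the bottleneck grows without bound; with a limit, the same imbalance surfaces as slack time instead:

```python
def simulate(days, dev_rate, test_rate, wip_limit=None):
    """Devs finish 'dev_rate' features/day, the tester clears 'test_rate'.
    Returns (queue_before_testing, dev_slack_days) after 'days' days."""
    queue, slack = 0, 0
    for _ in range(days):
        if wip_limit is not None and queue >= wip_limit:
            slack += 1            # devs stop pushing: slack time emerges
        else:
            queue += dev_rate     # devs keep feeding the queue
        queue -= min(queue, test_rate)  # the tester drains what he can
    return queue, slack

# Testing is the bottleneck: 2 features/day in, only 1/day out.
print(simulate(10, dev_rate=2, test_rate=1))               # → (10, 0)
print(simulate(10, dev_rate=2, test_rate=1, wip_limit=3))  # → (2, 4)
```

    In the unlimited case the team ends up juggling 10 queued features; with the limit, the queue stays small and the developers get 4 days of slack they can invest in widening the bottleneck.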

    Nature of Slack Time

    One of the specific properties of slack time is its emergent nature. We don't carefully plan slack time, as we don't know exactly when it's going to happen. It appears whenever our flow becomes unbalanced, which sometimes means all the time.

    You can say that slack time is a sort of self-balancing mechanism for the flow. Whenever team members happen to have this time when they aren't doing regular work, they are encouraged to do something that improves the flow in the future, usually balancing it better. At the same time, the more unbalanced the team and the flow are, the more slack time there will be.

    Remember, though, that in the software development business our flow won't be stable. Even when you reach equilibrium at one moment, it will likely be ruined a moment later when you're dealing with a special case, be it a non-standard feature, a team member taking a few days off, or whatever else.

    It means that even in an environment which is almost perfectly balanced, slack time will keep happening. Even better, such a state means that you don't have a fixed bottleneck – it will float between two or more roles, meaning that every role in the team will have slack time on occasion.

    Another specific property of slack time is that we usually aren't told exactly what to do with it. It doesn't have to be that way in each and every case; however, since slack time itself isn't planned, we can hardly plan to complete something during slack time in a reasonable manner. On the other hand, there may be some guidance on which activities are more desirable.

    It means that slack time seems to be a tool for teams that trust each other and are trusted by their leaders. That is true, as slack time, thanks to its emergent nature, can't be managed in a command-and-control manner.

    However, for those of you who are control freaks: considering you have sensible WIP limits, even if your people do nothing during slack time (and I mean virtually nothing), it should still have a positive impact on the team's productivity. This is because 100% utilization is a myth. Note that in this case you lose the self-balancing property of slack time – you don't improve your flow. You just keep your efficiency at a slightly higher level than you would otherwise.

    What Slack Time Is For

    I've already mentioned a few ideas for what to use slack time for. Let's try to generalize a bit here. Slack time, by definition, isn't dedicated to regular stuff, whatever "regular stuff" might mean in your case. If you think about the examples I used and try to find the common part, these are all important things: learning, improving the team's toolbox, balancing the flow, or improving quality.

    At the same time, none of them seems to be urgent. It is usually more urgent to complete a project on time (or as soon as possible) than to tweak the tools the team uses or introduce new quality policies.

    In short, slack time is for important and not urgent stuff. If something is urgent, it's likely in the plan anyway, and if something isn't important, what's the point of doing it?

    If you’re interested in more elaborate version of that please read the post: what slack time is for.

    Other Flavors of Slack Time

    When talking about slack time it is easy to notice that there is some ambiguity around the concept. Why is unplanned slack time OK while planned time slots dedicated to the very same tasks are not?

    Personally I’m far from being orthodox over definitions. For me the key is understanding why and how slack time works and why it is valuable, so you can achieve similar effects using adjusted or completely different methods.

    Considering that the general goal of slack time is improvement, you can introduce dedicated time slots for it or include improvement features in your backlog. The effect might be pretty similar, and you may feel that you have more control over the process.

    What's more, you may want to plan for improvement activities rather than leaving them to the random nature of slack time. Again, that's fine with me, even though personally I wouldn't call it slack time.

    We may argue over naming – whether something can still be called slack time or not – but I'm not going to die for that, really. Especially if we agree on the purpose of these activities.

    I hope this helps to understand the concept of slack time – what it is, how it works and why it is valuable. Should you have any ideas what is missing or wrong here please leave a comment below. I’ll update the post.

  • The Project Portfolio Kanban Story: Better Board

    When applying Kanban at the project portfolio level, you're dealing with different challenges than in the case of a standard Kanban implementation at the team level (if there even is such a thing). Both the flow dynamics and the task granularity are very different, thus you need to focus on different aspects when designing the Kanban board.

    This is basically why a typical Kanban board design will often be suboptimal.

    Portfolio Kanban Board 2

    At the same time, the biggest challenge you face at the project portfolio level is defining and applying WIP limits. For the time being, my thinking on the subject is that, depending on the specific environment, teams will choose very different measures to manage WIP at the portfolio level. However, as we as a community still lack experience addressing different scenarios, I'll focus on the path I'm following. After all, the story format I chose requires that.

    In my case the most reasonable measure I was able to come up with was the number of people involved in project work. Unfortunately, the scale of the team (more than 130 people) didn't allow me to use the straightforward approach of scaling up WIP limits with numbers.

    Instead of continuing my struggle to find a perfect measure that suits the current board I decided to redesign it completely.

    Whenever you read about Kanban you learn that it is an evolutionary approach. Kaizen and all that stuff. However, one piece of advice I have on designing a Kanban board in general is that when you start running in circles with board evolution and see little or no improvement, throw the whole board out and start from scratch. From one point of view this can be considered a revolution (kaikaku), but from another, you don't really change your way of working; you just try to visualize it differently.

    Either way, I took my own advice and started from a clean whiteboard. I also remembered another piece of advice: don't stick to standard board designs. This is what I ended up with.

    First, there is no value stream or process in the way we understand it at the team level. Since the flow of index cards wasn't dynamic, I decided this wasn't information I should focus on that much.

    Second, on the horizontal axis, instead of the process, there is a timeline (with monthly granularity). As the variability of project sizes is huge, I decided I needed some sort of time boxes to measure WIP against. With very few small engagements – ones that take us less than a few weeks – monthly time boxes looked reasonable.

    Third, I created swim lanes for teams. We don't have 130 generic engineers, or Average Full-Time Equivalents, or whatever other inhumane label you can think of. We have teams that we strive to protect, and teams do have specializations. It means the context of a team is very important, and if it is important, it goes on the board; thus, swim lanes.

    Fourth, with such a board construction I had to change my approach to index cards. Instead of a single index card representing a single project, I ended up with a territory-driven approach. A project covers a territory both in terms of team(s) and time. Looking at an index card with a project name, you can instantly tell which team is working on the project and how long it's going to take them. And the best part: the size of an index card in any given month roughly represents how big a part of the team will be involved in work on the project. This way we can easily show a few smaller projects as well as one big one, or any combination of the two.

    Fifth, as pull is one of Kanban’s base concepts, it is represented by the backlog area – the green cards on the left side of the board. These are projects that haven’t yet started. The specific swim lane they neighbor shows the preferred team to work on a project. However, it is rarely such a direct dependency as “this team will do the project because there is no other one suitable to do the job.” Most of the time we assume that another team could build the project too. Each time a project goes into development we decide, at the last responsible moment, which team will take it.

    Of course there are some nice additional flavors here as well. Violet and yellow index cards differentiate maintenance projects from new ones. Green cards are for projects that haven’t yet started; once they do, we switch to yellow. Red index cards represent overruns in projects that are late. As we work on fixed-price, fixed-time projects, we roughly know up front how much time and how many people we want to invest in a project. When something bad happens and this plan changes, we show that. After all, we pay more attention to challenged projects.

    The simple fact that we are working on fixed-time, fixed-price projects doesn’t mean our arrangements never change. They do. Any time something changes we just update it on the board, the same as we’d do with any other board. We keep the board updated, as otherwise its value would diminish.

    This Kanban board design definitely tells more than the initial one, but I started this whole revolution to deal with WIP limits. What about them?

    Well, there still aren’t explicit WIP limits. However, at this moment there are implicit WIP limits, and information about them can be extracted pretty easily. Considering that I use the territory-driven approach to index cards, the WIP limit is shown by the vertical axis of the board. Each team has a limit of one full-sized sticky note (representing the whole team) per month, which can be split in any way.

    In other words, we won’t start another project unless there’s some white space that the project would fit into, as white space represents our spare capacity. Actually, that may be a bit of a simplification, as on rare occasions there are projects we just start, no matter what. But even then we either can make such a project fit the white space in a reasonable way or we need to make some more white space for it.
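    This white-space rule can be thought of as a capacity budget of 1.0 (one full-sized sticky note) per team per month: a new project fits only if its territory leaves every affected team-month at or under budget. Here is a minimal sketch of that bookkeeping; the team names, months, and fractions are all invented for illustration.

```python
# Each project claims a fraction of a team's capacity in given months
# (a "territory" on the board). The budget per team per month is 1.0 --
# one full-sized sticky note representing the whole team.
allocations = {
    ("Team A", "2012-05"): 0.5,  # an ongoing project takes half of Team A in May
    ("Team A", "2012-06"): 0.5,  # ...and half of Team A in June
}

def fits(territory, allocations, budget=1.0):
    """Check whether a new project's territory fits the remaining white space."""
    for slot, fraction in territory.items():
        if allocations.get(slot, 0.0) + fraction > budget:
            return False  # no white space left in this team-month
    return True

# A project wanting half of Team A in May and June exactly fills the board.
print(fits({("Team A", "2012-05"): 0.5, ("Team A", "2012-06"): 0.5}, allocations))  # True
# A project wanting three quarters of Team A in May overflows the budget.
print(fits({("Team A", "2012-05"): 0.75}, allocations))  # False
```

The point of the sketch is only that the “limit” is an invariant over the vertical axis of the board, not a number written in a column header.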

    Even though such WIP limits aren’t explicit, after some time of working with the board I can tell you they do the job. They do, not only in pulling new projects into development but also, and more importantly, in long-term planning, as we have visibility a year ahead and can confront our capacity with planned sales, etc.

    With this board, for the first time since the beginning of my journey with project portfolio Kanban, I felt satisfied.

    See how we came to this point – read the whole story.


  • You Can Deliver Late

    It is a problem that never really goes away. You build your app, and at the beginning everything seems to go as planned. Suddenly you realize you are late. For the sake of this post it doesn’t really matter whether “late” means 6 more months in an 18-month project or a day in a week-long sprint. Either way, you realize you won’t make it.

    Then, people go crazy.

    Depending on the team’s maturity, you will notice a range of different behaviors: anything from cutting corners on quality (“we have this buffer here we can use – it is called functional testing”), through abandoning best practices (“maybe we hit the window if we skip writing unit tests”), cheating the client (“let’s deploy and we’ll be building the rest while they’re testing”), and throwing features out (“oh, they are just saying it is crucial, they’ll manage without this feature and otherwise we won’t make it by the deadline”), to working the team’s butts off beyond any limits (“hey guys, I know it’s the fifth weekend in a row, but we need to finish it and we aren’t anywhere close”).

    I have a question for you: how often do you consider being late as a viable option?

    My wild-ass guess answer: way too rarely.

    I mean, how many times in your life have you worked on a system that really had a deadline written in stone? How many times would there have been deadly serious consequences for your users and/or clients if you were late? Not tragically, hopelessly, beyond-any-recovery late, but simply late. Like a day every couple of weeks, or a month every year.

    Personally, I have worked on such a project only once. We were adjusting an ERP system to new law after Poland joined the EU. The deadline: May 1, 2004.

    Guess what. We were late. A week. And then, mere hours after we released, we found a bug which basically got medieval on the database. It took almost another week to publish a hotfix that made the software usable again. And you know what? Nothing happened. The sun rose again, the moon was there at night, we didn’t lose our jobs, and our clients didn’t lose theirs. It was OK.

    It was OK, even though we missed the deadline that actually was written in stone.

    Well, if we had missed it by a couple of months we would probably have been out of business, but still, a couple of weeks was sort of acceptable.

    You can miss your deadlines too. They aren’t going to kill you for that, I guess. And yes, I am well aware that, as a supervisor of a dozen project teams, I am unlikely to be expected to state such opinions so openly.

    Yet I still believe that the price we pay for being on time, when it can’t happen on reasonable terms, is more often than not bigger than any value we might get by hitting the window. And talking about price, I mean dollars, euros, or whatever currency is on your paycheck. Actually, most of the time we pay for the decisions mentioned at the beginning of this post long after the deadline has passed. We pay in maintenance costs, we pay in discouraged users who can’t really use the app, and we pay in burned-out teams.

    So next time you’re going to make your call, consider this: you can be late. Even more, most of the time, being late is fairly OK.


  • The Project Portfolio Kanban Story: Why Standard Board Design Is Not a Good Choice

    When I was starting my journey with Kanban at the project portfolio level, I based my board on the classic design I knew from my work with Kanban within project teams. I basically tried to map the value stream to the board, just at a different level.

    The effect was sort of predictable.

    Portfolio Kanban Board 2

    It was also naive.

    The basic strength of such a design is that it’s very intuitive. Well, at least for those parts of the world that read from left to right. That’s the same way we read the board: whatever is closer to the right edge of the board is closer to completion. The value stream we map may be a bit simplified (as we make it linear), but that isn’t much of a problem – after all, index cards flow through the board pretty smoothly.

    Unless you put on single index cards “tasks” that last a year or more, which is exactly what I had done.

    When you’re looking at very long-lasting projects, you look for different information than in the case of tasks that last several hours. It isn’t that important how an index card flows through the board. After all, you expect it to sit in one place for months. If you find out a few days late that the status of an index card has changed, it likely isn’t a problem at all.

    It is way more important, and more interesting at the same time, to see teams’ capacity for undertaking new projects, i.e. how much more we can commit to for our clients. Note: we aren’t talking about a single team of 7 (plus or minus 2). What we have here is 100+ people working on a couple dozen different projects concurrently. At this level, capacity is pretty damn difficult to estimate, especially given ever-changing business surroundings.

    This is a huge weakness of the common board design: it doesn’t really help you estimate free capacity. It would help if we were able to set reasonable WIP limits on such a board. Unfortunately, that is (close to) impossible.

    The number of projects isn’t a good candidate for measuring WIP, as projects differ hugely in size. If you use time-boxing, you could try using the number of time boxes as a measure. However, in this case you don’t want a random team working on the thirteenth iteration of a project that was built so far by another team, and with WIP limits measured by the number of iterations you would likely end up that way. Another idea was using money-related measures. This raises the question of whether you sell all your work at the same price. I guess that is true in very few cases, and mine is definitely not one of them.

    The longer I thought about it, the more often I came back to people. I mean, a team could start another project if it had some free people who could deal with it in the planned/expected time frame. I even thought for a moment about setting the base WIP limit around 130 (roughly the number of people working on the projects represented on the board) and having each index card weigh as much as the number of people involved in a project at the time. The problem was that the hassle needed to manage such a board would be horrifying.

    On the other hand, measuring WIP in the number of teams was way too coarse-grained, as we had anything from multiple projects covered by a single team to multiple teams working on a single project.

    All in all, I ended up with the belief that, in terms of project portfolio Kanban, the standard board design isn’t a good choice. I was ready to redesign the board completely.

    If I have piqued your interest, read the whole project portfolio Kanban story.

  • How Much Work In Progress Do You Have?

    One of the common patterns of adopting Kanban is that teams start just with visualization and, for whatever reason, resist applying Work In Progress limits at the very beginning. While skipping WIP limits drains most of the improvement power out of the system – and let me stress that – I understand that many teams feel safe starting this way.

    If you are at such a point, or even a step earlier – when you’re just considering Kanban but haven’t yet started – and you are basically afraid of limits, I have a challenge for you. Well, even if you already use WIP limits, I have the very same challenge.

    First, think of limits you might want to have.

    Second, measure how tasks flow through your process. It’s enough to write down the date when you start working on a task and the date when you’re done with it – the difference gives you the cycle time.

    Third, after some time, check how many tasks in progress you really had on each day. In other words: check what your WIP was.
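    The steps above can be sketched in a few lines of code. This is just one way to do the arithmetic; the task names and dates are invented, and the convention of counting a task as in progress up to, but not including, its finish day is an assumption you may want to adjust.

```python
from datetime import date, timedelta

# Hypothetical sample data noted straight off the board: (task, started, finished).
tasks = [
    ("fix login bug", date(2012, 3, 1), date(2012, 3, 5)),
    ("report export", date(2012, 3, 2), date(2012, 3, 3)),
    ("data migration", date(2012, 3, 4), date(2012, 3, 9)),
]

# Cycle time: simply the finish date minus the start date.
cycle_times = {name: (done - start).days for name, start, done in tasks}

# Daily WIP: count tasks that were in progress on each calendar day.
first = min(start for _, start, _ in tasks)
last = max(done for _, _, done in tasks)
wip_per_day = {}
day = first
while day <= last:
    wip_per_day[day] = sum(1 for _, start, done in tasks if start <= day < done)
    day += timedelta(days=1)

print(cycle_times)               # cycle time per task, in days
print(max(wip_per_day.values())) # the worst WIP you actually hit
```

A spreadsheet with three columns does the same job; the point is only that two dates per task are enough to get both cycle time and a daily WIP history.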

    Odds are you will be surprised.

    One of my teams followed the “let’s just start with visualization and we’ll see how it goes” path. We even discussed WIP limits, but eventually they weren’t applied. It is a functional team of 4 that juggles tasks which are pretty often blocked by their “clients,” i.e. for reasons beyond the team’s control. The process, besides the backlog and the done bucket, is very simple: there’s only one column – ongoing.

    The discussion ended with the idea of a limit of 8, considering there are some rather longish tasks mixed with quite a few short but urgent ones, e.g. the “needs to be done today” sort, and of course there are frequent blockers. In other words, a rough limit of two tasks per person should account for all the potential issues.

    As I’ve mentioned, WIP limits weren’t initially set. Even a WIP limit of 8 looked too scary at that point. After a few months we came back to the discussion. Fortunately, this time we had hard data from a hundred days.

    Guess what the worst WIP was.

    Seven. Over the course of a hundred days there wasn’t a single case where the scary limit of 8 was reached, let alone violated. What’s more, there were only 5 days when WIP was higher than 6. In other words, setting a limit of 6 and keeping it would be no sweat. The challenge starts at 5, which sounds very reasonable for such a team.

    All of that considering that each and every blocked item was counted within the limit, as at the moment the team doesn’t gather data on how long a task remains blocked.

    The lesson I learned was that we can and should challenge our WIP limits based on historical data. How often do we hit the WIP limits? How often do we violate them? If it appears that we have so much padding that we barely scratch the ceiling on rare occasions, it is time to discuss reducing the WIP limits. After all, it might mean that we are pursuing 100% utilization, which is bad.
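    Once you have a series of daily WIP counts, the check itself is trivial. A hedged sketch, with invented numbers standing in for real history, of how often each candidate limit would have been hit or violated:

```python
# Hypothetical daily WIP counts gathered over an observation period.
daily_wip = [3, 4, 5, 7, 4, 6, 5, 3, 2, 6, 4, 5]

def evaluate_limit(daily_wip, limit):
    """Count days a candidate WIP limit would have been hit or violated."""
    hit = sum(1 for wip in daily_wip if wip == limit)
    violated = sum(1 for wip in daily_wip if wip > limit)
    return hit, violated

for limit in (8, 6, 5):
    hit, violated = evaluate_limit(daily_wip, limit)
    print(f"limit {limit}: hit on {hit} days, violated on {violated} days")
```

A limit that is never hit in the data (like 8 here) is pure padding; a limit that would be hit or violated a handful of times is the one worth discussing.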

    If WIP limits are barely and rarely painful, they aren’t working.