
Pawel Brodzinski on Software Project Management

Portfolio Visualization


I’ve been lucky enough that throughout my career I’ve had occasions to work on different levels: from personal (that’s pretty much everyone’s experience, isn’t it?), through team and project to program / PMO / portfolio level. Not only that: in most of my jobs I’ve been involved in all these levels of work concurrently. This means I’m schizophrenic.

Um, actually this means that I’ve been pursuing goals that require focusing on different granularities of work.

Granularity of Work

Let me give you an example from my company – Lunar Logic. If I have to complete a task to close a new deal for the company, e.g. to have a call with a potential client, that’s a personal level. It is something that has to be done by myself and it involves only my work. Most of such stuff would be rather straightforward.

At the same time I run a project where I’m supposed to know what’s happening and, for example, share with the client my estimates of when the work will be finished. That’s a different kind of work. I mean, it’s not only me anymore – there’s a team. Also there are a lot of dependencies. Unless bugs are fixed we don’t ask a client for acceptance, this task has to be finished before that one, etc. Of course, tasks will be bigger – we don’t want to run 30-minute-long tasks through our task or Kanban board. At least not by default.

Then there is the whole effort of coordinating the different projects run in the company. It is about understanding which projects are about to finish, making people available to work on something new, how we can cover for unexpected tasks, what kind of free capabilities we have, etc. At this level a user story or a feature is meaningless. We’re talking about big stuff here, like a project here and a project there.

Depending on what I do I may be interested in small personal tasks, user stories or whole projects.

Actually, there may be more than just three levels of stuff. That’s a pretty frequent case. Imagine an org that employs a few thousand people. They will have teams, divisions, projects, programs and product lines. On any level beyond the personal one there will be a few different granularities of work.

There may be a user story as the smallest bit of work that a team gets done. It would be a part of an epic. The epic would be a part of one of these huge thingies – let’s call them saga stories. Sagas would build up a project. A set of projects would make a program. The program would be developed within a product line… Complicated, isn’t it? Well, I guess it’s still far from what Microsoft or Oracle have.
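To make such a hierarchy concrete, here is a minimal sketch in Python. The level names follow the article; the `WorkItem` structure and the item names are purely hypothetical illustration.

```python
from dataclasses import dataclass, field

# Granularity levels, smallest first (names taken from the article).
LEVELS = ["user story", "epic", "saga", "project", "program", "product line"]

@dataclass
class WorkItem:
    name: str
    level: str
    children: list = field(default_factory=list)

    def count(self, level: str) -> int:
        """Count items (including self) at a given granularity."""
        total = 1 if self.level == level else 0
        return total + sum(child.count(level) for child in self.children)

# Hypothetical example tree.
program = WorkItem("Payments program", "program", [
    WorkItem("Checkout project", "project", [
        WorkItem("Cart saga", "saga", [
            WorkItem("Discounts epic", "epic", [
                WorkItem("Apply coupon", "user story"),
                WorkItem("Stack coupons", "user story"),
            ]),
        ]),
    ]),
])

print(program.count("user story"))  # 2
```

The point of the sketch: the same tree answers questions at whichever granularity you happen to care about, which is exactly the focus-switching problem the post describes.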

Focus

Now, the interesting part. On every level leaders are likely to be interested in understanding what’s going on. They will want to visualize the stuff that’s happening. Wait, what is that “stuff” exactly? I mean, haven’t I mentioned at the beginning that the work items that interest me may be very different?

Um, yes. However, in each context, and at any given moment, there’s only one granularity of work that will get my attention. When I wear the hat of a company leader I don’t give a damn about a user story in project XYZ. I just want to know whether that project is progressing well, what the major risks attached to it are and how it can impact other projects and our available capabilities.

When I go down to wear the hat of a project leader, I’m no longer interested in the timelines of other projects. I have project tasks to be finished and this is my focus. The scope of risk management would be limited to a single project too. This means that I will be paying attention to a different class of risks.

Then I go even further down to wear the hat of a leader of my own desk (not even a real one) and the granularity of stuff changes once more.

The good news is that the vast majority of people don’t have this switching dilemma – since they’re always working on the same class of stuff there’s always the same level of work that they pay attention to. Always a single level.

Well, almost…

Second Level

One level of stuff will always be the primary focus. In many cases there will be a secondary focus as well. It will be either one level up or one level down. For example, when I was a team leader my primary focus was the features we were building. However, on occasion I was going down to development tasks and bugs to understand better the progress we were making. Not that I needed the view of all the development tasks and all the bugs all the time – it would make the information cluttered and less accessible. Sometimes I needed that data though.

Another example. In my previous job I led more than a dozen development teams. Obviously user stories or features were far beyond my focus. My primary area of interest was projects on the PMO level. I was expected to juggle projects and teams so we built everything on time, on budget and in scope (yeah, right). Anyway, the atomic unit of work for me was a project. Occasionally I was going a level up though, to program management if you will. After all, we weren’t building random projects. Teams had business knowledge specializations. I had to take it all into account when planning work.

You can think of any role out there and it’s very likely that this pattern will hold true: a main focus on one level of work and possibly a secondary focus on work that is happening either a level up or a level down. The secondary focus is likely to be sometimes on but sometimes off as well.

Why is this so important?

Visualizing Work

My work around Portfolio Kanban and discussions on the topic got me thinking about what we should be visualizing. Of course it is contextual. But then again, in any given context there is this temptation to see more. After all, given that we visualize what’s happening with the projects, it’s not that hard to visualize the status of the epics within these projects. And of course we definitely want to see what’s happening on the program level too…

No, we don’t.

Again, in any given role (or hat) you have only one primary area of interest. If these are projects, focus on them and not on programs or epics.

Otherwise the important bits of data will be drowned in all the rest of the information that is needed only incidentally, if ever.

Portfolio Visualization

For whatever reason we get that intuitively most of the time when we manage work on a team level. I rarely see task or Kanban boards that attempt to map the whole universe (and more). Maybe it is because we typically have some sort of common understanding of what an atomic task on a team level is.

At the same time, when we go up to a project level things get ugly. I mean, what is a project anyway? For one person it will be a Minimum Viable Product (MVP), the smallest increment that allows verification of a product hypothesis and enables learning. For another it would be defined by a monolithic scope that is worth 200 man-months of work. And anything in between, of course.

Unless we remember the rule of focusing on only one level of work, any meaningful visualization of that work will be impossible. This is one of the reasons why visualization in Portfolio Kanban is so tricky.

Instead of creating one huge wall of information and throwing all the different bits of data there, think about your primary focus and deal only with that. And if you keep switching between different roles, no one forces you to stick to a single board.

Right now I’m using a portfolio board, a personal one and a bunch of different project boards. For each of them there is just one level of work visualized. So far I’ve never needed to combine any two of them or put stuff from one on another. It seems that even for such a schizophrenic guy as me the single focus rule applies.

in project management

What Makes Project Attractive


Throughout the years of my professional career I’ve heard all sorts of ideas about what makes a project attractive for the people working on it. A favorite technology. Greenfield work with no legacy code. Scalability challenges. The number of users. Potential to change the world. A genuine idea. Big money involved. Freedom to choose any tools. Code quality. And probably a dozen other things too.

Since I joined Lunar Logic the number of projects I’m involved in at any given time has gone up significantly. That’s because on top of a couple of bigger endeavors we run quite a bunch of small projects. It’s great, as I have the occasion to see, and learn from, many different environments.

This experience influenced my thinking about the factors that make a project attractive. In fact I’ve seen a project that scored really high in most of the areas mentioned above, yet still everyone hated it (and vice versa).

Why? I think we focus on the wrong thing. No matter how cool the technology, the scale or the idea is, if the people involved in a project suck big time nothing is going to save the experience.

If you’re like “yeah, I should have thought about the team too,” that’s not really the direction I’m heading. In fact conflicts within teams are kind of rare and typically people learn how to cope with each other quickly.

However if a client is toxic, well, that’s an extreme case of bad luck.

I mean, extreme in terms of severity, not in terms of frequency. The fact is such clients are pretty damn common.

This is exactly why I pay so much attention to the people on the client’s side when we consider a new project. Are the folks we’re going to cooperate with on a daily basis OK? Is collaboration going to be smooth? If so, I don’t care that much about how cool the code, the product or the technology is.

Even if there is legacy code with poor test coverage and the idea is boring as hell, if the guys on the other side are great, people would still like to work with them. Actually, it is likely that there isn’t any other side at all – we’re all in it together.

The opposite is also true. Take the coolest thing ever, work on it with a micromanager as a client and you will hate his guts in less than a week. With reciprocity, of course. And it’s not only about micromanagement. Take any flawed client: a sociopath, an “I don’t give a damn” guy, a bureaucrat, or what have you. Same thing.

Actually, when I go through all the stuff teams liked or hated to work on I see the same pattern – show me the client and I will tell you how happy the team is. In Lunar Logic this is seen even more vividly because, given our extreme transparency, we simply make ourselves vulnerable in front of our clients. That’s an awesome tool to build trust. At the same time it can be abused to bring misery upon us.

Fortunately, for whatever reasons, the latter is kind of rare.

in project management

Limit Work in Progress Early


We’ve just started another project. One of the things we set up at the very beginning was a Kanban board. It wouldn’t be a real Kanban board if we didn’t have work in progress limits. There are two common approaches I see out there to setting WIP limits.

One is to work for some time without WIP limits just to see how things go, get some metrics and get a decent idea of what work in progress limits are suitable in the context.

The other approach is to start with pretty much anything that makes sense, like one task per person plus one additional to allow some flexibility.

Personally, I like the latter approach. First, no matter which approach you choose, your WIP limits are bound to change. There’s no freaking chance that you get all the limits right on the first attempt. And even if you did, the situation would change and so should the WIP limits.

Second, you can get much value from limiting work in progress early, even if the limits are set in a quick-and-dirty mode. In fact, this is exactly why I’m biased toward this approach.
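The “one task per person plus one” starting rule is trivial to express. A minimal sketch (the function name and the `slack` parameter are just illustrative):

```python
def initial_wip_limit(people: int, slack: int = 1) -> int:
    """Quick-and-dirty starting WIP limit: one task per person
    plus some slack for flexibility. Tune it later based on how
    the flow actually behaves."""
    return people + slack

# A team of four would start with a limit of five.
print(initial_wip_limit(4))  # 5
```

The exact number matters far less than having a limit at all, which is the whole point of the story that follows.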

In our case the last stage before “done” has been approval. We’ve had an internal client (me), thus we haven’t expected any issues with this part. We weren’t far into the project when the first tasks started stacking up as ready for approval.

As I initiated the discussion about us hitting the limit in testing (the stage just before approval) I was quickly rebutted: “maybe you, dear client, do your goddamn job and approve the work that is already done, so we can continue with our stuff.” Ouch. That hurt. Yet I couldn’t ask for a better reaction.

Lucky me, I asked about the staging environment where I could verify all the stuff that we’d built. And you know what? There was none. Somehow we just forgot about it. So I eagerly attached blockers to all the tasks that were waiting for me and the team could focus on setting up the staging environment instead of building more stuff.

An interesting twist in this story is that setting up the staging environment proved to be way trickier than we thought and we found a couple of flaws in the way we manage our demo servers. The improvements we’ve made in our infrastructure go way beyond the scope of the project.

There are two lessons here. One is that implementing WIP limits is a great knowledge discovery tool that works even if the limits aren’t right yet. Well-tuned WIP limits are awesome as they enable slack time generation, but even if you aren’t there yet, any reasonable limits should help you discover any major problems with your process.

There’s another thing too. It’s about task sizing. In general, the smaller the tasks are the more liquid the flow is, and higher liquidity means that you discover problems faster. If it took a couple of weeks to complete the first tasks we’d have found the problem only after that time. With small tasks it took a couple of days.
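One way to see why smaller tasks surface problems faster is Little’s Law, which ties average lead time to WIP and throughput. A sketch with purely hypothetical numbers:

```python
def avg_lead_time(avg_wip: float, throughput_per_day: float) -> float:
    """Little's Law: average lead time = average WIP / average throughput."""
    return avg_wip / throughput_per_day

# Hypothetical comparison: the same team with 6 items in progress.
# Big tasks finish at ~0.5 per day; small ones at ~3 per day.
print(avg_lead_time(6, 0.5))  # 12.0 days until an item flows through
print(avg_lead_time(6, 3.0))  # 2.0 days
```

With small tasks, the first item reaches the end of the process in days rather than weeks, so a gap like the missing staging environment shows up that much sooner.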

So if you’re considering whether to start limiting work in progress on day 1 of a new project, consider this post an encouragement to do so. Also, you may want to size the first few tasks small and make sure that they go through the whole process so you quickly have a test run. Treat it as a way of testing the completeness of the process.

in kanban

No Estimates Middle Ground


The idea of no estimates (or #NoEstimates) is all hot these days. People choose different parties and fight a hell of a fight just to prove their arguments are valid, they are right and the other party got it all wrong. I’d occasionally get into the crossfire by leaving a general comment on a thread on estimation in general, i.e. not steering the discussion for or against #NoEstimates.

And that’s totally not my intention. I mean, who likes to get into crossfire?

What No Estimates Mean

A major problem with no estimates is that everyone seems to have their own freaking idea of what it is. Seriously. If you follow the discussions on the subject you will find pretty much anything you want. There are crusaders willing to ban all estimates forever as they clearly are the source of all evil in the software world. There also are folks who find them a useful tool to track or monitor the health of projects throughout their lifecycles. You definitely can find people who bring to the table statistical methods that are supposed to substitute for more commonly used approaches to estimation.

And, of course, anything in between.

So which kind of #NoEstimates do you support, or diss for that matter? Because there are many of them, it seems.

Once we know this I have another question: what is your context? You know, it is sort of important whether you work on a multimillion-dollar endeavor, an MVP for a startup or an increment of an established application.

My wild-ass guess is this: if every party getting involved in #NoEstimates discussions answered the questions above they’d easily find that they’re talking about different things. Less drama. More value. A less cluttered twitter stream (yeah, I’m just being selfish here).

Is this post supposed to be a rant against discussion on no estimates?

No, not really. One thing is that, despite all the drama, I believe that the discussion is valuable and helps pull our industry forward. In fact, I see the value in the act of discussing, as I don’t expect absolute answers.

Another thing is that I think there is a #NoEstimates middle ground, which seems to be a cozy and nice place. At least for me.

Why Estimating Sucks

Let me start with a confession: I hate estimating. Whoa, that’s quite a confession, isn’t it? I guess it is easily true for more than 90% of the population. Anyway, as long as I can get away with avoiding estimation, I’d totally go for that.

I have good reasons. In the vast majority of cases the estimates I’ve seen were so crappy that a drunken monkey could have come up with something on par or only slightly worse. And last time I checked we were paying drunken monkeys way less than we pay developers and project managers. Oh, and it was in peanuts, not dollars.

It’s not only that. Given that kind of quality of the estimates, the time spent on them was basically waste, right?

It’s even worse. It was common for these estimates to be used against the team. “You promised that it would be ready by the end of the month. It isn’t. It’s your fault.” Do I sense a blame game? Oh, well…

And don’t even get me started on all the cases when a team was under pressure to give “better” estimates because the original ones weren’t good enough.

Why We Estimate Then

At the same time, having worked closely with clients for years, I perfectly understand why they need estimates. In the case of a fixed-price contract we have to come up with the price somehow. That’s where estimates come in handy, don’t they? There also is the million-dollar question: so how much will I spend on this thingamajig? I guess sometimes it is a million-dollar question literally…

So as much as I would prefer not to estimate at all, I don’t hide in a hole and pretend that I’m not there when I’m asked for an estimate.

All Sorts of Estimates

Another story is how I approach the estimation process when I do it.

I would always use a range. Most of the time a pretty broad one, e.g. the worst case scenario may mean twice the cost or time of the best case scenario. And that’s still only an estimate, meaning that odds are we may end up beyond the range.

Whenever appropriate I’d use historical data to come up with an estimate. In fact, I would even use historical data from a different setup, e.g. a different team or a different project. Yes, I am aware that it may be tricky. Tricky as in “it may bite you in the butt pretty badly.” Anyway, if, based on our judgment, the team setup and feature sizing are roughly similar, I would use the data. This approach requires a good understanding of the dynamics of different teams and can be difficult to scale up. In my case, though, it seems to work pretty well.

I’m a huge fan of Troy Magennis and his work. By the way, despite the fact that Troy goes under the #NoEstimates banner, he couldn’t possibly be farther from the folks advising just to build the stuff with no estimation whatsoever. One of the most valuable lessons we can get from Troy is how to use simulations to improve the quality of estimates, especially in cases where little data is available.
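The core of such simulations can be sketched simply: repeatedly sample historical throughput until the backlog is exhausted, then read a forecast off the resulting distribution. This is only a toy Monte Carlo illustration in that spirit, not Troy’s actual tooling, and the history numbers are made up:

```python
import random

def simulate_delivery(backlog: int, weekly_throughput: list,
                      runs: int = 10000, seed: int = 7) -> dict:
    """Monte Carlo forecast: sample historical weekly throughput
    until the backlog is done; report percentile durations in weeks."""
    rng = random.Random(seed)
    durations = []
    for _ in range(runs):
        remaining, weeks = backlog, 0
        while remaining > 0:
            remaining -= rng.choice(weekly_throughput)
            weeks += 1
        durations.append(weeks)
    durations.sort()
    return {p: durations[int(runs * p / 100) - 1] for p in (50, 85, 95)}

# Hypothetical history: items finished in each of the last 8 weeks.
history = [3, 5, 2, 4, 6, 3, 4, 5]
print(simulate_delivery(40, history))  # e.g. weeks at the 50th/85th/95th percentile
```

A forecast expressed as percentiles naturally produces the kind of range estimate mentioned above, rather than a single falsely precise number.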

Finally, I’m also fine with good old guesstimation. I would use it on a rather general level and wouldn’t invest much time in it. Nevertheless, it works for me as a nice calibration mechanism. If the historical data or a simulation shows something very different from an expert guess, we are likely missing something.

Interestingly enough, with such an approach having more details in specifications doesn’t really help, but that’s another story.

On top of that, whenever it is relevant, I would track how we’re doing against the initial estimates. This way I get early warnings whenever we’re going off track. I guess this is where you think “who, on planet Earth, wouldn’t do that?” The trick is that you need to have quite a few things in place to be able to do this in a meaningful way.

A continuous flow of work gives us a steady stream of delivered features. An end-to-end value stream means that what is done is really done. At the same time, without continuous delivery and a fully operational staging environment an end-to-end value stream is simply wishful thinking. Limiting work in progress helps to improve lead time, shortens feedback loops and helps to build up pace early on. And of course a good set of engineering practices allows us to build the whole thing feature by feature without breaking it.

Quite a lot of stuff just to make tracking progress sensible, isn’t it? Luckily these things help with other stuff too.

Nevertheless, I still hate estimation.

And I’m lucky enough to be able to avoid it pretty frequently. It’s not a rare case that we have incremental funding and budgets, so the only thing we need is to keep our pace rather steady. And I’m not talking here about particularly small projects only. Another context where estimation is not that important is when the money burn rate is so slow (relatively) that we can afford to learn what the real pace is instead of investing significant effort into estimating what it might have been.

No Estimates Middle Ground

To summarize the whole post, I guess my message is rather straightforward. There’s value in different approaches to estimation, so instead of barking at one another we might as well learn how others approach this complex subject. For some reason it works for them pretty well. If we understand their context, even if ours is different, we might be able to adapt and adopt these methods to improve our estimation process.

That’s why I think the discussion is valuable. However, in terms of learning and improving our estimation toolbox the #NoEstimates notion doesn’t seem to be very helpful. I guess I’ll stay in the middle ground for the time being.

By the way, if we are able to improve our cooperation with clients on estimation, I couldn’t care less whether we call it no estimates or something different.

in project management, software business

Cumulative Flow Diagram


One of the charts that give you a quick overview of what’s happening in a project or product work is the Cumulative Flow Diagram (CFD). On one hand, in a CFD you can find typical information about the status of work: how much work is done, ongoing and in the backlog, what the pace of progress is, etc. This is the basic stuff. On the other hand, once you understand the chart, it will help you spot all sorts of issues that a team may be facing. This is where the Cumulative Flow Diagram shows its real value.

Before we move to all the specific cases, let me start with the basic stuff (feel free to scroll down if you’re familiar with this part).

Cumulative Flow Diagram

The mechanism of the Cumulative Flow Diagram is very simple. On the vertical axis we have the number of tasks. On the horizontal one we have a timeline. The curves are basically the number of items in each possible state shown over time. The whole trick is that they are shown cumulatively.

If the green curve shows stuff that is done, it will naturally grow over time – that’s simple. If the blue line shows tasks that are in progress, and we have a stable amount of work in progress, it will still go up as it adds to the green line. In other words, work in progress is represented by the gap between the blue and the green lines… We’ll come back to that in a while.

Any line on a CFD represents a specific stage. In the simplest example we’d have items that are to be done, stuff that is ongoing and things that are done.
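The cumulative counting described above can be sketched in a few lines of Python. The stage names and the daily snapshots are hypothetical; the point is that each curve counts items that have reached at least that stage:

```python
# Stages of the process, earliest first.
STAGES = ["backlog", "development", "testing", "done"]

# Daily snapshots (hypothetical): item name -> current stage.
snapshots = [
    {"A": "development", "B": "backlog", "C": "backlog"},
    {"A": "testing", "B": "development", "C": "backlog"},
    {"A": "done", "B": "testing", "C": "development"},
]

def cfd_point(snapshot: dict) -> dict:
    """For one day, count items that have reached at least each stage.
    Plotting these counts per day produces the CFD curves."""
    order = {stage: i for i, stage in enumerate(STAGES)}
    return {stage: sum(1 for s in snapshot.values() if order[s] >= i)
            for i, stage in enumerate(STAGES)}

for day, snap in enumerate(snapshots, start=1):
    print(day, cfd_point(snap))
```

Because each count includes everything further down the flow, the curves can never cross, which is what gives the diagram its characteristic stacked bands.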

For the sake of the rest of this article I’m going to use a simple process as a reference.

Workflow

We have a backlog, items that are in development or testing and stuff that is done. For the sake of the Cumulative Flow Diagram examples it doesn’t matter whether tasks in development are ongoing or done and waiting for testing. However, as we will see later, there may be some indicators that would make tracking these two stages separately valuable.

With such a workflow our Cumulative Flow Diagram may look like this.

Cumulative Flow Diagram

First, the meaning of the lines. The green one shows how many items have been delivered over time. Everything between the blue and the green curves is stuff that is in testing. The area between the red and the blue lines shows how much stuff is in development (either ongoing or done). Finally, the top part below the orange line is the backlog – how many items haven’t yet been started.

At a glance we can find a few important bits of information about this project. First, after a slow start the pace of delivery is rather stable. Pretty much the same can be said about work that is in progress – the pace is stable and things go rather smoothly. We know that the scope has increased a couple of times, which we can tell by looking at the jumps of the orange line. Finally, comparing where the green line (done) and the orange line (scope) are on the vertical axis right now, we can say that we’re not yet halfway through the project.

Quite a lot of information for a few seconds, isn’t it? Well, there is more.

Cumulative Flow Diagram

On this CFD a few things have been shown explicitly. One is a scope change. We’ve discussed it on the previous chart too. Another is the space between the red and the green lines. It represents work in progress (WIP). Note that based on the Cumulative Flow Diagram alone you can’t learn precisely how much work in progress you have; it is some sort of approximation. A pretty good one, but only an approximation. It is a very good indicator of how WIP is changing over time though. There also is an arrow labeled “prod. lead time” where “prod.” stands for production. It roughly shows how much time we need to complete an item. Again, it shouldn’t be used as the ultimate lead time indicator, but it shows pretty well what lead time we’ve had and how it changes over time. Finally, we can extrapolate the slope of the done curve to roughly estimate the delivery time. Of course, if the scope changes the delivery time will change as well, thus the scope line (the orange one) is also extrapolated.
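Those two readings (WIP as a vertical gap, lead time as a horizontal distance) can be sketched as follows. The curve data is hypothetical and, as noted above, both values are approximations:

```python
def wip(point: dict, from_stage: str = "development", to_stage: str = "done") -> int:
    """Approximate WIP: vertical gap between the started and done curves
    on a single day of the CFD."""
    return point[from_stage] - point[to_stage]

def approx_lead_time(days: list, started_curve: list, done_curve: list, n: int) -> int:
    """Approximate lead time for the n-th item: horizontal distance between
    the day its start was recorded and the day its delivery was recorded."""
    start = next(d for d, v in zip(days, started_curve) if v >= n)
    end = next(d for d, v in zip(days, done_curve) if v >= n)
    return end - start

days = [1, 2, 3, 4, 5, 6]
started = [2, 4, 5, 7, 8, 9]   # hypothetical cumulative items started
done =    [0, 1, 2, 4, 6, 8]   # hypothetical cumulative items delivered

print(wip({"development": 9, "done": 8}))            # 1 item in progress
print(approx_lead_time(days, started, done, 4))      # item 4: started day 2, done day 4
```

The horizontal reading assumes items finish roughly in the order they were started, which is exactly why the chart gives an approximation rather than a per-item measurement.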

Now, we have even more information. Yay!

You will rarely see such nice Cumulative Flow Diagrams though. And that’s good news actually. I mean, if a CFD looks plain and nice all the time you can only learn that much from it. The real CFD magic is revealed when things don’t go so well.

Let’s go through several typical cases.

Cumulative Flow Diagram

In this situation the spread between the red and the green lines is growing over time. It indicates a really bad thing – we have more and more work in progress. That sucks. Increased WIP means increased lead time as well. Not only is time to market longer, but it is also more and more difficult to deliver anything fast when we need it.

That’s not the worst thing. The worst thing is that with an increased amount of work in progress we also increase multitasking, thus we incur all the costs of context switching, making the team less efficient.

Do we know that for sure? Um… no, not really. I’m making an assumption here that the team setup hasn’t changed, meaning that we have the same people spending a similar amount of time on the project, etc. If it were a Cumulative Flow Diagram for a team that is constantly growing, then it would be just OK. The chart may also represent an increasing number of blocked tickets, which definitely would be a problem, but a different one than described above.

In either case such a situation is a call for more analysis before jumping to conclusions. The potential reasons I offer with this and the following charts are simply the likely ones, not the only ones available.

By the way, please treat all the following remarks with that in mind.

One more interesting observation about this Cumulative Flow Diagram is that we have no clues where the root cause of the increasing WIP lies. Neither the development nor the testing part seems to be steadily attached to any other line over time. Further investigation is a must.

There are charts where we do get some clues about which stage of the process is problematic.

Cumulative Flow Diagram

Whoa, this time the development part compared to the testing part is really heavy. What can we learn from it? We don’t have problems with testing. Also, if the definition of testing is “testing and bug fixing,” which is a typical approach, it doesn’t seem that the quality of work is much of an issue either. If we were to point fingers, we’d point them at the development part, wouldn’t we?

And we might be wrong. Of course, one thing that may be happening here is a lot of items in development but few of them ready to test. Another possibility, though, is that there is a lot of stuff waiting for testing but the availability of testers is very limited, and when they’re available they focus on finishing what they started.

How can we tell? We can’t, unless we have more data. In fact, another line on the chart – one that distinguishes items in “development ongoing” from those in “development done” – would help. Without that the CFD is only an indicator of a problem and a call for deeper analysis. After all, that’s what Cumulative Flow Diagrams are for.

Another flavor of a similar issue is on the next CFD.

Cumulative Flow Diagram

We can find two things here. Let’s start with the more obvious one – the shape of the green line. It looks like stairs, doesn’t it? Stairs are typical when the last stage, which commonly is some sort of deployment, is done in cadences, e.g. weekly, biweekly, etc. Building on that, a stairs-shaped delivery line means that work in progress and lead time will vary depending on where in the release cadence you are. Maybe it’s time to make a step toward continuous delivery.

There is one more thing here though. There is a pretty significant, and increasing, number of items that are in testing but don’t get released. The gap between the blue and the green lines is growing with each consecutive release.

This one is the real issue here. It may mean that we have a problem with quality and we can hardly reach a state where an item has all its bugs fixed. It may mean that developers simply don’t pay much attention to fixing bugs but tend to start new stuff; at the same time testers would follow up on new stories as they wait for bug fixes for the old ones anyway. It may mean that the code base is organized in a way that doesn’t allow releasing everything that is ready. Once again, the root cause is yet to be nailed down, but at least we know where to start.

It seems we have more questions than answers. If you think that I’m not helping, it will be no different with the next example.

Cumulative Flow Diagram

This happens occasionally in almost every team. All the lines flatten out. What the heck? The first thing I do when I see this is check for public holidays or company-wide events happening during that time. It may simply be a time when no one was actually working on the project, and there is a perfect explanation for that.

Sometimes that is not the case though. This is when things get interesting. If everyone was at work but the chart still indicates that no one got anything done, it most likely tells a story about serious problems. A staging environment could have gone down, so everyone has been focusing on bringing it back alive. Another project could have needed help and virtually everyone got sucked in there. There could have been a painful blocker that forced everyone in the team to refocus for a while.

In any case, whatever it was, it seems to be solved already as the team is back on track with their pace.

Another flavor of such a scenario would look a bit different. It would give more hints too.

Cumulative Flow Diagram

There are two important differences between this and the previous Cumulative Flow Diagram. One is that, in this case, only two lines flatten out; the development line keeps up a healthy progress. The other is that the ends of both the green and the blue lines are as flat as a table top.

The latter suggests that whatever the problem is, it isn’t solved yet. What might the problem be, though? It seems that the team has no problem starting development of new items. They can’t, however, start testing, thus they clearly can’t deliver anything either. One probable hypothesis is that there is something seriously wrong with either the testing environment or the testers.

In the first case it just isn’t technically possible to verify that anything works as intended. In the second, it seems something bad happened to our only tester (if there were more than one, there would likely be some progress). There is another hint too. Developers don’t seem to care. They just start, and possibly complete, their stuff as if nothing happened.

I’d say that these guys have to deal with the issue first and then discuss how they collaborate. I sense a deeper problem here.

The same way the previous example indicates an issue in how people cooperate, the next one suggests a quality problem.

Cumulative Flow Diagram

The development line goes up in a stable and predictable manner. The testing curve? Not so much. And we’d better not mention the done line. Obviously we have more and more work in progress over time – we’ve covered this one before.

But wait, then suddenly the magic happens and everything goes back on track. At the very end we have a decently small amount of work in progress and much stuff delivered. The smell here is how the done curve (and the testing curve to some extent as well) skyrockets at the end.

How come such a pace was impossible earlier? I’d challenge the idea that the team suddenly became so fast. Of course they might not have kept the board up to date and then, out of the blue, realized that they had way more finished items than they thought.

A more likely scenario is that under pressure they just deployed whatever seemed at least remotely close to working. If that’s true, the problem isn’t solved at all and it’s going to come back to bite them in the butt. A curious reader may try to draw how the further part of the Cumulative Flow Diagram would look in this case.

The next one is one of my favorites. I wonder why it is so far down the list. Oh well…

Cumulative Flow Diagram

This Cumulative Flow Diagram is surprisingly common. Let’s try to list a few things that we can find here. The development curve goes up aggressively. Halfway through, more than 80% of items are started. Testing doesn’t go nearly that well. And delivery? Well, the start was crappy, I admit, but then it simply went through the roof. And it isn’t only a single day that would suggest delivery of uncompleted stuff. Odds are that these items are properly done. I wouldn’t bet real money on that, but I wouldn’t be surprised if it were so either.

Of course we have very high WIP in the middle of this CFD, but at both ends the gap seems to be significantly smaller.
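That “gap” is exactly how WIP is read off a CFD: the vertical distance between the topmost started line and the done line on a given day. A quick sketch, with made-up numbers shaped like this chart:

```python
# WIP on a CFD is the vertical gap between the cumulative "started"
# line and the cumulative "done" line. Numbers are illustrative only.
development = [4, 10, 16, 20, 22, 24]  # cumulative items started
done        = [0,  1,  2,  5, 15, 22]  # cumulative items delivered

wip = [d - f for d, f in zip(development, done)]
print(wip)  # [4, 9, 14, 15, 7, 2] - high in the middle, small at both ends
```

The bulge in the middle and the narrow ends of that list are the numeric version of the picture described above.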

Ah, one more thing. It seems that at the end of the day we’ve delivered everything that was in the backlog. Yay!

Now, what would be the diagnosis in this case? Time boxing! This is one of the classic visualizations of what typically happens over the course of an iteration. If a team is comfortable with planning and has a rather stable velocity, it’s likely that they’d fill the backlog with a reasonable amount of new features.

Then, given no WIP limits within the time box, everyone does their own thing: developers quickly start many features, having no pressure other than the end of the iteration to finish stuff. Eventually the backlog is cleared, so the team refocuses on finishing stuff, thus the acceleration at the latter stages of the process.

If you pictured a series of such Cumulative Flow Diagrams attached one to another, you’d see a nice chain going north-east. You’d find many of these in Scrum teams.

Another chart, despite some similarities to the previous two, suggests a different issue.

Cumulative Flow Diagram

In this case almost everything looks fine. Almost, as the done line barely moves above the horizontal axis. However, when it finally moves, it goes really high. What does it mean?

My guess would be that the team might have had the stuff ready but, for whatever reasons, they wouldn’t deliver. In fact, this is one of the typical patterns in fixed price, fixed date projects, especially bigger ones. Sometimes the basic measure that is tracked is how many items are done by the production team. No one pays attention to whether they can actually be deployed to the production, or even staging, environment.

Eventually, it all gets deployed. Somehow. The deployment part is long, painful and frustrating though. The Cumulative Flow Diagram representation of that pain and tears is that huge, narrow step of the done curve.

Talking about huge and narrow steps…

Cumulative Flow Diagram

Another chart has such a step too. We’ve already covered its meaning at the very beginning – it is a change of scope. In this case the point is not that such a change has happened, but its scale and timing.

First, the change is huge. It seems to be more than half of the initial scope added on top of it. Second, it happens all of a sudden and pretty late in the project. We might have been planning the end date and now, surprise, surprise, we are barely halfway through again.

Now, this doesn’t have to be a dysfunction. If you were talking with the client about the change, or it is simply a representation of expected backlog replenishment, that’s perfectly fine. In either case it shouldn’t come as a surprise.

If it does, well, that’s a different story. First, if you happen to work on a fixed price contract… man, you’re screwed big time. It isn’t even scope creep. Your scope has just gone on steroids and beaten the world record in the sprint. That hurts. Second, no matter the case, you likely planned something for these people. The problem is it’s not going to happen, as they have a hell of a lot of work to do in the old project, sorry.

So far the lines on Cumulative Flow Diagrams were going only up or, at worst, were flat. After all, that’s what you’d expect given the mechanism of creating the chart. That’s the theory. In reality the following chart shouldn’t be that much of a surprise for you.

Cumulative Flow Diagram

Whoa! What happened here? The number of stories in testing went down. The red line representing stuff in development followed, but don’t be fooled. Since the gap between the red and the blue lines is stable, nothing really happened to the items in development; it’s only stuff in testing that was affected.

Now, where did it go? Definitely not to the done bucket – the green line didn’t move. It didn’t disappear either, as the total number of items (the orange line) seems to be stable. A few items must have gone from testing back to the backlog, then.

What could it mean? Without an investigation it’s hard to say. I have good news though. The investigation shouldn’t take long – such things don’t happen every other day. For whatever reason, stuff that was supposed to be past the code complete milestone was marked as not started.
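This elimination game – gap to the red line stable, green line flat, orange line flat – can be made mechanical by converting the cumulative lines into band widths, i.e. items currently sitting in each state. A sketch with hypothetical numbers mirroring this chart:

```python
# Cumulative counts before and after the drop (illustrative numbers).
before = {"total": 30, "development": 18, "testing": 10, "done": 4}
after  = {"total": 30, "development": 14, "testing": 6,  "done": 4}

def in_state(day):
    """Items currently in each state: the band widths on the chart."""
    return {
        "in_development": day["development"] - day["testing"],
        "in_testing": day["testing"] - day["done"],
        "done": day["done"],
        "backlog": day["total"] - day["development"],
    }

print(in_state(before))  # {'in_development': 8, 'in_testing': 6, 'done': 4, 'backlog': 12}
print(in_state(after))   # {'in_development': 8, 'in_testing': 2, 'done': 4, 'backlog': 16}
```

The bands show the same story as the prose: items in development and done are untouched, while four items left testing and reappeared in the backlog.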

I sense a major architectural or functional change. What’s more, it’s quite probable that the change was triggered by the tests of the aforementioned items. Unfortunately, it also means that we’ve wasted quite some time building the wrong stuff.

Another flavor of that problem looks a bit scarier.

Cumulative Flow Diagram

Again, the total scope didn’t change. On the other hand, every other line took a nosedive. Once again the amount of stuff in development doesn’t seem to be affected. This time the same can be said about items in testing. It’s the delivered stuff that went back to square one.

It means that something we thought was done wasn’t so. For one thing, we were building the wrong stuff, exactly as in the previous example, only we discovered it later. We likely pay an order of magnitude bigger price for the late discovery.

There’s more to it though. This Cumulative Flow Diagram shows that we likely have problems with acceptance criteria and / or collaboration with the client. I mean, how come something that was good is not so anymore? Either someone accepted it without checking or we simply don’t talk to each other. Either way, it sucks big time.

Would the orange line never move down then? Oh yes, it would.

Cumulative Flow Diagram

I mean, besides the obvious case where a few items are removed from the backlog and the only line that moves down is the orange one, we may find this case. Using the technique perfected in the previous examples, we will quickly find that a few items that were in testing are… um, where are they, actually?

Nowhere. They’ve disappeared. They haven’t been completed, they haven’t been moved back. These items are no more.

What does it mean? First, one more time we’ve been working on the wrong stuff (fools we are). Second, we’ve figured it out pretty late (but it could have been later). Third, the stuff doesn’t seem to be useful at all anymore.

It’s likely that we realized we don’t know how exactly to build this or that, and we asked the client just to learn that they don’t need either of those anymore. It’s also likely that we’ve encountered a major technical issue and rethought how we tackle the scope, possibly simplifying the whole approach. Whatever it was, if we had figured it out earlier, it wouldn’t have been so costly.

Finally, one more Cumulative Flow Diagram I want to share with you.

Cumulative Flow Diagram

Think for a while what’s wrong with this one.

Compared to the previous charts, it seems pretty fine. However, by now you should be able to say something about this one too.

OK, I won’t keep you in suspense. In the first part of this CFD, work in progress was slowly but steadily growing. However, it seems that someone noticed that, and the team stopped starting new stuff. You can tell by seeing how relatively flat the red line becomes somewhere in the middle of the chart.

Given some time, testing and delivery, even though their pace hasn’t changed, caught up. Work in progress is kept at bay again and the team’s efficiency has likely improved.

As you can see, despite the past several examples, you can find the effects of improvements on Cumulative Flow Diagrams too. It’s just that a CFD is more interesting for learning that you have a problem than for finding confirmation that you’ve solved it. The latter will likely be pretty obvious anyway.

Congratulations! You made it through what is probably the longest article in the history of this blog. Hopefully you now understand how to read Cumulative Flow Diagrams and what they may indicate.

I have bad news for you though. You will rarely, if ever, see CFDs as nice as those shown in the examples. Most likely you will see an overlapping combination of at least a few patterns. This will likely make all the lines look like they were tracing a rollercoaster wagon.

Fear not. Once you get what may be happening under the hood of the chart, you will quickly come up with good ideas and the right places to start your investigation. After all, a Cumulative Flow Diagram will only suggest a problem. Tracking it down and finding an antidote is a completely different story.

However, if you’re looking for a nice health-o-meter for your team, a Cumulative Flow Diagram is a natural choice.

in kanban, project management
14 comments

Kanban Landscape and Portfolio Kanban

Kanban Landscape and Portfolio Kanban post image

One of the reasons why the Kanban Leadership Retreat (KLRAT) is such an awesome event is that it pushes our understanding of Kanban to a new level. No surprise that after the retreat there’s going to be much content related to our work in Mayrhofen published here.

One of the sessions at KLRAT was dedicated to sorting out the Kanban landscape – how we position different Kanban implementations in terms of both depth and scale.

Here’s the outcome of the session:

Kanban Landscape

To roughly guide you through what’s there: the axes are maturity and scale. Maturity differentiates implementations that are shallow and use only parts of Kanban from those characterized by a deep understanding of principles and practices. Scale, on the other hand, represents a spectrum that starts with a single person and ends with all the operations performed by an organization.

If we use scale as the starting point, we would begin with Personal Kanban. If you ask me, I believe that the range of depths of Personal Kanban applications should be wider (thus the bit taller area highlighted on the following picture), but I guess it’s not me who should take the stance here.

Kanban Landscape Personal Kanban

Then we have a whole lot of different Kanban applications on a team and a cross-team level. For most of the attendees this was probably the most interesting part, and I guess there will be much discussion about it across the community.

Kanban Landscape Team Level Kanban

For me though, a more thought-provoking bit was the last part (which, by the way, got pretty little coverage in the discussion): Portfolio Kanban. After all, this is my recent area of interest.

Kanban Landscape Portfolio Kanban

Since we didn’t have enough time to sort out all the details during the session, the final landscape was a sort of follow-up. Anyway, my first thought about the whole picture was that the range of depth of Portfolio Kanban implementations should be broader.

Given the simple fact that limiting work in progress on a portfolio level is tricky at best, many teams start simply with visualization. Of course, I don’t have anything against visualization. In fact, I consider visual management, which is an implementation of the first Kanban practice, a tool that allows harvesting low-hanging fruit easily. In terms of improvements on a portfolio level, low-hanging fruit is rarely, if ever, a scarce resource.

Having said that, Portfolio Kanban or not, I don’t consider visual management a very mature or deep Kanban implementation. That’s why my instant reaction was that we should cover less mature implementations on portfolio level too.

Kanban Landscape Portfolio Kanban

That’s not all though. When we think about shallow Portfolio Kanban implementations, e.g. visualization and not much more, we should also think about scale. It’s quite a frequent scenario that we start visualizing only a part of the work done across the organization, e.g. one division or one product line. From this perspective, such implementations are closer to multi-service scale than to a full-blown portfolio context.

That’s why I believe Portfolio Kanban implementations should cover even broader area, especially when we talk about low-maturity cases.

Kanban Landscape Portfolio Kanban

Finally, the picture should cover the different Portfolio Kanban implementations I know. I guess this might mean that, at some point in the future, we will go into more detail about the maturity and scale of Portfolio Kanban implementations. However, for now I think it is enough.

Interestingly enough, the area I propose for portfolio level implementations covers much of the whitespace we’ve had on the picture. It is aligned with my general perception of Kanban as a method that can be scaled very flexibly throughout a broad spectrum of applications.

in kanban
0 comments

All Sorts of Kanban

All Sorts of Kanban post image

Some of you who pay more attention to what is happening in the Lean Kanban community may have noticed that there’s an ongoing discussion about what Kanban is or should be. It’s not about what we use Kanban for but how exactly we define Kanban.

Another incarnation of this discussion was started by Al Shalloway with his post on how he typically approaches Kanban implementations and the explicit statement that he doesn’t support the Kanban method anymore.

Al Shalloway Kanban

Obviously it sparked a heated discussion on Twitter, which is probably the worst possible medium for such a conversation. I mean, how precisely can you explain yourself in 140 characters? That’s why I didn’t take part in it. However, since I do care about the subject, here is my take on the whole thing – not just Al’s comments but the whole dispute altogether.

Let me start with the very old discussion about Scrum. I’ve always been a fanboy of ScrumBut. In fact, all sorts of ScrumButs. Not that I don’t see the risks attached to using only part of a method (but that’s another story). I just think that any method is some sort of generalization and its application is contextual. It means that a specific implementation has to take into account a lot of details that were simply not available or known when the method was defined.

Fast forward to the discussion around Kanban. My general attitude hasn’t changed. When I see more and more different approaches to Kanban adoption, I have the same feelings as when I saw people experimenting with different Scrum implementations. The more the merrier.

I totally respected, and learned from, the discussion about the different order of adoption of Kanban practices that happened last year at the Kanban Leadership Retreat. I totally respect, and learn from, Al’s approach as well. There’s no single flavor of Kanban.

The part I don’t respect is dissing other approaches to Kanban adoption. If we want to discuss why something works or doesn’t work let’s do that in a specific context. I guess we will quickly find out that different approaches are valid in different contexts.

So what is the definition of Kanban? Personally, I base mine on the definition of the Kanban method as described in David Anderson’s book (including all the later work that will hopefully make its way into the second edition of the book). From my experience it is the most commonly used and most widely known. It is also covered extensively by experience reports and derivative work. So if we look for a benchmark, something we universally refer to when talking about Kanban, I think we already have one.

At the same time, I’m perfectly OK when I see other flavors of Kanban. As long as we understand the method and why specific practices are there, we can start tweaking it so it fits our specific context better. This is exactly what is happening with Kanban these days. And my take is that it is still the same Kanban, no matter whether one prefers the interpretation of David Anderson, Mary Poppendieck, Henrik Kniberg, Hakan Forss, Al Shalloway or someone else (we’ll see more of those, I’m sure). I’m OK with that, even though I personally have my preferences across the list.

There are different sorts of Kanban and that’s actually the best part. There is no one-size-fits-all approach and there never will be. It is always contextual. And this is why we need diversity.

Personally, I’d love to see these discussions run in such a spirit and not following the “my way is better than yours” line.

in kanban
0 comments

Kanban Leadership Retreat: Portfolio Kanban

Kanban Leadership Retreat: Portfolio Kanban post image

This year’s Kanban Leadership Retreat (KLRAT), as always, was awesome. In fact, despite sharing some critical feedback during the retro session at the very end of the event, I still consider it the best event of the year, hands down. This year I’ve come back home with the biggest homework ever: experiments to try out, ideas to play with, concepts to write down, etc. It means that coming back next year is a no-brainer for me.

One area that you’ll hear a lot about here is Portfolio Kanban. And this was also the subject of my session at the retreat.

The Goal

One of my goals for KLRAT this year was pushing forward the discussion on Portfolio Kanban; answering questions like: what are the boundaries of the method? What are the gaps we still need to cover to make Portfolio Kanban thorough? How are the implementations on the portfolio level aligned with the method as we know it?

During the session I wanted to talk about all these things. My expectation wasn’t to settle all the doubts. I assumed that the outcome would include some answers as well as some new questions, but overall would bring us to a better understanding of what Portfolio Kanban is.

The Hypothesis

I hypothesized that the Kanban method, with its principles and practices, defines well the approach we can use on a portfolio level. In other words, we don’t need any other definition for Portfolio Kanban than the one we already have for Kanban. This is where we started.

The Process

I didn’t want to start with the Kanban definition and look for its possible applications (we would find those, wouldn’t we?). I asked participants for a brain dump of practices, actions and techniques that we use to manage a portfolio. Then we sorted them into six buckets of Kanban practices. For example, a portfolio overview would be covered by visualization so it went to the visualize bucket, limiting the number of ongoing projects would obviously go to the limit WIP bucket, etc.

Of course there were oddballs too. These went to the side as another group. Actually, this one was the most interesting for me, as they pointed us to possible gaps.

Everyone had a chance to briefly explain what they put on the wall and why it went to a specific bucket. Then we used the remaining time to discuss the challenges we see – which questions weren’t addressed and need further work.

The Outcomes

There are a few lessons learned from the exercise. I think the most important bit is that the hypothesis was confirmed. I am convinced that, using the constraints of the Kanban method, we can neatly define Portfolio Kanban.

Of course the specific techniques will be different. Interpretation of practices will vary too. But the same is true with team level Kanban applications in different contexts.

Things get more interesting once we go deeper into the details. Let’s look at the wall, which is the documentation of the session (click for a larger version).

KLRAT Portfolio Kanban
KLRAT Portfolio Kanban
KLRAT Portfolio Kanban

At a glance you can see that one bucket is almost empty. Surprisingly enough, it is the improvement / evolution bucket. Does it mean that we don’t see a match between the Kanban method and portfolio management in this aspect? Personally, I think it would be too quick to draw such a conclusion.

An observation made by Klaus Leopold was that quite a bunch of the stickies on the wall could be placed not only in their original place but also in the improvement / evolution bucket. That’s obviously true. But then I can’t help thinking that if we were doing the very same exercise with Kanban on a team or service level, the end result would look different.

I think the answer is that evolution on a portfolio level involves different behaviors and different tools than on a team level. How exactly? Well, this is one of the loose strings we have after the session, so I don’t have an answer for this. Yet.

Finally, a pretty obvious outcome of the session is the list of challenges we will have to address (or explicitly leave unaddressed) to finalize the definition of Portfolio Kanban.

KLRAT Portfolio Kanban Challenges

Although we aren’t there yet in terms of defining what Portfolio Kanban is going to be, we made a big step forward. And this was exactly what I wanted to achieve. I didn’t want more divergence, as I believe we’ve had enough of that so far. I didn’t expect more convergence either. Not just yet. Also, I think the time at KLRAT is just too scarce to spend discussing the exact definition.

And that is how the end result of our work looked.

KLRAT Portfolio Kanban

All in all there are going to be follow-up steps – those that will bring convergence. If you are interested in further work on the subject stay tuned.

in kanban, software business
0 comments

Manager-Free Organization

Manager-Free Organization post image

One of the frequently mentioned management ideas these days is that we don’t need management. If I got a free beer every time I’ve heard the examples of W.L. Gore, Valve or GitHub and how we should act as they do, I could stay drunk for weeks without needing to buy any alcohol. The common message is that if they can, everyone can.

I don’t subscribe to that idea.

Well, being a manager myself, that’s not really a surprise, is it?

I mean, I’m really impressed with what these companies do, especially W.L. Gore given its size. It doesn’t mean that I automatically think it is the new management model that the masses should adopt. I simply don’t treat it as the true north of management, or leadership if you will.

First, such an approach is very contextual. You have to have a lot of the right bits and pieces in place before it works. For a start, it will fail miserably unless you have the right people on board and, let’s face it, most companies have way too many bad apples to make it work.

Second, scaling up a manager-free organization is a huge pain in the neck. This is why I respect W.L. Gore’s work so much.

Being a fan of evolution, I also try to imagine how to become such an organization in an evolutionary way. I guess I must be too dumb, because in the vast majority of companies it is beyond my imagination. Revolutions, on the other hand, have a surprisingly crappy success rate, so that’s not a feasible solution either.

So far you might have considered me a skeptic.

Well, not really.

While I don’t think that the managerless approach is, or will be, for everyone, there is a specific context where it is surprisingly easy to implement. If you run a small company, let’s say smaller than 30 people, there’s not that much managerial work anyway. Unless you introduce tons of that crap, that is.

It just so happens that Lunar Logic is such a small company. When you think about a small and stable size, you can forget about the scaling issue. It is much easier to find the right people too, because you simply need fewer of them. While Valve needs a few hundred of them, we’re perfectly fine with twenty-something. Besides, smaller teams generally tend to have fewer bad apples as everything, naturally, is more transparent. Everyone knows everyone else, sees others’ work, etc. There’s no place to hide.

Suddenly, the manager-free approach doesn’t seem so scary, does it?

It may be a hit to managers’ egos though.

I can hardly remember a time when I wasn’t a manager. Obviously there were countless occasions when I used my formal power to do what I believed was right. So yes, it took courage to intentionally strip myself of power and just put myself in a row with everyone else. Not that I’m already done with that; it’s a gradual process. A nice thing is that it can be done in an evolutionary fashion though.

While I still make salary and some other financial decisions, that’s basically it. The good part is that I’m forced to wear my manager’s hat very, very rarely. I spend the rest of my time fulfilling all the other roles I have which hopefully can be summarized as me helping others.

You know, all the fun stuff, like setting up daily conference calls with clients, writing longish boring emails, keeping task boards up to date, solving mundane problems, etc. Typically, just being there and looking for ways to help the rest of the team do their best. An interesting thing is that it does feel damn good, even if the tasks sound less than exciting. I help people do their awesome job. What else could you ask for as a leader?

That’s why I can’t be happier when I witness others treating me just as a regular team member. It means we are closer to being a manager-free organization.

So while you shouldn’t expect me to propose the managerless office to everyone, I definitely think this is something small, knowledge-based companies could try.

Would it work that easily if we were twice as big? I have no freaking idea. I mean, we definitely aren’t yet where GitHub or Valve is. I don’t even know if we want to be there. If the company’s growth is a threat to the culture we grow and cultivate here, so much the worse for the growth.

And this basically summarizes why I think the manager-free approach isn’t for the majority. Pretty few businesses would prefer to sacrifice growth just for the sake of preserving their culture.

By the way, do expect more on the subject soon.

in software business, team management
11 comments

MVP Thinking

MVP Thinking post image

One of the most valuable things achieved by Eric Ries’ Lean Startup is popularizing the term Minimum Viable Product (MVP). Of course, the concept isn’t novel. We were using the Minimal Marketable Feature (MMF) idea back in the early days of Kanban, and it was coined based on the same way of thinking. There are more such concepts too.

The basic premise is that we should build the smallest possible thing, or run the smallest possible experiment, that is still reasonable and adds value. Depending on the context, the value may be something that helps users, but it may as well be just knowledge discovery or verification of a hypothesis.

One thing I’ve realized recently is how widely I apply this thinking. Initially, it was simply a way of breaking the scope down. I wanted my teams to build possibly small, but still valuable, chunks of software so a client could verify them and feed back useful information. Then, after Lean Startup, it was about running a product. Say we want to build something new that kicks butt. What is the smallest feature that can prove or disprove the hypothesis that it would work at all?

Somehow I now use the same way of thinking, MVP thinking if you will, to discuss marketing ideas, define actionable items during retrospectives, etc. Often it is difficult to define a product per se, but there is always some kind of expected outcome and a definable minimal effort that allows us to get that outcome.

So how would I define MVP thinking?

1. Define the next smallest valuable goal you want to achieve.
2. Define minimal effort that allows you to achieve that goal.
3. Execute.
4. Analyze the outcomes and learn.
5. Repeat.

A potentially tricky part here is defining the goal, because it is totally contextual. It is also something that really appeals to me, as I don’t like recipes. In fact, if there is anything new here, it is basically the extremely broad application of the pattern, as the idea itself is anything but new. I mean, we usually close our context to working with the scope of a project, driving a product, running a business, etc. Then we obviously coin a new term and, if it works, we make our careers as overpriced consultants.

That’s totally not my goal. I’m just trying to broaden an applicable context of ideas we already know as I’ve personally found it helpful.

So if my problem is a roof leaking near a roof window, my next minimal goal may be verifying whether the leak has anything to do with the external window blind. Such a goal is nice because minimal effort may mean simply waiting for the next rain with the blind either opened or closed. I definitely don’t rush to repair the roof.

Talking about marketing? Let’s set a general goal that, say, TechCrunch will cover us. What would be the smallest valuable experiment that can bring us closer to achieving this kind of goal? I guess reaching out and networking may be a very reasonable first step. It doesn’t even require having an idea, let alone a product, that we want to have covered.

How about a product? Well, this one has been covered virtually everywhere. Build minimal functionality, possibly even fake, that allows verifying that the idea for the product makes sense.

Retrospectives? What is the single smallest possible change that will have a positive impact on the team? Try it. Verify it. Repeat.

Heck, I even buy my sailing gear this way. What is the smallest possible set that allows reasonable survival? Then I use the outcome and iterate; e.g. next time I need new gloves, long johns and a waterproof case for my phone.

When you think about it, this is basically Kaizen – systematically running small improvement experiments everywhere. So yes, it’s nothing new. It’s just that the specific idea of the Minimum Viable Product spoke to me personally and gave me a nice constraint that can be used in different areas of my life.

By the way, despite its very open definition, I also find Kaizen usually applied in a very limited context. So no matter which idea works for you, just remember you can use it in a broader context.

in software business
0 comments