
Pawel Brodzinski on Software Project Management

(Sub)Optimizing Cycle Time


There is one thing we take almost for granted whenever analyzing how the work is done. It is Little’s Law. It says that:

Average Cycle Time = Work in Progress / Throughput

This simple formula tells us a lot about the ways of optimizing work, and yes, there are a few approaches to achieve it. Obviously, there is more to it than the standard, most commonly used way, which is attacking throughput.
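To make those levers concrete, here is a minimal sketch with made-up numbers (the WIP and throughput figures below are purely illustrative):

```python
# Little's Law with illustrative numbers: 8 items in progress,
# 2 items finished per day on average.
wip = 8            # work items in progress
throughput = 2.0   # items finished per day

print(wip / throughput)          # 4.0 days of average cycle time

# Two ways to cut average cycle time in half:
print((wip / 2) / throughput)    # limit WIP to 4     -> 2.0 days
print(wip / (throughput * 2))    # double throughput  -> 2.0 days
```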

A funny thing is that, even though improving throughput is a perfectly viable strategy to optimize work, the approach is often very, very naive and boils down to just throwing more people into a project. Most of the time it is plain stupid, as we know from Brooks' Law that:

Adding manpower to a late software project makes it later.

By the way, reading The Mythical Man-Month (the title essay) should be a prerequisite for getting any project management-related job. Seriously.

Anyway, these days, when we aim to optimize work, we often focus either on limiting WIP or on reducing average cycle time. Both have a positive impact on the team’s results. Cycle time in particular often looks appealing. After all, the faster we deliver the better, right?

Um, not always.

It all depends on how the work is done. One realization I had when I was cooking for the whole company was that I was consciously hurting my cycle time to deliver pizzas faster. Let me explain. The interesting part of the baking process looked like this:

Assuming I had enough ready-to-bake pizzas, the first step was putting a pizza into the oven, then it was baked, then I pulled it out of the oven and served it. Since it was an almost standardized process, we can assume standard times for each stage: half a minute for stuffing the oven with a pizza, 10 minutes of baking and a minute to serve the pizza.

I was the only cook, but I wasn’t actively involved in the baking step, which is exactly what makes this case interesting. At the same time the oven was a bottleneck. What I ended up doing was protecting the bottleneck, meaning that I was trying to keep a pizza in the oven at all times.

My flow looked like this: putting a pizza into the oven, waiting till it’s ready, taking it out, putting another pizza into the oven and only then serving the one which was baked. Basically the decision-making point was when a pizza was baked.

One interesting thing is that the decision not to serve a pizza instantly after it was taken out of the oven also meant increasing work in progress. I pulled another pizza in before the first one was done. One could say that I was another bottleneck, as my activities were split between protecting the original bottleneck (the oven) and improving cycle time (serving a pizza). Anyway, that’s another story to share.

Now, let’s look at cycle times:

What we see in this picture is how many minutes elapsed since the whole thing started. You can see that each pizza was served a minute and a half after it was pulled out of the oven, even though the serving part was only a minute long. That was because I was dealing with another pizza in the meantime. Average cycle time was 12 minutes.

Now, what would happen if I tried to optimize cycle time and WIP? Obviously, I would serve pizza first and only then deal with another one.

Again, the decision-making point is the same, only this time the decision is different. One thing we see already is that I can keep a lower WIP, as I get rid of the first pizza before pulling another one in. Would it be better? In fact, cycle times improve.

This time, average cycle time is 11.5 minutes. Not a surprise since I got rid of a delay connected to dealing with the other pizza. So basically I improved WIP and average cycle time. Would it be better this way?

No, not at all.

In this very situation I had a queue of people waiting to be fed. In other words, the metric which was more interesting for me was lead time, not cycle time. I wanted to optimize people’s waiting time, that is the time from order to delivery (lead time), and not simply the processing time (cycle time). Let’s have one more look at the numbers. This time with lead time added.

This is the scenario with protecting the bottleneck and worse cycle times.

And this is one with optimized cycle times and lower WIP.

In both cases lead time is counted as time elapsed from the very beginning, so naturally lead times get worse with each consecutive pizza. Anyway, in the first case, after four pizzas we have a better average lead time (27.75 versus 28.75 minutes). This also means that I was able to deliver all these pizzas 2.5 minutes faster, so the throughput of the system was better as well. All that with worse cycle times and bigger WIP.
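If you want to rebuild these numbers yourself, here is a small sketch of both strategies. The half-minute load, 10-minute bake and 1-minute serve come from the story; the assumption that, in the bottleneck-protecting variant, there is always another pizza to load before serving (there were 15 pizzas in the real run) is mine:

```python
LOAD, BAKE, SERVE = 0.5, 10.0, 1.0   # minutes per stage, as in the story


def protect_bottleneck(n):
    """Pull a pizza, load the next one into the oven, only then serve."""
    cycles, leads = [], []
    load_start = 0.0
    for _ in range(n):
        bake_end = load_start + LOAD + BAKE   # pizza comes out of the oven
        lead = bake_end + LOAD + SERVE        # the next pizza goes in first
        cycles.append(lead - load_start)
        leads.append(lead)
        load_start = bake_end                 # next pizza is loaded at the pull
    return cycles, leads


def serve_first(n):
    """Pull a pizza, serve it, only then load the next one (lower WIP)."""
    cycles, leads = [], []
    load_start = 0.0
    for _ in range(n):
        bake_end = load_start + LOAD + BAKE
        lead = bake_end + SERVE               # serve straight away
        cycles.append(lead - load_start)
        leads.append(lead)
        load_start = lead                     # next pizza is loaded after serving
    return cycles, leads


for name, strategy in (("protect bottleneck", protect_bottleneck),
                       ("serve first", serve_first)):
    cycles, leads = strategy(4)
    print(f"{name}: avg cycle {sum(cycles) / 4} min, "
          f"avg lead {sum(leads) / 4} min, last delivery at {leads[-1]} min")
# protect bottleneck: avg cycle 12.0 min, avg lead 27.75 min, last delivery at 43.5 min
# serve first: avg cycle 11.5 min, avg lead 28.75 min, last delivery at 46.0 min
```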

An interesting observation is that average lead time wasn’t better from the very beginning. It became so only after the third pizza was delivered.

When you think about it, it is obvious. Protecting a bottleneck does make sense when you operate in a continuous manner.

Anyway, am I trying to convince you that the whole thing with optimizing cycle times and reducing WIP is complete bollocks and you shouldn’t give a damn? No, I couldn’t be further from this. My point simply is that understanding how the work is done is crucial before you start messing with the process.

As a rule of thumb, you can say that lower WIP and shorter cycle times are better, but only because so many companies have such ridiculous amounts of WIP and such insanely long cycle times that it’s safe advice in the vast majority of cases.

If you are, however, in the business of making your team work efficiently, you had better start with understanding how the work is being done, as a single bottleneck can change the whole picture.

One thought I had when writing this post was whether it translates to software projects at all. But then I recalled a number of teams that should think about exactly the same scenario. There are those which have the very same people dealing with analysis (prior to development) and testing (after development), or any similar setup. There are those that have a Jack-of-all-trades on board and always ask what the best thing for them to work on is. There are also teams that use external people part-time to cover areas they don’t specialize in, both upstream and downstream. Finally, there are functional teams juggling many endeavors, trying to figure out which task is the most important to deal with at any given moment.

So even as I keep my stance on Kanban principles, I urge you not to take any advice as a universal truth. Understand why it works, where it works and why it is (or is not) applicable in your case.

Because, after all, shorter cycle times and lower WIP limits are better. Except when they’re not.

in kanban, project management
3 comments

On Transparency


One of the things I’ve learned throughout my career is to assume very little and expect to learn very much whenever changing a job. In terms of learning, there is always a great lesson waiting for you, no matter what kind of an organization you’re joining. If you happen to join a crappy org, this is the least you can salvage; if you join a great one, it’s the cherry on the cake. Either way, you should always aim to learn this lesson.

But why am I telling you this? Well, I have joined Lunar Logic very recently. From what I could tell before, the company was a kick-ass Ruby on Rails development shop with a very open and straightforward culture. I didn’t even try to assume much more.

One thing hasn’t been a surprise; we really are a kick-ass Rails development shop. The other has been a surprise though. I mean, I expected transparency within Lunar Logic, but its level is just stunning. In a positive way, of course.

An open discussion about monthly financials, which obviously are public? Fair enough. Questioning the value of running a specific project? Perfectly OK. Sharing critical opinions on a leader’s decisions? Encouraged. Regular lean coffees where every employee can come up with any subject, even one that would be considered embarrassing in almost any organization I can think of? You’re welcome. I can hardly come up with an example of a taboo topic. In all this, and let me stress this, everyone gets honest and straightforward answers.

Does it mean that the company is easier to lead? Um, no. One needs to think about each and every decision because it will be shared with everyone. Each piece of information should be handled as if it were public. After all, it is public. So basically your goal, as a leader of such an organization, is to be fair, whatever you do. There’s no place for deception, trickery or lies.

One could think that, assuming goodwill, it is a default mode of running a company. It’s not. It’s very unusual to hear about, let alone work at, such an org. There are a number of implications of this approach.

  • It is challenging for leaders. You can’t hide behind a “that’s not for you to know” answer or meaningless blah blah. People won’t buy it. This is, by the way, probably the number one reason why this approach is so uncommon.
  • It helps to build trust between people. Dramatically. I don’t say you get trust for free, because it never happens, but it is way easier.
  • It eliminates us versus them mentality. Sure, not everyone is equal and not everyone has the same role in the company, but transparency makes everyone understand everyone else’s contributions better, thus eliminating many sources of potential conflict.
  • It heavily influences relationships with customers. It’s much easier to be open and honest with clients if this is exactly what you do every day internally. I know companies that wouldn’t treat this one as a plus, but being a client, well, ask yourself what kind of a vendor you’d like to work with.

All in all, transparency is like a health meter of an organizational culture. I don’t say that it automatically means the org is successful, too. You can have a great culture and still go out of business. I just say that if you’re looking for a great place to work, transparency should be very, very high on the list of qualities you value. Possibly at the very top of the list, as it is in my case.

By the way, if you are a manager or a company leader, ask yourself: how many things wouldn’t you reveal to your team?

This post wouldn’t be complete without giving credit to Paul Klipp, who is the creator of this unusual organizational culture. I can say that during the first few weeks I’ve already learned more about building great teams and exceptional organizations from Paul than from any leader I worked with throughout my career. It goes way beyond just the transparency bit, but that’s a completely different story. Or rather a few of them. Do expect me to share them soon.

in personal development, recruitment, software business
8 comments

Kitchen Kanban, or WIP Limits, Pull, Slack and Bottlenecks Explained

Have you ever cooked for twenty people? If you have, you know how different the process is compared to preparing a dinner just for you and your spouse. A few days ago I was preparing lunch for folks in my company and I’m still amazed at how naturally we use the concepts of pull, WIP limits, bottlenecks and slack when we are in situations like this.

I can’t help but wonder: why the hell can’t we use the same approach when dealing with our professional work?

OK, so here I am, cooking 15 pizzas for a small crowd.

Bottlenecks

If you’ve read Eli Goldratt’s The Goal you know that if you want to make the whole flow efficient you need to identify and protect bottlenecks. Having some experience with preparing pizzas for a few people, I easily guessed that the bottleneck would be the oven.

The more interesting part is how, knowing what the bottleneck is, we automatically start protecting it. The very next thing I did after taking a pizza out of the oven was putting another one in. If I had decided to serve the pizza first, I would have made my bottleneck resource (the oven) idle, which would affect the whole process and its length.

Interestingly enough, protecting the bottleneck in this case resulted in longer cycle time and, with the first delivery, worse lead time too. That’s the subject for another story though.

The lesson here is about dealing with bottlenecked parts of our processes. One of the recent conversations I’ve had was about bringing more developers into a project where business analysis was the bottleneck. It would be like hiring waiters to help me serve pizzas and expecting that to make the whole process faster.

It’s even worse if you don’t know what your bottleneck is. In the business analysis story I’ve mentioned, the team learned where the problem was only some time into the project. Before that they would actually have been willing to hire more waiters, expecting that to improve the situation.

WIP Limits

Fifteen pizzas and one cook. If I acted like an average software development team, I would prepare the dough for all the pizzas, then move to the tomato sauce, then to the other ingredients. Three hours later I would realize that, because of a system constraint, I can’t bake them all in one batch. I would switch my efforts to dealing with a hungry and angry crowd, focusing more on handling their dissatisfaction than on delivering value. Fortunately, eventually I would run a retrospective where I would learn that I had made a mistake with the baking part, so I would file the retro summary in a place no one ever looks at again and congratulate myself on a heroic effort of dealing with hungry clients.

Instead, I limited the amount of work invested in preparing dough and ingredients. I prepared enough to keep the oven running all the time.

Well, actually I prepared more. I started with a WIP limit of 6 pizzas, meaning that I had 6 ready-to-bake pizzas when the oven was ready. Very soon I realized there was one obvious issue, and two less obvious ones, with such a big WIP limit.

First, 6 pizzas take up a lot of space. Space which was limited. Even more so when more people popped up in the kitchen waiting for their share. This is basically the cost of inventory. Unfortunately, in the software industry we deal with code, so we don’t really see how it stacks up and takes up space, until it’s too late and fixing a bug becomes a dreadful task no one is willing to undertake.

If only we had to find a place to store tiny physical zeros and ones for each bit of our code… The industry would rock and roll.

The other two issues weren’t that painful. If you keep an unbaked pizza too long it’s not as good, as it’s a bit too dry after baking. I also realized that I could easily manage to prepare new pizzas at a pace that doesn’t require such a big queue in front of the oven. I could prepare a better (fresher) product and it still wouldn’t affect the flow.

So I quickly reduced my queue of pizzas in front of the oven to 4, 3 and eventually 2. Sure, it changed how I worked, but it also made me more flexible. I didn’t need so much space and could react to special requests quickly.
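A back-of-the-envelope check shows why such a small queue was enough. The preparation time below is my assumption (the post doesn’t give one); the other numbers come from the story:

```python
BAKE, SERVE, LOAD = 10.0, 1.0, 0.5   # minutes, as in the story
PREP = 3.0                           # assumed minutes to prepare one ready-to-bake pizza

# While one pizza bakes, the cook spends roughly 1.5 minutes serving and loading,
# which leaves enough slack to prepare two or three new pizzas.
free_minutes_per_bake = BAKE - SERVE - LOAD
print(free_minutes_per_bake / PREP)  # ~2.8 pizzas can be prepared per bake cycle

# As long as that number stays above 1, a queue of one or two ready-to-bake
# pizzas keeps the oven fed; a queue of six is pure inventory.
```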

Surprisingly enough, WIP limits seem so intuitive in a kitchen. It’s often more convenient to work in small batches. Such an approach helps to focus on the bottleneck. If you’re dealing with physical inventory, you can literally see the cost of excessive inventory. Unfortunately, with code it’s not that visible, even though it’s a liability too.

It doesn’t mean that the whole mechanism changes dramatically. A lot of unfinished work increases multitasking, inflicts the cost of context switching and lengthens feedback loops. It just isn’t that visible, which is why we don’t naturally limit work in progress.

Slack Time

When we are talking about WIP limits we can’t forget about slack time. Technically I could prepare an infinite queue of ready-to-bake pizzas in front of the oven. Of course no mentally healthy cook would do this.

Anyway, when I started limiting my pizzas in progress I was facing moments when, in theory, I could have been preparing another one but didn’t. I didn’t, even when there was nothing else to be done at the moment.

A canonical example of slack time.

I used my slack time to chat with people, so I was happier (and we know that happy people do a better job). I got myself a coffee so I improved my energy level. I also used slack to rearrange the process a bit so my work was about to become more efficient. Finally, slack time was an occasion to check remaining ingredients to learn what pizzas I can still deliver.

In short, I was doing anything but pushing more work into the system. It wouldn’t have helped anyway, as I was bottlenecked by the oven and knew my pace well enough to come up with reasonable, yet safe, WIP limits which told me when I should start preparing the next pizza.

There are two lessons for us here. First, learn how the work is being done in your case. This knowledge is a prerequisite for setting reasonable WIP limits. And yes, the best way to learn which WIP limits are reasonable in a specific case is to experiment and see what works and what allows you to keep the pace of the flow.

Second, slack time doesn’t mean idle time. Most of the time it is used to do meaningful stuff, very often system improvements that result in better efficiency. When all that people hear in my argument for slack time is “sometimes it’s better to sip coffee than to write code,” I don’t know whether I should laugh or cry. It seems they don’t even try to understand, let alone measure, the work their teams do.

Pull

And, finally, the pull principle. As we already know the critical part of the whole process was the oven, let’s start there. Whenever I took a pizza out of the oven, it was a signal to pull another pizza into the oven. Doing this freed one slot in my queue of pizzas in front of the oven, which was a pull signal to prepare another one. To do this I pulled dough, tomato sauce and the rest of the ingredients. When I ran out of any of these, I pulled more from the fridge.

Pull all over the place. Getting everything on demand. I was chopping vegetables or opening the next pack of salami only when I needed them. There were almost no leftovers.

Assuming that I could replenish almost every ingredient in the time a pizza was being baked, I was safe. I could even rely on the assumption that it was close to impossible to run out of all the ingredients at the same time. And even then I had a buffer of ready-to-bake pizzas.

The only exception was dough, as preparing dough took more time. Dough was my epic story. This part of the work was common to a bunch of pizzas, all derived from the same batch of dough. The same way stories are derived from an epic. In this case I was just monitoring the inventory of dough so I could start preparing the next batch soon enough. Again, there was a pull signal but it was a bit different: there are only two pieces of dough left, so I should start preparing another batch so it is ready once I run out of the current one.
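That dough signal is essentially a reorder point. A toy sketch, with an illustrative threshold:

```python
REORDER_POINT = 2   # pieces of dough left that trigger a new batch (as in the story)

def dough_signal(pieces_left, batch_in_progress):
    """Start the next batch early enough that it is ready before the current one runs out."""
    if pieces_left <= REORDER_POINT and not batch_in_progress:
        return "start preparing the next batch of dough"
    return "keep baking"

print(dough_signal(pieces_left=2, batch_in_progress=False))
```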

The lesson about pull is that we should think about the work we do “from right to left.” We should start with work items that are closest to being done and consider how we can get them closer to completion. Then, going from there, we’ll be able to pull work throughout the whole process as with each pull we’ll be creating free space upstream.

Once we deploy something we create free space so we can pull something to acceptance testing. As a result we free some space in testing and pull features that are developed, which makes it possible to pull more work from a backlog, etc.

When using this approach along with WIP limits, we don’t introduce an excessive amount of work into the system and we keep our flow efficient.

Once we learn that earlier stages of work may require more time than later ones we may adjust pull signals and WIP limits accordingly so we keep the pace of the flow.
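As a rough illustration of this right-to-left pull with WIP limits, here is a toy board sketch; the column names, items and limits are made up:

```python
from collections import OrderedDict

board = OrderedDict([                 # leftmost -> rightmost
    ("backlog", ["F5", "F6", "F7"]),
    ("develop", ["F3", "F4"]),
    ("test",    ["F2"]),
    ("deploy",  ["F1"]),
])
wip_limits = {"develop": 2, "test": 1, "deploy": 1}

def pull(board, wip_limits):
    """Walk the board right to left: deliver from the last column,
    then pull into each column only when it has a free slot."""
    columns = list(board)
    done = board[columns[-1]].pop(0) if board[columns[-1]] else None   # deploy = deliver
    for right, left in zip(reversed(columns), reversed(columns[:-1])):
        has_space = len(board[right]) < wip_limits.get(right, float("inf"))
        if has_space and board[left]:
            board[right].append(board[left].pop(0))   # pull signal travels upstream
    return done

print("delivered:", pull(board, wip_limits))
for name, items in board.items():
    print(f"{name:8} {items}")
# One delivery frees a slot in deploy, which pulls from test, which pulls from
# develop, which finally pulls new work from the backlog.
```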

Summary

I hope the story makes it easier to understand the basic concepts introduced by Kanban. Actually, I’d say that if software were physical, people would understand the concepts of flow, WIP limits, pull or protecting bottlenecks way more easily. They would see how their undelivered code clutters their workspace, impacts their pace and affects their flow of work.

So how about this: ask yourself the following questions.

  • Where is the oven in your team?
  • Who is working on this part of the process?
  • Do you protect them?
  • How many ready-to-bake pizzas do you typically have?
  • How many of these do you really need?
  • What do you do when you can’t put another pizza into the oven?
  • What kind of space do your pizzas occupy?
  • Do your pizzas taste the same, no matter how long they are queued?
  • Do you need all the ingredients prepared up front?
  • How many of the ingredients do you typically have prepared?
  • How do you know whether you need dough and when you should start preparing it?

Look at your work from this perspective and tell me whether it helps you to understand your work better. It definitely does in my case, so do expect further pizza stories in the future.

in kanban
9 comments

Why Organizational Transformations Fail


You can’t reorganize the way a business thinks by reorganizing the business.

~Stephen Parry

I can safely state that every company I worked for was attempting to make an organizational transition during my time there. Motivations varied from simply surviving, through adjusting to a new environment, to improving the whole business. Approaches to run a transition also differed, but one common part was a reorganization.

Oh, reorganizations. Who doesn’t love reorgs? Shaking everyone around. Bringing in good old insecurity and fear of the unknown. Quite an interesting strategy for introducing a positive change, although it is the most prevalent one and often seems inevitable. Unfortunately, it is a strategy with a pretty low success rate too.

After all, coffee doesn’t become sweeter simply because you stir it.

The interesting part, however, is that I can come up with an example or two in which reorganizations helped to make a transition a success, or even make it possible in the first place.

How come? The answer is hidden in Stephen Parry’s words at the beginning of the post. It’s not about the reorganization itself; it’s about changing the way business thinks. The problem with most reorgs is that they’re driven from the top, which usually means that the top of hierarchy remains the same. It also means that the way business thinks, which spreads top-down, remains unchanged.

If the organization’s leaders’ mindset remains the same, any change that is introduced down there isn’t sustainable. Eventually, it will be reverted. Depending on how many layers of isolation there are it may take some time but it’s inevitable. Prevailing mindset just goes top-down and unless you can address its source it’s a battle you’re not going to win.

I can think of, and have been a part of, reorganizations that shook the very top of a company, introducing new leaders and thus enabling the new way of thinking. Yes, the business was reorganized but this was neither the only nor the most important part of the change.

Because coffee doesn’t become sweeter simply because you stir it. Unless you’ve remembered to add sugar before, that is.

The game-changer here is mindset; that has to change in order to enable the successful transition. And I have bad news for you: it has to change at the very top of the organization. You don’t necessarily have to start there, but eventually it either happens or things, in general, remain the same.

So if you consider a reorg as a way to change how your business works, ask yourself a question: does this change affect the mindset of the organization’s leaders? If not, I wouldn’t expect much chance of success.

Besides, there are many ways to skin a cat. A reorg isn’t the only tool you have to change mindset across the organization. Heck, it isn’t even a very good one. Remember that when you start the next revolution just to see that virtually nothing changes as a result.

By the way, there is a neat application of this idea in a slightly different situation too. If you want to preserve the mindset across the organization when changing leaders, pay very close attention to how the new leaders think. Your company can be a well-oiled machine, but when the steering wheel is grabbed by a guy who neither understands nor cares about the existing mindset, the situation is going to deteriorate pretty quickly. You just don’t want to hire Steve Ballmer to substitute for Bill Gates.

in software business
0 comments

WIP Limits Revisited


One of the things you can hear repeatedly from me is why we should limit work in progress (WIP) and how it drives continuous improvement. What’s more, I usually advise using rather aggressive WIP limits. The point is that you should generate enough slack time to create space and an incentive for improvements.

In other words, the goal is to have people not doing project or product development work quite frequently. Only then, freed from being busy with the regular stuff, can they improve the system which they’re part of.

The part I was paying little attention to was the cost of introducing slack time. After all, it is a very rare occasion that clients pay us for improvement work, so it is a sort of investment that doesn’t come for free.

This is why Don Reinertsen’s sessions during the Lean Kanban Europe Tour felt, at first, so unaligned with my experience. Don advises starting with WIP limits twice as big as the average WIP in the system. This means you barely generate any slack at all. What the heck?

Let’s start with a handful of numbers. Don Reinertsen points out that a WIP limit twice as big as the average WIP, when compared to no WIP limit at all, ends up with only 1% more idle time and only 1% of work rejected. As a result we get a 28% improvement in average cycle time. Quite an impressive change for a very small price. Unfortunately, down the road we pay more and more for further improvements in cycle time, thus the question: how far should we drive WIP limits?
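To get a feel for these trade-offs, here is a rough single-server queue sketch with a WIP cap. This is not Reinertsen’s actual model; the arrival rate, service rate and caps below are made up:

```python
import random
from collections import deque

def simulate(wip_limit, arrival_rate=0.9, service_rate=1.0, n=100_000, seed=1):
    rng = random.Random(seed)
    t = 0.0                     # clock: time of the current arrival
    last_departure = 0.0        # when the most recently accepted item will be done
    in_system = deque()         # departure times of items still in progress
    busy_time, cycle_times, rejected = 0.0, [], 0
    for _ in range(n):
        t += rng.expovariate(arrival_rate)        # next item shows up
        while in_system and in_system[0] <= t:    # items finished by now leave
            in_system.popleft()
        if len(in_system) >= wip_limit:           # WIP cap hit: turn the item away
            rejected += 1
            continue
        service = rng.expovariate(service_rate)
        departure = max(t, last_departure) + service
        busy_time += service
        cycle_times.append(departure - t)
        in_system.append(departure)
        last_departure = departure
    horizon = max(last_departure, t)
    return sum(cycle_times) / len(cycle_times), 1 - busy_time / horizon, rejected / n

for cap in (float("inf"), 20, 10, 5, 2):
    cycle, idle, rej = simulate(cap)
    print(f"WIP cap {cap:>4}: avg cycle {cycle:5.1f}, idle {idle:5.1%}, rejected {rej:5.1%}")
```

Run it and you should see the pattern Reinertsen describes: a loose cap barely adds idle or rejected work yet already cuts the cycle time of accepted items, while aggressive caps buy further cycle time improvements at a quickly growing cost in idle time and turned-away work.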

The further we go the more frequently we have idle time, thus we waste more money. Or do we? Actually, we are doing it on purpose. Introducing slack to the system creates an opportunity to improve. It’s not really idle time.

Instead of comparing the value of project or product work to idle time, we should compare it to the value of improvement work. The price we pay isn’t as high as it would initially seem based simply on queuing theory.

Well, almost. If we look at the situation within the strict borders of a single project, the value of improvement work is non-existent or intangible at best. How much better will the product be, or how much faster will we build the remaining features? You don’t know. So you can’t say how much value these improvements will add to the project.

However, saying that the improvements are of no value would be looking at it from the perspective of optimizing a part; in this case a single project. Often the impact of such improvements will reach beyond the borders of the project and last longer than the project’s time span.

I don’t say I have a method you may use to evaluate the cost and value attached to non-project work. If I had one, I’d probably be a published author by now, have lots of grey hair and a beer belly twice as big. My point is that you definitely shouldn’t account for all non-project work as waste. Actually, most of the time the cost of this work will be smaller than the value you get out of it.

If we relied purely on Don Reinertsen’s data and assumed that whenever we hit the WIP limit people are idle, we could come up with a chart like this:

On the horizontal axis we have WIP limits going from infinite (no WIP limit at all) to aggressive WIP limits inflicting a lot of slack time. On the vertical axis we have the overall impact on the system. As we introduce WIP limits (we go to the right side of the chart) we gain value thanks to shorter average cycle times and, at least at the beginning, improved throughput. At the same time we pay the cost of delay of rejected or queued work waiting to enter the system (in the backlog) and the cost of idle time.

In this case we reach the peak of the curve pretty quickly, which means that we get the most value with rather loose WIP limits. We don’t want to introduce too much idle time into the system, as it’s a liability.

However, if we start thinking in terms of slack time, not idle time, and assume that we are able to produce enough value during slack time to compensate for the cost, the chart looks much different.

In case number two the only factor working against us is the cost of delay of work we can’t start because of WIP limits. The organization still has to pay the people doing non-project work, but we assume that they create equal value during slack time.

The peak of the curve is further to the right, which means that the best possible impact happens when we use more aggressive WIP limits than in the first case.

Personally, I’d go even further. Based on my past experience, I’d speculate that slack time often results in improvements that have a positive overall impact on the organization. In other words, it would be quite a good idea to fund them as projects, as they simply bring in or save money. That gives us another scenario.

In this case the impact of slack time is positive, so it partially compensates for the increasing cost of delay as we block more items from entering the system. Eventually, of course, the overall impact is negative in each case, since at the end of the horizontal axis we’d have a WIP limit of 0, which would mean an infinite cost of delay.

Anyway, the more interesting point to look at is the peak of each curve, as this is a sweet spot for our WIP limits. And this is something we should be looking for.

I guess, by this time you’ve already noticed that there are no numbers on the charts. Obviously, there can’t be any. Specific WIP limits would depend on a number of context-dependent factors, like a team size, process complexity or external dependencies, to mention only the most obvious ones.

The shape of the curves will depend on the context as well. Depending on the work you do, the cost of delay can have a different impact, just as the value of improvements will differ. Not to mention that the cost attached to slack time varies as well.

What I’m trying to show here is that introducing WIP limits isn’t just a simple equation. It’s not without reason that no credible person would simply give you a number as an answer to a question about WIP limits. You just have to find out for yourself.

By the way, the whole background I drew here is also an answer to the question of why my experience seemed so unaligned with the ideas shared by Don Reinertsen. I just usually see quite a lot of value gained thanks to wise use of slack time. And slack time, by all means, should be accounted for differently than idle time.

in kanban
2 comments

Leadership, Fellowship, Citizenship


There was a point in my career when I realized how different the concepts of management and leadership were, and that to be a good manager one had to be a good leader. Since then the idea of leadership, as I understand it, has worked for me very well. I even like to consider my role in organizations I work for as a leader, not a manager.

Perceptions of leadership shift these days. Bob Marshall proposes the concept of fellowship. The idea is based on the famous Fellowship of the Ring and builds on how the group operated and what values were shared among its members, so that they eventually could achieve their goal.

A common denominator here is that everyone is equal; there’s no single “leader” who is superior to everyone else. At different points in time different people take over the role of leader in a way that is the best for the group.

As Bob points out, leadership doesn’t really help to move beyond an analytical organization (see: The Marshall Model). This means the concept of leadership is insufficient to deal with the further challenges our companies face on the road of continuous improvement. We need something different to deal with our teams, thus fellowship.

Another, somewhat related, concept comes from Tobias Mayer, who points us to the idea of citizenship. Tobias builds the concept on a balance between rights and responsibilities. It’s not that we, as citizens, are forced or told to keep our neighborhood clean – it’s that we feel responsible for it. This mechanism can be transferred to our workplaces and it would be an improvement, right?

I like both concepts. Actually, I even see how one can transit to the other, back and forth, depending on which level of an organization you are. On a team level, fellowship neatly describes desired behaviors and group dynamics. As you go up the ladder, citizenship is a nice model to describe representation of a group among higher ranks. It also is a great way to show that we should be responsible for and to the people we work with, e.g. different teams, and the organization as a whole.

Using ideas introduced by Tobias and Bob we can improve how our teams and organizations operate, that’s for sure.

Yet, I don’t get one thing here. Why are the fellowship and citizenship concepts built in opposition to leadership?

OK, maybe my understanding of leadership is flawed and there is The Ultimate Leadership Definition written in stone somewhere, only I don’t know it. Maybe fellowship and citizenship violate one of The Holy Rules of Leadership and I’m just not aware of them. Because, for me, both ideas are perfectly aligned with leadership.

Leadership is about making a team operate better. If that takes being in the front line, fine. When someone needs to do the dirty work no one else is willing to do, I’m good with that as well. I’m even happier when others can take over the leader’s role whenever it makes sense. And what about taking responsibility for what we do, for the people around us and for the organization around us? Well, count me in, no matter what hat I wear at the moment.

When I read Bob and Tobias I’m all: “hell, yeah!” Except the part with labels. Because I still call it leadership. This is exactly what leadership is for me. Personally, I don’t need another name for what I do.

I don’t say that we should avoid coining new terms. Actually, both citizenship and fellowship are very neat names. I just don’t see the point of building the opposition to ideas we already know. The more so as citizenship and fellowship are models, which are useful for many leaders.

I don’t buy an argument that we need a completely new idea as people are misusing concepts we already have. Well, of course they are. There are all kinds of flawed flavors of leadership, same as there will be flawed flavors of fellowship and citizenship when they become popular.

I don’t agree that leadership encourages wrong behaviors, e.g. learned helplessness. Conversely, the role of a leader is to help a team operate better, thus help eliminate such behaviors. A good leader doesn’t build followership; they build new leaders.

That’s why I prefer to treat citizenship and fellowship as enhancements of leadership, not substitutions of it.

in team management
1 comment

Why Burn-up Chart Is Better Than Burn-down Chart


The other day I was in the middle of a discussion about the visuals a team was going to use in a new project. When we came to the point of tracking completion of the project, I advised a burn-up chart and intended to move on. The thing that stopped me was the question I was asked: why burn-up and not burn-down?

Burn-down Chart

First, some basics. A burn-down chart is an old idea I learned from Scrum. It is a simple graph showing the amount of work on the vertical axis and the timeline on the horizontal one. As time progresses we keep track of how much work is still not done. The goal is to hit the ground. The steepness of the curve can help us approximate when that is going to happen or, in other words, when we’re going to be done with all the work.

In terms of quantifying work, it should be whatever we use anyway – story points, weighted T-shirt sizes, a simple number of tasks or what have you.

Burn-up Chart

A burn-up chart’s mechanics are basically the same. The only difference is that instead of tracking how much work is left to be done, we track how much work we’ve completed, so the curve goes up, not down.

The Difference

OK, considering these two work almost identically, what’s the difference? Personally, I don’t buy all the crap like “associations of the word burn-down are bad.” We learned not to be afraid of failure and we can’t deal with a simple word? Give me a break.

The real difference becomes visible when the scope changes. If we suddenly realize we have more work to do, the burn-down may look like this.

Unfortunately, it can also look different if we happen to be (un)lucky enough to complete some work at the same time as we learn about additional work.

It becomes even trickier when the scope decreases.

Have we just completed something or has a client cancelled that feature which we talked about yesterday? Not to mention that approximating the finish of work becomes more difficult.

At the same time, a burn-up chart makes it all perfectly visible, as progress is tracked independently of scope changes.
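A burn-up chart is also trivial to draw yourself. Here is a minimal sketch with made-up numbers, where the scope line jumps up mid-project and drops later while the done line keeps climbing:

```python
import matplotlib.pyplot as plt

weeks = list(range(9))
scope = [40, 40, 40, 48, 48, 48, 44, 44, 44]   # +8 items in week 3, -4 in week 6
done  = [0,   5, 11, 16, 23, 29, 33, 38, 44]   # cumulative completed items

plt.step(weeks, scope, where="post", label="scope")
plt.plot(weeks, done, marker="o", label="done")
plt.xlabel("week")
plt.ylabel("work items")
plt.legend()
plt.title("Burn-up: progress and scope tracked separately")
plt.show()
```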

You can see scope changes in both directions, as well as real progress. And this is exactly why choosing burn-up over burn-down should be a no-brainer.

in project management
22 comments

Refactoring: Value or Waste?


Almost every time I’m talking about measuring how much time we spend on value-adding tasks, a.k.a. value, and non-value-adding stuff, a.k.a. waste, someone brings up the example of refactoring. Should it be considered value, since while we refactor we basically improve the code, or rather waste, since it’s just cleaning up the mess we introduced into the code in the first place and the activity itself doesn’t add new value for a customer?

It seems the question bothers others as well, as this thread comes back in Twitter discussions repeatedly. Some time ago it was launched by Al Shalloway with his quick classification of refactoring:

The three types of refactoring are: to simplify, to fix, and to extend design.

By the way, if you want to read a longer version, here’s the full post.

Obviously, such an invitation to discuss value and waste couldn’t have been ignored. Stephen Parry shared an opinion:

One is value, and two are waste. Maybe all three are waste? Not sure.

Not a very strong one, is it? Actually, this is where I’d like to pick it up. Stephen’s conclusion defines the whole problem: “not sure.” For me, deciding whether refactoring is or is not value-adding is very contextual. Let me give you a few examples:

  1. You build your code according to TDD and the old pattern: red, green, refactor. Basically refactoring is an inherent part of your code building effort. Can it be waste then?
  2. You change an old part of a bigger system and have little idea what is happening in code there, as it’s not state-of-the-art type of software. You start with refactoring the whole thing so you actually know what you’re doing while changing it. Does it add value to a client?
  3. You make a quick fix to code and, as you go, you refactor all the parts you touch to improve them; maybe you even fix something along the way. At the same time you know you could have applied just a quick and dirty fix and the task would be done too. How should we account for such work?
  4. Your client orders refactoring of a part of a system you work on. Functionality isn’t supposed to be changed at all. It’s just that the client supposes the system will be better after all, whatever that means exactly. They pay for it, so it must have some value, doesn’t it?

As you can see, there are many layers which you may consider. One is when refactoring is done – whether it’s an integral part of development or not. Another is whether it improves anything that can be perceived by a client, e.g. fixing something. Then we can ask: does the client consider it valuable for themselves? And of course the same question can be asked of the people maintaining the software – a lower cost of maintenance or fewer future bugs can also be considered valuable, even when the client isn’t really aware of it.

To make it even more interesting, there’s another piece of advice on how to account for refactoring. David Anderson points us to Donald Reinertsen:

Donald Reinertsen would define valuable activity as discovery of new (useful) information.

From this perspective, if I learn new, useful information during refactoring, e.g. how this darn code works, it adds value. The question is: for whom? I mean, I’ll definitely know more about this very system, but does the client get anything of any value thanks to this?

If you are with me by this point you already know that there’s no clear answer which helps to decide whether refactoring should be considered value or waste. Does it mean that you shouldn’t try sorting this out in your team? Well, not exactly.

Something you definitely need if you want to measure value and waste in your team (because you do refactor, don’t you?) is clear guidance for the team: which kind of refactoring is treated in which way. In other words, it doesn’t matter whether you think that all refactoring is waste, all is value or anything in between; you want the whole team to understand value and waste in the same way. Otherwise don’t even bother measuring it, as your data will be incoherent and useless.

This guidance is even more important because at the end of the day, as Tobias Mayer advises:

The person responsible for doing the actual work should decide

The problem is that sometimes the person responsible for doing the actual work can look at things quite differently than their colleague or the rest of the team. I know people who’d see a lot of value in refactoring the whole system, a.k.a. rewriting it from scratch, only because they allegedly know better how to write the whole thing.

The guidance that often helps me to decide is answering the question:

Could we get it right in the first place? If so then fixing it now is likely waste.

Actually, a better question might start with “should we…” although the way of thinking is similar. Yes, I know it is very subjective and prone to individual interpretations, yet surprisingly often it helps to sort out different edge cases.

An example: Oh, our system has performance problems. Is fixing it value or waste? Well, if we knew the expected workload and failed to deliver software handling it, we screwed this one up. We could have done better and we should have done better, thus fixing it will be waste. On the other hand the workload may exceed the initial plans or whatever we agreed with the client, so knowing what we knew back then performance was good. In this case improving it will be value.

By the way: using such an approach means accounting most of refactoring as waste, because most of the time we could have, and should have, done better. And this is aligned with my thinking about refactoring, value and waste.

Anyway, as the problem is pretty open-ended, feel invited to join the discussion.

in project management, software development
9 comments

Code Better or Code Less?


An interesting discussion (that might have happened):

I would rather students apply their effort to writing better code than to writing better comments.

~ Bob Martin

But…

I would rather students apply their efforts to writing less code than writing “better” code.

~ Bob Marshall

Because…

There is nothing so useless as doing efficiently that which should not be done at all.

~ Peter Drucker

Having read this, one realization is that better code often means less code. I don’t think about lines of code exactly, or something similarly stupid, but in terms of meaningful code. However, the argument for less code isn’t about making code as compact as possible, avoiding redundancy, etc.

The argument is about not writing code at all whenever reasonable or possible.

Should we then focus on deciding what should and what should not be built instead of polishing our software development craft?

Yes and no.

Yeah, I know. Exactly the kind of answer you expected, isn’t it? Anyway, you can’t answer this question meaningfully without a context.

Better code

One perspective is that of a developer. The developer in almost every medium-to-big organization, and in quite a lot of small ones too, is pretty much disconnected from the product management/product ownership part of a project. It means that they have very little to no knowledge of what actually should be built.

Of course, being a developer, I can, and should, share my concerns about the usefulness of building specific features, but it’s unlikely I have enough information to judge such situations correctly in many cases. By the way, even when I’m right and this darn feature shouldn’t be built, the odds are that it’ll be built anyway because the client says so. Sounds stupid? Sure, it does! Does it make the client change their mind? Not very often.

If you’ve ever worked on one of those big contracts where everything is (allegedly) specified upfront and no one on a client’s side is willing to change anything because of internal politics, you exactly know what I’m talking about. If you haven’t, well, damn you, you lucky bastard.

So it might be a great idea not to build a feature but developers either don’t have enough knowledge to be aware of the fact or aren’t allowed to skip the feature anyway. In this case a better thing to do is to focus on building better code, not less code, because one can hardly say what meaningful less is.

Less code

The other perspective is the one of product management folks, however this specific role is called in your org. For them, their first and main focus should be on building less code. Yes, product owners, product managers, etc. Yes, less code. And yes, I do know they don’t write code. It still should be their main goal.

You see, this is the place where meaningful decisions about not building features can be made. Product folks should know what adds value and what doesn’t. What’s more, they are usually better suited to start such discussions with clients, whenever needed. After all, it is so common that clients want, and pay for, unnecessary features and useless code.

Organization-wise, you get more value, or less waste, by focusing on building less code. Given that you’re free to work on both better code and less code across the organization, it would likely be wiser to choose the latter. At the same time, the efficiency of your efforts depends a lot on the part of the organization you work with and, locally, it may be a much better choice to treat code quality, not code quantity, as the issue to tackle.

So if I could choose what kind of superhero posters are in rooms of my people I’d go with Peter Drucker for product folks and Bob Martin for developers.

in project management
13 comments

Kanban Coaching Professional


Frequent visitors might have noticed a new banner on the sidebar of the blog that says “Kanban Coaching Professional.” It might come as a surprise that I’ve decided to join the Kanban Coaching Professional program. After all, I used the word certifiction (no typo here) repeatedly, shared my concerns about the idea of certifying anything around Kanban and even showed my hatred of any certification at all. And now I jump on the KCP bandwagon.

Why, oh why?

Well, I must admit I like a couple things in the approach David Anderson and Lean Kanban University have chosen in the program. Peer review is one. To get through the process and become a Kanban Coaching Professional, you need to talk with folks who know the stuff.

On one hand it sounds sort of sectarian – we won’t let you in unless we like you. On the other, it is just a good old recommendation process. I trust Mike Burrows so I trust people who Mike trusts, etc. This way the title means something more than just a participation trophy. Also, the seed people who will be running the decision-making process are very decent.

There is a risk of leaving some people behind – those who are not active members of the Kanban community but are otherwise knowledgeable and smart folks. Well, I really do hope it won’t become some sort of coven who doesn’t let fresh blood in. It definitely is a risk Lean Kanban University should pay close attention to.

Another thing I like is that there’s been an option to be grandfathered into the program. By the way, otherwise I wouldn’t be a part of this. I just don’t feel like attending a course just to be approved. That’s just not my way of doing things.

I prefer to write and speak regularly about Kanban, showing that I do and know the stuff, instead of attending the course. Yeah, this is a hard way but it’s just the way I prefer. With such an attitude, there’s no way I’m going to be CSM, but it seems the Kanban community has some appreciation for non-standard cases such as myself.

Actually, I hope this option will be kept open. I mean I can imagine great people with an attitude similar to mine – willing to get their hands dirty (and prove it) – but not really willing to attend the course.

Because, while we’re on the subject of the course, I’m not a fan of this requirement. I understand it is there for a reason and, to be honest, I don’t think I have a better idea for now, but I have the comfort of standing on the sideline and saying “I don’t like it.”

I guess it is supposed to be a business, so there needs to be a way of making money out of it. For the time being though, not the business which I’m a part of.

However, the simple fact that I could, rather easily, become a Kanban Coaching Professional isn’t why I decided to give it a go. After all, you still need to pay some money for this, so we’re back to the question: “Why?”

As Kanban gets more and more popular, I see more people jumping on this bandwagon, offering training, coaching and what have you. The problem is that sometimes I know these people and I’m rather scared that they are teaching Kanban.

Not that I want to forbid anyone from teaching Kanban, but I believe we’ve arrived at the point where we need a distinction. The distinction between people who invest their time to keep in touch with the community, attend events, share experience, engage in discussions, etc., and those who just add a Kanban course to their wide training portfolio because, well, people want to buy this crap.

This is exactly why I decided to get enrolled in the KCP program. For this very distinction.

I believe that, at least for now, it differentiates people who you’d like to hire to help you with Kanban from those who you can’t really tell anything about. This is where I see the biggest value of KCP. I really do hope it will stay this way.

Unless the situation changes the banner will hang there on the sidebar and I will use KCP title as a confirmation of my experience and knowledge about Kanban. Sounded a bit pompous, didn’t it? Anyway, if you look for help with Kanban, pay attention to these banners or KCP titles.

in kanban
0 comments