Price versus Quality

“Price has no meaning without a measure of the quality being purchased.”

~W. Edwards Deming

It has always fascinated me how price is the main axis of the game of closing software development deals. Unfortunately, in the vast majority of cases, pricing is used in total isolation from any other criterion, especially quality.

It was like this when I worked on off-the-shelf ERP software. In that case it was, to some extent, understandable. You can’t universally define functionality-to-price and quality-to-price factors if you sell the same product to thousands of customers; they will differ from customer to customer.

In such a case the question is what you deliver for the price you put on the tag attached to your product. Is it of high quality? And more importantly: do you keep, or even improve, the quality consistently?

As a matter of fact, we didn’t. We didn’t think in such terms. It was more of a chase for more functionality to satisfy new customers than a conscious effort to keep the quality of software stable. You may guess how that affected the quality. Let me just say I’m not proud of the end result.

It was still much better than what I experienced later, which was building custom projects for big clients. As a leader of software development and project management divisions I was often involved in the sales process. I was always asked how much time and how many people we needed to build a thing. I was often asked about features the thing had or was going to have. I was never, ever, asked about the quality of the product or the tools we had in place to ensure that quality. Ever. Not a single time.

It isn’t an industry-specific observation. I went through a few industries, including banking, insurance and telecommunications, and it was always the same.

I feel for those poor chaps who were the decision makers. They followed their fears, embodied in the “nobody ever got fired for buying IBM” attitude. They just played it safe. In fact, I blame the system.

I blame the system that disconnects the price from the quality of purchased goods or services. In such a case price is almost meaningless, yet it is almost always used as a deciding factor.

The question I often wanted to ask the decision makers was: if they were buying the product with their own money, would they choose the same way? Obviously not. In fact, in our everyday lives, which I like referring to, we never forget the relationship between quality and price.

When I buy sticky notes to use on whiteboards I choose 3M even though they are pretty expensive when compared to alternatives. Actually, given a choice, I take 3M Super Stickies, which are even more expensive. It’s just the quality I need and I’m willing to pay for.

When you think about the slow food movement being more and more popular over the course of the past 20 years, it follows the same pattern. Most of the premium products exploit the same behavior. Heck, how many of you, my dear readers, bought one of those overpriced smartphones or tablets? Apple fans, anyone? What is it if not paying for quality? And how do you choose a new car I wonder? By price tag only?

So why the hell do you turn into Mr. Hyde when you return to your workplace and choose a vendor? Oh, the system, I forgot.

The system is, in fact, bigger than just an organization choosing a vendor. It spreads across most, if not all, of the vendors, too. A good picture of this would be a story my friend told me about the tender process in Romania in which he was involved as a representative of one of the bidders.

When all the offers were formally submitted, the client organized a meeting with all the potential vendors, announced what the lowest bid was and asked everyone to reconsider their offers giving them half an hour to resubmit proposals with new pricing.

When they had all the discounted bids re-submitted they did it again.

It’s just spreading a sick behavior throughout the whole ecosystem. Optimizing solely for price means cutting corners on quality. Heavy price optimizations mean painful quality tradeoffs.

And we know that low quality means a lot of rework and, as a result, higher overall cost.

Now that I’ve changed jobs, I’m more directly involved in the sales process. And I couldn’t be happier to learn that we don’t take part in this game. We are (relatively) expensive. Don’t expect the lowest bid from us. We understand what it takes to build high-quality software and are ready to walk away if someone expects us to drive the price down to the point where we’d compromise quality.

Because, at the end of the day, a low price is costly for both the client and the vendor.

You just can’t consider price in isolation from quality.

in software business
3 comments

Against Rightshifting

Rightshifting is a nice idea. At its origin it is about improving the effectiveness of organizations. When an organization rightshifts it becomes aware of different approaches, methods and techniques that can be used to work better. Eventually, the company adopts some of them and starts treating them as non-optional. Oftentimes, it means that the organization refuses to work “the old ways” as they are clearly considered suboptimal.

I do like to think about rightshifting in terms of personal development. Every one of us can (arguably should) learn new approaches, methods and techniques. With such an attitude we eventually learn how to work better. Oftentimes we’d love to refuse to work the old ways; on occasion we even do, although this time the case is more complex.

Being a part of a bigger entity, we rarely have the comfort of fully deciding how the work is done. If you’ve ever heard the “do we really need to write unit tests (the client won’t pay for this)” discussion, you know exactly what I mean.

What happens then? Well, usually we develop frustration over the lack of workmanship. Some leave in pursuit of another job that better suits their expectations and skills. Some stay and try to change the organization from the inside. Sometimes they even succeed, although more often the success is local (at a team level) than global (at an organization level). Anyway, outside these rare successful cases, much frustration is usually involved.

The bad news is that the frustration scenario is a frequent one when we personally rightshift. More often than not, the pace of our personal improvement is better than that of organizational improvement. On one hand this means that personal rightshifting introduces at least some dissatisfaction. On the other, it should open new opportunities.

Yes, of course. The problem is that there are fewer and fewer of them, the further to the “right” you are. There aren’t enough great companies, or I should say mature enough companies. I mean, maybe there are, globally. But few of us operate on a fully global job market and jobs in general, and jobs at great companies specifically, aren’t distributed evenly across the world.

So sorry, in most places on earth “there’s no shortage of talent, only a shortage of companies that talent wants to work for” isn’t true. Even less so, when one has high expectations for craftsmanship and organizational standards.

In other words, in theory, you can rightshift yourself to the point where you’re practically unemployable because you aren’t willing to accept anything but your impossible-to-meet standards of work. I’m pretty sure it’s not only theory and quite a few folks out there could tell by their own experience.

I know from my own experience that the further to the “right” you are, the fewer companies you want to work for.

So my advice would be: don’t rightshift… too fast.

Be aware that rightshifting closes a few options here and there. Rapid rightshifting may also close the option you’re exploiting at the moment (also known as your current job). I wouldn’t call rightshifting a career-limiting move, although in some ways it might be seen as one.

Is it different when we look from the perspective of an organization? A bit. When the company rightshifts faster than the individuals working there, there is frustration too. However, people tend to adjust to the way their company works. After all, it is one of the conclusions that may be drawn from Deming’s famous work (95% of variability is caused by the system). In other words, by improving the system (the organization) we naturally pull people to the “right” too. Most of them, at least.

Unfortunately, raising standards means that there are fewer people you’re willing to hire. It limits the company’s pace of growth. It makes the job of those doing the hiring way more difficult. Of course you can always fall back on good old growing people from the basics, but you can afford only so many of them.

So again, don’t rightshift… too fast.

in personal development, software business
3 comments

(Sub)Optimizing Cycle Time

There is one thing we take almost for granted whenever analyzing how the work is done. It is Little’s Law. It says that:

Average Cycle Time = Work in Progress / Throughput

This simple formula tells us a lot about ways of optimizing work. And yes, there are a few approaches to achieve this. Obviously, there is more than the standard way, used so commonly, which is attacking throughput.
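
To make the formula concrete, here’s a minimal sketch in Python with made-up numbers, purely for illustration:

```python
# Little's Law: Average Cycle Time = Work in Progress / Throughput
# The numbers below are made up purely for illustration.

wip = 20          # items currently in progress
throughput = 5    # items finished per week

avg_cycle_time = wip / throughput
print(avg_cycle_time)   # 4.0 weeks per item, on average

# Rearranged, the same law shows the other lever: to halve cycle time
# without improving throughput, you have to halve work in progress.
target_cycle_time = 2
print(target_cycle_time * throughput)   # at most 10 items of WIP
```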

A funny thing is that, even though it is a perfectly viable strategy to optimize work, the approach of improving throughput is often very, very naive and boils down to just throwing more people at a project. Most of the time it is plain stupid, as we know from Brooks’ Law that:

Adding manpower to a late software project makes it later.

By the way, reading The Mythical Man-Month (the title essay) should be a prerequisite for getting any project management-related job. Seriously.

Anyway, these days, when we aim to optimize work, we often focus either on limiting WIP or on reducing average cycle time. Both have a positive impact on a team’s results. Cycle time in particular often looks appealing. After all, the faster we deliver the better, right?

Um, not always.

It all depends on how the work is done. One realization I had when I was cooking for the whole company was that I was consciously hurting my cycle time to deliver pizzas faster. Let me explain. The interesting part of the baking process looked like this:

Assuming I had enough ready-to-bake pizzas, the first step was putting a pizza into the oven; then it was baked; then I pulled it out of the oven and served it. Since it was an almost standardized process, we can assume standard times for each stage: half a minute for stuffing the oven with a pizza, 10 minutes of baking and a minute to serve the pizza.

I was the only cook, but I wasn’t actively involved in the baking step, which is exactly what makes this case interesting. At the same time the oven was a bottleneck. What I ended up doing was protecting the bottleneck, meaning that I was trying to keep a pizza in the oven at all times.

My flow looked like this: putting a pizza into the oven, waiting till it’s ready, taking it out, putting another pizza into the oven and only then serving the one which was baked. Basically the decision-making point was when a pizza was baked.

One interesting thing is that the decision not to serve a pizza instantly after it was taken out of the oven also meant increasing work in progress. I pulled another pizza in before the first one was done. One could say that I was another bottleneck, as my activities were split between protecting the original bottleneck (the oven) and improving cycle time (serving a pizza). Anyway, that’s another story to share.

Now, let’s look at cycle times:

What we see in this picture is how many minutes elapsed since the whole thing started. You can see that each pizza was served a minute and a half after it was pulled out of the oven even though the serving part was only a minute long. That was because I was dealing with another pizza in the meantime. The average cycle time was 12 minutes.

Now, what would happen if I tried to optimize cycle time and WIP? Obviously, I would serve the pizza first and only then deal with another one.

Again, the decision-making point is the same, only this time the decision is different. One thing we see already is that I can keep a lower WIP, as I get rid of the first pizza before pulling another one in. Would it be better? In fact, cycle times improve.

This time, average cycle time is 11.5 minutes. Not a surprise since I got rid of a delay connected to dealing with the other pizza. So basically I improved WIP and average cycle time. Would it be better this way?

No, not at all.

In this very situation I had a queue of people waiting to be fed. In other words, the metric which was more interesting for me was lead time, not cycle time. I wanted to optimize people’s waiting time, that is the time from order to delivery (lead time), and not simply the processing time (cycle time). Let’s have one more look at the numbers, this time with lead time added.

This is the scenario with protecting the bottleneck and worse cycle times.

And this is one with optimized cycle times and lower WIP.

In both cases lead time is counted as time elapsed from the very first second, so naturally lead times get worse with each consecutive pizza. Anyway, in the first case, after four pizzas we have a better average lead time (27.75 versus 28.75 minutes). It also means that I was able to deliver all these pizzas 2.5 minutes faster, so the throughput of the system was better too. All that with worse cycle times and bigger WIP.
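
If you’d rather check the arithmetic than trust the tables, below is a minimal sketch in Python that replays both scenarios with the timings from the story (half a minute to load, 10 minutes to bake, 1 minute to serve, four pizzas tracked). It assumes, as in the story where 15 pizzas were baked in total, that another pizza is always waiting to go into the oven; it’s only an illustration of the reasoning above, not a general simulator:

```python
# Replaying the two pizza scenarios; all times in minutes.
LOAD, BAKE, SERVE, PIZZAS = 0.5, 10.0, 1.0, 4

def protect_the_bottleneck():
    """Load the next pizza first, serve the baked one afterwards."""
    cycles, leads = [], []
    start, out = 0.0, LOAD + BAKE          # first pizza leaves the oven at 10.5
    for _ in range(PIZZAS):
        served = out + LOAD + SERVE        # another pizza goes in, then serve
        cycles.append(served - start)      # cycle time: from starting work on it
        leads.append(served)               # lead time: counted from minute zero
        start = out                        # work on the next pizza starts here...
        out = out + LOAD + BAKE            # ...and it leaves the oven here
    return cycles, leads

def serve_first():
    """Lower WIP: serve the baked pizza before loading the next one."""
    cycles, leads = [], []
    clock = 0.0
    for _ in range(PIZZAS):
        start = clock
        clock += LOAD + BAKE + SERVE       # strictly one pizza at a time
        cycles.append(clock - start)
        leads.append(clock)
    return cycles, leads

for name, scenario in [("protect the oven", protect_the_bottleneck),
                       ("serve first", serve_first)]:
    cycles, leads = scenario()
    print(name,
          "avg cycle:", sum(cycles) / PIZZAS,    # 12.0 vs 11.5
          "avg lead:", sum(leads) / PIZZAS,      # 27.75 vs 28.75
          "last pizza delivered at:", leads[-1]) # 43.5 vs 46.0
```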

An interesting observation is that average lead time wasn’t better from the very beginning. It became so only after the third pizza was delivered.

When you think about it, it is obvious. Protecting a bottleneck makes sense when you operate in a continuous manner.

Anyway, am I trying to convince you that the whole thing with optimizing cycle times and reducing WIP is complete bollocks and you shouldn’t give a damn? No, I couldn’t be further from this. My point simply is that understanding how the work is done is crucial before you start messing with the process.

As a rule of thumb, you can say that lower WIP and shorter cycle times are better, but only because so many companies have such ridiculous amounts of WIP and such insanely long cycle times that it’s safe advice in the vast majority of cases.

If you are, however, in the business of making your team work efficiently, you had better start with understanding how the work is being done, as a single bottleneck can change the whole picture.

One thought I had when writing this post was whether it translates to software projects at all. But then I recalled a number of teams that should think about exactly the same scenario. There are those which have the very same people dealing with analysis (prior to development) and testing (after development), or any other similar setup. There are those that have a jack-of-all-trades on board and always ask what the best thing to put their hands on is. There are also teams that use external people part-time to cover areas they don’t specialize in, both upstream and downstream. Finally, there are functional teams juggling many endeavors, trying to figure out which task is the most important to deal with at any given moment.

So even though I stand by Kanban principles, I urge you not to take any advice as a universal truth. Understand why it works, where it works and why it is (or is not) applicable in your case.

Because, after all, shorter cycle times and lower WIP limits are better. Except when they’re not.

in kanban, project management
3 comments

On Transparency

One of the things I’ve learned throughout my career is to assume very little and expect to learn very much whenever changing a job. In terms of learning, there is always a great lesson waiting there for you, no matter what kind of an organization you’re joining. If you happen to join a crappy org, this is the least you can salvage; if you join a great one, it’s the cherry on the cake. Either way, you should always aim to learn this lesson.

But why am I telling you this? Well, I joined Lunar Logic very recently. From what I could tell beforehand, the company was a kick-ass Ruby on Rails development shop with a very open and straightforward culture. I didn’t even try to assume much more.

One thing hasn’t been a surprise; we really are a kick-ass Rails development shop. The other has been a surprise though. I mean, I expected transparency within Lunar Logic, but its level is just stunning. In a positive way, of course.

An open discussion about monthly financials, which obviously are public? Fair enough. Questioning the value of running a specific project? Perfectly OK. Sharing critical opinions on a leader’s decisions? Encouraged. Regular lean coffees where every employee can come up with any subject, even one that would be considered embarrassing in almost any organization I can think of? You’re welcome. I can hardly come up with an example of a taboo topic. In all this, and let me stress this, everyone gets honest and straightforward answers.

Does it mean that the company is easier to lead? Um, no. One needs to think about each and every decision because it will be shared with everyone. Each piece of information should be handled as if it were public. After all, it is public. So basically your goal, as a leader of such an organization, is to be fair, whatever you do. There’s no place for deception, trickery or lies.

One could think that, assuming goodwill, it is a default mode of running a company. It’s not. It’s very unusual to hear about, let alone work at, such an org. There are a number of implications of this approach.

  • It is challenging for leaders. You can’t hide behind a “that’s not for you to know” answer or meaningless blah blah. People won’t buy it. This is, by the way, probably the number one reason why this approach is so uncommon.
  • It helps to build trust between people. Dramatically. I’m not saying you get trust for free, because that never happens, but it is way easier.
  • It eliminates the us-versus-them mentality. Sure, not everyone is equal and not everyone has the same role in the company, but transparency makes everyone understand everyone else’s contributions better, thus eliminating many sources of potential conflict.
  • It heavily influences relationships with customers. It’s much easier to be open and honest with clients if this is exactly what you do every day internally. I know companies that wouldn’t treat this one as a plus, but as a client, well, ask yourself what kind of a vendor you’d like to work with.

All in all, transparency is like a health meter of an organizational culture. I’m not saying it automatically means that the org is successful, too. You can have a great culture and still go out of business. I’m just saying that if you’re looking for a great place to work, transparency should be very, very high on the list of qualities you value. Possibly at the very top of the list, like it is in my case.

By the way, if you are a manager or a company leader, ask yourself: how many things wouldn’t you reveal to your team?

This post wouldn’t be complete without giving credit to Paul Klipp, who is the creator of this unusual organizational culture. I can say that during my first few weeks I’ve already learned more about building great teams and exceptional organizations from Paul than from any leader I’ve worked with throughout my career. It goes way beyond just the transparency bit, but that’s a completely different story. Or rather a few of them. Do expect me to share them soon.

in personal development, recruitment, software business
8 comments

Kitchen Kanban, or WIP Limits, Pull, Slack and Bottlenecks Explained

Have you ever cooked for twenty people? If you have you know how different the process is when compared to preparing a dinner just for you and your spouse. A few days ago I was preparing lunch for folks in my company and I’m still amazed how naturally we use concepts of pull, WIP limits, bottlenecks and slack when we are in situations like this.

I can’t help but wonder: why the hell can’t we use the same approach when dealing with our professional work?

OK, so here I am, cooking 15 pizzas for a small crowd.

Bottlenecks

If you’ve read Eli Goldratt’s The Goal, you know that if you want to make the whole flow efficient you need to identify and protect bottlenecks. Having some experience with preparing pizzas for a few people, I easily guessed that the bottleneck would be the oven.

The more interesting part is how, knowing what the bottleneck is, we automatically start protecting it. The very next thing I did after taking a pizza out of the oven was putting another one in. If I had decided to serve the pizza first, I would have made my bottleneck resource (the oven) idle, which would have affected the whole process and its length.

Interestingly enough, protecting the bottleneck in this case resulted in longer cycle time and, with the first delivery, worse lead time too. That’s the subject for another story though.

The lesson here is about dealing with the bottlenecked parts of our processes. One of the recent conversations I’ve had was about bringing more developers into a project where business analysis was the bottleneck. It would be like hiring waiters to help me serve pizzas and expecting it to make the whole process faster.

It’s even worse if you don’t know what your bottleneck is. In the business analysis story I’ve mentioned, the team learned where the problem was only some time into the project. Before that they would actually have been willing to hire more waiters and would have expected that to improve the situation.

WIP Limits

Fifteen pizzas and one cook. If I acted like an average software development team I would prepare dough for all the pizzas, then move to the tomato sauce, then to the other ingredients. Three hours later I would realize that, because of a system constraint, I can’t bake in one batch. I would switch my efforts to dealing with a hungry and angry crowd, focusing more on their dissatisfaction than on delivering value. Fortunately, eventually I would run a retrospective where I would learn that I made a mistake with the baking part, so I would file a retro summary in a place no one ever looks at again and congratulate myself on a heroic effort of dealing with hungry clients.

Instead I limited the amount of work invested in preparing dough and ingredients. I prepared enough to keep the oven running all the time.

Well, actually I prepared more. I started with a WIP limit of 6 pizzas, meaning that I had 6 ready-to-bake pizzas when the oven was ready. Very soon I realized there was one obvious and two less obvious issues with such a big WIP limit.

First, 6 pizzas take up a lot of space. Space which was limited. Even more so when more people popped up in the kitchen waiting for their share. This is basically the cost of inventory. Unfortunately, in the software industry we deal with code, so we don’t really see how it stacks up and takes up space until it’s too late and fixing a bug becomes a dreadful task no one is willing to undertake.

If only we had to find a place to store tiny physical zeros and ones for each bit of our code… The industry would rock and roll.

The other two issues weren’t that painful. If you keep an unbaked pizza waiting too long it isn’t as good, as it comes out a bit too dry after baking. I also realized that I could easily manage to prepare new pizzas at a pace that didn’t require such a big queue in front of the oven. I could prepare a better (fresher) product and it still wouldn’t affect the flow.

So I quickly reduced my queue of pizzas in front of the oven to 4, 3 and eventually 2. Sure, it changed how I worked, but it also made me more flexible. I didn’t need so much space and could react to special requests pretty flexibly.

Surprisingly enough, WIP limits in a kitchen seem completely intuitive. It’s often more convenient to work in small batches. Such an approach helps you focus on the bottleneck. If you’re dealing with physical inventory you can also literally see the cost of excess inventory. Unfortunately, with code it’s not that visible, even though it’s a liability too.

It doesn’t mean that the whole mechanism changes dramatically. A lot of unfinished work increases multitasking, inflicts the cost of context switching and lengthens feedback loops. It just isn’t that visible, which is why we don’t naturally limit work in progress.

Slack Time

When we are talking about WIP limits we can’t forget about slack time. Technically I could prepare an infinite queue of ready-to-bake pizzas in front of the oven. Of course no mentally healthy cook would do this.

Anyway, when I started limiting my pizzas in progress I was facing moments when, in theory, I could have been preparing another one but didn’t. I didn’t, even when there was nothing else to be done at the moment.

A canonical example of slack time.

I used my slack time to chat with people, so I was happier (and we know that happy people do a better job). I got myself a coffee so I improved my energy level. I also used slack to rearrange the process a bit so my work was about to become more efficient. Finally, slack time was an occasion to check remaining ingredients to learn what pizzas I can still deliver.

In short, I was doing anything but pushing more work into the system. It wouldn’t have helped anyway, as I was bottlenecked by the oven and knew my pace well enough to come up with reasonable, yet safe, WIP limits which told me when I should start preparing the next pizza.

There are two lessons for us here. First, learn how the work is being done in your case. This knowledge is a prerequisite for doing reasonable work with setting WIP limits. And yes, the best way to learn what WIP limits are reasonable in a specific case is experimenting to see what works and what allows you to keep up the pace of the flow.

Second, slack time doesn’t mean idle time. Most of the time it is used to do meaningful stuff, very often system improvements that result in better efficiency. When all that people hear in my argument for slack time is “sometimes it’s better to sip coffee than to write code,” I don’t know whether I should laugh or cry. It seems they don’t even try to understand, let alone measure, the work their teams do.

Pull

And finally, the pull principle. As we already know that the critical part of the whole process was the oven, let’s start there. Whenever I took a pizza out of the oven it was a signal to pull another pizza into the oven. Doing so freed one space in my queue of pizzas in front of the oven. That was a pull signal to prepare another one. To do this I pulled dough, tomato sauce and the rest of the ingredients. When I ran out of any of these I pulled more of them from the fridge.

Pull all over the place. Getting everything on demand. I was chopping vegetables or opening the next pack of salami only when I needed them. There were almost no leftovers.

Assuming that I could replenish almost every ingredient during the time a pizza was being baked, I was safe. I could even rely on the assumption that it was close to impossible to run out of all the ingredients at the same time. And even then I had a buffer of ready-to-bake pizzas.

The only exception was dough, as preparing dough took more time. Dough was my epic story. This part of the work was common to a bunch of pizzas, all derived from the same batch of dough. Just like stories derived from an epic. In this case I was simply monitoring the inventory of dough so I could start preparing the next batch soon enough. Again, there was a pull signal, but it was a bit different: there are only two pieces of dough left, so I should start preparing another batch so it is ready once I run out of the current one.

The lesson about pull is that we should think about the work we do “from right to left.” We should start with work items that are closest to being done and consider how we can get them closer to completion. Then, going from there, we’ll be able to pull work throughout the whole process as with each pull we’ll be creating free space upstream.

Once we deploy something we create free space so we can pull something to acceptance testing. As a result we free some space in testing and pull features that are developed, which makes it possible to pull more work from a backlog, etc.

When using this approach along with WIP limits we don’t introduce an excessive amount of work into the system and we keep our flow efficient.

Once we learn that earlier stages of work may require more time than later ones we may adjust pull signals and WIP limits accordingly so we keep the pace of the flow.
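
To make the “right to left” mechanic a bit more tangible, here is a minimal sketch of a toy pull system in Python. The stage names and WIP limits are made up for the example, and it simplifies things by assuming that whatever sits in a stage is finished there by the time of the next pass, so free space downstream is the only thing that lets work move forward.

```python
# A toy pull system: each pass walks the board from the stage closest to
# "done" back towards the backlog, pulling work forward only where the
# downstream stage has free space under its WIP limit.
wip_limits = {"backlog": None, "development": 2, "testing": 2, "deployed": None}
board = {stage: [] for stage in wip_limits}
board["backlog"] = [f"feature-{i}" for i in range(1, 7)]

def pull_pass(board):
    """One pass over the board, from right to left."""
    stages = list(wip_limits)
    for i in range(len(stages) - 1, 0, -1):
        downstream, upstream = stages[i], stages[i - 1]
        limit = wip_limits[downstream]
        while board[upstream] and (limit is None or len(board[downstream]) < limit):
            # free space downstream is the pull signal for the upstream stage
            board[downstream].append(board[upstream].pop(0))

for day in range(1, 4):
    pull_pass(board)
    print(f"pass {day}:", board)
```

Tightening the numbers in wip_limits is the kitchen equivalent of shrinking the queue of ready-to-bake pizzas: less work sits half-done and the pull signals travel upstream sooner.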

Summary

I hope the story makes it easier to understand the basic concepts introduced by Kanban. Actually, I’d say that if software were physical, people would understand the concepts of flow, WIP limits, pull or protecting bottlenecks way more easily. They would see how their undelivered code clutters their workspace, impacts their pace and affects their flow of work.

So how about this: ask yourself the following questions.

  • Where is the oven in your team?
  • Who is working on that part of the process?
  • Do you protect them?
  • How many ready-to-bake pizzas do you typically have?
  • How many of them do you really need?
  • What do you do when you can’t put another pizza into the oven?
  • How much space do your pizzas occupy?
  • Do your pizzas taste the same no matter how long they’ve been queued?
  • Do you need all the ingredients prepared up front?
  • How much of each ingredient do you typically have prepared?
  • How do you know whether you need more dough and when you should start preparing it?

Look at your work from this perspective and tell me whether it helps you to understand your work better. It definitely does in my case, so do expect further pizza stories in the future.

in kanban
9 comments

Why Organizational Transformations Fail

You can’t reorganize the way a business thinks by reorganizing the business.

~Stephen Parry

I can safely state that every company I worked for was attempting an organizational transition during my time there. Motivations varied from simply surviving, through adjusting to a new environment, to improving the whole business. Approaches to running a transition also differed, but one common part was a reorganization.

Oh, reorganizations. Who doesn’t love reorgs? Shaking everyone around. Bringing in good old insecurity and fear of the unknown. Quite an interesting strategy for introducing a positive change, although it is the most prevalent one and often considered inevitable. Unfortunately, it is a strategy with a pretty low success rate too.

After all, coffee doesn’t become sweeter simply because you stir it.

The interesting part, however, is that I can come up with an example or two in which reorganizations helped to make a transition a success, or even make it possible in the first place.

How come? The answer is hidden in Stephen Parry’s words at the beginning of the post. It’s not about the reorganization itself; it’s about changing the way the business thinks. The problem with most reorgs is that they’re driven from the top, which usually means that the top of the hierarchy remains the same. It also means that the way the business thinks, which spreads top-down, remains unchanged.

If the organization’s leaders’ mindset remains the same, any change introduced down below isn’t sustainable. Eventually, it will be reverted. Depending on how many layers of isolation there are, it may take some time, but it’s inevitable. The prevailing mindset just flows top-down, and unless you can address its source it’s a battle you’re not going to win.

I can think of, and have been a part of, reorganizations that shook the very top of a company, introducing new leaders and thus enabling the new way of thinking. Yes, the business was reorganized but this was neither the only nor the most important part of the change.

Because coffee doesn’t become sweeter simply because you stir it. Unless you’ve remembered to add sugar before, that is.

The game-changer here is mindset; that has to change in order to enable the successful transition. And I have bad news for you: it has to change at the very top of the organization. You don’t necessarily have to start there, but eventually it either happens or things, in general, remain the same.

So if you consider a reorg as a way to change how your business works, ask yourself a question: does this change affect the mindset of the organization’s leaders? If not, I wouldn’t expect much chance of success.

Besides, there are many ways to skin a cat. A reorg isn’t the only tool you have to change mindset across the organization. Heck, it isn’t even a very good one. Remember that when you start the next revolution just to see that virtually nothing changes as a result.

By the way, there is a neat application of this idea in a slightly different situation too. If you want to preserve the mindset across the organization when changing leaders, pay very close attention to how the new leaders think. Your company can be a well-oiled machine, but when the steering wheel is grabbed by someone who neither understands nor cares about the existing mindset, the situation is going to deteriorate pretty quickly. You just don’t want to hire Steve Ballmer to substitute for Bill Gates.

in software business
0 comments

WIP Limits Revisited

One of the things you can hear repeatedly from me is why we should limit work in progress (WIP) and how it drives continuous improvement. What’s more, I usually advise using rather aggressive WIP limits. The point is that you should generate enough slack time to create space and an incentive for improvements.

In other words, the goal is to have people quite frequently not doing project or product development work. Only then, freed from being busy with regular stuff, can they improve the system they’re part of.

The part I was paying little attention to was the cost of introducing slack time. After all, it is a very rare occasion that clients pay us for improvement work, so it is a sort of investment that doesn’t come for free.

This is why Don Reinertsen’s sessions during the Lean Kanban Europe Tour felt, at first, so unaligned with my experience. Don advises starting with WIP limits twice as big as the average WIP in the system. This means you barely generate any slack at all. What the heck?

Let’s start with a handful of numbers. Don Reinertsen points out that a WIP limit twice as big as the average WIP, when compared to no WIP limit at all, ends up with only 1% more idle time and only 1% rejected work. As a result we get a 28% improvement in average cycle time. Quite an impressive change for a very small price. Unfortunately, down the road we pay more and more for further improvements in cycle time, thus the question: how far should we drive WIP limits?

The further we go the more frequently we have idle time, thus we waste more money. Or do we? Actually, we are doing it on purpose. Introducing slack to the system creates an opportunity to improve. It’s not really idle time.

Instead of comparing the value of project or product work to idle time, we should compare it to the value of improvement work. The price we pay isn’t as high as it would initially seem based simply on queuing theory.

Well, almost. If we look at the situation within the strict borders of a single project, the value of improvement work is non-existent, or intangible at best. How much better will the product be, or how much faster will we build the remaining features? You don’t know. So you can’t say how much value these improvements will add to the project.

However, saying that the improvements are of no value would be looking from the perspective of optimizing a part, in this case a single project. Often the impact of such improvements reaches beyond the borders of the project and lasts longer than the project’s time span.

I’m not saying I have a method you can use to evaluate the cost and value attached to non-project work. If I had one, I’d probably be a published author by now, with lots of grey hair and a beer belly twice as big. My point is that you definitely shouldn’t account for all non-project work as waste. Actually, most of the time the cost of this work will be smaller than the value you get out of it.

If we relied purely on Don Reinertsen’s data and assumed that whenever we hit the WIP limit people are idle, we could come up with a chart like this:

On the horizontal axis we have WIP limits, going from infinite (no WIP limit at all) to aggressive WIP limits inflicting a lot of slack time. On the vertical axis we have the overall impact on the system. As we introduce WIP limits (we move to the right side of the chart) we gain value thanks to shorter average cycle times and, at least at the beginning, improved throughput. At the same time we pay the cost of delay of rejected or queued work waiting to enter the system (in the backlog) and the cost of idle time.

In this case we reach the peak of the curve pretty quickly, which means that we get the most value with rather loose WIP limits. We don’t want to introduce too much idle time into the system, as it’s a liability.
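
For the curious, here is a minimal sketch of the kind of toy queueing simulation that can produce data for such a chart. It is not Reinertsen’s model, and all the parameters (arrival rate, service rate, the set of limits) are made up; the point is only to show the shape of the trade-off, with cycle time for accepted work dropping as the limit tightens while rejected work and idle time grow.

```python
import random

def simulate(wip_limit, arrival_rate=0.9, service_rate=1.0,
             n_arrivals=100_000, seed=1):
    """Single-server queue with an admission (WIP) limit.

    Returns average time in system for accepted items, the share of
    rejected items, and the share of time the server sits idle.
    """
    random.seed(seed)
    clock, server_free_at, busy_time = 0.0, 0.0, 0.0
    departures = []               # departure times of items still in the system
    cycle_times, rejected = [], 0

    for _ in range(n_arrivals):
        clock += random.expovariate(arrival_rate)          # next item arrives
        departures = [d for d in departures if d > clock]  # drop finished items
        if wip_limit is not None and len(departures) >= wip_limit:
            rejected += 1                                   # WIP limit hit
            continue
        start = max(clock, server_free_at)                  # maybe wait in queue
        service = random.expovariate(service_rate)
        server_free_at = start + service
        busy_time += service
        departures.append(server_free_at)
        cycle_times.append(server_free_at - clock)

    return (sum(cycle_times) / len(cycle_times),
            rejected / n_arrivals,
            1.0 - busy_time / clock)

for limit in [None, 20, 10, 5, 3, 2, 1]:
    cycle, rejected_share, idle_share = simulate(limit)
    print(limit, round(cycle, 2), round(rejected_share, 3), round(idle_share, 3))
```

Plugging the cost of delay of the rejected work and the value (or cost) of the resulting slack into numbers like these is what would let you draw the curves described here for your own context.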

However, if we start thinking in terms of slack time, not idle time, and assume that we are able to produce enough value during slack time to compensate for the cost, the chart will look much different.

In case number two the only factor working against us is the cost of delay of work we can’t start because of WIP limits. The organization still has to pay people doing non-project work, but we rely on the assumption that they create equal value during slack time.

The peak of the curve is further to the right, which means that the best possible impact happens when we use more aggressive WIP limits than in the first case.

Personally, I’d go even further. Based on my past experience, I’d speculate that slack time often results in improvements that have a positive overall impact on the organization. In other words, it would be quite a good idea to fund them as projects, as they simply bring in or save money. That gives us another scenario.

In this case the impact of slack time is positive, so it partially compensates for the increasing cost of delay as we block more items from entering the system. Eventually, of course, the overall impact is negative in every case, as at the end of the horizontal axis we’d have a WIP limit of 0, which would mean an infinite cost of delay.

Anyway, the more interesting point to look at is the peak of each curve, as this is a sweet spot for our WIP limits. And this is something we should be looking for.

I guess by this time you’ve already noticed that there are no numbers on the charts. Obviously, there can’t be any. Specific WIP limits depend on a number of context-dependent factors, like team size, process complexity or external dependencies, to mention only the most obvious ones.

The shape of the curves will depend on the context as well. Depending on the work you do, the cost of delay can have a different impact, just as the value of improvements will differ. Not to mention that the cost attached to slack time varies as well.

What I’m trying to show here is that introducing WIP limits isn’t just a simple equation. It’s not without reason that no credible person would simply give you a number as an answer to a question about WIP limits. You just have to find out for yourself.

By the way, the whole background I drew here is also an answer to the question of why my experience seemed so unaligned with the ideas shared by Don Reinertsen. I just usually see quite a lot of value gained thanks to the wise use of slack time. And slack time, by all means, should be accounted for differently than idle time.

in kanban
2 comments

Leadership, Fellowship, Citizenship

There was a point in my career when I realized how different the concepts of management and leadership were, and that to be a good manager one had to be a good leader. Since then the idea of leadership, as I understand it, has worked for me very well. I even like to consider my role in organizations I work for as a leader, not a manager.

Perceptions of leadership are shifting these days. Bob Marshall proposes the concept of fellowship. The idea is based on the famous Fellowship of the Ring and builds on how the group operated and what values were shared among its members, so that they could eventually achieve their goal.

A common denominator here is that everyone is equal; there’s no single “leader” who is superior to everyone else. At different points in time different people take over the role of leader in a way that is the best for the group.

As Bob points out, leadership doesn’t really help to move beyond an analytical organization (see: The Marshall Model). This means the concept of leadership is insufficient to deal with the further challenges our companies face on the road of continuous improvement. We need something different to deal with our teams, thus fellowship.

Another, somewhat related, concept comes from Tobias Mayer, who points us to the idea of citizenship. Tobias builds the concept on a balance between rights and responsibilities. It’s not that we, as citizens, are forced or told to keep our neighborhood clean – it’s that we feel responsible for it. This mechanism can be transferred to our workplaces, and that would be an improvement, right?

I like both concepts. Actually, I even see how one can transition into the other, back and forth, depending on which level of an organization you’re at. At a team level, fellowship neatly describes desired behaviors and group dynamics. As you go up the ladder, citizenship is a nice model to describe the representation of a group among higher ranks. It is also a great way to show that we should be responsible for, and to, the people we work with, e.g. different teams, and the organization as a whole.

Using ideas introduced by Tobias and Bob we can improve how our teams and organizations operate, that’s for sure.

Yet there’s one thing I don’t get here. Why are the fellowship and citizenship concepts built in opposition to leadership?

OK, maybe my understanding of leadership is flawed and there is The Ultimate Leadership Definition written in stone somewhere, only I don’t know it. Maybe fellowship and citizenship violate one of The Holy Rules of Leadership and I’m just not aware of them. Because, for me, both ideas are perfectly aligned with leadership.

Leadership is about making a team operate better. If that means being in the front line, fine. When someone needs to do the dirty work no one else is willing to do, I’m good with that as well. I’m even happier when others can take over the leader’s role whenever it makes sense. And what about taking responsibility for what we do, for the people around us and for the organization around us? Well, count me in, no matter what hat I wear at the moment.

When I read Bob and Tobias I’m all: “hell, yeah!” Except the part with labels. Because I still call it leadership. This is exactly what leadership is for me. Personally, I don’t need another name for what I do.

I’m not saying that we should avoid coining new terms. Actually, both citizenship and fellowship are very neat names. I just don’t see the point of building them in opposition to ideas we already know. All the more so as citizenship and fellowship are models which are useful for many leaders.

I don’t buy the argument that we need a completely new idea because people are misusing the concepts we already have. Well, of course they are. There are all kinds of flawed flavors of leadership, just as there will be flawed flavors of fellowship and citizenship when they become popular.

I don’t agree that leadership encourages wrong behaviors, e.g. learned helplessness. On the contrary, the role of a leader is to help a team operate better, and thus help eliminate such behaviors. A good leader doesn’t build followership; they build new leaders.

That’s why I prefer to treat citizenship and fellowship as enhancements of leadership, not substitutes for it.

in team management
1 comment

Why Burn-up Chart Is Better Than Burn-down Chart

The other day I was in the middle of a discussion about the visuals a team was going to use in a new project. When we came to the point of tracking completion of the project, I advised a burn-up chart and intended to move on. The thing that stopped me was the question I was asked: why burn-up and not burn-down?

Burn-down Chart

First, some basics. A burn-down chart is an old idea I learned from Scrum. It is a simple graph showing the amount of work on the vertical axis and the timeline on the horizontal axis. As time progresses, we keep track of how much work is still not done. The goal is to hit the ground. The steepness of the curve can help us approximate when that is going to happen or, in other words, when we’re going to be done with all the work.

When it comes to quantifying the work, it can be anything we use anyway – story points, weighted T-shirt sizes, a simple count of tasks or what have you.

Burn-up Chart

A burn-up chart’s mechanics are basically the same. The only difference is that instead of tracking how much work is left to be done, we track how much work we’ve completed, so the curve goes up, not down.

The Difference

OK, considering these two work almost identically, what’s the difference? Personally, I don’t buy all the crap like “the associations of the word burn-down are bad.” We learned not to be afraid of failure and we can’t deal with a simple word? Give me a break.

The real difference is visible when the scope changes. If we suddenly realize we have more work to do, the burn-down may look like this.

Unfortunately, it can also look different if we happen to be (un)lucky enough to complete some work at the same time as we learn about the additional work.

It becomes even trickier when the scope decreases.

Have we just completed something, or has the client cancelled that feature we talked about yesterday? Not to mention that approximating when the work will finish becomes more difficult.

At the same time, a burn-up chart makes it all perfectly visible, as progress is tracked independently of scope changes.
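
A minimal sketch in Python, with invented weekly numbers, shows the difference. A burn-down chart plots only the single remaining-work line, so a week in which scope grows can look like no progress and a week in which scope shrinks can look like a heroic one; a burn-up keeps completed work and total scope as two separate lines.

```python
# Made-up weekly data for a small project (story points).
completed_per_week    = [0, 8, 10, 9, 12, 10]    # work finished each week
scope_change_per_week = [50, 0, 15, 0, -10, 0]   # week 0 sets the initial scope

completed = scope = 0
for week, (done, change) in enumerate(zip(completed_per_week, scope_change_per_week)):
    completed += done
    scope += change
    remaining = scope - completed    # the only line a burn-down chart shows
    print(f"week {week}: burn-down {remaining:3d} | "
          f"burn-up: {completed:3d} done of {scope:3d} in scope")

# In week 2 the burn-down rises by 5 although 10 points were completed (scope
# grew by 15); in week 4 it plunges by 22 although only 12 points were completed
# (10 points were dropped). The two burn-up lines show which was which.
```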

You can see scope changes in both directions, as well as real progress. And this is exactly why choosing burn-up over burn-down should be a no-brainer.

in project management
22 comments

Refactoring: Value or Waste?

Almost every time I talk about measuring how much time we spend on value-adding tasks, a.k.a. value, and non-value-adding stuff, a.k.a. waste, someone brings up the example of refactoring. Should it be considered value, as while we refactor we basically improve the code, or rather waste, as it’s just cleaning up the mess we introduced into the code in the first place and the activity itself doesn’t add new value for the customer?

It seems the question bothers others as well, as this thread comes back in Twitter discussions repeatedly. Some time ago it was launched by Al Shalloway with his quick classification of refactoring:

The three types of refactoring are: to simplify, to fix, and to extend design.

By the way, if you want to read a longer version, here’s the full post.

Obviously, such an invitation to discuss value and waste couldn’t have been ignored. Stephen Parry shared an opinion:

One is value, and two are waste. Maybe all three are waste? Not sure.

Not a very strong one, is it? Actually, this is where I’d like to pick it up. Stephen’s conclusion defines the whole problem: “not sure.” For me, deciding whether refactoring is or is not value-adding is very contextual. Let me give you a few examples:

  1. You build your code according to TDD and the old pattern: red, green, refactor. Basically refactoring is an inherent part of your code-building effort. Can it be waste then?
  2. You change an old part of a bigger system and have little idea what is happening in the code there, as it’s not state-of-the-art software. You start with refactoring the whole thing so that you actually know what you’re doing while changing it. Does it add value for the client?
  3. You make a quick fix to the code and, as you go, you refactor all the parts you touch to improve them; maybe you even fix something along the way. At the same time you know you could have applied just a quick and dirty fix and the task would have been done too. How should such work be accounted for?
  4. Your client orders refactoring of a part of the system you work on. The functionality isn’t supposed to change at all. It’s just that the client supposes the system will be better after it, whatever that means exactly. They pay for it, so it must have some value, doesn’t it?

As you can see, there are many layers you may consider. One is when the refactoring is done – whether it’s an integral part of development or not. Another is whether it improves anything that can be perceived by the client, e.g. fixing something. Then we can ask whether the client considers it valuable for themselves. And of course the same question can be asked of the people maintaining the software – a lower cost of maintenance or fewer future bugs can also be considered valuable, even when the client isn’t really aware of it.

To make it even more interesting, there’s more advice on how to account for refactoring. David Anderson points us to Donald Reinertsen:

Donald Reinertsen would define valuable activity as discovery of new (useful) information.

From this perspective, if I learn new, useful information during refactoring, e.g. how this darn code works, it adds value. The question is: for whom? I mean, I’ll definitely know more about this very system, but does the client get anything of any value thanks to this?

If you are with me up to this point, you already know that there’s no clear answer that helps to decide whether refactoring should be considered value or waste. Does it mean that you shouldn’t try sorting this out in your team? Well, not exactly.

Something you definitely need, if you want to measure value and waste in your team (because you do refactor, don’t you?), is clear guidance for the team on which kind of refactoring is treated in which way. In other words, it doesn’t matter whether you think that all refactoring is waste, all of it is value or anything in between; you want the whole team to understand value and waste in the same way. Otherwise don’t even bother measuring it, as your data will be incoherent and useless.

This guidance is even more important because at the end of the day, as Tobias Mayer advises:

The person responsible for doing the actual work should decide

The problem is that sometimes the person responsible for doing the actual work looks at things quite differently than their colleague or the rest of the team. I know people who’d see a lot of value in refactoring the whole system, a.k.a. rewriting it from scratch, only because they allegedly know better how to write the whole thing.

The guidance that often helps me to decide is answering the question:

Could we have got it right in the first place? If so, then fixing it now is likely waste.

Actually, a better question might start with “should we…”, although the way of thinking is similar. Yes, I know it is very subjective and prone to individual interpretation, yet surprisingly often it helps to sort out different edge cases.

An example: oh, our system has performance problems. Is fixing it value or waste? Well, if we knew the expected workload and failed to deliver software handling it, we screwed this one up. We could have done better and we should have done better, thus fixing it will be waste. On the other hand, the workload may exceed the initial plans or whatever we agreed with the client, so given what we knew back then, performance was good. In this case improving it will be value.

By the way: using such an approach means accounting for most refactoring as waste, because most of the time we could have, and should have, done better. And this is aligned with my thinking about refactoring, value and waste.

Anyway, as the problem is pretty open-ended, feel invited to join the discussion.

in project management, software development
9 comments