≡ Menu

Value of MVP and Knowledge Discovery Process

Value of MVP and Knowledge Discovery Process post image

By now Minimal Viable Product (MVP) is for me mostly a buzzword. While I’m a huge fan of the idea since I learned it from Lean Startup, these days I feel like one can label anything an MVP.

Given that Lunar Logic is a web software shop we often talk with startups that want to build their product. I think I can recall one or maybe two ideas that were really minimal in a way that they would validate a hypothesis and yet require least work to build. A normal case is when I can easily figure out a way of validating a hypothesis without building a half or even two thirds of an initial “MVP”.

With enough understanding of business environment it’s fairly easy to go even further than that, i.e. cut down even more features and still get the idea (in)validated.

A prevalent approach is still to build fairly feature-rich app that covers a bunch of typical scenarios that we think customers would expect. The problem is it means thinking in terms of features not in terms of customer’s problems.

Given that Lunar is around for quite a long time – it’s going to be the 11th birthday this year – we also have a good sample of data how successful these early products are. Note, I’m focusing here more on whether an early version of a product survived, rather than whether it was a good business idea in the first place.

Roughly 90% of apps we built are not online anymore. It doesn’t mean that all these business ideas weren’t successes. Some eventually evolved away from the original code base. Others ended up making their owners rich after they sold the product to e.g. Facebook. The reasons vary. Vast majority simply didn’t make the cut though.

From that perspective, the only purpose these products served was knowledge discovery. We learned more about business context. We learned more about real problems of customers and their willingness to pay for solving them. We learned that specific assumptions we’d had were completely wrong and others were right on spot.

In short, we acquired information.

In fact, we bought it, paying for building the app.

This is a perspective I’d like our potential clients to have whenever we’re discussing a new product. Of course we can build something that will cost 50 thousand bucks and only then release it and figure out what happens. Or maybe, we can figure out how to buy the same knowledge for much less.

There are two consequences of such approach.

One is that most likely there will be a much cheaper way to validate assumptions than building the app. The other is that we introduce one more intermediate step before deciding to build something.

The step is answering how much knowing a specific thing is worth for us. How much would we pay to know whether our business idea would work or not. This also boils down to: how much it will be worth if it plays out.

I can give you an example. When we were figuring out whether our no estimation cards make sense as a business idea we discussed the numbers. How much we may charge for a deck. What volumes we can think of. The end result of that discussion was that we figured that potential business outcomes don’t even justify turning the cards into a product on its own.

esimtaion cards

We simply abandoned the productization experiment as the cost of learning how much we could earn selling the cards was bigger that potential gain. Validating such a hypothesis wasn’t economically sensible.

By the way, eventually we ended up building the site and made our awesome cards available but with a very different hypothesis in mind.

In this case it wasn’t about defining what is a Minimal Viable Product. It was rather about figuring out how much potential new knowledge is worth and how much we’d need to invest to learn that knowledge. The economic equation didn’t work initially so we put any effort on hold till we pivoted the idea.

If we turned that into a simple puzzle it would be obvious. Imagine that I have 2 envelopes. There is a hundred dollar bill inside one and the other is empty. How much would you be willing to pay for information where is the money? Well, mathematically speaking no more than 50 dollars. That’s simple.

If only we could have such a discussion about every feature that we build in our products we would add much less waste to software. Same thing is true for products.

Next time someone mentions an MVP you may ask what hypothesis they’re going to validate with the MVP and how much validating that hypothesis is worth. Only then a discussion about the cost of building the actual thing will have enough context.

By the way the more unsure about the outcomes of validating the hypothesis they are the more valuable the actual experiment will be.

And yes, employing such attitude does mean that many of what people call MVPs wouldn’t be built at all. And yes, I just said that we commonly encourage our potential clients to send us much less work than they initially want. And yes, it does mean that we get less money building these products.

And no, I don’t think it affect the financial bottom line of the business. We end up being recommended for our Lean approach and taking care of best interest of our clients. It is a win-win.

in software business
1 comment

Why We Want Women in Teams

Why We Want Women in Teams post image

One of the messages that I frequently share is that we need more women in our teams. By now I’ve faced the whole spectrum of reactions to this message, from calling me a feminist to furious attacks pointing how I discriminate women. If nothing else people are opinionated on that topic and there’s a lot of shallow, and unfair, buzz when it comes to role of women in IT.

Personally, I am guilty too. I’ve been caught off guard a few times when I simply shared the short message – “we need more women in our teams” – and didn’t properly explained the long story behind.

Collective Intelligence

The first part of the story is the one about collective intelligence. We can define the core of our jobs as solving complex problems and accomplishing complex tasks. We do that by writing code, testing it, designing it, deploying it, but the outcome is that we solved a problem for our customer. In fact, I frequently say that often the best solution doesn’t mean building something or writing code.

If we agree on problem solving frame a perfect proxy for how well we’re dealing with it is collective intelligence. Well, at least as long as we are talking about collaborative work.

Anita Woolley’s research pointed factors responsible for high collective intelligence: high empathy, evenness of communication in a group and diversity of cognitive styles. These are not things that we, as the industry, pay attention to during hiring. Another conclusion of the research is that women are typically stronger in these aspects and thus the more women in a team the higher collective intelligence.

Role of Collaboration

There are two follow up threads to that. One is that the research focused only on one aspect of work, which can be translated to collaboration. That’s not all that counts. We can have a team that collaborates perfectly yet doesn’t have the basic skills to accomplish a goal. Of course all the relevant factors should be balanced.

This is why at Lunar Logic, during hiring process, we verify technical competences first. This way we know that a candidate won’t be a burden for a team when they join. Once we know that somebody’s technical skills are above the bar, we focus on the more important aspects, but the first filter is: “can you do the job?”

The decision making factors are those related to the company culture and to collaboration.

Correlation and Causation

Another thread is that “more women” message is a follow up to an observation that women tend to do much better in terms of collective intelligence. I occasionally get flak for mentioning that women are more empathetic. It would typically be a story about a very empathetic man or a woman who was a real bitch and ruined the whole collaboration in a team.

My answer to that is I don’t want to hire women. I want to hire people who excel at collaboration. If I ended up choosing between empathetic man and a cold-blooded female killer it would be a no-brainer to me. I’d go with the former.

What is important though is that statistically speaking women are better if we take into consideration aforementioned aspects. It’s not like: every woman would be better than any man. It’s like: if we’ve been hiring for these traits we’d be hiring more women than men.

And that’s where a discussion often gets dense. People would imply that I say that women are genetically better in, say, collaboration. Or pretty much the opposite, they’d say that in our societies we raise women in a way that their role boils down to “good collaborators” and not “achievers.”

My answer to that is: correlation doesn’t mean causation. I never said that being a women is a cause of being empathetic and generally functioning better in a group. What I say is that there is simply correlation between the two.

The first Kanban principle says “start with what you have” and I do start with what I have. I’m not an expert in genetics and I just accept the situation we have right now and start from there.

The Best Candidate

A valid challenge for “hire more women” argument is that it may end up with positive discrimination. My point in the whole discussion is not really hire women over men. In fact, the ultimate guidance for hiring remains the same: hire the best candidate you can.

It just so happens that, once you start thinking about different contexts, the definition of “the best candidate” evolves. A set of traits and virtues of a perfect candidate would be different than what we are used to.

And suddenly we will be hiring more women. Not because they are women. Simply, because they are the best available candidates.

Such a change is not going to happen overnight. Even now at Lunar I think we are still too much biased toward technical skills. And yet our awareness and sensitivity toward what constitutes a perfect candidate is very different than it was a few years ago. That’s probably why we end up hiring fairly high percentage of women, and yet we’re not slaves to “hire women” attitude.

Finally, I’d like to thank Janice Linden-Reed for inspiration to write this post. Our chats and her challenges to my messages are exactly the kind of conversations we need to be having in this context. And Janice, being a CEO herself and working extensively with IT industry, is the perfect person to speak up on this topic.

in recruitment, team management
1 comment

Culture Pockets

Culture Pockets post image

Organizational culture is one of these areas that I pay a lot of attention to. Over years I started valuing the role of the culture increasingly more and more. The biggest difficulty though is that organizational culture is a challenging beast to control.

Organizational Culture

organizational culture
the behavior of humans who are part of an organization and the meanings that the people react to their actions
includes the organization values, visions, norms, working language, systems, symbols, beliefs, and habits

If we look at how organizational culture is defined there are two things that are crucial. One thing is that is a culture is formed of behaviors of all people in an organization. The other is that it’s not only about behaviors but also about what drives these behaviors.

When we look at it we realize that there’s no easy way to mandate a culture change. We can’t simply say: from now on we are a learning organization or that we will value collaboration starting on June the 1st.

If we want to see a change of a culture we need to see change in behaviors. Bad news though is that change of behaviors can’t really be mandated either. I mean we can install a policeman who will make sure that everyone behaves according to the new policy we issued. What would happen when a policeman is gone? We can safely assume that over time more and more people would retreat back to the old status quo – behaviors they knew and were comfortable with. The change would be temporary and ephemeral.

Identifying Culture

If we want to approach a cultural change we first need to understand the existing culture. What is valued? What principles the organization lives by? How is it reflected in everyday behaviors? Without understanding the starting point changes would be rather random and doomed to fail.

How to identify the culture then? Look at behaviors. Ultimately the culture is a sum of behaviors of people who are a part of the organization.

There is a serious challenge that we’re facing on that front. Not everyone has equal influence over organizational culture. In fact, the higher in the hierarchy someone is the more influence they typically have.

The mechanism is simple. Higher up in the hierarchy I have more positional power and my decisions affect more people. One specific type of decision I make, or at least strongly influence, is who gets promoted in my team. Given all my biases, I will likely promote people who are similar to me, share similar values, and behave in similar way. I perpetuate and strengthen the existing culture.

That’s by the way the rationale behind an advice I frequently share: if you want to figure out what the organizational culture of a company is look at its CEO. The CEO typically has the most positional power and thus their influence over the company is the biggest one. The way they behave will be copied and mimicked across the board.

Of course we need to pay attention to everyday behaviors and not to what is the official claim of the CEO. Very frequently there would be a gap between the two. That’s something I call authenticity gap. An organization claims one thing but everyday behaviors show another. For example they claim to care about customer satisfaction and then they bullshit their customers when it comes to share the project status.

This alone says something about culture too (and not a good thing if you need to ask).

Culture Change

How do we influence the cultural change then? If we can influence the factors that drive behaviors, and thus the culture, resulting changes would influence the culture. It’s even better. When we’re changing organizational constraints we potentially influence change of behaviors across the board and not only in an individual case.

We already established though that not everyone has equal influence on the culture. People at the top, in the long run, will have an upper hand. First, they control who gets promoted and as a consequence who has positional power. Second, that power is needed to change organizational constraints: introduce new rules, change the existing ones, and establish what acceptable and what’s not.

A simple answer how to change organizational culture would be to get top management on board, and help them understand what it takes to influence the culture.

Unfortunately, few have comfort of doing that.

Does it mean that we are doomed? Does it mean that without enlisting top ranks any attempt to change organizational culture will fail? Not necessarily so.

Culture Pockets

I believe I learned about the concept of culture pockets from Dave Snowden in one of his presentations. The basic idea is that within a bigger, overarching culture we can develop and sustain a different culture.

Another label that is used to describe this concept is a culture bubble.

When we think about this frame, from the top of our heads we can come up with some examples. One would be multinational organizations that have offices all around the world. Because of geography and cultural differences each of the local offices will have at least slightly different organizational culture. You would expect to see a different vibe in an office in India, in Poland, and in USA, even if they are the parts of the same company. Even if that company has pretty uniform culture.

There are examples of introducing culture pockets or culture bubbles when everyone works in the same building too.

One such idea is Lean Startup. One obvious context of applying Lean Startup ideas are startups. Another, and quite a common one, is when big organizations decide to build their product according to Lean Startup principles.

Such a team would operate very differently and very independently from the rest of the organization. Constraints would be different and so would be everyday behaviors. We’d have a culture pocket.

Another similar example is Skunkworks. It’s an idea developed by Lockheed Martin and it boils down to a similar pattern. Lockheed Martin would occasionally run a project in Skunkworks – a very independent team that has a lot of freedom and autonomy. Clearly without all the typical constraints enforced by the company their culture is different than one seen in majority of the company. By the way, a project in this case means designing and building a whole new fighter aircraft or something of similar complexity.

If we go by that analogy, every team can be a culture bubble. It is enough that the constraints within which that team operates are different from those that are standard for the whole organization. This type of culture pocket can go only as far as the team has positional power to redesign their constraints of course. The more positional power there is the bigger the difference of what is happening within and outside of a culture bubble.

Creating a Culture Bubble

Creating and maintaining a culture pocket is a balancing act. One thing is kicking off the change. That would typically mean someone defining different rules for a part of an organization. It can simply be a team of a few people.

Normally any positional power would be an attribute of a manager. This means that such a change needs to involve that manager. They need to change rules, norms, and expected behaviors. Alternatively they need to let others decide about such stuff, i.e. give up on the power they’ve been assigned.

There is another role for mangers in a setup too. They main responsibility is to sustain the culture bubble. When a culture pocket is established there’s effort needed to keep it going within broader, sometimes even unfriendly, culture of the whole organization.

To give you an example, from a perspective of the whole organization it doesn’t matter at all how decisions are made in a team. What matters that there is no problem with indecisiveness and accountability. The way most organizations understand these concepts would mean that a manger has to be decisive and can be kept accountable. It may still be true even if decisions are made by the whole team using e.g. a decision making process.

Fragility of Culture Pockets

The biggest risk related to culture pockets is that they are fragile. Typically they base on the fact that some people, who were in power, distributed that power for a better good. It doesn’t mean, however, that when they are replaced with someone else a new person will keep a similar attitude.

A safe thing in such a situation is to adjust to whatever is the overarching culture of the whole organization. It means that a culture bubble is gone as there’s no longer anyone who take cares of translating the two cultures back and forth.

The message I have is twofold. On one hand if we want to see a fundamental and sustainable cultural change we need to get top ranks involved eventually. Without that we won’t address the risk of fragility of culture pockets. On the other hand, a simple fact that in a big organization we can’t simply change the culture of the whole company doesn’t mean that we have no options whatsoever.

From my experience culture pockets, even if fragile and to some point ephemeral, are a perfect vehicle for self-realization of people inside. For people in leadership and management positions they are sometimes the only way to maintain internal integrity.

Finally, sometimes it is the only option if we want to influence the cultural change.

in team management
1 comment

Portfolio Kanban Board

Portfolio Kanban Board post image

One thing that I learned quickly when I started experimenting with Portfolio Kanban is that a classic, flow-driven board design isn’t particularly good in vast majority of cases.

Board Designs

Long story short, I ended up redesigning the board structure completely and it worked much better. In fact, it worked so well that I started proposing such a design as a starting point whenever working with portfolio Kanban.

Portfolio Kanban Board

Interestingly enough, as Kanban adoption of portfolio level progressed I started seeing completely different approaches to visualization. Not that they were worse. They just focused on different aspects of work.

One that popped up early was two-tier board that addresses different granularity of tasks at the same board. We can track the roots of this design to David Anderson’s time at Corbis. Since then it was picked up to manage portfolios.

Portfolio Kanban Board

Another example came from Zsolt Fabok, who was inspired by Chris Matts. What he proposed was a board that stresses expected delivery dates and how an organization is doing against these dates. Again the board design is completely different from the ones we’ve seen so far.

Portfolio Kanban Board

Another interesting example that I like is a portfolio board that visualized non-homogenous flow of work. This still is one of the most unusual board designs I’ve seen and yet it makes a perfect sense given the context.

Portfolio Kanban Board

By that time it was perfectly clear that there is no such thing as a standard design of Portfolio Kanban board. Each of these designs was fairly optimal if we considered the context. At the same time each of the boards was designed to stress a different aspect of work.

The design I ended up with in my Portfolio Kanban story revolved around available capabilities and commitments. The two-tiered board design focused on flow of coarse-grained items and breaking work down to fine grained items. The deadline driven board based on an assumption that the most critical aspect of work are delivery dates and monitoring delays. Finally, non-homogenous flow board design addressed the issue of different flows of work in each of the projects.

Which design is most useful then? It depends. To address that question we first need to answer which aspect of our work is the most important to track on a regular basis. To get that answer we need to discuss risks.

Risk Management

Obviously, risk management is a multi-dimensional issue. Some dimensions would be more interesting than others. The word “interesting” typically translates to the fact that we are more vulnerable to a specific class of risks or that we are doing especially badly against managing that class of risks.

A typical example in the context of portfolio management would be overburdening. We commit to more projects or products than we can chew. We end up having our teams juggling all the concurrent endeavors. As a result we see a lot multitasking, context switching, and huge inefficiencies.

In such a case the most interesting dimension of risks would be one related to managing available capabilities and ongoing commitments. And that would exactly be information that we’d like to focus on most when designing Portfolio Kanban board.

That’s by the way almost exactly the process I went through when I proposed capability-focused board design. Of course, back then the thought process wasn’t that structured and it was more trial and error.

There are some additional aspects of the story, like the huge variability of size of the projects that we typically see. This would affect the details of the board design as well. In this case relative size is visualized as well.

The most important bit is that we start with the most important risk dimension. This should define the whole structure of Portfolio Kanban board.

Coming back to different visualizations I mentioned we can easily figure out what was the key class of risks in each design.

In two-tiered board the biggest concern was smooth flow of coarse-grained items (feature sets). We can also figure out that variability in size of feature sets wasn’t that much of a problem. Given that we’re talking about product development organization and that they are in full control of how they define feature sets, it does make a lot of sense.

Delivery date driven board stressed how important risks related deadlines and timeliness of delivery were. We may also notice that there isn’t much stress on flow of work and not that much focus on addressing potential overburdening either.

The design with non-homogenous flow, as its name suggests, pinpoints that most important risk dimension was managing flow. On the other hand risks related to capability management and overburdening don’t seem so important.

Optimal Design

The structure of Portfolio Kanban board can show only that much. We can’t visualize all the risk dimensions using the board structure alone. David Anderson in his Enterprise Service Planning talk points that it is common that organizations track 4-8 different dimensions of risks. The board design can address one or two.

Make it the two that matter most.

Where would others go? Fortunately we still have items on our board, whatever we decide them to be. We can track information relevant for other risk dimensions using information on index cards. The design of the items on the board is no less important than the design of the board itself.

Designing Portfolio Kanban board is not an obvious task. We don’t even have a standard approach – something similar to a flow-based design we commonly use on a team level. Understanding how we manage risks is the best guidance that can lead to fairly optimal board design quickly.

Of course one alternative is to go through a trial and error process. Eventually you’d land with similar outcomes. A quicker way though is to start with understanding risks.

in kanban, project management
0 comments

The Fallacy of Shu-Ha-Ri

The Fallacy of Shu-Ha-Ri post image

Shu-Ha-Ri is frequently used as a good model that shows how we adopt new skills. The general idea is pretty simple. First, we just follow the rules. We don’t ask how the thing works, we just do the basic training. That’s Shu level.

Then we move to understanding what we are doing. Instead of simply following the rules we try to grasp why the stuff we’re doing works and why the bigger whole was structured the way it was. We still follow the rules though. That’s Ha level.

Finally, we get fluent with what we do and we also have deep understanding of it. We are ready to break the rules. Well, not for the sake of breaking them of course. We are, however, ready to interpret a lot of things and use our own judgement. It will sometimes tell us to go beyond the existing set constraints. And that’s Ri level.

I’ve heard that model being used often to advise people initially going with “by the book” approach. Here’s Scrum, Kanban or whatever. And here’s a book that ultimately tells you what to do. Just do it the way it tells you, OK?

Remember, you start at Shu and only later you’d be fluent enough to make your own tweaks.

OK, I do understand the rationale behind such attitude. I’ve seen enough teams that do cherry picking without really trying to understand the thing. Why all the parts were in the mechanism in the first place. What was the goal of introducing the method in the first place. On such occasions someone may want to go like “just do the whole damn thing the way the book tells you.”

It doesn’t solve a problem though.

In fact, the problem here is lack of understanding of a method or a practice a team is trying to adopt.

We don’t solve that problem by pushing solutions through people’s throats. The best we can do is to help them understand the method or the practice in a broader context.

It won’t happen on Shu level. It is actually the main goal of Ha level.

I would go as far to argue that, in our context, starting on a Shu level may simply be a waste of time. Shu-Ha-Ri model assumes that we are learning the right thing. This sounds dangerously close to stating that we can assume that a chosen method would definitely solve our problems. Note: we make such an assumption without really understanding the method. Isn’t it inconsistent?

Normally, the opposite is true. We need to understand a method to be able to even assess whether it is relevant in any given context. I think here of rather deep understanding. It doesn’t mean going through practices only. It means figuring out what principles are behind and, most importantly, which values need to be embraced to make the practices work.

Stephen Parry often says that processing the waste more effectively is cheaper, neater, faster waste. It is true for work items we build. It is true also for changes we introduce to the organization. A simple fact that we become more and more proficient with a specific practice or a method doesn’t automatically mean that the bottom line improves in any way.

That’s why Shu-Ha-Ri is misguiding. We need to start with understanding. Otherwise we are likely to end up with yet another cargo cult. We’d be simply copying practices because others do that. We’d be doing that even if they aren’t aligned with principles and values that our organization operates by.

We need to start at least on Ha level. Interestingly enough, it means that the whole Shu level is pretty much irrelevant. Given that there is understanding, people will fill the gaps in basic skills this way or the other.

What many people point is how prevalent Shu-Ha-Ri is in all sorts of areas: martial arts, cooking, etc. I’m not trying to say it is not applicable in all these contexts. We are in a different situation though. My point is that we haven’t decided that Karate is the way to go or we want to become a perfect sushi master. If the method was defined than I would unlikely object. But it isn’t.

Are there teams that can say that Scrum (or whatever else) is their thing before they really understand the deeper context? If there are then they can perfectly go through Shu-Ha-Ri and it will work great. I just don’t seem to meet such teams and organizations.

in personal development
18 comments

The Cost of Too Many Projects in Portfolio

The Cost of Too Many Projects in Portfolio post image

I argued against multitasking a number of times. In fact, not that long ago I argued against it in the context of portfolio management too. Let me have another take on this from a different perspective.

Let’s talk about how much we pay for introducing too many concurrent initiatives in our portfolios. I won’t differentiate here between product and project portfolios because for the sake of this discussion it doesn’t matter that much.

Let’s imagine that the same team is involved in four concurrent initiatives. Our gut feel would suggest that this is rather pessimistic assumption, but when we check what organizations do it is typically much worse than that. For the sake of that discussion and to have nice pictures let’s assume that all initiatives are similarly sized and start at the same time. The team’s effort would be distributed roughly like that.

Portfolio planning

The white space between the bars representing project work would be cost of multitasking. Jerry Weinberg suggests that for each concurrent task we work on we pay the tax of 20% of the time wasted on context switching. Obviously, in the context of concurrent projects and not concurrent tasks the dynamics will be somewhat different so let me be optimistic with what the cost in such scenario would be.

If we reorganize the work so that we limit the number of concurrent initiatives to two we’d see slightly different picture.

Portfolio planning

Suddenly we finished faster. Where’s the difference? Well, we wasted much less time on context switching. I assumed some time required for transition from one project to another yet still, it shouldn’t be close to what we waste on context switching.

In fact, we can move it even further than that and limit the work to a single project or product at the same time.

Portfolio planning

We improved efficiency even more. That’s the first win, and not the most important one.

Another thing that happened is we started each project with the exception of the first one in presence of new information. We could have, and should have, learned more about our business so that we are better equipped to run another initiative.

Not only that. It is likely that technology itself or our understanding of technology advanced over the course of running the first project and thus we will be more effective building another one. These effects stack up with each consecutive project we run.

Portfolio planning

The total effect will be further improvement of the total time of building our projects or products. This is the second win.

Don Reinertsen argues that the longer the project is the longer the budget and schedule overrun. In other words, if we decided to go with all the concurrent initiatives we’d likely to go longer that we assumed.

In short it means that we do end up doing more work that we would do otherwise. Projects are, in fact, bigger than we initially assumed.

Portfolio planning

The rationale for that is that the longer the project lasts the bigger the incentive to cram more stuff into it as the business environment keeps evolving and we realize that we have new market expectations to address.

Of course there’s also an argument that with bigger initiatives we have more uncertainty so we tend to make bigger mistakes estimating the effort. While I don’t directly refer to estimates here, there’s an amplification effect for scope creep which is driven by overrunning a schedule. When we are late the market doesn’t stand still. To make up for that we add new requirements, which by the way make the project even later so we add even more features, which again hit the schedule…

A bottom line is that with bigger projects scope creep can get really nasty. With fewer concurrent initiatives and shorter lead times we get the third win.

Let’s assume that we’ve had deadlines for our projects.

Portfolio planning

What happens when we’re late? Well, we pull more people from other teams. Well, maybe there was one guy who said that adding people to the late project makes it later but, come on, who reads such old books?

Since in this case all our projects are late we’d pull people from another part of an organization. That would make their life more miserable and their project more likely to be late and eventually they will reciprocate taking our people from our future projects in a futile attempt to save theirs. That would introduce more problems in our future projects. No worries, there will be payback time when we steal their people again, right?

It’s a kind of reinforcement loop that we can avoid with fewer concurrent initiatives. That’s a fourth win.

Finally, we can focus on economies of delivering our products or projects. A common sense argument would be to bring time to market as an argument in a discussion. Would we prefer shorter or longer time to market? The answer is pretty much obvious.

To have a meaningful discussion on that we may want to discuss Cost of Delay. How much it costs us to delay each of these projects. It may translate to the situation when we don’t generate revenues or the one when we lose the existing ones. It may translate to the situation when we won’t optimize cost or fail to avoid new costs.

In either case there’s an economic value of delivering the initiative later. In fact knowing the Cost of Delay will likely change the order of delivering projects. If we assume that the last project had the biggest Cost of Delay, the first the smallest (4 times smaller) and the middle ones the same in the middle of the spectrum (a half) we’ll end up building our stuff in another order.

Portfolio planning

The efficiency of using the teams is the same. The economic effect though is vastly different. This is the biggest win of all. Including all other effects we roughly cut down the total delay cost by two thirds.

The important bit of course is understanding the idea of Cost of Delay. However, this couldn’t have been enabled if we’d kept running everything in parallel. In such a situation everything would be finished at the same time – at the latest possible moment. In fact, if we avoid concurrent work even the ultimately wrong choice of the order of the projects would yield significantly better economic results than building everything at the same time.

What we look at is a dramatic improvement in the bottom line of the business we run. The effects of limiting a number of concurrent initiatives stack up and reinforce one another.

Of course, it is not always possible to delay start of specific batch of work or limit the number of concurrent projects to very low number. The point is though that this isn’t a binary choice: all or nothing. It is a scale and typically the closer we can move toward the healthy end of it the bigger the benefits are.

in project management
8 comments

Why We Fail to Change

Why We Fail to Change post image

I’d love to get a beer each time I hear a story about management imposing a change on teams and facing strong resistance. It would be like an almost unlimited source of that decent beverage. Literally every time I’d fancy a beer I’d be like “Hey, does anybody have an agile implementation story to share?”

One common excuse is that people don’t like the change. That is surprising given how adaptable humankind has proven to be. I rather subscribe to the idea that people don’t mind the change; they don’t like being changed.

Unfortunately being changed part is the story of oh so many improvement initiatives. Agile implementations are among most prominent examples of these change programs of course.

So how is it really with responding to changes?

First, it really helps to understand typical patterns of introducing change. The model I find very relevant is Virginia Satir’s Change Model. Let me walk you through it.

We start with existing status quo that translates to a performance level. Then we introduce something new, which we call a foreign element.

Virginia Satir's Change Model 1

Then we see an expected improvement and they lived happily ever after. Actually, not really. In fact whenever I draw that part of the model and ask what happens next people intuitively give pretty good answers.

After introducing a change performance drops.

Virginia Satir's Change Model

It is kind of obvious. We need time to learn how to handle a new tool, practice, method or what have we. Eventually, we get better and better at that and we start seeing the results of promised improvements. Finally, we internalize the change and the cycle is finished.

Because of its shape the curve is called a J-curve.

It is an idealized picture though. In reality it is never such a nice curve.

Virginia Satir's Change Model

What we really see is something much bumpier. It is bumpy already when we maintain status quo. It gets much bumpier when we start messing with stuff. It’s not only that rough average goes down but also worst case scenario goes down and by much more.

It’s pretty much chaos. In fact, that’s exactly how this phase is called in the original Virginia Satir’s model.

Virginia Satir's Change Model

An interesting observation we can make is that the phase called resistance is a short one that happens just after introducing a foreign element. Does it mean that we should expect resistance against the change to be short-lived?

Yes and no. Yes, if we consider only “I’m not even going to try that new crap” type of resistance. It is typically driven by lack of understanding why the whole change was proposed in the first place. There is however the whole range of behaviors that happen later in the process that we would commonly call resistance too.

Some people aren’t ready to see, even temporary, drop in performance and once they face it they propose to get back to the old status quo. When facing a stressful situation many people retreat back to what they know best and the old ways of doing things is exactly what they know best. There are also those who are impatient and not willing to give people enough time to learn the ropes. The last group often includes managers who funded the change in the first place.

In either case the result, eventually, is the same. More resistance.

Virginia Satir's Change Model

Inevitably we reach a pivotal moment. We’ve been through the bumpy ride for quite some time already and yet we haven’t gotten better. In fact, we’ve gotten worse. Not only that. We’ve gotten worse and less predictable. The whole change doesn’t seem like such a good idea after all.

So what do we do?

Virginia Satir's Change Model

Of course we reverse the change and go back to the old status quo. Oh, and we fire or at least demote that bastard who tricked us into starting the whole thing.

One interesting caveat of the whole process is that a change is not always simply reversible. When we changed specific behavior and yet didn’t get expected outcomes reverting the behaviors may be difficult if not impossible.

For the sake of the discussion let’s assume we are lucky and the change is reversible. We are back to the late status quo and we simply wasted some time trying something new. Oh, and we built a stronger case for resisting the next change. We petrified the existing situation just a little bit more.

One reason why changes are reverted so often is the perceived risk of the change.

Virginia Satir's Change Model

Pretty good proxy for perceived risk is predictability. Typically the more unpredictable a team or a process is the more risky it is considered. In this case, the important thing that comes along with a foreign factor is how much predictability changes. Not only does performance drops but it also becomes much less predictable.

While the former alone might have been bearable, both factors combined contribute to the perception that the change was wrong in the first place.

There is another dimension that is very interesting for the whole discussion. It is the scale of change. How much we want to change the existing environment: team, process, practices, etc.

Virginia Satir's Change Model

We can imagine a series of small changes, each modifying the context only slightly. The whole series lead to a similar outcome as one big change rolled out at once.

We can call one approach evolutionary and the other revolutionary. We can use inspiration from Lean and call evolutionary approach Kaizen and revolutionary one Kaikaku.

Virginia Satir's Change Model

Fundamentally the J-curve in both approaches would be shaped the same. The difference is in the scale. The revolutionary change means one big leap and rolling out all the new stuff at once. This means a single big J-curve.

The evolutionary approach introduces a lot of tiny J-curves one after the other. In fact it is possible to have a few of changes run concurrently but let’s not complicate the picture any more.

What are the implications?

Virginia Satir's Change Model

Let’s go back to the scale of the risk we undertake. With Kaikaku unpredictability we introduce is much higher than what we’ve seen in the late status quo.

Kaizen on the other hand typically go with the changes small enough that we don’t destabilize the system nearly as much. In fact it is pretty likely that unpredictability introduced by each of the small changes will be almost invisible given that we don’t deal with fully predictable process anyway.

The risks we take with evolutionary approach are much more bearable than ones that we deal with rolling out one big change.

That’s not all though.

Virginia Satir's Change Model

Another thing is how much destabilization lasts. In other words what is cycle time of change.

Big change, naturally, has much longer cycle time as it requires people to internalize much more new stuff. It means that exposure to the risks is longer. Given that the risks are also bigger it raises the odds that the change will be reverted before we see its results.

With small changes cycle time is shorter and so is exposure to the risks. Again, not only are the risks much smaller but also they are mitigated much faster.

One last thing worth mentioning here is that so far we optimistically assumed that all the proposed changes have positive outcome. That is not always true.

With the evolutionary approach even if some of the changes don’t yield expected results we still gain from introducing others. With a revolutionary approach each part that doesn’t work simply increase likeliness of reverting the whole thing altogether.

It is not to say that Kaizen is always superior to Kaikaku. In fact both evolutionary and revolutionary approaches have their place. Stuart Kauffman’s Fitness Landscape helps to explain that.

Stuart Kauffman Fitness Landscape

Imagine a landscape that roughly shows how fit for purpose your organization is. It should simply translate to factors such as productivity etc. The higher you are the better.

The simplest and safest way to climb up would be to make small steps uphill.

Stuart Kauffman Fitness Landscape

While the approach works very well, eventually we reach a local peak. If we continue our small steps in any direction it would result in lower fitness for purpose. Simply said we wouldn’t perform as well as we did at the peak.

If we look only at the closest terrain we might as well say that we’re already perfect and there’s no need to go further.

Stuart Kauffman Fitness Landscape

Obviously, someone saying that wouldn’t be treated seriously. Well, not unless we are discussing a patient of a mental facility.

The solution is seen when we look at the big picture. If we moved to the slope of another hill we can get better than we are.

Stuart Kauffman Fitness Landscape

That’s exactly when we need a big jump. It doesn’t have to automatically land us in a better situation than the one we’ve been at initially. The opposite would often be the case. What is important though is that we land on the hill that is higher. That translates to bigger potential for improvement.

Stuart Kauffman Fitness Landscape

Once there we can retreat back to good old strategy of small steps that allow us to climb up. Eventually we reach the peak that is higher than the previous one. Then we can repeat the whole cycle looking for even a bigger hill.

Of course, similarly to the case of J-curves the picture here is idealistic in a way that each change, be it small or big, is a successful one. In reality it is more of experimentation. Some of the changes would work, some not.

Stuart Kauffman Fitness Landscape

As you might have guessed, small steps here represent the evolutionary approach or Kaizen. A big jump is an equivalent of revolutionary change or Kaikaku. Depending on the context one or the other will be more useful.

In fact, there are situations when one of the strategies will be basically useless. That’s why introducing change without understanding current context is simply begging for failure.

One more implication of the picture is that, given lack of any other guidance, evolutionary approach is both less risky and more likely to succeed. That’s why I prefer to start with when I’m unsure about the context which I’m operating within.

One last remark on the Fitness Landscape. What you’ve seen here is a heavily oversimplified view. In reality fitness landscape wouldn’t be two-dimensional. Stuart Kauffman discussed it as three-dimensional model although I tend to think of it as of a multi-dimensional model.

It means that each change can improve our situation in some dimensions and have an opposite result in others. We will have different combination of effects in different dimensions – some more desirable and some less.

If that wasn’t enough the whole landscape is dynamic and it is continuously changing over time. In other words, even after reaching local optimum we will need further continuous improvements to maintain our fitness for purpose. The peak will be moving over time.

I know the post got long by now (thank for bearing with me that far by the way). This is however the starting point for the discussion why introducing the change very often triggers resistance. It provides pretty good explanation why some many improvement initiatives fail. This is also one of my answers to the question why many agile or lean adoptions are doomed to failure from the day one.

Trying to significantly change an organization without understanding some underlying mechanisms is simply begging for frustration and failure.

Finally, understanding the change models will influence the choice of the methods and tools we’d use to drive our change programs.

in entrepreneurship, software business, team management
2 comments

Story Points and Velocity: The Good Bits

Story Points and Velocity: The Good Bits post image

You get what you measure. The old truth we keep forgetting about so often.

Story Points and Velocity

One relevant context to remember this is when we measure progress of project teams. The set that was wildly popularized along with Scrum is story point estimation, most typically with Planning Poker as a method to come up with the estimates, and measuring velocity. In such a set velocity, which simply is a number of story points completed in an iteration, is a primary measurement of pace.

I don’t say the whole set is evil. What is evil though is how it is frequently used. Story point is pretty much meaningless – the same story can be estimated 2 or 8 and both are perfectly valid sizes. This means that the moment someone starts expecting specific velocity they will get it. In fact, continuous improvement in velocity is as easy as pie. It’s known as story point inflation.

The same thing will happen when someone starts comparing teams basing on velocity.

And then there’s expectation for velocity to be predictable, which translates to low variability. If that’s the goal story point estimates will be gamed so velocity looks predictable.

How much does it have to do with any real sense of sizing?

OK, I hear the argument that these all are dysfunctions related to velocity. Fair enough. Let’s assume for the rest of this article that we are doing it right.

Measuring Progress

The problem is, that the whole activity of estimating story points doesn’t provide much value, if any. What Larry Maccherone’s research shows is that the best measure of throughput is simply counting the stories or features done.

Let me stress that: it doesn’t matter what size we initially though a story or a feature would be. What matters is that it’s either completed or not. That’s it.

Larry knows what he’s talking about. The data sample he had was from ten thousand agile teams, vast majority of them being Scrum teams. If there had been a quality signal in story point estimations and measuring velocity he would have seen it. He didn’t.

So even if we do the whole thing right it’s just a complete waste of time. Or is it?

Estimation

One part of Planning Poker, or any other discussion about story point estimates, is validating all sorts assumptions about a feature or a story. Another is addressing gaps in knowledge about the task.

These discussions can provide two valuable bits of data. Sometimes we realize that the chunk of work hidden behind a story is simply too big and should be split it. Typically it’s after someone provides input about complexity of a scenario.

The outcome of such a scenario would be simply splitting a story.

A different case is when we realize we simply don’t know enough to come up with any meaningful sizing. It may be either because we need more input or simply we are discovering a new land and thus our level of uncertainty is higher. The reason doesn’t matter.

What matters is that in such a case we deal with more uncertainty than normally thus we introduce more risk.

In both cases we get additional valuable information. Beyond that a discussion whether something is worth 5 or 8 story points is irrelevant.

No Bullshit Estimation Cards

That’s basically rationale for no bullshit estimation cards. I like to think of it as of “Story Points and Velocity: The Good Bits.”

Instead of focusing on futile discussion and potentially driving dysfunctional behaviors we get a neat tool that keeps few valuable parts of the approach. And you get a little bit of sense of humor for free.

By the way, there’s a more serious a.k.a. politically correct version too.

It saves time. It provides value. And most of all, it makes you focus on the right challenges.

And in case you wondered, you can get your own deck (or as many as you want really).

esimtaion cards
in project management
1 comment

Decision Making Process

Decision Making Process post image

I’m a strong proponent of participatory leadership model where everyone takes part in leading a team or even an organization. A part of leading is making decisions. After all if all decisions still have to be made, or at least approved, by a manager it isn’t much of participatory leadership.

(Benevolent) Dictatorship

The most typical starting point is that someone with power makes all decisions. As a result commonly seen hierarchies are just complicated structures of dictatorships. As a manager within my small kingdom I can do what I want as long as I don’t cross the line drawn by my overlord.

Of course there are managers who invite the whole team to share their input or even distribute particular decisions to team members. There are leaders who use their power for the good of their people. It may be benevolent dictatorship. It is still dictatorship though.

This model works fairly well as long as we have good leaders. Indecisiveness isn’t a super-common issue and if it is there’s at least one person who clearly is responsible. Often leaders have fair experience in their roles thus they are well-suited to make the calls they make.

The model isn’t ideal form a perspective of promoting participatory leadership. If we want more people to be more involved in leading a team or an organization we want them to make decisions. And I mean truly make decisions. Not as in “I propose to do this but I ask you, dear manager, to approve this so that responsibility is, in fact, on you.” I mean situations when team members make their calls and feel accountable for them.

I’d go even further and propose that in truly participatory leadership model team members acting as leaders would make calls that their managers wouldn’t.

This isn’t going to happen with a classic decision making process.

Consensus

A natural alternative is a consensus-driven decision making process. A situation where we look for a solution that everyone agrees on.

This one definitely allows escaping dictatorship model caveats. It doesn’t come for free though.

Looking for consensus doesn’t mean looking for the best option, but rather looking for the least controversial option. These two are very rarely synonymous. Another issue is the tiredness effect. After a long discussion people switch to “I don’t care anymore, let someone make that decision finally and move on.”

Not to mention that the whole decision making process suddenly gets really time-consuming for many people.

While in theory consensus solves accountability problem – everyone agreed to a decision – in practice the picture isn’t that rosy. If I didn’t take active part in the discussion or my objections were ignored I don’t feel like it’s my decision. Also if the decision was made by a group I will likely feel that responsibility is distributed and thus diluted.

One interesting flavor of consensus-driven decision making is when people really care about the decision even though it is controversial. It’s not that they want to avoid participation or even responsibility. It’s just consensus is unlikely, if even possible.

Such a discussion may turn into an unproductive shit storm, which doesn’t help in reaching any common solution and yet it is emotionally taxing.

Advisory Process

There is a very interesting middle ground.

My pursuit of participatory leadership decision making became a major obstacle. I declined to use my dictatorship power on many occasions encouraging people to make their own calls. The answer for a question starting with “Can I…” would simply be “Well, can you?” That worked up to some point.

It builds the right attitude, it helps to participate in leading and it makes people feel accountable. The problem starts when such a decision would affect many people. In such a case we tend to retreat back to one of the previous models: we either seek consensus or look for a dictator to make that call for us.

Not a particularly good choice.

I found the solution while looking at how no management companies deal with that challenge. Basically, everyone acts as if they had dictatorship power (within constraints). However, before anyone makes their call they are obliged to consult the people who have expert knowledge on the subject as well as those who will be affected by the decision.

This is called an advisory process. We look for advice from those who can provide us valuable insight, either because they know more about the subject or because their stakes are in play. Ultimately though, the decision is made by a single person. Interestingly enough, the decision-maker doesn’t have to take all the insight from the advisory process into account. Sometimes it is not even possible.

Accountability is clearly there. A healthy level of discussion about the decisions is there as well.

Constraints

The key part of adopting such a decision making scheme is a clear definition of constraints. Basically, a dictator, whoever that is in a given context, gives up power over specific types of decisions.

The moment a team member makes a call that gets vetoed, the whole mechanism is pretty much rendered irrelevant. It suggests that people can make decisions only as long as the manager likes them. That isn’t just a form of dictatorship but a malicious one.

These constraints may be defined in any way, e.g. as a set of specific decisions, or as decisions that don’t incur cost beyond some limit, etc. Clarity is important, as a misunderstanding on that account can have exactly the same outcome as ignoring the rules. After all, if I believe I could have made a decision and it turns out not to be true, I will be disappointed and disengaged. It doesn’t matter what exactly the root cause was.

Setting constraints is also a mechanism that allows a smooth transition from benevolent dictatorship to a participatory model. One super difficult challenge is to accept that I, as a manager, have lost control and some decisions will be made differently than I’d make them. It’s better to test how this works with safe-to-fail experiments before applying the new model to serious stuff.

It also addresses a potential threat of someone willing to exploit the system for their own gain.

Learning the ropes is surprisingly simple. It doesn’t force people to go too far out of their comfort zones and yet it builds a sense of leadership across the board. Finally, it provides a nice transition path from the old decision making scheme.

And the best thing of all – it is applicable at any level of an organization. It can be used at the very top of the company, which is what no management organizations do, but it can also be applied just within a team by its manager.


Economic Value of Slack Time

Economic Value of Slack Time post image

I ranted about 100% utilization a few years ago already. Let me add another thread to that discussion. We have a ton of everyday stories showing how brain-dead the idea of maximizing utilization is. Sometimes we can figure out how it translates to work organization as well. Interestingly, what Don Reinertsen teaches us is that queuing theory says exactly the same.

As utilization goes up, lead time (or wait time) goes up as well. Except the latter doesn’t grow linearly: it shoots up steeply as utilization approaches 100%. It looks roughly like this.

Cost of high utilization
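
If you want to play with the shape of that curve, here’s a minimal sketch, assuming the textbook single-server M/M/1 queue (my own illustration, not taken from the post’s charts), where the average queue wait is utilization / (1 - utilization) times the average service time:

```python
# Minimal sketch (illustrative, not from the original charts): how wait time
# behaves as utilization climbs, assuming a textbook M/M/1 single-server queue
# with an average service time of one "unit" of work.

def avg_wait(utilization, service_time=1.0):
    """Average queue wait for an M/M/1 system: rho / (1 - rho) * service time."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return utilization / (1 - utilization) * service_time

for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"utilization {rho:.0%} -> average wait {avg_wait(rho):5.1f}x service time")
```

Going from 80% to 90% utilization more than doubles the wait, and at 99% the wait is two orders of magnitude longer than at 50%, which is exactly the shape the chart above shows.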

But wait, does it mean that we should strive to have as low utilization as possible? I mean, after all that’s where lead times are the shortest. This doesn’t sound sensible, right?

And indeed it doesn’t make sense. The cost of waiting is only one part of the equation. The other part is the cost of idle capacity: we have people doing nothing, so they don’t produce value, yet they still cost something. From that perspective we have two cost components: delay cost, related to long lead time, and cost of idle capacity, related to low utilization.

Cost of high utilization

Of course the steepness of the curves will differ depending on the context. The thing is that the most interesting part of the chart is the sum of the costs which, naturally, is optimal at neither end of the scale.

Cost of high utilization

There is some sort of economic optimum for how much a system should be utilized to work most cost-efficiently. There’s very good news for us though. The cost curve is a U-curve with a flat bottom. That means we don’t need to find the ideal utilization precisely, as a few percent here or there doesn’t make a huge difference.
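
To see where that flat bottom comes from, here’s a hedged continuation of the earlier sketch. The cost rates are made-up numbers chosen only to illustrate the shape of the sum, not anything we actually use:

```python
# Made-up cost model (illustrative numbers only): total cost is delay cost,
# which grows with wait time, plus idle-capacity cost, which grows as
# utilization drops. The point is the flat-bottomed U shape of the sum.

def total_cost(utilization, delay_cost_rate=1.0, idle_cost_rate=10.0):
    wait_time = utilization / (1 - utilization)     # M/M/1-style queue growth
    delay_cost = delay_cost_rate * wait_time        # cost of long lead time
    idle_cost = idle_cost_rate * (1 - utilization)  # cost of unused capacity
    return delay_cost + idle_cost

levels = [u / 100 for u in range(50, 100, 5)]       # 50% .. 95% utilization
costs = {u: total_cost(u) for u in levels}
cheapest = min(costs, key=costs.get)
for u, c in costs.items():
    mark = "  <- minimum" if u == cheapest else ""
    print(f"utilization {u:.0%}: total cost {c:5.2f}{mark}")
```

With these particular numbers the minimum lands around 70% utilization, but everything between roughly 60% and 75% costs nearly the same. That’s the flat bottom.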

We’d naturally think that the optimum is rather toward the more utilized part of the scale. That’s where the interesting part of the discussion starts.

Economically optimal utilization

We have a pretty damn good idea of how much idle time, or slack time, costs us. This part is easy. Now, the tricky question: how much is shorter lead time worth?

Imagine yourself as a Product Owner in a funded startup providing an online service. Your competitor adds a new feature that generates quite a lot of buzz on social media. How long are you willing to wait to provide the same feature in your app? Would keeping an idle team all the time just in case you need to build something super-quickly be justified?

Now imagine that your house is on fire. How long are you willing to wait for a fire brigade? Would keeping an idle fire brigade just in case be justified?

Clearly, there are scenarios where slight differences in lead time have huge consequences. We don’t want our emergency calls queued for a couple of weeks because a fire brigade or an ambulance service is heavily utilized. In other words, the steepness of the delay cost curve varies a lot between contexts.

Let’s look at how different scenarios change the picture.

Economically optimal utilization
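
Sticking with the same made-up model, the only thing that differs between these scenarios is how steep the delay-cost side is relative to the idle-cost side. The weights below are invented purely for illustration:

```python
# Same illustrative cost model as before; only the delay-cost weight changes.

def optimal_utilization(delay_cost_rate, idle_cost_rate=10.0):
    """Rough grid search for the cheapest utilization level."""
    levels = [u / 1000 for u in range(1, 1000)]  # 0.1% .. 99.9%
    return min(levels,
               key=lambda u: delay_cost_rate * u / (1 - u) + idle_cost_rate * (1 - u))

# Hypothetical weights: waiting hurts a bit vs. waiting hurts a lot.
print(f"feature race      : ~{optimal_utilization(delay_cost_rate=1):.0%} utilization")
print(f"emergency service : ~{optimal_utilization(delay_cost_rate=5):.0%} utilization")
```

The exact percentages mean nothing; the takeaway is only that the steeper the delay cost curve, the lower the economically optimal utilization, i.e. the more slack is justified.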

This sets the economically optimal utilization at very different levels. There are contexts where a lot of slack is perfectly justified. The ultimate example I can come up with is most armies. We don’t expect them to be fully engaged in wars all the time. In fact, the more slack armies have, the better. Somehow we don’t conclude that if an army has no war to fight, we’d better find them one.

Of course it does matter how they use their slack time, but that’s another story.

We don’t have such drastic examples of the value of slack in the software industry. However, we also deal with very different steepness of the delay cost curve. Even if we aren’t expected to deliver instantly, we need to move quicker and quicker because everyone else does the same.

The bottom line is that our intuition about the cost of wait time (delay cost) is often flawed. This means that even if we get beyond the myth of 100% utilization, we still tend to overload our teams.

Oh, and if you wondered, at Lunar Logic our goal is to keep team utilization between 80% and 90%.
