Author: Pawel Brodzinski

  • Value for Money

    There’s one observation that I pretty much always bring to the table when I discuss the rates for our work at Lunar Logic. The following is true whenever we are buying anything, but when it comes to buying services the effect is magnified. A discussion about the price in isolation is the wrong discussion to have.

    What we should be discussing instead is value for money: how much value I get for what I pay. In a product development context the discussion is interesting, because value is not provided simply by adding more features. If that were true, if the dynamic of “the more features, the better the product” actually held, we could distill the discussion down to efficient development.

    For anyone with just a little bit of experience in product development such an approach would sound utterly dumb.

    Customers who will use a product don’t want more features or more lines of code; they want their problem solved. The ultimate value is in understanding the problem and providing solutions that effectively address it.

    The “less is more” mantra has been around for years. But it’s not necessarily about minimalism; it’s more about understanding the business hypothesis, the context, the customer, and the problem, and then proposing a solution that works. Sometimes the answer will be “less is more.” Sometimes the outcome will be quite stuffed. Almost always the best solution will be different from the one envisioned at the beginning.

    I sometimes use a very simple, and not completely made up, example. Let’s assume you talk to a team that is twice as expensive as your reference team. They will, however, guide you through the product development process, so that they’ll end up building only one third of the initial scope. It will be enough to validate, or more likely invalidate, your initial business hypothesis. Which team is ultimately cheaper?

    The more expensive team is not cheaper if you only take into account the cost of developing an average feature. Feature development is, however, neither the only nor the most important outcome they produce. Looked at from that perspective the whole equation looks very different, doesn’t it?
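
    To make the arithmetic behind that example explicit, here is a tiny sketch in Python; the rates and scope numbers are made up purely for illustration.

    reference_rate = 1_000                 # hypothetical cost per unit of scope for the reference team
    expensive_rate = 2 * reference_rate    # the team that is "twice as expensive"

    initial_scope = 30                     # arbitrary number of scope units in the initial plan

    reference_total = reference_rate * initial_scope           # builds the full scope
    expensive_total = expensive_rate * (initial_scope // 3)    # guides you to build only one third

    print(reference_total)   # 30000
    print(expensive_total)   # 20000 -> two thirds of the cost, plus a validated (or invalidated) hypothesis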

    This is a way of showing that in every deal we trade different currencies. Most typically, though not necessarily, one of these currencies is money. We have already touched on two more: functionality, or features, and validation of a business hypothesis. We could go further: code quality, maintainability, scalability, and so on and so forth.

    Now, it doesn’t mean that all these currencies are equally important. In fact, to stick with the example I already used, rapid validation of a business hypothesis can be of little value for a client who just needs to replace an old app with a new one based on the same, proven business model.

    In other words, in different situations different currencies will bear different value for the purchasing party.

    The same is true for the other side of the deal. Providing a client with a scalable application may cost us something very different than building high-quality code does. That is a function of the skills and experience we have available at the company.

    The analogy goes even further than that. We can pick any currency and look at how much each party values it. The perception of value will be different. It will be different even if we are talking about the meta currency: money.

    If you are an unfunded startup, money is a scarce resource for you. If at the same time we are close to our ideal utilization (which is between 80% and 90%), the additional money we’d get may not even be good compensation for the lost options, and thus we’d value money much less than you do.

    On the other hand, if your startup has just signed round B funding, the abundance of available money will make you value it much less. And if we have just finished two big projects, have nothing queued up, and plenty of developers are slacking, then we value money more than you do.

    This is obviously related to the current availability of money and its reserves (put simply: wealth) in a given context. Dan Kahneman described it with a simple experiment. If you have ten thousand dollars and you get a hundred dollars, that’s pretty much meh. If you have a hundred dollars and you get a hundred dollars, well, you value that hundred much, much more.

    Those two situations create a very different perception of the offer one party provides to the other. They also define two very different business environments. In one it is highly unlikely that the collaboration would be satisfying for both parties, even if it happens. In the other, odds are that both sides will be happy.

    This observation creates very interesting dynamics. The most successful deals will be those in which each party trades a currency they value low for one they value highly.

    In fact, it makes a lot of sense to be patient and look for deals where there is a good match on this account, rather than to jump on anything that seems remotely attractive.

    Such an attitude requires a lot of organizational self-awareness on both sides. At Lunar Logic we think of ourselves as product developers. It’s not about software development or adding features. It’s about finding ways to build products effectively. That requires a broader skill set and a different attitude. At the same time we expect at least a bit of Lean Thinking on the part of our clients. We want to share the understanding that “more code” is pretty much never the answer.

    Only then will we be trading currencies in a way that makes it a good deal for both parties.

    And that’s exactly the pattern that I look for whenever I say “value for money.”

  • Portfolio Management: Role of Autonomy

    I’m a huge fan of Real Options. Along with Cynefin, it is one of the models that can be applied very universally across different domains. No wonder that some time ago I proposed applying Real Options as a sense-making mechanism that connects the different levels of work being done in an organization.

    Simply put, potential work items, be they projects or products, are options. We rarely, if ever, can effectively work on all the potential initiatives we have on our plates. That’s why we end up picking, a.k.a. committing to, only a subset of the options we have.

    Each commitment to start an initiative instantly generates a set of options on a lower level of work. Once we commit to run a project, there are so many ways we can structure the work and so many possible feature sets we could end up building. We again have a set of options available and again eventually commit to executing some of them. That in turn generates options on a layer of finer-granularity work items, say individual features. It goes all the way down to the most atomic work items we have.

    [Image: Portfolio Management Real Options]
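
    Purely as an illustration, here is a toy sketch in Python of the “a commitment on one level generates options on the next” relationship; the class and function names are made up for this sketch, not part of any formal model.

    from dataclasses import dataclass, field

    @dataclass
    class WorkItem:
        name: str
        level: str                                    # e.g. "portfolio", "project", "feature"
        committed: bool = False
        options: list = field(default_factory=list)   # options generated one level down

    def commit(item, generated_options):
        # Committing to an item at one level instantly creates a set of options
        # on the next, finer-grained level of work.
        item.committed = True
        item.options = generated_options

    # Portfolio level: we commit to one initiative out of many potential ones...
    project = WorkItem("New mobile product", "project")
    commit(project, [WorkItem("Login flow", "feature"), WorkItem("Payments", "feature")])
    # ...and the feature-level options are now there for the project team to choose from.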

    We need an accompanying mechanism to close a full feedback loop between the layers of work. We simply need to provide information back to the higher level of work. Think of situations like a project taking longer than expected. We obviously want that information to be taken into account when we are making commitments on the portfolio level. Ultimately, it means that the available capabilities have changed, and that influences the set of options we have on the portfolio level.

    Again, similar dynamics will be seen between any two neighboring layers of work. Specific technical choices for features will influence how other features are built, or how much time we’d need to make changes in a product.

    [Image: Portfolio Management Real Options]

    The model can easily be scaled up to reflect all the layers of work that are present in an organization. In big companies there will be multiple layers of work even within the area of portfolio management alone.

    The underlying observation is that we very, very rarely need information to be escalated further than between neighboring levels of work. In other words, a single feature that is late will not affect the decision-making process on the portfolio level. By the same token, a commitment to start a new project, as long as it takes the available capabilities into account, will be of little interest to a feature team involved in an ongoing initiative.

    There is, however, one basic assumption that I subconsciously made when proposing this model. The assumption is about autonomy.

    Work flows down to the finer-granularity level through a commitment made at the coarser-granularity level. The commitment, however, is not only an expression of good will that we want to build something. If we make a commitment to run a project, we need to fund and staff it. Part of the commitment is providing the people, skills, and resources required to accomplish that project within the expected constraints, be it time, budget, scope, etc.

    If there are other constraints that are important, they need to be explicitly described when the commitment is being made. One example that comes to my mind would be the ultimate goals for a product or a project. It can be about technical constraints: for whatever reason, the technologies a product will be built with may be fixed. Another common case would be high-level dependencies, e.g. between two interconnected systems.

    Such constraints need to be explicit and need to be expressed when the commitment is being made, simply because they influence what options we will have on the lower level of work.

    There’s also another important reason why we want explicit constraints. When we move our perspective to a different level of work, we also change the team that is involved in the work. In the most common scenario the team context will change from the PMO, through a project team, to a feature team as we go down through the picture.

    And that’s exactly where autonomy kicks in. A commitment on a higher level of work generates options on a lower level. What kind of options we get depends on the constraints we set. These are all prerogatives of the team making decisions on the higher level.

    The specific choice among the available options, on the other hand, is the responsibility of the team that operates on the lower level.

    Obviously, we don’t want a PMO leader to tell developers how to write unit tests. That’s an extreme example though, and I see violations of autonomy all over the place.

    Let’s start from the top. The role of the PMO in such a scenario would be to pick the initiatives that we want to run, a.k.a. make project- or product-level commitments. Part of the process would be defining relevant constraints for each commitment. These would be things like staffing and funding the new initiative, sharing expectations about deadlines, etc. This is supposed to provide a fair amount of predictability and safety to the team that will be doing the actual work.

    One crucial part of defining constraints is making the goals of the initiative explicit. What are we trying to achieve with this product or project? In other words, why did we decide to invest the time of that many people and that much money, and why do we believe it was a good idea?

    And now the final part: then the PMO should get out of the way. The options are there, in a product team or a project team. That team should have the autonomy to pick the ones they believe are best. Interference from the top will disable autonomy and as such will be a source of demotivation and disengagement. It is very likely that such interference would yield a suboptimal choice of options too.

    The pattern remains the same when we look at any two neighboring layers of work. For example, we will see similar dynamics between a product team and a feature team.

    [Image: Portfolio Management Real Options Autonomy]

    The influence on which options get executed happens through the definition of constraints, not by enforcing a specific choice of options. The different levels of work are, in a way, isolated from each other: by the mechanism of commitment that yields options on a lower level, by the feedback loops going up, and finally by distributing authority and maintaining autonomy to make decisions within one’s own sphere of influence.

    Unsurprisingly, the latter gets abused fairly commonly, which is exactly why we need to be more aware and mindful of the issue.

  • Don’t Limit Work in Progress

    I’m a long-time fan of visual management. Visualizing work helps to pick the low-hanging fruit: it makes the biggest obstacles instantly visible and thus helps to facilitate improvements. By the way, these early improvements are typically fairly easy to apply and have a big impact. That’s why we almost universally propose visualization as a practice to start with.

    At the same time, the real game-changer in the long run is when we start limiting work in progress (WIP). That’s where we influence a change of behaviors. That’s where we introduce slack time. That’s where we see emergent behaviors. That’s where we enable continuous improvement.

    What’s frequently reported though, is that introducing WIP limits is hard for many teams. There’s resistance against the mechanism. It is perceived as a coercive practice by some. Many managers find it really hard to go beyond the paradigm of optimizing utilization. People naturally tend to do what they’ve always been doing: just pull more work.

    How do we address that challenge? For quite some time the best idea I had was to try it as an experiment with a team. Ultimately there needs to be team buy-in to make WIP limits work. If there is resistance against a specific practice, e.g. WIP limits, there’s not much point in enforcing it. It won’t work. However, why not give it a try for some time? There doesn’t have to be any commitment to go on after the trial.

    The thing is that it usually feels better to have less work in progress. There’s not as much multitasking and context switching. Cycle times go down, so there’s more of a sense of accomplishment. The pressure on the team often goes down as well.

    There’s also one thing that can show that the team is actually doing much better. It’s enough to measure start and end dates for work items. That allows us to figure out both cycle times (how much time it takes to finish a work item) and throughput (how many work items are finished in a given time window).

    If we look at Little’s Law, or rather its adaptation to the Kanban context, we’ll see that:

    average cycle time = work in progress / throughput

    It means that if we want a shorter cycle time, a.k.a. quicker delivery and shorter feedback loops, we need either to improve throughput (which is often difficult) or to cut work in progress (which is much easier).
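
    As a minimal illustration (the dates below are made up), here is how both metrics can be derived from start and end dates, and how the relationship above plays out:

    from datetime import date

    # (start, end) dates of work items finished within a two-week window
    items = [
        (date(2016, 5, 2), date(2016, 5, 6)),
        (date(2016, 5, 3), date(2016, 5, 10)),
        (date(2016, 5, 9), date(2016, 5, 13)),
    ]
    window_days = 14

    cycle_times = [(end - start).days for start, end in items]
    avg_cycle_time = sum(cycle_times) / len(cycle_times)   # 5.0 days per item
    throughput = len(items) / window_days                  # ~0.21 items per day
    avg_wip = throughput * avg_cycle_time                  # Little's Law rearranged: ~1.07 items

    # With throughput roughly constant, halving the average WIP halves the average cycle time.
    print(avg_cycle_time, throughput, avg_wip)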

    We are then in a situation where we understand that limiting WIP is a good idea and yet realize that introducing such a practice is not easy. Is there another way?

    One strategy that has worked very well for me over the years is to change the discussion around WIP limits altogether. What do we expect when WIP limits are in place? Well, we want people to pull less work and instead focus on finishing items rather than starting them.

    So how about focusing on these outcomes? It’s fairly simple. It’s enough to write down simple guidance. Whenever you finish work on an item, first check whether there are any blockers. If there are, attempt to solve them. If there aren’t any, look at unassigned items, starting from the part of the board that’s closest to the done column (typically the rightmost part of the board). Then gradually move toward the beginning of the value stream. Start a new item only when there’s literally nothing you can do with the ongoing items.

    [Image: Kanban board]

    If we take a developer as an example, the process might look like this. Once they finish coding a new work item, they look at the board. There is one blocked item. It just so happens that the blocker is waiting for a response from a client. A brief chat within the team may reveal that there’s no point in pestering the client for feedback on that ticket right now, or that it may be a good moment to remind the client about the blocker.

    Then the developer would go through the board, starting at the point closest to done. If the process in place was something like development, code review, testing, and acceptance by the client, the first place to look would be acceptance. Are there any work items where the client shared feedback, i.e. where we need to implement some changes? If so, that’s the next task to focus on. In fact, it doesn’t matter whether the developer was the author of the ticket, although if any of the tickets used to be theirs, that may be an input for prioritizing.

    If there’s nothing to act upon in acceptance, then we have testing. Are there any bugs from internal testing that need to be fixed? Are there any tickets waiting for testing that the developer can take care of? Of course, we have the age-old discussion about developers not being willing to do actual testing. I would, however, point out that fixing bugs for fellow developers is an equally valuable activity.

    If there’s nothing in testing that can be taken care of, then we move to code review and take care of anything that’s waiting for review, or implement feedback from reviews that have already been done.

    Then we move to development and try to figure out whether the developer can help with any ongoing work item. Can they pair with other team members? Or maybe there are obstacles where another pair of eyes may be useful?

    Only after going through all these steps does the developer move to the backlog and pull a new ticket.
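
    To make the guidance concrete, here is a hypothetical sketch of that work-selection policy in Python; the column names and item structure are illustrative assumptions, not a prescription:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Item:
        name: str
        column: str                     # e.g. "development", "code review", "testing", "acceptance"
        assignee: Optional[str] = None
        blocked: bool = False

    # Columns ordered from the one closest to done back toward the start of the value stream.
    COLUMNS_RIGHT_TO_LEFT = ["acceptance", "testing", "code review", "development"]

    def next_task(board, backlog):
        # 1. Anything blocked gets attention first.
        for item in board:
            if item.blocked:
                return item
        # 2. Then walk the board right to left, helping to finish ongoing work.
        for column in COLUMNS_RIGHT_TO_LEFT:
            for item in board:
                if item.column == column and item.assignee is None:
                    return item
        # 3. Only when there is literally nothing else to do, pull a new item.
        return backlog.pop(0) if backlog else None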

    The interesting observation is that in the vast majority of cases there will be something to take care of. Just try to imagine a situation where there’s literally nothing that’s blocked, nothing that requires fixing or improvement, and nothing sitting in a waiting queue. If we face such a situation, we likely don’t need to limit work in progress any further.

    And that’s the whole trick. Instead of introducing an artificial mechanism that yields specific outcomes, we can focus on those outcomes directly. If we can guide the team to adopt simple guidance for choosing tasks, we have effectively made them limit work in progress, and likely with much less resistance.

    Now, does it matter that we don’t have explicit WIP limits? No, not really. Does it matter that the actual limits may fluctuate a bit more than in a case where the process has hard limits? Not much. Do we see actual improvements? Huge.

    So here’s an idea. Don’t focus on practices. Focus on understanding and outcomes. It may yield similarly good or better results and with a fraction of resistance.

  • Don’t Mess with Culture

    As I’m writing these words I’m on my way home from Lean Agile Scotland. While summarizing the event, Chris McDermott mentioned a few themes, two of them being organizational culture and experimentation.

    Experimentation is definitely my thing. I am into organizational culture too. I should have been happy when Chris rightly pointed to both as themes of the event. Yet at that very moment alarm lights went off in my head.

    We refer a lot to safe-to-fail experiments. We talk about antifragile or resilient environments. And then we quickly turn to organizational culture.

    The term culture hacking pops up frequently.

    And I’m scared.

    The reason is that in most cases there is no safe-to-fail experiment when we talk about an organizational culture. The culture is an outcome of everyone’s behaviors. It is ultimately about people. In other words, an experiment on the culture, or a culture hack if you will, means changing people’s behaviors.

    If you mess it up, more often than not there’s no coming back. We may introduce a new factor that influences how people behave. However, removing that factor does not bring the old behaviors back. And not only that. Often there’s no simple way to introduce another factor that would bring back the old status quo.

    There’s a study which showed that introducing a fine for showing up late at a daycare to pick up a child resulted in more parents being late, as they felt excused for their behavior. This was quite an unexpected outcome of the experiment. The even more interesting part, however, is that removing the fine did not affect parents’ behaviors at all: they kept showing up late more frequently than before the experiment.

    It’s natural. Our behaviors are an outcome of the constraints of the environment and of our experience, knowledge, and wisdom.

    We affect behaviors by changing the constraints. The change is not mechanistic though. We can’t exactly predict what’s going to happen. At the same time, the change affects our experience, knowledge, and wisdom, and thus irreversibly changes the bottom line.

    I can give you a simple example. When we decided to go transparent with salaries at Lunar Logic, it was a huge cultural experiment. What I knew from the very beginning, though, was that there was no coming back. Sure, we could make salaries “non-transparent” again. Would that change what people learned about everyone’s salary? No. Would that change the fact that they now look at each other through the lens of that knowledge?

    It might also have affected the way they look at the company, in a negative way, as suddenly some of the authority they’d had would have been taken away. In other words, even from that perspective they’d have been better off if such an experiment hadn’t been run at all than if it had been tried and rolled back.

    I’m all for experimentation. I definitely prefer safe-to-fail experiments. I am, however, aware that there are whole areas where such experiments are impossible most of the time, if not all of the time.

    Culture is one such area. It doesn’t mean that we shouldn’t be experimenting with culture. It’s just that we should be aware of the stakes. If you’re just flailing around with your culture hacks, there will be casualties. Having an experimentation mindset is a lousy excuse.

    I guess part of my pet peeve about understanding the tools and the methods is exactly this. When we introduce a new constraint, and a method or a tool is a constraint, we invariably change the environment and thus influence the culture. Sometimes irreversibly.

    It gets even trickier when the direct goal of the experiment is to change the culture. Without understanding what we’re doing, it’s highly likely that such a culture hack will backfire. Each time I run an experiment on a culture, I like to assume that the change will be irreversible, and then I ask myself once again: do I really want to run it?

    If not I simply don’t mess with the culture.

  • Context Switching: The Good and the Bad

    Multitasking is bad. We know that. Sort of. Yet we still keep fooling ourselves that we can efficiently do a few things at the same time.

    When I talk about limiting work in progress, I point out a number of reasons why multitasking and its outcome, context switching, are harmful. One of them is the Zeigarnik Effect.

    The Zeigarnik Effect is the observation that our brains remember unfinished tasks much better. And not only that. If we haven’t finished something, we will also have intrusive thoughts about that thing.

    So it’s not only that it’s easy for us to recall tasks that we haven’t finished. We also don’t really control when we think about these tasks.

    What are the consequences? Probably the most important one is that, in a situation where we handle a lot of work in progress, it is an illusion that we are focusing on a single task. This is an argument I’d frequently hear: it doesn’t matter that we have a dozen work items in development; after all, at any given time I only work on a single feature, right?

    Wrong. What the Zeigarnik Effect suggests is that our brains will be switching context regardless of what we consciously think they are doing.

    An interesting perspective on that discussion is that the Zeigarnik Effect has been disputed; it isn’t a universally accepted phenomenon. Let me run a quick validation with you then. When was the last time that, while doing something completely different, you had an intrusive thought about an unfinished task? Be it an email you forgot to send, a call you didn’t make, a chore you were supposed to do, or whatever else.

    We do have those out-of-the-blue thoughts, don’t we? Now, think about what happens when we do. Our brain instantly switches context. It doesn’t really matter what we’ve been doing prior to that: driving a car, coding, or having a discussion.

    That’s exactly where the multitasking tax is rooted.

    It’s not all bad though. We frequently use the Zeigarnik Effect to help us. A canonical example is when we struggle with solving a complex issue and give up, just to figure it all out when we take a shower, brush our teeth, or simply lie in bed after waking up in the morning. We release the pressure of sorting it all out instantly and let our brain take care of it.

    And it does. At a moment that’s convenient for our thinking process, we face a context switch that brings us a solution to the puzzle we’ve been facing.

    It is worth remembering that this is a case of context switching as well. It just so happens that we’ve been taking a shower, and thus it doesn’t hit our productivity. The pattern in both cases is exactly the same though.

    That’s why it is worth remembering that adding more and more things to our plate doesn’t make us more effective at all. At the same time, we may use exactly the same mechanism to let our brain casually kick in when we struggle with solving a difficult problem.

  • Hierarchy Is Bad For Motivation

    Whenever the topic of motivation at work pops up, I always bring up Dan Pink’s point: in the context of knowledge work, to create an environment where people are motivated we need autonomy, mastery, and purpose.

    The story is nice and compelling. However, what we don’t instantly realize is how high Dan Pink sets the bar. Let me leave the purpose part aside for now; it is worth a post of its own. Let’s focus on autonomy and mastery.

    First of all, especially in the context of software development, there’s a strong correlation between the two. Given that I have enough autonomy in how I organize my work and how the work gets done, I most likely can pursue mastery as well. There are edge cases of course, but most frequently autonomy translates to mastery (not necessarily so the other way around though).

    The problem is that the way organizations are managed does not support autonomy across the board. The vast majority of organizations employ hierarchy-driven structures. A line worker has a manager, that manager has their own manager, and so on and so forth, up to the CEO.

    The hierarchy itself isn’t that much of an issue though. What is an issue is how power is distributed within the hierarchy. Typically, specific powers are assigned to specific levels of management. A line manager can do this much. A middle manager that much. A senior manager even more. Each manager is a ruler of their own kingdom.

    Why is power distribution so important? Well, ultimately in knowledge organizations power is used for one purpose: making decisions. And decision-making is a perfect proxy if we are interested in assessing autonomy.

    Of course, each ruler has a fair level of flexibility when it comes to deciding how decision-making happens in their teams. There are, however, mechanisms that discourage them from changing the common pattern, i.e. the dictatorship model.

    The hierarchical, a.k.a. dictatorship, model has its advantages. Namely, it addresses the risks of indecisiveness and unclear accountability. Given that power is clearly distributed across the hierarchy, we always know who is supposed to make a decision and thus who should be held accountable for it.

    That’s great. Unfortunately, at the same time it discourages attempts to distribute decision-making. As a manager I’m still held accountable for all the relevant decisions made, so I’d better make them myself, or at least double-check whether I agree with the ones made by the team.

    This in turn means that normally there’s very little autonomy in hierarchical organizations.

    It brings us to a sad realization. The most common organizational structures actively discourage autonomy and authority distribution.

    If we come back to where we started – the drivers of motivation – the picture is grim. The vast majority of companies adopt the hierarchical model as if it were the only thing there is. Not only that: even within a hierarchical model we could introduce a culture that encourages autonomy, yet very, very few companies do so.

    We could conclude that if the above argument is true, we would expect really low levels of motivation globally in the workforce. It is a safe assumption that high motivation translates into engagement and vice versa.

    Interestingly enough, Gallup ran a global survey on employee engagement. The bottom line is that only 13% of employees are engaged at work. Thirteen. It would have been a shock if not for the fact that we just proposed that one of the current management paradigms – the prevalent organizational structure – is unsuitable for introducing autonomy across the board and thus high levels of motivation.

    In fact, active disengagement, which translates to being openly disgruntled, is universally more common than engagement. Now, that tells a story, doesn’t it?

    What we see here is that the modern workplace is not well suited to achieving high motivation and high engagement of employees. There are certain things that can change the situation within the structural constraints. There are good stories about how to encourage the right behaviors without tearing down the whole hierarchy.

    It is also a challenge to the dominant management paradigm that makes a rigid hierarchy the prevalent and by far the most popular organizational structure out there. While such a hierarchy addresses specific risks, it isn’t the only way of dealing with them. The price we pay for following that path is extremely high.

    I for one consider that price too high.

  • Value of MVP and Knowledge Discovery Process

    By now, Minimum Viable Product (MVP) is mostly a buzzword for me. While I’ve been a huge fan of the idea ever since I learned about it from Lean Startup, these days I feel like one can label anything an MVP.

    Given that Lunar Logic is a web software shop, we often talk with startups that want to build their product. I think I can recall one or maybe two ideas that were really minimal, in the sense that they would validate a hypothesis and yet require the least work to build. The normal case is that I can easily figure out a way of validating a hypothesis without building half, or even two thirds, of an initial “MVP”.

    With enough understanding of the business environment it’s fairly easy to go even further than that, i.e. cut even more features and still get the idea (in)validated.

    A prevalent approach is still to build a fairly feature-rich app that covers a bunch of typical scenarios we think customers would expect. The problem is that this means thinking in terms of features, not in terms of customers’ problems.

    Given that Lunar has been around for quite a long time – it’s going to be its 11th birthday this year – we also have a good sample of data on how successful these early products were. Note that I’m focusing here more on whether an early version of a product survived rather than on whether it was a good business idea in the first place.

    Roughly 90% of the apps we built are not online anymore. That doesn’t mean all these business ideas were failures. Some eventually evolved away from the original code base. Others ended up making their owners rich after they sold the product to, e.g., Facebook. The reasons vary. The vast majority simply didn’t make the cut though.

    From that perspective, the only purpose these products served was knowledge discovery. We learned more about the business context. We learned more about the real problems of customers and their willingness to pay for solving them. We learned that specific assumptions we’d had were completely wrong and that others were spot on.

    In short, we acquired information.

    In fact, we bought it, paying for building the app.

    This is the perspective I’d like our potential clients to have whenever we’re discussing a new product. Of course, we can build something that will cost 50 thousand bucks, only then release it, and see what happens. Or maybe we can figure out how to buy the same knowledge for much less.

    There are two consequences of such an approach.

    One is that most likely there will be a much cheaper way to validate assumptions than building the app. The other is that we introduce one more intermediate step before deciding to build something.

    The step is answering how much knowing a specific thing is worth to us. How much would we pay to know whether our business idea will work or not? This also boils down to: how much will it be worth if it plays out?

    I can give you an example. When we were figuring out whether our no-estimation cards made sense as a business idea, we discussed the numbers. How much we might charge for a deck. What volumes we could think of. The end result of that discussion was that we figured the potential business outcomes didn’t even justify turning the cards into a product in their own right.

    [Image: estimation cards]

    We simply abandoned the productization experiment, as the cost of learning how much we could earn selling the cards was bigger than the potential gain. Validating such a hypothesis wasn’t economically sensible.

    By the way, we eventually ended up building the site and making our awesome cards available, but with a very different hypothesis in mind.

    In this case it wasn’t about defining what a Minimum Viable Product is. It was rather about figuring out how much the potential new knowledge is worth and how much we’d need to invest to gain that knowledge. The economic equation didn’t work initially, so we put any effort on hold until we pivoted the idea.

    If we turned that into a simple puzzle, it would be obvious. Imagine that I have two envelopes. There is a hundred-dollar bill inside one and the other is empty. How much would you be willing to pay for the information about which envelope holds the money? Well, mathematically speaking, no more than 50 dollars. That’s simple.
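
    The same reasoning as a quick expected-value calculation in Python:

    prize = 100

    # Without the information we pick one of the two envelopes at random.
    expected_without_info = 0.5 * prize + 0.5 * 0      # 50 dollars

    # With the information we always pick the envelope with the money.
    expected_with_info = prize                          # 100 dollars

    value_of_information = expected_with_info - expected_without_info
    print(value_of_information)   # 50 -> the most it is rational to pay for the answer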

    If only we could have such a discussion about every feature we build in our products, we would add much less waste to software. The same is true for whole products.

    Next time someone mentions an MVP, you may ask what hypothesis they’re going to validate with it and how much validating that hypothesis is worth. Only then will a discussion about the cost of building the actual thing have enough context.

    By the way, the more unsure they are about the outcome of validating the hypothesis, the more valuable the actual experiment will be.

    And yes, employing such an attitude does mean that much of what people call MVPs wouldn’t be built at all. And yes, I just said that we commonly encourage our potential clients to send us much less work than they initially want to. And yes, it does mean that we get less money building these products.

    And no, I don’t think it hurts the financial bottom line of the business. We end up being recommended for our Lean approach and for taking care of the best interests of our clients. It is a win-win.

  • Why We Want Women in Teams

    One of the messages I frequently share is that we need more women in our teams. By now I’ve faced the whole spectrum of reactions to this message, from being called a feminist to furious attacks pointing out how I discriminate against women. If nothing else, people are opinionated on this topic, and there’s a lot of shallow, and unfair, buzz when it comes to the role of women in IT.

    Personally, I am guilty too. I’ve been caught off guard a few times when I simply shared the short message – “we need more women in our teams” – and didn’t properly explain the long story behind it.

    Collective Intelligence

    The first part of the story is the one about collective intelligence. We can define the core of our jobs as solving complex problems and accomplishing complex tasks. We do that by writing code, testing it, designing it, and deploying it, but the outcome is that we have solved a problem for our customer. In fact, I frequently say that often the best solution doesn’t mean building something or writing code at all.

    If we agree on the problem-solving frame, a perfect proxy for how well we’re dealing with it is collective intelligence. Well, at least as long as we are talking about collaborative work.

    Anita Woolley’s research pointed to the factors responsible for high collective intelligence: high empathy, evenness of communication in a group, and diversity of cognitive styles. These are not things that we, as an industry, pay attention to during hiring. Another conclusion of the research is that women are typically stronger in these aspects, and thus the more women in a team, the higher its collective intelligence.

    Role of Collaboration

    There are two follow-up threads to that. One is that the research focused on only one aspect of work, which can be translated to collaboration. That’s not all that counts. We can have a team that collaborates perfectly yet doesn’t have the basic skills to accomplish a goal. Of course, all the relevant factors should be balanced.

    This is why at Lunar Logic, during the hiring process, we verify technical competences first. This way we know that a candidate won’t be a burden for a team when they join. Once we know that somebody’s technical skills are above the bar, we focus on the more important aspects, but the first filter is: “can you do the job?”

    The decision-making factors are those related to the company culture and to collaboration.

    Correlation and Causation

    Another thread is that the “more women” message is a follow-up to the observation that women tend to do much better in terms of collective intelligence. I occasionally get flak for mentioning that women are more empathetic. It would typically be a story about a very empathetic man, or about a woman who was a real bitch and ruined the whole collaboration in a team.

    My answer to that is that I don’t want to hire women. I want to hire people who excel at collaboration. If I ended up choosing between an empathetic man and a cold-blooded female killer, it would be a no-brainer to me. I’d go with the former.

    What is important, though, is that statistically speaking women do better when we take the aforementioned aspects into consideration. It’s not that every woman would be better than any man. It’s that if we hired for these traits, we’d end up hiring more women than men.

    And that’s where the discussion often gets heated. People would imply that I’m saying women are genetically better at, say, collaboration. Or pretty much the opposite: they’d say that in our societies we raise women in a way that their role boils down to being “good collaborators” and not “achievers.”

    My answer to that is: correlation doesn’t mean causation. I never said that being a woman is the cause of being empathetic and generally functioning better in a group. What I say is that there is simply a correlation between the two.

    The first Kanban principle says “start with what you do now,” and that’s what I do: I start with what we have right now. I’m not an expert in genetics; I simply accept the current situation and start from there.

    The Best Candidate

    A valid challenge to the “hire more women” argument is that it may end up as positive discrimination. My point in the whole discussion is not really to hire women over men. In fact, the ultimate guidance for hiring remains the same: hire the best candidate you can.

    It just so happens that, once you start thinking about different contexts, the definition of “the best candidate” evolves. The set of traits and virtues of a perfect candidate will be different from what we are used to.

    And suddenly we will be hiring more women. Not because they are women. Simply, because they are the best available candidates.

    Such a change is not going to happen overnight. Even now at Lunar I think we are still too biased toward technical skills. And yet our awareness of, and sensitivity toward, what constitutes a perfect candidate is very different than it was a few years ago. That’s probably why we end up hiring a fairly high percentage of women, and yet we’re not slaves to a “hire women” attitude.

    Finally, I’d like to thank Janice Linden-Reed for the inspiration to write this post. Our chats and her challenges to my messages are exactly the kind of conversations we need to be having in this context. And Janice, being a CEO herself and working extensively with the IT industry, is the perfect person to speak up on this topic.

  • Culture Pockets

    Organizational culture is one of the areas I pay a lot of attention to. Over the years I have come to value the role of culture more and more. The biggest difficulty, though, is that organizational culture is a challenging beast to control.

    Organizational Culture

    organizational culture
    the behavior of humans who are part of an organization and the meanings that the people attach to their actions
    includes the organization’s values, visions, norms, working language, systems, symbols, beliefs, and habits

    If we look at how organizational culture is defined, two things are crucial. One is that a culture is formed by the behaviors of all the people in an organization. The other is that it’s not only about the behaviors but also about what drives those behaviors.

    When we look at it this way, we realize that there’s no easy way to mandate a culture change. We can’t simply say that from now on we are a learning organization, or that we will value collaboration starting on June 1st.

    If we want to see a change of culture, we need to see a change in behaviors. The bad news, though, is that a change of behaviors can’t really be mandated either. I mean, we could install a policeman who would make sure that everyone behaves according to the new policy we issued. But what happens when the policeman is gone? We can safely assume that over time more and more people would retreat back to the old status quo: the behaviors they knew and were comfortable with. The change would be temporary and ephemeral.

    Identifying Culture

    If we want to approach a cultural change, we first need to understand the existing culture. What is valued? What principles does the organization live by? How is that reflected in everyday behaviors? Without understanding the starting point, changes would be rather random and doomed to fail.

    How to identify the culture then? Look at behaviors. Ultimately, the culture is a sum of the behaviors of the people who are part of the organization.

    There is a serious challenge that we’re facing on that front. Not everyone has equal influence over organizational culture. In fact, the higher in the hierarchy someone is the more influence they typically have.

    The mechanism is simple. The higher up the hierarchy I am, the more positional power I have and the more people my decisions affect. One specific type of decision I make, or at least strongly influence, is who gets promoted in my team. Given all my biases, I will likely promote people who are similar to me, share similar values, and behave in a similar way. I perpetuate and strengthen the existing culture.

    That is, by the way, the rationale behind advice I frequently share: if you want to figure out what the organizational culture of a company is, look at its CEO. The CEO typically has the most positional power, and thus their influence over the company is the biggest. The way they behave will be copied and mimicked across the board.

    Of course, we need to pay attention to everyday behaviors and not to the official claims of the CEO. Very frequently there will be a gap between the two. That’s something I call the authenticity gap. An organization claims one thing, but everyday behaviors show another. For example, they claim to care about customer satisfaction and then they bullshit their customers when it comes to sharing the project status.

    This alone says something about culture too (and not a good thing if you need to ask).

    Culture Change

    How do we influence cultural change then? By influencing the factors that drive behaviors, and thus the culture. It’s even better than that: when we change organizational constraints, we potentially influence behaviors across the board, not only in individual cases.

    We have already established, though, that not everyone has equal influence on the culture. People at the top, in the long run, will have the upper hand. First, they control who gets promoted and, as a consequence, who has positional power. Second, that power is needed to change organizational constraints: to introduce new rules, change the existing ones, and establish what’s acceptable and what’s not.

    A simple answer to how to change organizational culture would be to get top management on board and help them understand what it takes to influence the culture.

    Unfortunately, few have the comfort of doing that.

    Does it mean that we are doomed? Does it mean that without enlisting top ranks any attempt to change organizational culture will fail? Not necessarily so.

    Culture Pockets

    I believe I learned about the concept of culture pockets from Dave Snowden in one of his presentations. The basic idea is that within a bigger, overarching culture we can develop and sustain a different culture.

    Another label that is used to describe this concept is a culture bubble.

    When we think within this frame, we can come up with some examples off the top of our heads. One would be multinational organizations that have offices all around the world. Because of geography and cultural differences, each of the local offices will have an at least slightly different organizational culture. You would expect to see a different vibe in an office in India, in Poland, and in the USA, even if they are parts of the same company. Even if that company has a pretty uniform culture.

    There are also examples of introducing culture pockets, or culture bubbles, when everyone works in the same building.

    One such idea is Lean Startup. The obvious context for applying Lean Startup ideas is startups. Another, and quite a common one, is when big organizations decide to build a product according to Lean Startup principles.

    Such a team would operate very differently and very independently from the rest of the organization. The constraints would be different and so would the everyday behaviors. We’d have a culture pocket.

    Another similar example is Skunk Works. It’s an idea developed at Lockheed Martin and it boils down to a similar pattern. Lockheed Martin would occasionally run a project in Skunk Works: a very independent team that has a lot of freedom and autonomy. Clearly, without all the typical constraints enforced by the company, their culture is different from the one seen in the majority of the company. By the way, a project in this case means designing and building a whole new fighter aircraft or something of similar complexity.

    If we follow that analogy, every team can be a culture bubble. It is enough that the constraints within which the team operates are different from those that are standard for the whole organization. Such a culture pocket can only go as far as the team’s positional power to redesign its constraints allows, of course. The more positional power there is, the bigger the difference between what happens within and outside of the culture bubble.

    Creating a Culture Bubble

    Creating and maintaining a culture pocket is a balancing act. One thing is kicking off the change. That would typically mean someone defining different rules for a part of an organization. It can simply be a team of a few people.

    Normally, any positional power would be an attribute of a manager. This means that such a change needs to involve that manager. They need to change the rules, norms, and expected behaviors. Alternatively, they need to let others decide about such things, i.e. give up the power they’ve been assigned.

    There is another role for managers in such a setup too. Their main responsibility is to sustain the culture bubble. Once a culture pocket is established, effort is needed to keep it going within the broader, sometimes even unfriendly, culture of the whole organization.

    To give you an example: from the perspective of the whole organization, it doesn’t matter at all how decisions are made within a team. What matters is that there is no problem with indecisiveness or accountability. The way most organizations understand these concepts would mean that a manager has to be decisive and can be held accountable. That may still be true even if decisions are made by the whole team using some form of group decision-making process.

    Fragility of Culture Pockets

    The biggest risk related to culture pockets is that they are fragile. Typically they are based on the fact that some people who were in power distributed that power for the greater good. It doesn’t mean, however, that when they are replaced, the new person will keep a similar attitude.

    The safe thing to do in such a situation is to adjust to whatever the overarching culture of the whole organization is. That means the culture bubble is gone, as there’s no longer anyone who takes care of translating between the two cultures.

    The message I have is twofold. On one hand, if we want to see a fundamental and sustainable cultural change, we need to get the top ranks involved eventually. Without that we won’t address the fragility of culture pockets. On the other hand, the simple fact that in a big organization we can’t just change the culture of the whole company doesn’t mean that we have no options whatsoever.

    In my experience, culture pockets, even if fragile and to some extent ephemeral, are a perfect vehicle for the self-realization of the people inside. For people in leadership and management positions, they are sometimes the only way to maintain internal integrity.

    Finally, sometimes it is the only option if we want to influence the cultural change.

  • Portfolio Kanban Board

    One thing I learned quickly when I started experimenting with Portfolio Kanban is that a classic, flow-driven board design isn’t particularly good in the vast majority of cases.

    Board Designs

    Long story short, I ended up redesigning the board structure completely and it worked much better. In fact, it worked so well that I started proposing such a design as a starting point whenever working with Portfolio Kanban.

    [Image: Portfolio Kanban board – capability-focused design]

    Interestingly enough, as Kanban adoption on the portfolio level progressed, I started seeing completely different approaches to visualization. Not that they were worse. They just focused on different aspects of work.

    One that popped up early was a two-tier board that addresses different granularities of tasks on the same board. We can trace the roots of this design back to David Anderson’s time at Corbis. Since then it has been picked up to manage portfolios.

    [Image: Portfolio Kanban board – two-tier design]

    Another example came from Zsolt Fabok, who was inspired by Chris Matts. What he proposed was a board that stresses expected delivery dates and how an organization is doing against those dates. Again, the board design is completely different from the ones we’ve seen so far.

    [Image: Portfolio Kanban board – delivery-date-driven design]

    Another interesting example that I like is a portfolio board that visualizes a non-homogeneous flow of work. This is still one of the most unusual board designs I’ve seen, and yet it makes perfect sense given the context.

    [Image: Portfolio Kanban board – non-homogeneous flow design]

    By that time it was perfectly clear that there is no such thing as a standard design for a Portfolio Kanban board. Each of these designs was fairly optimal when considered in its context. At the same time, each of the boards was designed to stress a different aspect of work.

    The design I ended up with in my Portfolio Kanban story revolved around available capabilities and commitments. The two-tiered board design focused on the flow of coarse-grained items and on breaking work down into fine-grained items. The deadline-driven board was based on the assumption that the most critical aspects of work are delivery dates and monitoring delays. Finally, the non-homogeneous flow board design addressed the issue of different flows of work in each of the projects.

    Which design is most useful then? It depends. To address that question we first need to answer which aspect of our work is the most important to track on a regular basis. To get that answer we need to discuss risks.

    Risk Management

    Obviously, risk management is a multi-dimensional issue. Some dimensions will be more interesting than others. The word “interesting” typically translates to the fact that we are more vulnerable to a specific class of risks or that we are doing especially badly at managing that class of risks.

    A typical example in the context of portfolio management would be overburdening. We commit to more projects or products than we can handle. We end up having our teams juggle all the concurrent endeavors. As a result we see a lot of multitasking, context switching, and huge inefficiencies.

    In such a case, the most interesting dimension of risk would be the one related to managing available capabilities and ongoing commitments. And that is exactly the information we’d want to focus on most when designing the Portfolio Kanban board.

    That is, by the way, almost exactly the process I went through when I proposed the capability-focused board design. Of course, back then the thought process wasn’t that structured and it was more trial and error.

    There are some additional aspects to the story, like the huge variability in the size of the projects that we typically see. This affects the details of the board design as well. In this case, relative size is visualized too.

    The most important bit is that we start with the most important risk dimension. This should define the whole structure of the Portfolio Kanban board.

    Coming back to the different visualizations I mentioned, we can easily figure out what the key class of risks was in each design.

    In the two-tiered board, the biggest concern was a smooth flow of coarse-grained items (feature sets). We can also figure out that variability in the size of feature sets wasn’t that much of a problem. Given that we’re talking about a product development organization that is in full control of how it defines its feature sets, that makes a lot of sense.

    The delivery-date-driven board stressed how important the risks related to deadlines and timeliness of delivery were. We may also notice that there isn’t much stress on the flow of work, nor much focus on addressing potential overburdening.

    The design with non-homogeneous flow, as its name suggests, pinpoints that the most important risk dimension was managing flow. On the other hand, risks related to capability management and overburdening don’t seem so important.

    Optimal Design

    The structure of a Portfolio Kanban board can show only so much. We can’t visualize all the risk dimensions using the board structure alone. David Anderson, in his Enterprise Service Planning talk, points out that it is common for organizations to track 4–8 different dimensions of risk. The board design can address one or two.

    Make it the two that matter most.

    Where do the others go? Fortunately, we still have items on our board, whatever we decide them to be. We can track information relevant to the other risk dimensions on the index cards themselves. The design of the items on the board is no less important than the design of the board itself.
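
    As a purely hypothetical sketch – the fields below are examples, not a recommended set – an index card that carries a few extra risk dimensions alongside what the board structure already shows might look like this:

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class PortfolioCard:
        initiative: str
        relative_size: str                   # "S" / "M" / "L": capability and overburdening risk
        deadline: Optional[date] = None      # timeliness risk
        blocked: bool = False                # flow risk
        team: Optional[str] = None           # staffing / capability risk

    card = PortfolioCard("Customer portal rewrite", relative_size="L",
                         deadline=date(2017, 3, 1), team="Team A")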

    Designing a Portfolio Kanban board is not an obvious task. We don’t even have a standard approach, something similar to the flow-based design we commonly use on a team level. Understanding how we manage risks is the best guidance and can lead to a fairly optimal board design quickly.

    Of course, one alternative is to go through a trial-and-error process. Eventually you’d land on similar outcomes. A quicker way, though, is to start with understanding risks.