Tag: software business

  • Lateral Skills Are Core Skills

    We can think of so many things, both in our professional and personal lives, as value exchange. The most obvious case: I exchange 40 hours of my time each week for a set amount of money that lands in my bank account each month.

    As a business, we do the same thing. We commit our engineers to spend their time on whatever our clients need them for, and in exchange, we receive an agreed-upon rate for each hour of that effort.

    In fact, I often use that frame when describing how we perceive our contributions. We want our clients to be happy with the value we deliver for the price they pay for the whole team.

    What Is Value?

    Now, the tricky part in these examples is value. While we can easily assess how many hours anyone spends on a task, the time doesn’t automatically translate to value. What’s more, it’s not even purely a matter of how one will spend that hour.

    I can work on an activity that has only certain odds of success. If the outcome ultimately emerges as a failure, no value will have been delivered. The reasons for that may be entirely outside my sphere of control.

    As an example, think of all the effort I invest into helping a potential client improve how they will approach building their new product just to see them choose a different partner. In this case, my work has not yielded any value for Lunar.

    However, even if I work on something that we assume to have intrinsic value, it’s still tricky. Let’s look at adding a feature to a product. (And yes, I know that not every feature is value-adding. In fact, there are some whose net value would be negative.)

    Assuming the feature I’m building will have a positive effect, the big question is how big an impact it will have and how much our client values it. That question is almost always highly challenging.

    Most of the time, we can’t know the answer upfront. We need to build the thing to be able to validate the outcome. Most of the time, however, we don’t try to check that anyway. Even if we did, most of the time, the early validation will not give the complete picture.

    Think of The Shawshank Redemption. Its theatrical release was largely a flop. And yet, over the years, it gathered a strong fanbase, still holds the #1 position as the best movie ever on IMDb, and eventually at least paid off its production costs. Not a bad outcome for a flop, eh?

    While there obviously is a correlation between the opening weekend box office and the eventual financial success of a movie, we can’t precisely know the financial success/failure just after the first few days.

    The same goes for value assessment in most knowledge work. You could just as easily consider whether paying an employee any specific salary yields a valuable (enough) outcome. And the smaller the chunk of work you look at, the more the mileage will vary.

    Making Value Exchange Work

    We use two basic coping mechanisms to work around uncertainty about value.

    The first one is ignoring the actual value altogether and sticking with our early assumptions of what we expected it to be. It’s like deciding to build that feature because “it will bring us so many new subscribers” and never looking back. Sure, before committing to the development, we might have asked for an estimate. If it was acceptable enough (and the work didn’t take much longer), it was the last time we did any assessment of the work.

    We thought it would bring us a ton of subscribers, and it was a sound decision to pay $10k for that. Oh, and since we already paid for that work, who cares whether it actually brought a single new soul to our solution?

    Well, I’d say this means giving up on a ton of learning here, and that’s of huge value by itself, but I clearly must be wrong, as very few product organizations do any post-release validation.

    The second way to dodge the uncertainty of value is by looking at a bigger whole. I don’t try to assess whether an engineer has been productive during every single hour or day. Heck, I’d be perfectly fine with a slower week, too. However, I want to know whether their long-term performance justifies their salary.

    By the same token, I want the whole team working for a client to deliver good value for money in the long run. So, when we look at weeks or months of work by the whole group, we want to be happy with the value of everything they deliver (of course, in the context of what that effort costs, again, in total).

    In extreme cases, you could look at movie studios or VCs investing in startups. They are happy to weather plenty of bad investments as long as that one movie or that one startup yields a 100x or more return.

    Lateral Skillsets

    We all make one subconscious assumption here. That is, even though we exchange different things, they are of similar value for both parties. If we ask for $90 an hour for our developers, that hour would be priced in roughly the same way, whether by Lunar’s client or by me. Or rather, we would price the skills our client rents for that hour similarly.

    This assumption, though, often doesn’t hold true.

    To stick with the engineering/software development context: when our potential clients assess our team’s skill set, the assessment will almost always be heavily skewed toward technical skills. After all, when you hire developers, they will do development, right?

    But let me reframe the situation here. Imagine you seek someone to turn your idea into an MVP. You have two candidates with very similar technical skills. However, one built their experience working on many different small engagements, including several very early-stage products. The other spent big chunks of their time working on a few large and complex solutions.

    Which one would you choose?

    I bet most of us would go with the former over the latter, as we’d deem their experience more relevant. This relevancy, however, stems from a lateral skillset. It’s not absolute value; it’s value in context.

    Product management skills may actually emerge as the most crucial for that role, even if we don’t think of them that way upfront. At the early stage of product development, building the wrong thing is rarely salvageable. Building the thing wrongly, on the other hand, can typically be saved.

    If we had considered a different gig, we might have chosen differently.

    This is but one example of lateral skills that may make or break any endeavor. A broad range of people skills, project management savvy, business context understanding, and more may be similar game-changers here.

    And yet, without the specific context, the broad market would largely ignore those traits. Even within that context, they are most often overlooked.

    Asymmetric Value Exchange

    Lateral skills are what change the economics of the whole value exchange. The job market would value two similarly technically skilled developers, well, similarly. After all, across a wide variety of challenges, they will provide comparable value.

    However, within the early-stage product development context, the first one will deliver something extra. They wouldn’t be working extra hours, their code wouldn’t objectively be of better quality, and they wouldn’t be faster.

    Yet something that costs that developer nothing extra (their lateral early-stage product management skills) would be highly beneficial for the product company.

    That’s what breaks the simple economy of value exchange. I add to the mix something that’s of little to no cost for me but gives the other party a big upside.

    In other words, I give up nothing while they gain a lot.

    Suddenly, the whole deal has so much more wiggle room, which can be used to make it more attractive for both parties.

    Not only that, though. It also generates additional options for value delivery. Our developer may use their time building a feature that ultimately will not help. However, thanks to their experience, they can also suggest (in)validating a whole part of an app before committing to its development. That, in turn, may lead to much more significant savings.

    In this case, exploiting lateral skills makes the value exchange asymmetric. Why is it important? It’s because whenever you can find a partnership with an asymmetric value exchange, it’s a plain win-win scenario for both parties.

    Since lateral skills typically create these scenarios, we should pay close attention to them.

    Side Skills Are Core Skills

    If I looked for a technical partner to help me with an early-stage product, I would primarily be looking for stories about discovery work, building MVPs, validation, etc.

    As a matter of fact, I’d be explicitly asking for failure stories. I mean, we know the data: new products do fail. So, if the only thing someone has to show is a neat streak of successes, you can be sure they’re living in a fantasy.

    One of my mantras is, “Many of our past clients paid us to fail, so you don’t have to. You get all the experience we got from that as a part of the package.”

    These lessons do not fall into what’s commonly perceived as “core skills” for software development teams. And yet, we do consider them core. We shine most when we’re able to utilize those side skills.

    So, go figure out what constitutes those lateral skillsets in your context. These are the core traits you should be looking for, whether hiring or choosing a partner in your endeavors.

  • Empathy and Respect: What Makes Teams Great

    I’ve been known to bring up research on collective intelligence in many situations, e.g. here, here, or here. In my personal case, the research findings heavily influenced my perception of how to build teams and design organizations. The crucial lesson was that social perceptiveness and having everyone heard in discussions were key to achieving high collective intelligence. This, in turn, translates to a team’s high effectiveness in pretty much any flavor of knowledge work.

    Since the original work was published, the research has been repeated and its findings confirmed. Nevertheless, in the software industry we tend to think we are special (even though we are not), and thus I often hear the argument that trading technical skills for social perceptiveness is not worth it. The reasoning is that technical skills easily translate to better effectiveness in what is our bread and butter—building software. At the same time, fuzzy things, like empathy, do not.

    The research, indeed, was run on people from all walks of life. At the same time, every niche has specific prerequisites that enable any productivity at all. I don’t deny that there is a specific set of technical skills required for someone to contribute to the work a team tries to accomplish. That’s likely true in any industry, and software development is no different.

    As a matter of fact, enough fluency with engineering is something we validate first when we hire at Lunar Logic. The way we define it, though, is “good enough”. We want to make sure that a new team member won’t hamper a team they join. Beyond that, we don’t care too much. It resonates with a simple realization that it is much easier to learn how to code than it is to develop empathy or social perceptiveness in general.

    The whole approach is based on an assumption that findings on collective intelligence hold true in our context. Now, do they?

    Google has famously been on a quest to find the perfect team for years. Some time ago they shared what they learned from a multi-year research effort that involved 180 Google teams. It seems they confirmed pretty much everything that was in Anita Woolley’s original work on teams.

    It’s not technical excellence that lands teams in the group of top performers. By the way, neither is management style; it was orthogonal to how well teams were doing. The patterns that stood out were caring about other team members and equal access to discussion time.

    What’s more, the teams which did well against one goal seemed to do well against other goals as well. Conversely, teams that were below average seemed to be so in a consistent manner. The secret sauce seemed to work fairly universally against different challenges.

    What a surprise! After all, we are not as special as we tend to think we are.

    I could leave it here, as one of those “You see? I was right all that time!” kind of posts. There is more to learn from the Google story, though. One aspect mentioned often in the research is norms, either explicit or implicit. This refers to the specific behaviors that are allowed and supported and, as a result, to organizational culture.

    When we are talking about teams, we talk about culture pockets, as teams, especially in a big organization, may differ quite a bit from one another.

    It seems that even slight changes, such as attitude in group discussions, can boost collective effectiveness significantly. If we look deeper at what drives such behaviors we’ll find two keywords.

    Empathy and respect.

    Empathy is the enabler of social perceptiveness. It is the magic powder that makes people see and care for others. It pays off because an empathic person will likely make everyone around them better. Note: I’m using a very broad definition of empathy here, as there is a whole discussion about how empathy is defined and decomposed.

    Then, we have respect, which results in psychological safety, as people are neither embarrassed nor rejected for sharing their thoughts. This, in turn, means that everyone has equal access to ongoing conversations and everyone is heard. Simply put, everyone contributes. Interestingly enough, it is often perceived as a nice-to-have trait in organizations but rarely as a core capability that every team needs to demonstrate.

    A corollary to that is the observation that both respect and care for others sit deep down in the iceberg model of organizational culture. It means we can roughly sense an organization’s capability for collective intelligence: it’s enough to look at the execs and the most senior managers. How much do they care for others? How respectful are they? Since organizational culture spreads very much in a top-down manner, this is a good organizational climate metric.

    I would risk a bold hypothesis that, statistically speaking, successful organizations have leaders who act in a respectful and empathic way. I have no proof to support the claim, and of course there’s anecdotal evidence of how disrespectful Steve Jobs or Bill Gates could be. That’s why I add “statistically speaking” to this hypothesis. Does anyone know of relevant research on that?

    Finally, there is something I reluctantly admit, since I’m not a believer in the “fake it till you make it” approach. It seems that some rules and rituals can actually drive collective intelligence up. There are techniques for taking turns in discussions. On one hand, they create equal access to conversation time. On the other, they fake respect in this context. That challenges ego-driven extroverts and, eventually, may trigger the emergence of true respect.

    Similarly, we can learn to focus on perceiving others so that we see better how they may feel. It fakes empathy but, yet again, it may trigger the right reactions and, eventually, help to develop the actual trait.

    In other words, we are not doomed to fail even if, so far, we’ve paid attention to technical skills only and ended up with an environment that is far too nerdy.

    However, we’d be so much better off if we built our teams bearing in mind that empathy and respect for others are the most important traits for candidates. Yes, for software developers too.

  • Value for Money

    There’s one observation that I pretty much always bring to the table when I discuss the rates for our work at Lunar Logic. The following is true whenever we are buying anything, but when it comes to buying services the effect is magnified. A discussion about the price in isolation is the wrong discussion to have.

    What we should be discussing instead is value for money: how much value I get for what I pay. In a product development context, the discussion is interesting because value is not provided simply by adding more features. If that were true—the more features, the better the product—we could distill the discussion down to efficient development.

    For anyone with just a little bit of experience in product development such an approach would sound utterly dumb.

    Customers who will use a product don’t want more features or more lines of code; they want their problem solved. The ultimate value is in understanding the problem and providing solutions that effectively address it.

    The “less is more” mantra has been around for years. But it’s not necessarily about minimalism; it’s about understanding the business hypothesis, the context, the customer, and the problem, and proposing a solution that works. Sometimes it will be “less is more.” Sometimes the outcome will be quite stuffed. Almost always the best solution will be different from the one envisioned at the beginning.

    I sometimes use a very simple, and not completely made up, example. Let’s assume you talk to a team that is twice as expensive as your reference team. They will, however, guide you through the product development process, so that they’ll end up building only one third of the initial scope. It will be enough to validate, or more likely invalidate, your initial business hypothesis. Which team is ultimately cheaper?

    The first team is not cheaper if you look at the cost of developing an average feature. Feature development is, however, neither the only nor the most important outcome they produce. Looking from that perspective, the whole equation looks very different, doesn’t it?
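
    To make the arithmetic behind that example explicit (with $r$ standing for the reference team’s hourly rate and $S$ for the hours the full initial scope would take, both purely illustrative symbols):

    \[
    \text{cost}_{\text{reference}} = r \cdot S,
    \qquad
    \text{cost}_{\text{guided}} = 2r \cdot \frac{S}{3} = \tfrac{2}{3}\, r S .
    \]

    Per hour, and per average feature, the guided team costs twice as much; in total, it comes out at roughly two thirds of the price, before we even count the value of the validation itself.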

    This is a way of showing that in every deal we trade different currencies. Most typically, but not necessarily, one of these currencies is money. We’ve already touched on two more: functionality (features) and validation of a business hypothesis. We could go further: code quality, maintainability, scalability, and so on and so forth.

    Now, it doesn’t mean that all these currencies are equally important. In fact, to stick with the example I already used, rapid validation of a business hypothesis can be of little value for a client who just needs to replace an old app with a new one based on the same, proven business model.

    In other words, in different situations, different currencies will bear different value for the purchasing party.

    The same is true for the other side of the deal. It may cost us a different amount to provide a client with a scalable application than to deliver high-quality code. This is a function of the skills and experience we have available at the company.

    The analogy goes even further than that. We can pick any currency and look at how much each party values it. The perception of value will be different. It will be different even if we are talking about the meta currency—money.

    If you are an unfunded startup, money is a scarce resource for you. If, at the same time, we are close to our ideal utilization (which is between 80% and 90%), the additional money we’d get may not even compensate for the options we’d lose, and thus we’d value that money much less than you do.

    On the other hand, if your startup has just signed series B funding, the abundance of available money will make you value it much less. And if we’ve just finished two big projects, have nothing queued up, and plenty of developers are idle, then we value money more than you do.

    This is obviously related to the current availability of money and its reserves (put simply: wealth) in a given context. Dan Kahneman described it with a simple experiment. If you have ten thousand dollars and you get a hundred dollars, that’s pretty much meh. If you have a hundred dollars and you get a hundred dollars, well, you value that hundred much, much more.
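
    One illustrative way to capture that intuition is a concave utility of wealth, for instance logarithmic (my assumption here, not something Kahneman’s experiment prescribes):

    \[
    u(w) = \ln w,
    \qquad
    \Delta u_{\text{wealthy}} = \ln\frac{10100}{10000} \approx 0.01,
    \qquad
    \Delta u_{\text{broke}} = \ln\frac{200}{100} \approx 0.69 .
    \]

    The same hundred dollars moves the second person’s utility roughly seventy times more.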

    Those two situations create a very different perception of the offer one party provides to the other. They also define two very different business environments. In one it is highly unlikely that the collaboration would be satisfying for both parties, even if it happens. In the other, odds are that both sides will be happy.

    This observation creates a very interesting dynamic. The most successful deals will be those where each party trades a currency it values low for one it values highly.

    In fact, it makes a lot of sense to be patient and look for deals where there is a good match on this account, rather than jump on anything that seems remotely attractive.

    Such an attitude requires a lot of organizational self-awareness on both sides. At Lunar Logic we think of ourselves as product developers. It’s not about software development or adding features. It’s about finding ways to build products effectively. It requires a broader skill set and a different attitude. At the same time, we expect at least a bit of Lean Thinking on the part of our clients. We want to share the understanding that “more code” is pretty much never the answer.

    Only then will we be trading currencies in a way that makes it a good deal for both parties.

    And that’s exactly the pattern that I look for whenever I say “value for money.”

  • Portfolio Management: Role of Autonomy

    I’m a huge fan of Real Options. Along with Cynefin, it is one of the models that can be applied very universally in different domains. No wonder that some time ago I proposed applying Real Options as a sense-making mechanism that connects different levels of work being done in an organization.

    Simply put, potential work items, be they projects or products, are options. We rarely, if ever, can effectively work on all the potential initiatives we have on our plates. That’s why we end up picking, a.k.a. committing to, only a subset of the options we have.

    Each commitment to start an initiative instantly generates a set of options on a lower level of work. Once we commit to run a project, there are so many ways we can structure the work and so many possible feature sets that we can end up building. We again have a set of options available and again eventually commit to execute some of them. That in turn generates options on a layer of finer-granularity work items, say individual features. It goes all the way down to the most atomic work items we have.

    [Figure: Portfolio Management Real Options]

    We need an accompanying mechanism to close the full feedback loop between the layers of work. We simply need to provide information back to the higher level of work. Think of situations like a project taking longer than expected. We obviously want that information to be taken into account when we are making commitments on the portfolio level. Ultimately, it means that available capabilities have changed, which influences the set of options we have at that level.

    Again, similar dynamics will be seen between any two neighboring layers of work. Specific technical choices for features will influence how other features are built or how much time we’d need to make changes in a product.

    [Figure: Portfolio Management Real Options]

    The model can easily be scaled up to reflect all the layers of work present in an organization. In big companies there will be multiple layers of work within portfolio management alone.

    The underlying observation is that we very, very rarely need information to be escalated farther than between neighboring levels of work. In other words, a single feature that is late will not affect the decision-making process on the portfolio level. By the same token, a commitment to start a new project, as long as it takes into account available capabilities, will be of little interest to a feature team involved in an ongoing initiative.

    There is, however, one basic assumption that I subconsciously made when proposing this model. The assumption is about autonomy.

    Work flows down to the finer-granularity level through a commitment made at a coarser-granularity level. The commitment, however, is not only an expression of good will that we want to build something. If we make a commitment to run a project, we need to fund and staff it. Part of the commitment is providing the people, skills, and resources required to accomplish that project within the expected constraints, be it time, budget, scope, etc.

    If there are other constraints that are important, they need to be explicitly described when the commitment is made. One example that comes to my mind would be the ultimate goals for a product or a project. It can also be about technical constraints; for whatever reason, the technologies a product will be built in may be fixed. Another common case would be high-level dependencies, e.g. between two interconnected systems.

    Such constraints need to be explicit and need to be expressed when the commitment is being made simply because they influence what options we will have in the lower level of work.

    There’s also another important reason why we want explicit constraints. When we move our perspective to a different level of work, we also change the team that is involved in the work. In the most common scenario, the team context will change from the PMO, through a project team, to a feature team as we go down through the picture.

    And that’s exactly when autonomy kicks in. Commitment on a higher level of work generates options on a lower level. What kind of options we get depends on the constraints we set. These are all prerogatives of a team making decisions on a higher level.

    The specific choice among the available options, on the other hand, is the responsibility of the team that operates on the lower level.

    Obviously, we don’t want a PMO leader to tell developers how to write unit tests. That’s an extreme example, though, and I see violations of autonomy all over the place.

    Let’s start from the top. The role of the PMO in such a scenario would be to pick the initiatives we want to run, a.k.a. make project- or product-level commitments. Part of the process would be defining relevant constraints for each commitment: things like staffing and funding the new initiative, sharing expectations about deadlines, etc. This is supposed to provide a fair amount of predictability and safety to the team that will be doing the actual work.

    One crucial part of defining constraints is making the goals of the initiative explicit. What are we trying to achieve with this product or project? In other words, why did we decide to invest the time of that many people and that much money, and why do we believe it was a good idea?

    And now the final part: the PMO should get out of the way. The options now sit with the product team or the project team. That team should have the autonomy to pick the ones they believe are the best. Interference from the top will disable autonomy and, as such, will be a source of demotivation and disengagement. It is very likely that such interference would yield a suboptimal choice of options too.

    The pattern remains the same when we look at any two neighboring layers of work. For example, we will see similar dynamics between a product team and a feature team.

    [Figure: Portfolio Management Real Options Autonomy]

    The influence on which options get executed happens through the definition of constraints, not by enforcing a specific choice of options. These different levels of work are, in a way, isolated from each other by the mechanism of commitments that yield options on the lower level, by feedback loops going up, and finally by distributed authority and the autonomy to make decisions within one’s own sphere of influence.

    Unsurprisingly, the latter gets abused fairly commonly, which is exactly why we need to be more aware of and mindful about the issue.

  • Value of MVP and Knowledge Discovery Process

    By now, Minimum Viable Product (MVP) is mostly a buzzword for me. While I’ve been a huge fan of the idea since I learned it from Lean Startup, these days I feel like one can label anything an MVP.

    Given that Lunar Logic is a web software shop, we often talk with startups that want to build their product. I think I can recall one or maybe two ideas that were really minimal in the sense that they would validate a hypothesis while requiring the least work to build. The normal case is that I can easily figure out a way of validating a hypothesis without building half, or even two thirds, of an initial “MVP”.

    With enough understanding of the business environment it’s fairly easy to go even further than that, i.e. cut down even more features and still get the idea (in)validated.

    The prevalent approach is still to build a fairly feature-rich app that covers a bunch of typical scenarios we think customers would expect. The problem is that this means thinking in terms of features, not in terms of customers’ problems.

    Given that Lunar has been around for quite a long time (it’s going to be our 11th birthday this year), we also have a good sample of data on how successful these early products are. Note, I’m focusing here more on whether an early version of a product survived, rather than whether it was a good business idea in the first place.

    Roughly 90% of the apps we built are not online anymore. That doesn’t mean all these business ideas were failures. Some eventually evolved away from the original code base. Others ended up making their owners rich after they sold the product to, e.g., Facebook. The reasons vary. The vast majority simply didn’t make the cut, though.

    From that perspective, the only purpose these products served was knowledge discovery. We learned more about the business context. We learned more about the real problems of customers and their willingness to pay for solving them. We learned that specific assumptions we’d had were completely wrong and others were spot on.

    In short, we acquired information.

    In fact, we bought it, paying for building the app.

    This is a perspective I’d like our potential clients to have whenever we’re discussing a new product. Of course, we can build something that will cost 50 thousand bucks, release it, and only then figure out what happens. Or maybe we can figure out how to buy the same knowledge for much less.

    There are two consequences of such an approach.

    One is that most likely there will be a much cheaper way to validate assumptions than building the app. The other is that we introduce one more intermediate step before deciding to build something.

    The step is answering how much knowing a specific thing is worth to us. How much would we pay to know whether our business idea would work or not? This also boils down to: how much will it be worth if it plays out?

    I can give you an example. When we were figuring out whether our no-estimation cards made sense as a business idea, we discussed the numbers: how much we might charge for a deck, what volumes we could expect. The end result of that discussion was that the potential business outcomes didn’t even justify turning the cards into a product of their own.

    [Figure: estimation cards]

    We simply abandoned the productization experiment, as the cost of learning how much we could earn selling the cards was bigger than the potential gain. Validating such a hypothesis wasn’t economically sensible.

    By the way, eventually we ended up building the site and made our awesome cards available but with a very different hypothesis in mind.

    In this case, it wasn’t about defining what a Minimum Viable Product is. It was rather about figuring out how much potential new knowledge is worth and how much we’d need to invest to acquire it. The economic equation didn’t work initially, so we put the effort on hold until we pivoted the idea.

    If we turned that into a simple puzzle, it would be obvious. Imagine that I have two envelopes. There is a hundred-dollar bill inside one, and the other is empty. How much would you be willing to pay for the information about where the money is? Well, mathematically speaking, no more than 50 dollars. That’s simple.
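
    Spelled out as expected values, assuming you would otherwise pick an envelope at random:

    \[
    \mathbb{E}[\text{blind guess}] = \tfrac{1}{2}\cdot 100 + \tfrac{1}{2}\cdot 0 = 50,
    \qquad
    \mathbb{E}[\text{with the information}] = 100,
    \]

    so perfect information about which envelope holds the bill is worth at most 100 − 50 = 50 dollars.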

    If only we could have such a discussion about every feature we build in our products, we would add much less waste to software. The same is true for whole products.

    Next time someone mentions an MVP, you may ask what hypothesis they’re going to validate with it and how much validating that hypothesis is worth. Only then will a discussion about the cost of building the actual thing have enough context.

    By the way, the more unsure they are about the outcome of validating the hypothesis, the more valuable the actual experiment will be.

    And yes, employing such an attitude does mean that much of what people call MVPs wouldn’t be built at all. And yes, I just said that we commonly encourage our potential clients to send us much less work than they initially want to. And yes, it does mean that we get less money building these products.

    And no, I don’t think it affects the financial bottom line of the business. We end up being recommended for our Lean approach and for taking care of our clients’ best interests. It is a win-win.

  • Why We Fail to Change

    I’d love to get a beer each time I hear a story about management imposing a change on teams and facing strong resistance. It would be like an almost unlimited source of that decent beverage. Literally every time I’d fancy a beer I’d be like “Hey, does anybody have an agile implementation story to share?”

    One common excuse is that people don’t like change. That is surprising given how adaptable humankind has proven to be. I rather subscribe to the idea that people don’t mind change; they don’t like being changed.

    Unfortunately, the “being changed” part is the story of oh so many improvement initiatives. Agile implementations are among the most prominent examples of these change programs, of course.

    So how do people really respond to change?

    First, it really helps to understand typical patterns of introducing change. The model I find very relevant is Virginia Satir’s Change Model. Let me walk you through it.

    We start with the existing status quo, which translates to a certain performance level. Then we introduce something new, which we call a foreign element.

    [Figure: Virginia Satir's Change Model]

    Then we see the expected improvement and everyone lives happily ever after. Actually, not really. In fact, whenever I draw that part of the model and ask what happens next, people intuitively give pretty good answers.

    After introducing a change performance drops.

    [Figure: Virginia Satir's Change Model]

    It is kind of obvious. We need time to learn how to handle a new tool, practice, method, or what have you. Eventually, we get better and better at it and we start seeing the results of the promised improvements. Finally, we internalize the change and the cycle is finished.

    Because of its shape the curve is called a J-curve.

    It is an idealized picture though. In reality it is never such a nice curve.

    [Figure: Virginia Satir's Change Model]

    What we really see is something much bumpier. It is already bumpy when we maintain the status quo. It gets much bumpier when we start messing with things. It’s not only that the rough average goes down; the worst-case scenario goes down too, and by much more.

    It’s pretty much chaos. In fact, that’s exactly what this phase is called in Virginia Satir’s original model.

    [Figure: Virginia Satir's Change Model]

    An interesting observation we can make is that the phase called resistance is a short one that happens just after introducing a foreign element. Does that mean we should expect resistance to the change to be short-lived?

    Yes and no. Yes, if we consider only the “I’m not even going to try that new crap” type of resistance. It is typically driven by a lack of understanding of why the whole change was proposed in the first place. There is, however, a whole range of behaviors that happen later in the process that we would commonly call resistance too.

    Some people aren’t ready to see even a temporary drop in performance, and once they face it, they propose going back to the old status quo. When facing a stressful situation, many people retreat to what they know best, and the old ways of doing things are exactly what they know best. There are also those who are impatient and unwilling to give people enough time to learn the ropes. The last group often includes the managers who funded the change in the first place.

    In either case the result, eventually, is the same. More resistance.

    [Figure: Virginia Satir's Change Model]

    Inevitably we reach a pivotal moment. We’ve been through the bumpy ride for quite some time already and yet we haven’t gotten better. In fact, we’ve gotten worse. Not only that. We’ve gotten worse and less predictable. The whole change doesn’t seem like such a good idea after all.

    So what do we do?

    [Figure: Virginia Satir's Change Model]

    Of course we reverse the change and go back to the old status quo. Oh, and we fire or at least demote that bastard who tricked us into starting the whole thing.

    One interesting caveat of the whole process is that a change is not always simply reversible. When we’ve changed specific behaviors and yet didn’t get the expected outcomes, reverting those behaviors may be difficult, if not impossible.

    For the sake of the discussion let’s assume we are lucky and the change is reversible. We are back to the late status quo and we simply wasted some time trying something new. Oh, and we built a stronger case for resisting the next change. We petrified the existing situation just a little bit more.

    One reason why changes are reverted so often is the perceived risk of the change.

    [Figure: Virginia Satir's Change Model]

    A pretty good proxy for perceived risk is predictability. Typically, the more unpredictable a team or a process is, the riskier it is considered. In this case, the important thing that comes along with a foreign element is how much predictability changes. Not only does performance drop, but it also becomes much less predictable.

    While the former alone might have been bearable, both factors combined contribute to the perception that the change was wrong in the first place.

    There is another dimension that is very interesting for the whole discussion. It is the scale of change. How much we want to change the existing environment: team, process, practices, etc.

    [Figure: Virginia Satir's Change Model]

    We can imagine a series of small changes, each modifying the context only slightly. The whole series leads to a similar outcome as one big change rolled out at once.

    We can call one approach evolutionary and the other revolutionary. We can take inspiration from Lean and call the evolutionary approach Kaizen and the revolutionary one Kaikaku.

    [Figure: Virginia Satir's Change Model]

    Fundamentally the J-curve in both approaches would be shaped the same. The difference is in the scale. The revolutionary change means one big leap and rolling out all the new stuff at once. This means a single big J-curve.

    The evolutionary approach introduces a lot of tiny J-curves, one after the other. In fact, it is possible to have a few changes running concurrently, but let’s not complicate the picture any further.

    What are the implications?

    [Figure: Virginia Satir's Change Model]

    Let’s go back to the scale of the risk we undertake. With Kaikaku, the unpredictability we introduce is much higher than what we’ve seen in the late status quo.

    Kaizen, on the other hand, typically goes with changes small enough that we don’t destabilize the system nearly as much. In fact, it is pretty likely that the unpredictability introduced by each small change will be almost invisible, given that we don’t deal with a fully predictable process anyway.

    The risks we take with the evolutionary approach are much more bearable than the ones we deal with when rolling out one big change.

    That’s not all though.

    [Figure: Virginia Satir's Change Model]

    Another thing is how long the destabilization lasts. In other words, what is the cycle time of the change?

    A big change naturally has a much longer cycle time, as it requires people to internalize much more new stuff. That means the exposure to the risks is longer. Given that the risks are also bigger, this raises the odds that the change will be reverted before we see its results.

    With small changes, the cycle time is shorter, and so is the exposure to the risks. Again, not only are the risks much smaller, but they are also mitigated much faster.

    One last thing worth mentioning here is that so far we have optimistically assumed that all the proposed changes have positive outcomes. That is not always true.

    With the evolutionary approach, even if some of the changes don’t yield the expected results, we still gain from introducing the others. With the revolutionary approach, each part that doesn’t work simply increases the likelihood of reverting the whole thing altogether.

    It is not to say that Kaizen is always superior to Kaikaku. In fact both evolutionary and revolutionary approaches have their place. Stuart Kauffman’s Fitness Landscape helps to explain that.

    [Figure: Stuart Kauffman Fitness Landscape]

    Imagine a landscape that roughly shows how fit for purpose your organization is. It simply translates to factors such as productivity, etc. The higher you are, the better.

    The simplest and safest way to climb up would be to make small steps uphill.

    [Figure: Stuart Kauffman Fitness Landscape]

    While the approach works very well, eventually we reach a local peak. If we continued our small steps in any direction, it would result in lower fitness for purpose. Simply put, we wouldn’t perform as well as we did at the peak.

    If we look only at the closest terrain we might as well say that we’re already perfect and there’s no need to go further.

    [Figure: Stuart Kauffman Fitness Landscape]

    Obviously, someone saying that wouldn’t be treated seriously. Well, not unless we are discussing a patient of a mental facility.

    The solution becomes visible when we look at the big picture. If we moved to the slope of another, higher hill, we could get better than we are.

    [Figure: Stuart Kauffman Fitness Landscape]

    That’s exactly when we need a big jump. It doesn’t automatically land us in a better situation than the one we started from; the opposite will often be the case. What is important, though, is that we land on a higher hill. That translates to a bigger potential for improvement.

    [Figure: Stuart Kauffman Fitness Landscape]

    Once there, we can return to the good old strategy of small steps that allow us to climb up. Eventually, we reach a peak that is higher than the previous one. Then we can repeat the whole cycle, looking for an even bigger hill.

    Of course, similarly to the J-curves, the picture here is idealized in that each change, be it small or big, is a successful one. In reality, it is more like experimentation. Some of the changes will work, some won’t.

    [Figure: Stuart Kauffman Fitness Landscape]

    As you might have guessed, small steps here represent the evolutionary approach, or Kaizen. A big jump is the equivalent of revolutionary change, or Kaikaku. Depending on the context, one or the other will be more useful.
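
    For readers who like the algorithmic flavor of the analogy, here is a minimal sketch of the two strategies on a made-up one-dimensional landscape (the fitness function, step sizes, and jump policy are all illustrative assumptions, not anything taken from Kauffman’s model):

    ```python
    import math
    import random

    def fitness(x: float) -> float:
        # Made-up landscape with several hills of different heights.
        return math.sin(x) + 0.6 * math.sin(3 * x) + 0.1 * x

    def kaizen(x: float, steps: int = 200, step_size: float = 0.05) -> float:
        """Evolutionary approach: keep taking small steps that improve fitness."""
        for _ in range(steps):
            candidate = x + random.choice([-step_size, step_size])
            if fitness(candidate) > fitness(x):
                x = candidate  # a small, safe improvement
            # otherwise stay put; we never lose much, but we can get stuck on a local peak
        return x

    def kaikaku(x: float, jump_size: float = 3.0) -> float:
        """Revolutionary approach: one big leap to a different part of the landscape."""
        return x + random.uniform(-jump_size, jump_size)  # may well land lower than we started

    # Climb to a local peak, make one big jump, then climb again.
    x = kaizen(0.0)
    x = kaikaku(x)
    x = kaizen(x)
    print(f"final position {x:.2f}, fitness {fitness(x):.2f}")
    ```

    The small steps alone stall at the nearest local peak; the occasional big jump risks landing lower, but it is the only way to reach a higher hill to climb.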

    In fact, there are situations where one of the strategies will be basically useless. That’s why introducing change without understanding the current context is simply begging for failure.

    One more implication of the picture is that, given the lack of any other guidance, the evolutionary approach is both less risky and more likely to succeed. That’s why I prefer to start with it when I’m unsure about the context I’m operating within.

    One last remark on the Fitness Landscape. What you’ve seen here is a heavily oversimplified view. In reality, a fitness landscape wouldn’t be two-dimensional. Stuart Kauffman discussed it as a three-dimensional model, although I tend to think of it as a multi-dimensional one.

    It means that each change can improve our situation in some dimensions and have the opposite result in others. We will get a different combination of effects in different dimensions – some more desirable and some less.

    If that weren’t enough, the whole landscape is dynamic and continuously changing over time. In other words, even after reaching a local optimum, we will need further continuous improvement to maintain our fitness for purpose. The peak itself will be moving over time.

    I know the post has gotten long by now (thanks for bearing with me this far, by the way). This is, however, the starting point for the discussion of why introducing change so often triggers resistance. It provides a pretty good explanation of why so many improvement initiatives fail. It is also one of my answers to the question of why many agile or lean adoptions are doomed to failure from day one.

    Trying to significantly change an organization without understanding some underlying mechanisms is simply begging for frustration and failure.

    Finally, understanding the change models will influence the choice of the methods and tools we’d use to drive our change programs.

  • Happiness Index as Team Sustainability Metric

    One of the discussions that happened during Don Reinertsen’s product development workshop was the one about absorbing turbulence. Absorbing turbulence is a metaphor for how well a team deals with unplanned variability in the process. These would be situations like the arrival of a bigger batch of features to build, especially complex work items, etc.

    Normally, every team is able to absorb some turbulence. Beyond a certain point, that capability is no longer sufficient and the situation quickly escalates. A good example: a team may gracefully deal with some additional features, but above a certain threshold we’ll quickly see the situation escalate into a high-queue state. This results in long lead times, long feedback loops, and generally decreased responsiveness of the team.

    These all result in lower efficiency too.

    If such a situation persists over a longer period of time, it can take a heavy toll on people, as we run the risk of burning them out.

    I really like the analogy that Don uses in his work: when a human body is losing blood, interestingly enough, blood pressure doesn’t go down. What our body is designed for is to pump enough blood to the brain so it can operate normally.

    To compensate for losing blood the heart rate keeps going up. It goes up to the point where this mechanism is no longer sufficient and the organism crashes over a very short period of time.

    A similar thing happens when we overload our teams. For some time, they deal with the overload just fine, introducing their own coping mechanisms. They’ll likely work overtime, introduce additional multitasking, etc. The problem is that, similarly to the situation when a human body loses blood, there’s a price to be paid for that.

    Eventually, people end up burned out and leave the organization. From that perspective, a good question would be: what kind of early signals do we get so that we can address the problem early?

    One answer would be careful monitoring of the work: how much work in progress we have, how our backlog queues change, that kind of stuff. While that would obviously provide some signal, it wouldn’t really be the right proxy measure if we want to consider people first, not the process first.

    What strikes me is that we at Lunar Logic already use a nice signaling mechanism that gives us an early warning. The mechanism is a happiness chart. Despite being a super simple, super low-tech tool, a happiness chart works on a few levels.

    On one level, we have individuals. When I see a deviation from the norm in one person’s chart, it likely means that something’s going wrong. It may be related to work; it may be related to their personal life. Whatever the case, it does affect the work.

    If the case is individual, it is unlikely to be related to the whole project.

    Another context is that of the whole company. If things were going generally wrong, I’d expect to see a strong signal across the whole board. “Generally wrong” may mean mediocre business results, a cultural problem, a strong source of frustration, or a bunch of different things.

    In such a situation, the signal would be broader than just a single project or product team.

    There’s another scenario: a bunch of people recording a worse-than-typical mood, with the correlation centered on a common project. The process-related parameters of that project may look OK: the backlog queue isn’t growing, throughput is going up, etc. Is anything wrong then?

    In this case, the happiness chart serves the role of the heart rate: an indicator pointing out that something is not exactly OK and that, if we keep going like this, we may end up crashing.

    The rule for our happiness chart is that the color translates to the mood: green means happy, blue means so-so, and red means not happy. Once the chart for a project team gets more bluish or even reddish, that’s a very strong indicator that the team is not absorbing turbulence gracefully.
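
    As a rough illustration of how that rule could be turned into an automated per-team signal (the numeric encoding, window size, and threshold below are arbitrary assumptions, not how our physical chart works):

    ```python
    from collections import deque

    # Hypothetical numeric encoding of the chart colors.
    MOOD_SCORE = {"green": 2, "blue": 1, "red": 0}

    def team_mood_alert(daily_colors, window=10, threshold=1.5):
        """Return True when the rolling average mood of a team drops below the threshold.

        daily_colors: an iterable of lists, one list of color strings per day (one entry per person).
        """
        recent = deque(maxlen=window)
        for day in daily_colors:
            day_avg = sum(MOOD_SCORE[c] for c in day) / len(day)
            recent.append(day_avg)
            if len(recent) == window and sum(recent) / window < threshold:
                return True  # the chart got "more bluish or even reddish" for a while
        return False

    # Example: a team of four sliding from mostly green to mostly blue/red.
    history = [["green", "green", "blue", "green"]] * 5 + [["blue", "blue", "red", "blue"]] * 10
    print(team_mood_alert(history))  # True
    ```

    On the physical chart, the same judgment is made by eye; the point is only that the signal fires regardless of what the underlying cause turns out to be.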

    The interesting part is that a happiness chart provides such information no matter whether the root cause is overloading the team with tasks, interpersonal conflicts that affect collaboration, or anything else. The alarm goes off in any case. That’s actually perfect, as we want to react to dangerous turbulence early in any situation.

    From that perspective, the happiness chart isn’t a leading indicator per se – when the change shows up, it means something is already happening. It isn’t a lagging one either. We can correlate it with other parameters, or simply check with people, to figure out what is going on. We would know that we’re not coping with turbulence well enough much earlier than the point where we’re on the verge of crashing.

    The reason I think this strategy is so valuable is that it allows us to merge data from two different universes. On the one hand, we have all the process metrics that tell us how we are doing from an efficiency perspective. On the other, we get a social metric that tells us a lot about whether the current state is sustainable.

    It is exactly like the combination of blood pressure and heart rate. The former translates to the efficient work of our steering center – the brain. The latter tells us a bit about how sustainable the current conditions are.

  • Two Rules of Autonomy

    One thing we are doing at Lunar Logic is evolving toward a no-management model of leadership. This means a lot of small changes that all happen with the same attitude at heart: to distribute more and more decision-making power across the whole company. This, by the way, also means systematically stripping the management of that power.

    The latter is easy in our case as the management is limited to me and I kind of launched the whole process. I would have to be either a hypocrite or a schizophrenic to resist the changes. Luckily enough I believe I’m neither. (Unless that other me has something else to say, that is.)

    I don’t say it’s easy. One challenge in each step toward participatory leadership is that we, humans, don’t like to give up on power. I’m no different. I like that warm feeling that I can make a call and it shall be as I say. It’s not only that. Sometimes I simply know which option is good and the temptation to intervene and tell people what’s the best choice is strong. It would mean, however, taking a step back on a path toward democratizing leadership. So I keep my mouth shut.

    On other occasions I just feel like we are going too far from my comfort zone and I slow down the process.

    Giving up power is a prerequisite for going further. While I don’t claim it will be as easy in every case, it isn’t enough on its own to get the whole thing working. In fact, despite being vocal about how much I don’t want to make all sorts of decisions and how much I appreciate autonomy, I still get loads of questions that start with “Can I…”

    If I’m lucky enough to suppress my System 1 reaction, which would be one of the “yes,” “yes, but,” “no,” or “no, but” answers, I reply with “Can you?” The ball is back in your court, and as long as you take responsibility for the call you make, I’m OK with that.

    The interesting thing, though, is why these questions keep popping up over and over again. Despite the fact that on a conscious level we promote autonomy, our natural behavior is to retreat to the old pattern of asking for permission.

    We simply don’t claim autonomy even if it slaps us in the face.

    Besides the years of programming our brains through education and work systems that make it hard to act differently, there’s another reason for that. Most of us want to be good citizens, and we don’t want to use our autonomy to do stuff others wouldn’t like or would even be against. So we fall back on the ultimate decision-making authority, who is supposed to know what everyone in the organization likes or approves of, or, more likely, who doesn’t give a damn what anyone thinks – a manager.

    The interesting thing is that the fear is sometimes well-grounded. We have different sensitivities to different things. Behaviors that we consider positive or neutral may have negative connotations for others. If I’m a manager and I use my ultimate decision-making power and I don’t give a damn then, well, I don’t give a damn. But what if I’m just a team member who cares?

    The idea we came up with is a set of two very simple rules.

    1. The Nike Way
    If you want to do something just do it.

    2. Speak Up
    If you don’t like what someone else is doing speak up.

    Yes, that’s it. There’s one underlying principle, which is mutual respect. We don’t need to love each other. We need to respect each other’s autonomy and right to have different views on stuff.

    The nice thing about this setup is that it is a self-balancing mechanism. It takes only one person to try something new. It doesn’t require permission or even extensive up-front discussion. Pretty much the opposite: by default, we assume that every initiative will be awesome and everyone will love it, or at least have nothing against it.

    The Nike Way verbalizes the attitude captured in Grace Hopper’s famous words: it’s easier to ask forgiveness than it is to get permission.

    What we do know is that, despite best intentions, this won’t hold true all the time. Occasionally, OK, more often than occasionally, someone will do something that somebody else is not OK with. Then we have the Speak Up rule, which triggers a conscious and meaningful discussion (sometimes dubbed a shit storm) that provides additional insight for both sides and, most likely, some sort of consensus.

    The Speak Up rule was designed with a positive scenario in mind, i.e. someone unintentionally stepping on someone else’s toes. It works in malicious cases as well, though. When someone intentionally crosses the line or pulls the organization in an unwanted direction, someone else will speak up too.

    The best part is that the same way it takes only one person to just do it, you need only one person to speak up.

    One might point out that there’s a risk this would end in indecisiveness. So far, I don’t see that happening. First, speaking up doesn’t mean ultimate veto power; it simply triggers a discussion. Second, the respect bit, which is a hard prerequisite, keeps the discussion civilized.

    There’s a little more sophistication to balance that. Naturally, extroverts would have the upper hand in an unstructured discussion. That’s where empathy plays its role, as it helps to surface the weaker signals. On the basic level, though, there are just these two simple rules: the Nike Way and the Speak Up rule.

  • Practices, Principles, Values

    I was never a fan of recipes. Even less so when I heard that I had to apply them by the book. What I found over the years was that books rarely, if ever, describe a context that is close enough to mine. This means that specific solutions won’t be applicable in the same way as described in the original source.

    This is why I typically look for more abstracted knowledge and treat more context-dependent advice as inspiration rather than as real advice.

    From what I see, that’s not a common attitude. I am surprised how frequently at conferences I hear the argument that the sessions weren’t practical enough only because no recipe was included. This is only a symptom, though.

The root cause is a more general way of thinking and approaching problems, something we see over and over again when looking at all sorts of transformations and change programs.

    People copy the most visible, obvious, and frequently least important practices.

    Jeffrey Pfeffer & Robert Sutton

Our bias toward practices is not there without a reason. After all, we've heard the success stories: what Toyota did to take the lead in the automotive industry, the early successes of companies adopting Agile methods. There were plenty of recipes in those stories. After all, that's what we see first when we look at those organizations.

    Iceberg

The tricky part is that practices, techniques, tools, and methods are just the tip of the iceberg. On one hand, this is exactly what we see when we look at the sea. On the other, there's the roughly ten-times-bigger part that sits below the waterline. The underwater part is there, and it is exactly what keeps the tip above the water.

In other words, if we took just the visible tip of the iceberg and put it back in the water, the result wouldn't be nearly as impressive.


This metaphor is very relevant to how organizational changes happen. The thing we keep hearing about in experience reports and success stories is just a small part of the whole context. Unless we understand what's hidden below the waterline, copying the visible part doesn't make any sense.

    Principles

Behind any practice there is a principle. If we are talking about visualization, we are implicitly talking about providing transparency and improving understanding of work too. Providing transparency is not a practice; it is a principle that can be embodied by a whole lot of practices.

The interesting part is that there are principles behind practices, but there are also principles embraced by an organization. If these two sets aren't aligned, applying a specific practice won't work.

Let me illustrate that with a story. There was a team of software architects in a company where Kanban was being rolled out across multiple teams. In that specific team, there was huge resistance even at the earliest stage, which is simply visualizing work.

What was happening under the hood was that the transparency provided by visualization was a threat to the people on the team. They were simply accomplishing very little; most of their time was spent on meetings, discussions, and so on. Transparency threatened their sense of safety, hence the resistance.

Without understanding the deeper context, though, one would wonder what the heck was happening and why the method wouldn't yield results similar to those in another environment.

    Values

The part that goes even deeper is values. When talking about values, what typically comes to mind is all sorts of vision and mission statements. This is where we find the values a company cares about, or, to be more precise, what an organization claims to care about.

The problem with these is that, very commonly, there is a huge authenticity gap between the pretense and the everyday behaviors of leaders and people in an organization.

One value that gets mentioned pretty much universally is quality. Every single organization cares about high quality, right? Well, so they say, at least.

A good question is what values are expressed by everyday behaviors. If a developer hears that there's no time to write unit tests and they're supposed to build more features instead, or if no one really cares whether a build is green or red, what does that tell you about the real values of the company?

In fact, the pretense almost doesn't matter at all. Its only role is to build up frustration in the people who see the inauthenticity of the message. The values that matter are those illustrated by behaviors. In many cases, we would realize that they amount to optimizing for utilization, disrespecting people, a lack of transparency, and so on.

Again, this is important because we can find values behind practices. If we take Kanban as an example, we can use Mike Burrows' take on Kanban values. Now, an interesting question is how these values align with the values embraced by an organization.

If they don't align, the impact of introducing the method will be very limited, non-existent, or even negative. This is true for any method or practice out there.

    Mindfulness

The bottom line is that we need to be mindful when applying practices, tools, and methods. That goes really far: it means not only a deep initial understanding of the tools we use but also an understanding of our own organization.

This goes against the "fake it till you make it" attitude that I frequently see. In fact, in a specific context, "making it" may not even be possible, and without understanding the lower part of the iceberg, we won't be able to figure out what's going wrong and why our efforts are futile.

Paying attention to principles and values also enables learning. Without that, we simply copy the same tools we already know, no matter how applicable they are in a specific context. This, by the way, is what many agile coaches do.

Mindful use of practice leads to learning; mindless use of practice leads to a cargo cult.

  • Scaling Up Is Not the Only Option

There is one thing that seems to be present in pretty much every company strategy these days: given the opportunity, they want to grow. Obviously, not every organization succeeds at that, but scaling up is treated as the default option and a universally desired goal.

In fact, it is sometimes assumed to be so obvious that it doesn't even get discussed. One example was a recent discussion on preserving culture (see the comments too). The post that initiated the whole thing seems to assume that growth is a fixed part of the equation.

An interesting thing happens when I mention the scaling-up strategy we have at Lunar Logic, which is: don't. All those blank stares and questions like "why would you pass up an opportunity to grow?" Interestingly enough, choosing not to grow means that we have to pass on some potential projects, which makes the whole discussion even more interesting.

An issue we forget is that scaling up doesn't come for free, and the biggest cost of growth is how it affects organizational culture. Assuming the current culture is something one values, preserving its important parts during growth is extremely difficult. If we are talking about rapid growth, it's basically impossible.

If you happen to be in a place where culture is a cornerstone of your success, which is exactly the case for us at Lunar, you may want to rethink whether scaling up is an obvious, or even desired, choice.

Nevertheless, a story I keep hearing over and over again about different organizations is "we've been doing so great that we decided to grow by 100% in a year." The trick is that when they succeed, they will likely be a very different organization. Different values will be shared across the group. Different behaviors will be treated as the norm. A different way of working will be considered the standard. It will be a different culture altogether.

It is likely that the very things that made them successful in the first place will be gone by then, lost in the rush to get bigger.

Obviously, there are only so many things one can do with a couple dozen people, but the decision about growth is typically made without considering all the side effects.

And by the way, while I keep focusing on culture, as it is the most vulnerable part, there's much more than that. One of the catchy themes these days is scaling Agile. Obviously, part of the story is rolling out Agile in the context of big organizations. Another part, though, is maintaining all the value we get from an agile approach once we grow. While this is doable, it adds to the complexity of all the processes and the whole system.

It's not without reason that the smallest organizations tend to be the most efficient, and that as they grow, they spend more and more effort trying to comply with their own internal, artificial processes.

So why not dodge the bullet and keep the organization small? I'm not saying it has to be the default option, but it's definitely an option to consider.

It changes the whole equation, as suddenly you are incentivized to say no. No, we won't take this project, as we are fully booked. No, we won't jump on that candidate, even though she seems pretty good. Suddenly, we have higher standards for the work we do and the people we hire.

    The focus is not on “more and more” but on “better and better.” Wouldn’t that be a nice option to consider?