Author: Pawel Brodzinski

  • The Most Underestimated Factor in Estimation

    The Most Underestimated Factor in Estimation

    We were preparing yet another estimate. It was a greenfield product, nothing too fancy. We used our default approach, grouped work into epic stories, and used historical data to produce a coarse-grained time estimate per epic.

We ended up with a 12-20 week bracket. Unsurprisingly, it landed close to our initial shot from the hip.
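The mechanics of such a coarse-grained estimate are simple enough to fit in a few lines. Here's a minimal sketch; the epic count, the historical durations, and the percentile choice are all hypothetical illustrations, not Lunar Logic's actual data or method.

```python
# Hypothetical sketch of a coarse-grained, history-based estimate.
# The numbers below are made up for illustration.

def estimate_bracket(epic_count, historical_epic_weeks):
    """Project an optimistic/pessimistic range for a project of
    `epic_count` epics from historical per-epic durations (in weeks)."""
    weeks = sorted(historical_epic_weeks)
    # Use roughly the 25th and 75th percentiles of past epic durations
    # as the optimistic and pessimistic per-epic pace.
    low_pace = weeks[len(weeks) // 4]
    high_pace = weeks[(3 * len(weeks)) // 4]
    return epic_count * low_pace, epic_count * high_pace

# e.g. 6 epics, with past epics having taken 2-4 weeks each
low, high = estimate_bracket(6, [2, 2, 2.5, 3, 3, 3.5, 4, 4])
print(f"{low}-{high} weeks")
```

The point of the exercise is the bracket, not precision: half an hour of grouping and a lookup into past data is all it takes.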

    The whole process took maybe half an hour. Maybe less.

    Then we fell into an AI rabbit hole. Should our estimate be lower since we will generate a good part of the code?

    AI in Early-Stage Product Development

    We could discuss the actual impact of AI tools in established and complex code bases. Even more interestingly, we could discuss our perceptions.

    Yet, for a greenfield project and not-very-complex functionality, generating swaths of code should be easy enough.

    After all, it seems that’s what cutting-edge startups do these days (emphasis mine):

    The ability for AI to subsidize an otherwise heavy workload has allowed these companies to build with fewer people. For about a quarter of the current YC startups, 95% of their code was written by AI, Tan said.

Garry Tan is the CEO of Y Combinator, so most definitely a highly influential figure in the startup world. And probably quite knowledgeable about what YC startups do, let me add.

    If that’s what the best do, we should follow suit, right? That’s why we got back to our initial estimate and tried to assess how much we can shave off of it, thanks to the technology.

    It’s Not About Coding Speed

    A lot of the early-stage work we do at Lunar Logic has already shifted to the new paradigm. The code is generated. Developers’ jobs have evolved. It’s code-review-heavy and typing-light. That is, unless you count prompting.

    Yet, it’s possible to generate entire features, heck, entire apps with AI tools. So we should be faster, right? Right?

    One good discussion later, we decided to stick with the original estimate nonetheless. The gist of it? It was never about coding pace.

    writing code fast was not the bottleneck

Yes, you can generate a lot of code with a single prompt, and with enough preparation, you can make its quality decent. But AI is not doing the discovery part for you. It does not validate whether what you’re building works.

    It won’t take care of the whole back-and-forth with the client whose vision is most definitely somewhat different from what they’re going to get. And even if they were able to scope their dream precisely, the First Rule of Product Development applies.

    our clients always know what they want. until they get it. then they know they wanted something different

    It’s a completely different experience to imagine a product and to actually interact with it. No wonder people change their minds once they roll up their sleeves and start using the thing.

    The Core Cost of Product Development

    After building (partially or entirely) some 200 software products at Lunar, we have enough reference points to see patterns. Here’s one.

    What’s the number one reason for the increased effort needed to complete the work? Communication.

    Communication and its quality.

    • Insufficient clarity before starting a task triggers rework down the line.
    • Waiting for feedback increases context switching and thus makes the team inefficient.
    • Inadequate knowledge of the business context results in building the wrong thing.
    • Lack of focus in communication is a direct waste of everyone’s time.

    Should I go on? Because I totally could.

In practice, I’ve seen efforts where poor communication added as much as 100% to the workload. It came down to all the rework and inefficiencies triggered by a lack of clarity.

When such a thing happens, we might have been wrong about the actual number of features or the size of some of them, and it wouldn’t have mattered. At all. Any such mistake would be dwarfed many times over by the bad-communication overhead. And then some.
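To make the arithmetic concrete, here's a toy calculation with entirely made-up numbers (a 16-week base, a 20% sizing error, a 100% communication tax; none of these come from a real project):

```python
# Toy numbers only: illustrating how a communication tax dwarfs sizing errors.
base_estimate_weeks = 16      # what the feature list alone suggests
sizing_error = 0.2            # we misjudged feature sizes by 20%
communication_tax = 1.0       # poor communication doubles the work

weeks_if_sizing_wrong = base_estimate_weeks * (1 + sizing_error)
weeks_if_communication_bad = base_estimate_weeks * (1 + communication_tax)

print(weeks_if_sizing_wrong)        # the sizing mistake adds ~3 weeks
print(weeks_if_communication_bad)   # the communication tax adds 16
```

Even a generous sizing error moves the estimate by a few weeks; the communication tax moves it by months.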

    AI Does Nothing to the Quality of Communication

    Before we move further, a disclaimer: I understand that there are many AI tools designed around human-to-human communication.

[Image: AI summary of a conversation between developers]
A “helpful” Slack AI conversation summary

While AI tools still have catching up to do with regular technical conversations between developers, things like meeting summaries can be useful. Although I’d love to see usage data: how many of these summaries are read? Like, ever.

    The communication I write about is a different beast, though. It’s not notetaking. It’s attentive listening, creative friction, and collective intelligence. It’s experience cross-pollination.

    With that, AI is of little to no use. And yet, this is the critical aspect of any effective software project.

    What’s more, there’s little you can know about the quality of communication before the collaboration starts. Sure, you get early signs. But you know what it really is once you start working together.

    Start Small

    One of the reasons why I’m a huge fan of starting collaboration with something small—like a couple of weeks kind of small—is that we learn what communication will look like.

    It’s a small risk for our clients, too. After all, how much can you spend on a couple of people working for two weeks?

Once we’re past that initial rite of passage, we know how to treat any later estimates. Should we assume there’s going to be a significant communication tax? Or could we rather shave some time here and there because we will all be rowing in the same direction?

One of our most recent clients is a case in point. Throughout the early commitment, he actively managed stakeholders on his end to avoid adding new ideas to the initial scope. He helped us keep things simple and defer improvements until we got more feedback from actual use.

    The result? Our estimate turned out to be wrong. We wrapped up the originally planned work when we were around 75% of the budget mark.

    Communication quality (or lack thereof), as much as it can add a lot of work, can remove some, too. That’s why it’s the most underestimated factor in estimation (pun intended).


    A post on estimation is always a chance to share our evergreen: no bullshit estimation cards. After a dozen years, I still hear how they get appreciated by teams.


    If you like what you read and you’d like to keep track of new stuff, you can subscribe on the main page.
I’m also active on Bluesky and LinkedIn, with shorter updates.
    I also run the Pre-Pre-Seed Substack, where I focus on early-stage product development (and, inevitably, AI).

  • The Renaissance of Full-Stack Developers

    The Renaissance of Full-Stack Developers

    I’m old enough to remember the times when we didn’t use a label for full-stack developers because, well, all developers were full-stack.

    In the 1990s, we still saw examples of products developed single-handedly (both in professional domains and entertainment), and some major successes required as little as an equivalent of a single Scrum team.

    What followed was that software engineering had to be quite a holistic discipline. You wanted to store the data? Learning databases had to be your thing. You wanted to exploit the advantages of the internet boom? Web servers, hosting, and deployment were on your to-do list.

    It was an essentially “whatever it takes” attitude. Whatever bit of technology a product needed to run, developers were picking it up.

    Specialization in Software Engineering

The next few decades were all about increasing specialization. The increasingly dominant position of web applications fueled the rise of JavaScript, which, in turn, created front-end as a separate role.

    Suddenly, we had front-end and back-end developers. And, of course, full-stack developers as a reference point to differentiate from. The latter has quickly become a topic of memes.

[Image: Full Stack Horse]

    Oh, and mobile developers. Them too, of course.

The user-facing part has undergone further specialization. We carved out more and more stuff for design and UX roles.

Back-end? It was no different. Databases have become a separate thing. Then, with big data, we got all the data science. The infrastructural part has evolved into DevOps.

And then it went further. A front-end developer turned into a JavaScript developer, and that one into a React developer.

The winning game in the job market was to become deeply specialized in something relatively narrow, then pass a ridiculous set of technical tests and land an extravagantly paid position at a big tech company.

    The transition wouldn’t have happened without two critical factors.

    Growth of Product Teams

    First, the software projects grew in size. So did the product teams. As a result, there was more space for specialized (sometimes highly specialized) roles in just about any software development team.

Sure, there have always been highly specialized roles—engineers pushing the envelope in all sorts of domains. But the overwhelming majority of software engineering is not rocket science. It’s Just Another Web App™.

    However, because Just Another Web App™ became increasingly larger, it was easier to specialize. And so we did.

    Technology Evolution

    The second factor that played a major role was the technology.

    Back in the 90s, when you picked up C as a programming language, you had to understand how to manage memory. You literally allocated blocks of RAM. In the code. Like an animal. And then, with the next generation of technology, you didn’t need to.

    The same thing happened with the databases. The first time I heard an aspiring developer claim that they neither needed nor wanted to learn anything about SQL because “RoR takes care of that for me,” I was taken aback.

    But it made sense. The developer started their journey late enough, so they could have chosen a technology that hid the database layer from them entirely (and, unless supervised, made an absolute disaster out of the data structures, but that’s another discussion entirely).

    And don’t even get me started about front-end developers whose knowledge of back-end architecture ends at knowing how to call an API endpoint. Or back-end developers who proudly resolve CSS as Can’t Stand Styling.

    Ignore my grandpa’s complaints, though. The dynamic was there, and it only reinforced the trend for specialization.

    The Bootcamp Kids

    As if that all weren’t enough, the IT industry, still hungry for more specialists, turned into a mass-producing machine of wannabe developers.

    With such a narrow specialization, we figured it might be enough to get someone through several weeks of a coding bootcamp, and voila! We got ourselves a new developer, high five, everyone!

    Yes, a developer who can do rather generic tasks in only one technology, which covers just a small bit of the whole product stack, but a developer nonetheless.

    The narrow got even narrower, even if the depth didn’t get deeper at all.

    AI Disruption

    Enter AI, and we are told we don’t need all these inexperienced developers anymore because, well, AI will do all that work, what don’t you understand?

    Seemingly, we can vibe code a product, which is a lie, but one that AI vendors will perpetuate because it’s convenient for them.

    The fact is that these narrow & shallow jobs are gone. The AI models generate boilerplate code just fine, thank you very much. Sure, the higher the complexity, the worse the output. But that’s not where those shallow skill sets are of any use.

    Arguably, depth doesn’t help as much either.

    We need breadth.

    Since an AI model can generate a working app, it necessarily touches all its layers, from infrastructure, through data, back-end, front-end, to UX, design, and what have you.

    Breadth over Depth

    The big challenge, though, is that AI can hallucinate all sorts of “fun” stuff. If our goal is to ensure it does not, well, we need to understand a bit of everything. Enough of everything to be able to point (prompt) the AI model in the right directions.

    A highly specialized knowledge can help to make sure we’re good with one part of a product. However, if it comes in the package of complete ignorance in other areas, it’s a recipe for disaster.

The new tooling calls for the good old “whatever it takes” approach.

    If that weren’t enough, the capability to generate code, especially when we talk about large amounts of rather basic code, potentially enables a return to smaller teams.

The jury is still out. On the one hand, the Dario Amodeis of this world are quick to announce that we’ll soon see billion-dollar companies run by solopreneurs. On the other hand, the recent METR study suggested that experienced developers using AI tools were, in fact, slower. And that despite their perception of being faster.

    In the new reality, a developer becomes more of a navigator than a coder, and this role calls for a broader skill set.

    Filling the Gaps

    Increased technical flexibility is both a new requirement and an opportunity. At Lunar Logic, we work extensively with early-stage founders. That type of endeavor sways toward experimentation and, on many accounts, forgives more than working on established, scaled products.

On the other hand, cost-effectiveness is crucial. Pre-pre-seed startups aren’t known for drowning in money.

Examining how our work evolves thanks to AI tooling, I see similar patterns. For some products, the role of design and (arguably) UX is significantly smaller than for others. Consider, as a good example, a back-office tool designed to support an internal team in managing a complex information flow.

A now-viable option is to generate the whole UI with a tool such as v0, focus on usability (which is but one aspect of design/UX), and we’re good.

    Is the UI as good as designed by an experienced designer? Hell, no! Is it good enough within the context, though? You betcha! The best part? A developer could have done that. Given they know a thing or two about usability, that is. That knowledge? That’s breadth again.

I could go on with similar examples in other areas, like getting CSS that’s surprisingly decent (and way better than something done by a Can’t Stand Styling developer), or a database schema that’s leaps ahead of what some programming languages would generate for you out of the box (I’m looking at you, Ruby on Rails).

    The thing is, every developer can now easily be more independent.

    Full-Stack Strikes Back

The tides have turned. We have reversed the flow in both product team dynamics and the technical skills required to be effective. That, however, comes at the cost of a new demand. We need more flexibility.

    It’s not without a reason why experienced developers are still in high demand. They have been around the block. They can utilize the new AI tooling as an intellectual exoskeleton to address their shortcomings (precisely because they understand their own shortcomings). Thanks to extensive experience, such developers can guide AI models to do the heavy lifting (and fix stuff when AI breaks things in the process).

    That’s the archetype of a software engineer that we need for the future. Understandably, many developers are caught off guard as they were investing in a completely different path, sometimes for all the wrong reasons (like, it’s a meh job, but at least it pays great).

    These days, if you don’t have a passion to learn to be a full-stack developer, it will be harder and harder to keep up.

    A disclaimer: there have always been and will always be edge-case jobs that require high specialization and deep knowledge. Nothing changes on this account. It’s just that the mainstream (and thus, a bulk of “typical” jobs) is going to change.

    Reinventing the Learning Curve

    That, of course, creates a whole new challenge. How do we sustain the talent pool in the long run? After all, we keep hearing that “we don’t need inexperienced developers anymore.” And the argument above might be read as support for such a notion.

    It’s not my intention to paint such a picture.

    I’ve always been a fan of hiring interns and helping them grow, and it hasn’t changed.

[Image: hiring junior developers]

    You can bet that many companies will not view it in this way.

[Image: best time to plant a tree]

Decades back, we were capable of learning the ropes when we needed to allocate a block of memory manually each time we wanted to use it. I don’t see a reason why we shouldn’t learn good engineering now, with all the modern tools.

    Sure, the way we teach software development needs to change. I don’t expect it to dumb down. It will smart up.

    Then, we’ll see a renaissance of full-stack developers.

  • Flailing Around with Intent

    Flailing Around with Intent

    Knowing is not enough; we must apply. Willing is not enough; we must do.

    Johann Wolfgang von Goethe

    Does it sometimes happen to you that you try to explain something in a detailed way to someone, and that person responds with a one-liner that nails the idea? I suck at brevity, so it happens to me a lot.

    That’s one reason why I appreciate so much the opportunities to exchange ideas with smart people from lean and agile communities.

    The most recent one happened thanks to Chris Matts and his LinkedIn post on agile practices. Now, I’d probably pass on yet another nitpicky argument about what’s agile and what’s not, but if it comes from Chris, you can count on good insight and an unusual vantage point.

    Community of Needs

    One reason that I’m always interested in Chris’ perspective is that he operates in what he describes as the Community of Needs.

    Members of the Communities of Needs operate in the area of “Need” to create a meme and then work with the meme to identify fitness landscapes where it fails, and evolve them accordingly. These communities have problems that need to be solved. They take solutions developed for one context and attempt to implement them in their own context (exaption), and modify (evolve) them as appropriate.

    If you dissect that, people in the Community of Needs will:

    • Focus on a practical challenge at hand
    • Be method/framework-agnostic
    • Understand the specifics of their own context and its differences from the original context of a solution
    • Seek broad inspirations
    • Be crafty with makeshift solutions

    The word ‘practitioner’ comes to mind, although it might be overly limiting, as the Community of Needs describes more of an attitude than an exact role one has in a setup.

    One might propose ‘thinker’ as the opposite archetype. It would be someone who distills many observations to propose a new method, framework, solution, etc.

    Thought Leaders

    Let’s change the perspective for a moment. When we look at the most prominent figures in lean & agile (or any other, really) community, who do we see? As a vivid example, consider who authored the most popular methods.

    All of them ‘thinkers’ (in Chris’ frame, members of the Community of Solutions), not ‘practitioners.’

    Before someone argues that the most popular methods stem from practical experiences, let me ask this:

    When was the last time the founding fathers (they’re always fathers, by the way) of agile methods actually managed a team, project, or product? It’s been decades, hasn’t it?

    Yet, these are people whom we rely on to invent things. To tell us how our teams and organizations should work. We take recipes they concoct and argue about their purity when anyone questions their value.

    I mean, seriously, people are ready to argue that the Scrum guide doesn’t call daily meetings ‘standups,’ as if the name mattered to how dysfunctional so many of these meetings are.

It seems the price to pay for such thought leadership is the rigidity, prescriptiveness, and zealotry that follow. Thank you, I’ll pass. It feels better to stay on the sidelines. I will still take all the inspiration I want when I consider it appropriate, but that’s it. It’s not going to become a hammer that makes me perceive every case as a nail.

    Where Theory Meets Practice

    I admit that I have a very utilitarian approach to all sorts of methods and frameworks. If there’s a general guideline I follow, it’s something like that:

    Try things. Keep the ones that work. Drop those that don’t.

    Put differently, I just flail around with an intent to do more good than harm.

    Over the years, I’ve learned that a by-the-book approach is never an optimal solution. Sure, occasionally, we may consider it an acceptable trade-off. In my book, though, “an acceptable trade-off” doesn’t equal “an optimal choice.”

Almost universally, a better option would be something adjusted to the context. A theory, a set of principles, a method, a framework—each may serve as a great starting point. Yet my local idiosyncrasies matter. They matter a hell of a lot.

    A smart change agent will take these local specifics into account when choosing the starting point, not only when adjusting the methods.

    For one of the organizations I worked at, Scrum was not a good starting point. Why? Were their processes so unusual that they wouldn’t broadly fit into the most popular agile method? Or maybe a decision maker was someone from another method camp? Might they be subject to heavy compliance regulations that forced them into a more rigid way of working?

    Neither. It’s simply that they had tried Scrum in the past, and they got burned (primarily because they chose poor consultants). The burn was so bad that anything related to Scrum as a label was a no-go. Working on the same principles but under a different banner simply triggered way less resistance.

    Local idiosyncrasies all the way. Without understanding a local context, it’s impossible to tell which method might be most useful and how best to approach it.

    Portfolio Story

    When we operate within the Community of Needs, even when we don’t have a strong signal like the one above, we rarely have a single ready answer.

    Consider this example. As a manager responsible for project delivery across the entire project portfolio, I was asked to overcommit. And not just by a bit. While already operating close to our capacity, top leadership expected me to commit to the biggest project in the organization’s history under an already unrealistic deadline.

    By the way, show me a method that provides an explicit recipe for dealing with such a challenge.

    At its core, it wasn’t even a method problem. It was a people problem. It was about getting through the “but you have to make it work and I don’t care how; it’s your job we pay you for” and starting the conversation about the actual options we had. You might consider it almost a psychological challenge.

    My goal was not to educate the organization on portfolio management, but to fix a very tangible issue in (hopefully) a timely manner.

    If I had been a Certified Expert of an Agile Method™, I might have known the answer in an instant. Let’s do a beautiful Release Train here, as my handbook tells me so. I bet I’d have a neat Agile Trainwreck™ story to tell.

    In the Community of Needs, we acknowledge that we don’t have THE answer and assess options. In this case, I could try Chris Matts’ Capacity Planning, which emerged in an analogous context. I might consider one of Portfolio Kanban visualizations, hoping to refocus the conversation to utilization. Exploiting Johanna Rothman’s rolling wave commitments might help to unravel the actual priorities. Inspiration from Annie Duke’s bets metaphor could be tremendously helpful, too.

    Or do a bit of everything and more. Frankly, I couldn’t care less whether I would do that by the book, even if there were a book.

    Ultimately, I wasn’t trying to implement a method. I was trying to address a need.

    Flailing Around with Intent

    It all does sound iffy, doesn’t it?

“You can’t know the answer.”
“You should know all these different things and combine them on the fly.”
“Try things until something works.”

    Weren’t the methods invented for the sole purpose of telling us how to address such situations?

    They might have been. Kinda. The thing is, they respond only to a specific set of contexts. Or rather, they were designed only with particular contexts in mind, and they fit these circumstances well. Everything else? We’re better off treating them as an inspiration, not an instruction.

    We’re better off trying stuff, sticking with what works, getting rid of what doesn’t.

    As Chris put it:

    “Flailing around with intent is the best we can do most of the time when we are trail blazing beyond the edge of the map.”

    Chris Matts

    So, if you want a neat two-liner to sum up this essay, I won’t come up with anything remotely as good as this one.

    The Edges of the Map

    We could, of course, discuss the edges of the map. The popularity of a method may suggest its broad applicability. Take Scrum as an example. Since many teams are using Scrum, it must be useful for them, right?

    On a very shallow level, sure! Probably. Maybe. However, if something claims to be good at everything, it’s probably good at nothing.

    The Scrum Curse

    The more ground any given method wants to cover, the less suited it is for any particular set of circumstances.

    And if one wants to build a huge certification machine behind a method, it necessarily needs to aim to cover as much ground as possible.

    So, what is a charted map for Scrum? Should we consider any context where the method could potentially be applied? If so, the map is huge.

    However, if we choose the Community of Needs vantage point, and we seek the most suitable solution for a specific need we face, then the map shrinks rapidly. It will be a rare occurrence indeed when we choose Scrum as the optimal way given the circumstances.

    Then, we’re trailblazing beyond the edges of the map more often than we’d think. And flailing around with intent turns into a surprisingly effective tool.


    Thank you, Chris Matts and Yves Hanoulle, for the discussion that has influenced this article. I always appreciate your perspectives.

  • A Love Letter to Physical Whiteboards

    A Love Letter to Physical Whiteboards

    A few days back, Tonianne DeMaria wrote about how differently we process physical and digital visualizations.

“Have you ever noticed how your brain just feels different staring at a physical Kanban papered with Post-its versus when scrolling through task cards in a digital tool? Turns out that it’s not your imagination at play here, it’s neuroscience.”

    Physical Whiteboards Are a Luxury

    I’m a long-time fan of physical visual boards. Since my earliest experiments with Kanban, I have always used whiteboards as much as I could.

    Which is not much, sadly.

    We live in an increasingly digitalized and distributed world.

In the late 2000s, when Kanban was gaining popular awareness, there were no good tools simulating a visual board. The latest craze in project management circles was online tools doing Gantt charts. Now? Even JIRA has them.

    A decade ago, the world was just flirting with remote work. Post-COVID? It’s a norm. Scarcely any team can reliably assume that everyone will be in the same physical space.

Suddenly, digital boards are everywhere, while a physical whiteboard looks like an extravagance.

And if you collaborate with customers from another geography, which has been my reality for more than a dozen straight years, a whiteboard isn’t an option at all.

    Or was it?

    Edge Case Whiteboards

    We tend to limit the application of visual boards to only the most obvious contexts. Project work. That’s it. Nothing interesting here. Move on!

    If, however, we consider it as a tool for visualization of all sorts of workflows, then we’ll quickly notice that the work flows on many levels and in different contexts, far beyond the usual applications.

    One such example is our sales board.

[Image: the sales process on a visual board]

Over the years, we’ve tried different tools to manage our sales prospects. We’ve tried everything from Trello to Salesforce (BTW, Trello was actually pretty good).

    And yet, after another frustrating event when something “fell off the table” yet again, I suggested scratching the digital tools. We repurposed one of the whiteboards as our sales activities HQ.

    I lived happily ever after.

    Physical Board in Digital World

    We don’t have the comfort of having everyone at the office all the time. Over the time the board has been in place, we have had people involved living in different cities.

    Still, we arrange to meet at least weekly in the same room.

    “But Pawel, it means that you can reliably update the board only once a week!”

    More frequently, in fact, but that’s correct. We can’t rely on it being up-to-date every single day.

    The thing is, it doesn’t matter.

    The activities we track don’t have an hourly rhythm that many software development teams experience. There aren’t that many active items on the board, either.

    A side note: The picture shows the actual state, although I obfuscated the names on post-its with fancy technology (more post-its).

    Flexibility of Physical Boards

    I like this example as it shows several advantages of using a whiteboard populated with sticky notes.

    Defining the Workflow

    While the structure of the board is nothing fancy, there are a couple of things that we get for free on a whiteboard, while they would be a pain in a digital tool.

    • The middle section (mild/warm/hot) is a subflow. And not even a real flow, as items freely travel between all three stages. The overarching flow is: parking, the middle section, and the virtual ‘done’ column. Most work just doesn’t flow linearly from one JIRA column to another.
    • The ‘done’ column has two possible outcomes (success/failure), but we cut the column vertically. Anyone want to take a shot at which outcome is more desirable (even without reading the labels)?
    • We added vividly visible definitions of key columns. All the important information is there, in front of anyone’s eyeballs.
    • My favorite: with sticky notes, color-coding is painfully obvious. And yes, color coding is still screwed up in just about any digital tool out there.

    By the way, there’s a reason why we split the middle section vertically while the done column horizontally. The mild/warm/hot part follows the behavior of “reading from the right,” where whatever is closer to being done also gets priority attention.

    The rightmost column presents a simple differentiator of the outcome. There’s no immediate stuff to do with the items in there.

    We handle items in different sections of the board differently, and the design reflects that.

    Data on Index Cards

    Over time, we began adding various data to individual index cards.

[Image: sticky notes on the sales visual board]

    There are lead times (which we measure in months, by the way), sources of contact, etc. However, we could define all of this as custom fields on a digital board.

    The interesting part is that we add whatever random bits of information are crucial. But only crucial. No wall of text with the summary of the last call with a potential client.

    Why? Because there’s not enough space to slap everything there. Thus, the constraint serves as a filter.

    The Overview

That’s by far the most essential part. With just a rudimentary understanding of the columns and post-it colors, you could easily assess what’s happening in the whole sales process.

    Having an opportunity to glance at the board when we come back to our desks with coffee serves as a trigger to follow up on whatever we forgot about. We can nudge another person to ask about their task simply because we notice it accidentally.

It’s this helicopter view that’s almost nonexistent in digital tools. And when it does exist, it’s typically another dedicated view that one has to check explicitly.

    This kind of serendipitous information consumption happens almost exclusively with physical visualizations.

    A Love Letter to Physical Boards

    The tradeoff we make between digital and physical boards is not only about convenience. It’s also about how we engage with information.

    It’s obvious when you think about it. Tonianne observes:

“It’s worth noting that our spatial memory and systems thinking abilities evolved in the physical world.”

    We are genetically wired to use physical visualizations. It’s no wonder they serve us better in a broader range of contexts.

Yes, there are situations when we want or need to focus on only a short list of tasks assigned to us. It’s just that such situations are rarely the most effective choice for the team.

    So, treat it as my love letter to physical boards. Like any other person, I use digital tools a lot. I have to. No matter how hard I try, my whiteboard won’t be useful to a client in New York.

    Yet, there are many situations where the simplest old-school visualizations are feasible. And when they are, they are bound to beat the crap out of digital tools.


    The inspiration for this post came from our discussion with Tonianne DeMaria and Jim Benson (of Personal Kanban fame) on Substack, where they’ve recently started publishing. If any of the above considerations sounds interesting, I recommend subscribing to their newsletter.

  • Care Matters, or How To Distribute Autonomy and Not Break Things in the Process

    Care Matters, or How To Distribute Autonomy and Not Break Things in the Process

    At Lunar Logic, we have no formal managers, and anyone can make any decision. This introduction is typically enough to pique people’s curiosity (or, rather, trigger their disbelief).

    One of the most interesting aspects of such an organizational culture is the salary system.

Since we all can decide about salaries—ours and our colleagues’—it naturally follows that we know the whole payroll. Oh my, can that trigger a flame war.

    Transparent Salaries

    I wrote about our experiments with open salaries at Lunar in the past. At least one of those posts got hot on Hacker News—my “beloved” place for respectful discussions.

    As you may guess, not all remarks were supportive.

    Comments about transparent salaries from Hacker News

    My favorite, though?

    IT WILL FAIL. Salaries are not open for a reason. It is against human nature.

    No. Can’t do. Because it is “against human nature.” Sorry, Lunar, I guess. You’re doomed.

    On a more serious note, many comments mentioned that transparent salaries may/will piss people off.

    The thing they missed was that transparency and autonomy must always move together. You can’t just pin the payroll to a wall near a water cooler. It will, indeed, trigger only frustration.

    By the same token, you can’t let people decide about salaries if they don’t know who earns what. What kind of decisions would you end up with?

    So, whatever the system, it has to enable salary transparency and give people influence over who earns what.

    Cautionary Tale

    Several years back, I had an opportunity to consult for a company that was doing open salaries. Their problem? Selfishness.

In their system, everyone could periodically decide on their raise (within limits). However, after each round of raises, the company went into the red. All the profits they were making—and more—went to increased salaries.

The following months were spent recovering from the situation and regaining profitability, only to repeat the cycle the next time around.

Their education efforts had only a marginal effect. Some were convinced, but seeing colleagues aim for the maximum possible raise, even they yielded to the trend.

The cycle perpetuated itself.

So what went wrong? After all, they followed the rulebook. They merged autonomy with transparency. And not only with salaries. The company’s profit and loss statements were transparent, too.

    It’s just people didn’t care.

    Care

Over the years, when I spoke about distributed autonomy, I struggled to nail down one aspect of it. When we get people involved in decision-making, we want them to feel responsible for the outcomes of their decisions.

The problem is that people interpret the word differently. I once was on the sidelines of a discussion about responsibility versus accountability. People were arguing about which one was intrinsic and which was extrinsic.

    As the only non-native English speaker in the room, I checked the dictionary definitions. Funny thing, both sides were wrong.

    Still, I’d rather go with how people understand the term (living language) rather than with dictionary definitions.

    So, what I mean when I refer to being responsible for the outcomes of one’s decisions is this intrinsic feeling.

    I can’t make someone feel responsible/accountable for the outcomes of their call. At most, I can express my expectations and trigger appropriate consequences.

    To dodge the semantic discussion altogether, I picked the word agency instead.

    The only problem is that it translates awfully to my native Polish. Frustrated, I started a chat with my friend, and he was like, “Isn’t the thing you describe just care?”

    He nailed it.

Care strongly suggests intrinsic motivation, and “caring for a decision’s outcomes” is a perfect frame.

    How Do You Get People to Care?

    The story of the company with self-set salaries—and many comments in the Hacker News thread—shows a lack of care for their organizations.

    “As far as I get my fat raise, I don’t care if the company goes under.”

    So, how do you change such perspectives?

    Care, not unlike trust, is a two-way relationship. If one side doesn’t care for the other, it shouldn’t expect anything else in return. And similarly to trust, one builds care in small steps.

    Imagine what would happen if Amazon adopted open salaries for its warehouse workers. Would you expect them to have any restraint? I didn’t think so. But then, all Amazon shows these people is how it doesn’t give a damn about them.

    And that can’t be changed in one quick move, with Jeff Bezos giving a pep talk about making Amazon “Earth’s best employer” (yup, he did that).

    First, it’s the facts, not words, that count. Second, it would be a hell of a leap for any company, let alone a behemoth employing way more than a million people.

    As I’m writing this, I realize that taking care of people’s well-being is a prerequisite for them to care about the company. And that, in turn, is required in order to distribute autonomy.

    The Role of Care

    The trigger to write this post was a conversation earlier today. We’re organizing a company off-site, and I was asked for my take on paying for something from the company’s pocket.

    Unsurprisingly, the frame of the question was, “Can we spend 250 EUR on something?”

Now, a little bit of context may help here. Last year was brutal for us business-wise. Many people made concessions to keep us afloat. Given all that, my personal take was that if I had 250 EUR to spend, I’d rather spend it differently.

    But that wasn’t my answer.

    My answer was:

    • Everybody knows our P&L
    • Everybody knows the invoices we issued last month
    • Everybody knows the costs we have to cover this month
    • Everybody knows the broader context, including people’s concessions
    • We have autonomy
    • Go ahead, make your decision

    In the end, we’re doing a potluck-style collection.

Sure, it was just a 250 EUR decision. That’s a canonical case of a decision that cannot sink a company. But the end of that story is exactly why I’m not worried about putting decisions worth a hundredfold or a thousandfold as much in the hands of our people.

    We’ve never gone under because we’ve given ourselves too many selfish raises. Even if we could. The answer to why it is so lies in how we deal with those small-scale things.

    After all, care is as much a prerequisite for distributed autonomy as alignment is.


    This is the third part of a short series of essays on autonomy and alignment. Published so far:

  • The Role of Alignment

    The Role of Alignment

    In the first part of this series, I focused on why autonomy in a workplace is a critical ingredient if we want to stay relevant. Not only is it a response to the nature of everyday work, with the increasing significance of remote work and the rise of AI, but it is also an emergent outcome of the large-scale evolution of the economy.

    However, if there is a universal warning that should be attached to the advice suggesting decentralizing control, it should be the following.

    It’s never as simple as “give people more autonomy.” The way people act in a decentralized system depends on a broader culture, which one should consider before giving everyone more power.

    Purpose

One common theme in the discourse on organizational culture is purpose. A shared aspiration that people and teams strive to turn into reality.

    By the way, when considering joining any company, I recommend asking about their purpose. In fact, I’d ask different people this very question and see whether their answers are aligned.

    “Making more money” is not a purpose. It’s a tactic. Ditto “increasing value for shareholders.” If you want to send a man to the moon, that’s a great purpose. But it doesn’t have to be that big. I’m a fan of honest aspirations like “creating a healthy workplace that sustains a few dozen employees and their loved ones,” too.

    Aside from its strategic role, or impact on motivation, purpose has a role in the discussion about autonomy. It is the force that encourages alignment of all the efforts happening in an organization.

    Misalignment

    Imagine a company guided by the “making more money” aspiration. People would naturally see different, sometimes contradicting, ways of generating revenues. They’d be pulling in different directions.

    Using a physical metaphor, we could consider a circle as the whole organization and arrows within as different individuals pursuing different goals.

    Low autonomy and low alignment impact on organizational momentum

    All those forces combined would create some momentum. The company would be slowly moving wherever the push is strongest.

    What would happen if we gave people more autonomy in such a setup? It is the equivalent of giving every individual more influence over the whole company. Each force vector would become stronger.

    High autonomy and low alignment impact on organizational momentum

    Now, everyone has better leverage, but the combined effect on the organizational momentum is marginal. The reason is obvious. It’s all the contradicting priorities. People try to push in different directions.

    Alignment

    In contrast, we can start in exactly the same situation. However, instead of pursuing the agenda of distributed autonomy, we’ll begin with an attempt to sync up everyone’s efforts.

    Low autonomy and low alignment impact on organizational momentum

    It would mean getting more arrows to point in a similar direction. I don’t expect a perfect alignment. Every individual has their own goals, which would never be matched perfectly with an organization’s goals. But we can get closer.

    The basic tool we have is the purpose. Once it’s clear to everyone what that is, two things will happen. Some people will adopt it and adjust their actions to help achieve it. It’s as if they redirected their vector more toward the desired direction.

Others will figure they’d rather keep pushing in the same direction as before. For them, it will be clear they won’t get much support. The odds are they’ll leave soon. If our HR does even a half-decent job, whoever comes in their place will be better aligned with the purpose.

    One way or the other, we’d get more people rowing in (roughly) the same direction.

    Low autonomy and high alignment impact on organizational momentum

    That itself changes the organizational momentum significantly. Not only did we remove the opposing force, but we also added a supporting one.

    If we follow up with increasing autonomy in this setup now, we will maximize the gains.

    High autonomy and high alignment impact on organizational momentum

    Again, everyone has bigger leverage, but thanks to synchronized efforts, the impact is so much more significant.
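The vector metaphor above can be made concrete with a small sketch. This is purely illustrative—the angles and strengths below are made-up numbers, not anything from the diagrams—but it shows the arithmetic behind the argument: summing each person’s drive as a 2D force vector, increasing individual strength (autonomy) barely moves the resultant when directions are scattered, while it multiplies the resultant once the vectors point roughly the same way (alignment).

```python
import math

def momentum(vectors):
    """Magnitude of the resultant of 2D force vectors.

    Each vector is a (direction_in_degrees, strength) pair representing
    one person's drive; the resultant stands for organizational momentum.
    """
    x = sum(s * math.cos(math.radians(a)) for a, s in vectors)
    y = sum(s * math.sin(math.radians(a)) for a, s in vectors)
    return math.hypot(x, y)

# Five people pulling in scattered directions (low alignment)
# versus roughly the same direction (high alignment).
scattered_angles = [0, 80, 160, 240, 320]
aligned_angles = [0, 10, -10, 5, -5]

for label, angles in [("scattered", scattered_angles), ("aligned", aligned_angles)]:
    # Low autonomy = weak individual force; high autonomy = tripled force.
    for autonomy, strength in [("low", 1.0), ("high", 3.0)]:
        m = momentum([(a, strength) for a in angles])
        print(f"{label:9s} alignment, {autonomy:4s} autonomy -> momentum {m:.2f}")
```

Running it, tripling everyone’s force with scattered directions changes the resultant far less than the same tripling applied to aligned directions, which is exactly the “alignment first” point.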

    Alignment First

    One could argue that we can achieve the same outcome independently of the order of changes. After all, if we refocused everyone’s efforts after increasing autonomy, the end game would look the same.

    In theory, yes. In practice, achieving alignment in such a manner is much less likely and more difficult.

    Each vector is a representation of somebody’s drive. The stronger it is, the harder it is to redirect it significantly. Think of the arrows as if they had weight proportional to the force they represent. With bigger weights, it simply requires more effort to align the vectors.

    Realignment cost with high and low autonomy

    In some cases, alignment will be impossible altogether. We extend our individual expectations to the whole organization. It’s like saying, “I want to pursue this agenda, and thus, I want my company to enable that.” While we would rarely, if ever, express it with these exact words, that’s a prevalent theme in conversations happening around job changes (exit and job interviews alike).

    Bigger arrows tend to break before we can realign them to a significantly different direction.

    Alignment versus Autonomy

    There’s a fantastic depiction of the relationship between autonomy and alignment proposed by Stephen Bungay in The Art of Action.

    He plots a two-dimensional plane with our culprits defining the axes.

    Stephen Bungay's autonomy and alignment dimensions

In an environment with low autonomy and alignment, we won’t see much action. People will neither feel empowered nor have a sense of clarity. You can expect a lot of confusion and minimal tangible outcomes.

    If we stick to low autonomy but increase alignment, we will have clarity about the goals. However, the actions will still be carefully managed and controlled. It would be a typical micromanagement environment. Not the most inspiring workplace in my book.

    On the opposite end, there’s a low-alignment, high-autonomy environment. There will be a lot going on in such an organization. The problem is that much of that effort will be misdirected. Some of it may be actively counterproductive.

    Finally, we have our most desired quadrant with high alignment and autonomy. That’s where we have clarity about the goals, and people act without waiting for permission. Their actions, thus, will be both targeted and effective.

Interestingly enough, Stephen Bungay doesn’t stop at showing what we should expect in each type of environment. He also suggests the best path from the bottom left to the upper right corner.

    Unsurprisingly, this path leads through increasing alignment first and only then distributing more autonomy.

    Stephen Bungay's autonomy and alignment dimensions

    I can personally attest it’s a good way, as we did the opposite at Lunar. The price we paid for neglecting alignment was steep. There was a load of interpersonal conflicts, which became a borderline tribal war, and 20% of the company left in the aftermath. Show me a leader who’d willingly drive their company there.

    Big Picture

If there were only one big-picture suggestion I’d couple with my strong encouragement to make distributed autonomy a central piece of organizational culture, it would be about alignment.

    Decentralizing control means everyone gets more power over a company and everything it does. That may only get us promising results if everyone rows (roughly) in a similar direction.

    We won’t get that unless we explicitly work on alignment. Or are extremely lucky.

    I tend not to rely on the latter.


    This is the second part of a short series of essays on autonomy and alignment. Published so far:

  • Pivotal Role of Distributed Autonomy

    Pivotal Role of Distributed Autonomy

    I’m a massive fan of distributed autonomy. I believe that, in principle, giving people more autonomy at work is the largest organizational challenge the modern workplace faces.

    Yes, the news of the day is either remote/hybrid work or the impact of AI on everyday jobs. Reinventing the organizational structures of a 21st-century corporation doesn’t belong to a broad discourse.

    From both perspectives, however, distributed autonomy plays a pivotal role.

    Autonomy in Remote Work

    With remote work, the dependency is straightforward. Much of the work has moved from the office—where it could be physically supervised by a manager—to homes, where supervision is significantly limited.

    The manager’s control is limited to the outcomes but not the actions that lead to them. For example, I can observe whether my engineers deliver features or add code to the codebase, but I don’t see when, how, and how much time they spend on activities that lead to “new features.”

    Sure, some organizations would turn to digital tools to control employees’ activities. Guess what. It doesn’t work. Well, it does, but not the way they intend. Here’s what this kind of monitoring does to people:

    • It reduces job satisfaction.
    • It increases stress.
    • It reduces productivity.
    • It increases counterproductive work behaviors.

    One hell of a slam dunk, really.

    It’s not only the lack of control, though. It’s also the availability of help. For the vast majority of organizations, remote work creates additional communication barriers.

    My leader no longer sits at the next desk. I can’t see whether it’s a good moment to interrupt them. Sure, I can drop them a DM on Slack, but they may not answer instantly. So, whenever I face one of those micro-decisions that I might have naturally delegated to my leader in the past, I may call a shot myself. It feels more convenient.

    What has just happened here was me grabbing a little bit more authority. I might have had it all the time, but I didn’t use it because it was easier to ask the leader. Now, the path of least resistance is making decisions myself.

    Multiply that by everyone in an organization, and suddenly, we have more distributed autonomy.

    The choice is between embracing and strengthening the change or resisting it. In the latter case, well, we tax ourselves on every single front, from productivity to employees’ mental health. Not really a choice, is it?

    Autonomy and AI

The emergence of AI creates another shift in the nature of work. We get a relatively powerful co-pilot that can help us with many tasks that would have been difficult or arduous in the past.

Back then, we might have turned to the experts for help. Or dropped the task altogether if it was non-essential.

    The experts would give us a suggestion, and we’d accept it as the decision. If we abandoned the task, there would be no decision to make whatsoever.

    But now, with our AI co-pilot, we have new capabilities at our fingertips. Yet it wouldn’t make any decision for us. Again, the path of least resistance is to grab some of that power, make a call, and move on.

    As an example, it’s often a challenge to dig up a relevant source to link in my writing. I often remember a research paper or article covering a useful reference. But its topic or author’s name? Beats me.

    Googling it was always a struggle, so I either turned to a human expert friend or gave up.

But now? LLMs are pretty decent at digging up relevant options. Still, the work of reviewing suggested sources and choosing a valuable one is on me. I now face a decision that I earlier deferred to an expert or dodged entirely.

    More autonomy again.

    Adhocracy

    The changes coming from different directions align with a broader evolution of the nature of work. Julian Birkinshaw, in his book Fast/Forward, provides a neat big picture.

    Over the past century or so, the world has evolved from the industrial, through the information, to the post-information era. Each step changes the rules of the game.

    A hundred years ago, scaling was the biggest challenge, and the effective use of resources was advantageous. Thus, bureaucracy was a winning strategy.

    In the second half of the 20th century, we saw the increasing value of information, and its accessibility and effective use gave us an edge. Thus, meritocracy was gaining ground.

    Now, information is ubiquitous. In fact, with the help of LLMs, we can easily generate as much of it as we want. The world becomes less about who knows what. It’s about who can act upon (incomplete) data in a fast and effective manner. Thus, ad-hoc action gives an advantage.

    Coexistence of bureaucracy, meritocracy, and adhocracy over time.

    Birkinshaw coins the term adhocracy to describe this new mode of operation.

A side note: one important part of the model is that all three modes of operation coexist. However, an organization will revert to its default mode whenever it faces uncertainty. We can’t expect a bureaucratic, hierarchical behemoth to act in an adhocratic way routinely.

The coexistence of all modes will naturally create tension. The same decision can’t simultaneously be made by:

    • a manager with the most positional power
    • an expert with the best data and most expertise
    • a line professional involved in the task hands-on

If we want to embrace adhocracy, which Birkinshaw argues is a prerequisite for organizational survival, we must move authority down the hierarchy.

    We need to distribute more autonomy. Again.

    Common Part

It’s not a surprise. When you go through the stories of companies successfully embracing non-orthodox management models, autonomy is the one thing they all share.

Be it David Marquet’s turnaround story in the unsurprisingly titled Turn the Ship Around! or Michael Abrashoff’s It’s Your Ship, pushing autonomy down the hierarchy was crucial.

    And the fact that the military context would pop up so frequently in this discussion shouldn’t be a surprise. Decentralizing control was a pivotal part of the revolution of the 19th-century Prussian army. Its victory streak forced other armies to follow suit.

    Yes, the corporate world, despite all its inspirations from the military lingo, takes its sweet time to adopt the truly important inventions. And yes, our views of the military tend to be rooted more in Hollywood movies than in the actual realities of these gargantuan organizations.

    I often mention that we’d see more distributed autonomy in late 19th-century armies of the West than in many 21st-century corporations.

We’ll arrive at the same conclusion if we stick to management theory. Take Holacracy, Sociocracy, Teal, or whatever generates the most buzz these days. The cornerstone of each of those will be autonomy. It may shape how we design roles (Holacracy), make the list of principles (self-management in Teal), or define the decision-making process (consent in Sociocracy). But it’s always there.

    When you think of it, it’s only natural. For hundreds of thousands of years, homo sapiens lived in small tribes of hunter-gatherers that were egalitarian and had very little to no hierarchy.

Even when our species started evolving into bigger societies, adopting a strong hierarchy wasn’t a given and was only one of the possible ways of coordination.

    I’d speculate that we are genetically predisposed to autonomy.

    Reinventing Autonomy

    Wherever we look, we seem to be reinventing the role of distributed autonomy. It’s critical to succeed on a battlefield. Staying relevant in business increasingly requires its presence. It sneaks along with the changes in the nature of work. We know it’s a prerequisite for engagement and motivation.

    Nothing should be easier than embracing the change and giving people more power.

Sadly, it’s not the case. Power is a privilege. And as with every privilege, those in power will not give it up easily. The good old bureaucracy will fight back.

    More importantly still, even if we have the means to overcome the resistance, the challenge is not as easy as “Let’s just give people more autonomy.”

    We need to take care of other things before we embark on this journey. But that’s the topic for another post.


    This is the first part of a short series of essays on autonomy and alignment. The following part(s) will be published on the blog and linked here during the next weeks.

  • Reinvent Your Daily Meetings

    Reinvent Your Daily Meetings

    Are you still doing daily meetings in the format suggested by Scrum? The three famous questions:

    1. What did you do yesterday?
    2. What are you going to do today?
    3. Are there any obstacles?

    Do yourself a favor and stop.

    It might have been a useful practice two decades ago. But it is not anymore. Not in that form.

    The Old Cure

    Here’s the thing. The ideas behind Scrum date as far back as the 80s, and its first applications happened in the 90s. Yes, it’s that old. But that itself doesn’t deserve criticism.

    However, when you look at the 2000s, when Scrum got its prominence, your average team’s tooling looked very different. The visual boards were yet to become popular. The state-of-the-art task management system was a filtered list.

    Jira dashboard from the 2000s

No one in the IT industry seriously talked about limiting WIP then, so we were drowning in an excessive amount of ongoing tasks. That made navigating long, long lists of work items even more of a maze from nightmares.

It’s no wonder that people saying what they did yesterday and what they planned to do today served as a useful refresher.

    The New Situation

    Fast forward 10 years, and teams suddenly have very readable visual boards as a standard practice. Limiting work in progress may still be a challenge, but we’ve gotten progressively better.

    Also, new visualization standards allow for better comprehension of whatever is in flight. Even Jira caught up.

    Jira visual board from the 2010s

    As long as:

    • the board is up to date
    • there’s an even remotely reasonable amount of work in progress

    we can clearly see who’s doing what.

    How? Just look at the board—it’s all there; thank you very much.

    And if, by any chance, the board would suggest that a person still works on 5 different things, then it’s not the form of the daily meetings that is the main problem.

    Who Cares?

    If you still think the old form of the daily updates makes any sense, look at people’s engagement during them. There’s precisely one person who’s interested in the entirety of the updates.

    The Scrum Master.

    For the rest of the team, it’s a ritual they follow out of habit. At best, they are interested in what a couple of people working on the most related tasks are doing, and then they’re off again.

    So it’s like a theater with many actors and just one attendee (the Scrum Master).

    It’s even worse. Almost all that information is readily available on the visual board. So that one person could have gotten it without getting everyone involved.

    Not Just a Status Update

    That’s the point where people tell me that I describe a daily meeting as a glorified status update, which it wasn’t meant to be.

    Fair enough. So what is it?

    A way for a remote team to get together and gel? Fine, get together and gel. I doubt that answering who does what is the best way to do it.

    So, maybe, it’s a place where we discuss obstacles and problems. Fantastic! That’s actually the only original question that is still useful. Then, ask only that bloody question and be done with it.

    Whatever the purpose of the meeting is, name it. I bet there’s a better format to address that very purpose.

    And yet, in 2025, people will still answer those three questions invented three decades ago.

    Daily Around the Board

    The intention behind the original standup format was a team sync-up. That goal is still worth pursuing. However, we have options that weren’t available a quarter of a century ago.

    My default way of running dailies relies on four elements:

    • We use blockers extensively to show any impediments
    • We keep the visual board up to date (it’s “the single source of truth”)
    • We run dailies around the board
    • We read the board from right to left (or from most to least done) and focus purely on blockers

    That’s it. It’s enough to focus on the important stuff. The rest is business as usual and not worth mentioning.

    You can easily cut the daily meeting time by half (or more), make it more engaging, and (a bonus) use it as an encouragement to keep the board up to date.

    There’s literally no coming back.

    The Purpose

Sadly, we still cling to practices that might have been visionary decades back; somehow, we stopped asking about their purpose and follow them blindly.

If we asked, we would challenge many of the techniques we use.

Ask yourself this question: If Agile were invented today, what practices would it devise? They would most definitely be different from what you’d find in the Scrum Guide.

So do yourself a favor, stop answering the three standup questions, and, for a change, start using this daily hangout to do something useful.

    That’s what Agile intended us to do, after all.

  • Wholeness Is a Lie

    Wholeness Is a Lie

    Over the past decade, the idea of wholeness—to bring the whole self to work—got quite some traction. The sources are many. It rides on the wave of increasing acceptance of individualism. It appears to align with diversity, which has become a major topic across HR departments.

Last but not least, it’s one of the pillars of Teal Organizations. Teal might be far from the household name that Frederic Laloux, who coined the term, envisioned a dozen years back. However, it successfully grabbed the wider attention of many forward-looking companies (for better or worse).

    Why Wholeness?

    Laloux’s argument for wholeness is straightforward. He juxtaposes the old-school bringing our professional, ego-driven, masculine, rational selves to work with getting access to emotional, intuitive, and even spiritual resources. The latter, he argues, is not a simple upgrade. It’s a whole different game.

    This is not mathematical, but it is only 1/16 (of us) that’s showing up. When that is the case, we also show up with 1/16 of our energy, of our passion, of our creativity.

    Frederic Laloux

One could easily envision the yields we’d gain if only we could tap ten times as much creativity as we can now.

    Then, for many of us, it’s only a humanistic reaction. We might think of ourselves as tolerant, welcoming, inclusive people (I know I do), and accepting others’ whole selves seems like a natural consequence of that view.

    I didn’t need much more convincing to get on board and push the wholeness agenda at Lunar Logic.

    Truly Whole Selves

    Fast forward several years, and I was trying to understand what went wrong. I was looking at a workplace equivalent of a tribal war.

    People hurt each other. In extreme cases, they even refused to work together as a team.

    While it didn’t happen out of the blue, and circumstances added a lot to the mess, none of that would have occurred without years of fostering wholeness.

    You see, no matter how much we dislike the thought, our whole selves are not all roses. We carry our dark passengers within us. We view the world around us through the lens of our biases and our prejudices. We can’t leave that dark guy at home. It doesn’t work this way.

    If we let him out, we act out. That’s when we start hurting others. The worst part? We don’t even notice.

    The Case in Point

    A few years back, environmental activists from Extinction Rebellion stopped commuter trains to London by climbing on top of them. It made the news back then, including the social media wave of comments.

    It was interesting to see how people I knew sided with either the protesters or the commuters. Interesting enough to watch the footage of the events.

    If you’ve just watched the video, you might have felt more sympathy and understanding for one side of the conflict or the other.

    Try an experiment. Pretend you know nothing about environmental activists and their agenda. Pretend you don’t understand the anger of the crowd. Rewatch the video.

    What do you see?

    See the man on top of the train trying to kick a person climbing the train in the head. Notice the crowd pulling the man down on the platform and kicking him. At its most basic level, what you see is people physically hurting others.

    These are people who bring their whole selves to the scene.

    Intentions Matter

    – But Pawel, clearly, we can’t ignore actors’ intentions before judging their actions!
    – Fine, be my guest.

    The protesters’ agenda is clear. They aim to raise awareness of the ecological crisis that humankind is engineering for itself. The commuters? They want to sustain themselves and their loved ones. For some, the delay may just be an annoyance, but for others, it may trigger serious financial consequences.

    Their intentions are clear and pure.

    I bet no one came to the platform with the intention to hurt another human being physically. Not a single person.

    And yet, here we are. Whether we want it or not, wholeness includes our dark passengers. What follows is that, when unbounded, it brings harm. Probably more harm than good.

    Bounded Wholeness

    So, what was the grand finale of our tribal war? We lost 20% of our people, who mentioned the situation as the primary trigger for their decision to move on.

    We found an expert to help us organize rules and norms around non-discrimination and inclusion. A big part was learning what is and is not safe for work.

    It seems a hell of a lot of things are not safe for work. In other words, if we want to be inclusive and non-discriminatory, we must limit ourselves.

    It’s anything but wholeness.

    Well, you could call it bounded wholeness. However, it’s akin to bounded freedom, which essentially is not true freedom.

    – But Pawel, freedom is limited, too. You can’t do anything. Your freedom ends when you start violating someone else’s freedom.
    – Sure, the lines are blurry at best. But if you want to avoid people hurting others, they need to constrain their wholeness a lot. And I mean, a lot.

    Respect as Guidance

    Before the whole thing happened at Lunar, I believed it would be enough to follow a simple rule of thumb. Something that would tell us to respect one another.

    Make sure people give more consideration to others than they demand for themselves. It is more inconsiderate to prevent people from exercising their rights because you are offended by them than it is for them to do whatever it is that offends you. That said, it is inconsiderate not to weigh the impact of one’s actions on others, so we expect people to use sensible judgment in not doing obviously offensive things.

    Ray Dalio

    Or, as my favorite conference puts it in their code of conduct: “Don’t be a jerk, be excellent to each other.”

    The harsh lesson, though, is that it leaves too much space for interpretation. The closer we are to someone, the easier it is to empathize with them. As a result, the generic guidance will work for folks within our ingroup but not necessarily for those outside of it.

    The more different the outgroup is from my circle, the harder it is to give them “more consideration” than I expect for myself. It’s even worse when we consider groups polarized against each other. Think modern politics.

    Suddenly, almost everything may theoretically offend someone.

    Clear Boundaries

    That’s why we need very clear boundaries. When my behavior is within those boundaries, I shall feel safe. Even if someone feels hurt, I am free to ignore it, and the other person has to get over it.

    However, when I violate the boundaries, the opposite is true. The other person has every right to expect me to stop doing whatever I’m doing, no matter whether or not I think it should be OK.

    As a team or an organization, we can negotiate these boundaries. We can set them wherever we collectively agree. However, with a diverse team, we will necessarily constrain acceptable behaviors quite heavily.

    It’s not ideal. We still accept that some more sensitive individuals may feel hurt every now and then. That’s the price we pay for “unfreezing” everyone else.

    Otherwise, we’d be petrified that something we do may hypothetically harm somebody.

    Conclusion

    Wholeness, sold to us as “let’s freely bring more of ourselves to a professional context,” is a lie. As appealing as it sounds, it overlooks a critical part. While focusing on the upside, it entirely ignores the risks.

    I know it’s a hard pill to swallow, but in this case, the potential downside is more significant than the gains we get.

    If we reverse engineer the whole process and start with building a (relatively) safe work environment, we won’t end up with wholeness. At least not the kind we were sold in the first place.

    If I’ve succeeded in getting you interested, here’s a video that covers the topic in more depth. A fair warning, though. There might be triggering content inside.

  • Is Growth Necessary for Survival?

    Is Growth Necessary for Survival?

    I shared one of those quick thoughts on Bluesky as a knee-jerk reaction to yet another message encouraging startups to get on a fast-growth path.

    As luck would have it, Matt Barcomb challenged me on that remark. It turned into an exchange, where we quickly started uncovering deeper layers of strategy and portfolio decisions.

    Survival versus Growth

    The starting point is a basic observation that there are situations where survival and growth are aligned, even dependent on each other. However, there are also cases where this assertion doesn’t hold, up to a point where growth is harmful.

    As a metaphor, no species in nature grows infinitely. While a tree sapling’s survival may depend on its growth, making the process indefinite would compromise the tree’s resilience.

    Organizations work similarly, even if the Bezoses and Musks of this world would deny it. That is, as long as we are willing to play by the standard rules of the business game.

    There’s obviously the too big to fail phenomenon, which we’ve seen in action many times. However, it applies only to very few companies, and even when it applies, the subjects of the theory don’t end up any healthier at the end of treatment.

    The rest of us may accept that growth and survivability are not always aligned.

    What follows is that when forced to choose, we should select survival over growth. If we live to see another day, we can return to growing tomorrow. The opposite doesn’t work nearly as well.

    Long-term versus Short-term

    However, as Matt points out, prioritizing survivability may lead us toward short-termism. We may always play it safe, and as a result, miss potential opportunities for big wins.

    Missing big opportunities may, in turn, be an existential threat as well, except it would develop in the long term. Consider the infamous Kodak digital photography fiasco as a perfect example.

    By the way, that example showcases that survivability is as much a short-term concern as a long-term one.

    Still, I understand that the “focus on survival first” mantra likely biases us toward what’s immediately visible in front of us. So, let’s explicitly consider survivability and time horizon as two separate dimensions.

    Opportunistic Thriving

    We can consider any combination of low or high survivability with either short-term or long-term focus.

    Any strategy threatening the company’s existence and falling into a short time frame would be suicidal (bottom left of the diagram). No sane organization would consciously venture into this territory.

    We may want, however, to stick with potentially dangerous plans with a long-term focus. This would be true when a risky move also has a huge potential upside (upper left part of the diagram). In this case, the scale of the possible gain would justify our risk-accepting strategy.

    That’s the latter part of the “sure things and wild swings” approach, also known as the barbell strategy proposed by Nassim Nicholas Taleb.

    However, if we go for the wild swings, we want to overcompensate them with sure things (Taleb suggests a 9:1 ratio). These safe bets consider primarily a predictable future and focus on preservation (bottom right of the diagram).

    If we combine the two, we land on something we can call opportunistic thriving (upper right of the diagram). We would mix some high-risk bets with a largely conservative strategy and exploit emerging chances for growth, new business, etc.

    At the end of the day, we align growth with survival, right?

    Hold your horses…

    Unfavorable Conditions

    We’re free to explore all options if the conditions are supportive, i.e., the company is already in a safe place and has resources to allocate freely among different options.

    But what if we had to make a choice? What if opportunistic thriving wasn’t an option? What if we had to make a trade-off between preservation and risky bets with huge potential upside?

    Such a situation would happen when we face unfavorable conditions. One classic example would be whether to retain the team when a downturn hits the company.

    Sticking with the proven team means maintaining options for the rebound once an opportunity arises. Here and now, however, we sustain the costs and, thus, incur financial losses.

    Playing the preservation scenario would mean layoffs and improved financials in the short term. It would also trigger all the additional costs of rebuilding the team once the unfavorable conditions are over.

    Sometimes, the trade-off is a point on a scale, e.g., how many people we lay off. Other times, it is binary, e.g., whether to engage in a risky endeavor.

    In either case, it would be a choice to prioritize long-termism over preservation or vice versa.

    Available Options

    If we assess that situation from a helicopter perspective, we realize that not all the options are available.

    To stick with the example of layoffs, we’d like to sustain the team and not incur losses. That would place us in the desired opportunistic thriving area.

    The simple fact that we’re weighing what to do means it’s not an option. In other words, that part of the landscape becomes unavailable.

    We’d love to be as far into the upper-right part as possible, but that’s precisely the space that becomes inaccessible first. The greater the challenges an organization faces, the farther the unavailable area reaches.

    In other words, under unfavorable conditions, we’re forced to make these difficult trade-offs.

    Decision Portfolio

    But wait! While any single decision may force us to choose, the whole portfolio of decisions provides an opportunity for a diverse distribution. That way, we can hedge our risks and, through that, push the “unavailability line” back.

    That’s what the barbell strategy is all about. We actively distribute our investments across the landscape. It’s like Moneyball applied to business decisions.

    Whenever you can’t get one ideal bet (hire a star, in Moneyball terms), make a few non-ideal ones that, when combined, would deliver a comparable result (hire a few role-players with the right skills/stats, in Moneyball terms).

    Center of Gravity

    All the decisions (or bets) create a center of gravity. Interestingly, it won’t necessarily be a simple output of the weight of the bets (the size of the dots in the diagram) and their relative position.

    More forces are in play here.

    The right combination of investments may push the center of gravity in the desired direction (up and to the right). Again, in Moneyball terms, it’s like winning with a team of underdogs.

    From an organization’s perspective, what interests us most is how any decision in our portfolio affects the center of gravity. That one risky project with low chances of succeeding may be just what we need to improve our long-term relevance. Even if that swing is really wild.

    Cost of Too Many Commitments

    We could consider a brute-force tactic: making enough bets to guarantee a diverse decision portfolio. That, however, would create a whole different set of problems.

    In the past, I wrote about how too many projects at the portfolio level are a major issue for any organization. I considered how portfolio decisions are, in fact, commitments. I analyzed how overcommitment affects the Cost of Delay and can ruin the bottom line.

    It all boils down to the same conclusion: too many commitments are detrimental to (organizational) health.

    To visualize it in the landscape we created, we need to add a pulling force. It will move the center of gravity toward the bottom left corner of the diagram. Yes, straight down to our Death Valley.

    The strength of this pulling force will be proportional to the scale of overcommitment. And the relationship between the two will be exponential.

    The more bets we make, the lower the chances we’ll be able to deliver on any. Once we are already overloaded, adding more commitments will make the situation increasingly perilous.

    So, the balance we aim to strike is to have sufficient diversity in our decisions and, simultaneously, to have as few commitments as possible.
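    To make the mechanics concrete, here is a minimal, purely illustrative sketch of the landscape described above. The coordinate convention (survivability and time horizon each on a 0-to-1 scale), the bet weights, and the exponential-pull formula are all my assumptions for illustration, not anything prescribed by the model itself:

    ```python
    # Illustrative sketch (hypothetical numbers and formulas): a decision
    # portfolio's "center of gravity" on the survivability/time-horizon
    # landscape, plus the pull that overcommitment adds toward Death Valley.
    import math

    def center_of_gravity(bets):
        """Weighted average position of bets.
        Each bet is (survivability, horizon, weight), with survivability
        and horizon in [0, 1]; weight is the size of the dot in the diagram."""
        total = sum(w for _, _, w in bets)
        x = sum(s * w for s, _, w in bets) / total
        y = sum(h * w for _, h, w in bets) / total
        return (x, y)

    def with_overcommitment(cog, n_commitments, capacity):
        """Pull the center of gravity toward (0, 0) -- the Death Valley corner.
        The pull is zero within capacity and grows sharply with overload
        (a hypothetical exponential model of the relationship)."""
        overload = max(0.0, n_commitments / capacity - 1.0)
        pull = 1.0 - math.exp(-overload)  # 0 when within capacity, -> 1 as overload grows
        x, y = cog
        return (x * (1.0 - pull), y * (1.0 - pull))

    # A barbell-ish portfolio: nine safe, short-horizon bets and one wild swing
    # (low survivability, long horizon), all equally weighted.
    bets = [(0.9, 0.3, 1.0)] * 9 + [(0.2, 0.95, 1.0)]
    cog = center_of_gravity(bets)                   # sits up and to the right
    safe = with_overcommitment(cog, 10, 10)         # within capacity: unchanged
    overloaded = with_overcommitment(cog, 20, 10)   # heavy overload: pulled down-left
    ```

    The point of the sketch is the shape of the dynamic, not the numbers: a diverse portfolio keeps the center of gravity in the desirable upper-right region, while every commitment beyond capacity drags it toward the bottom-left corner at an accelerating rate.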

    The Startup’s Challenge

    The entire discussion with Matt began with my remark on the startup ecosystem, pushing aspiring entrepreneurs to grow at all costs.

    While the reasoning stands true for startups—especially early-stage startups—two observations make the consideration more challenging for them.

    First, by definition, they start under unfavorable conditions. And they stay there for the better part of their lifecycle. As a result, a simple shot at opportunistic thriving is unavailable to them from the outset.

    Second, the degree to which the conditions are unfavorable for early-stage startups is far greater than what established companies face. There’s no core business to rely on just yet. The runway is typically short as the availability of funding remains limited.

    The environment is challenging enough that the diversity of the bet portfolio must be compromised. And that’s precisely where my original thought falls into place.

    Fledgling enterprises, way more than established businesses, will be forced to choose between preservation and wild swings exclusively. The latter is typically characterized by a strong push for rapid growth.

    If that’s the choice, I’d go for survival. After all, dead companies don’t really grow.